\section{Introduction}
Oncolytic viruses constitute a class of targeted anticancer agents that have unique mechanisms of action compared with other therapies. The premise is to genetically engineer viral particles to selectively replicate in and lyse tumour cells. Over the past decade, hundreds of patients with cancer have been treated in clinical trials with oncolytic viruses \cite{liu2007clinical}. Unfortunately, due to the heterogeneous nature of cancer, success is elusive, and there is a growing need to quantify the dependency of treatment outcome on cancer characteristics.
A number of mathematical models have been constructed to understand the dynamics of proliferation and diffusion of such viruses in cancerous and healthy tissues. Zurakowski and Wodarz \cite{zurakowski2007model} developed a mathematical model for the \textit{in vitro} behaviour of the oncolytic virus ONYX-015 infecting cancer cells. The virus, administered in conjunction with a drug that up-regulates a tumour cell's ability to intake viral particles, shows two distinct behaviours: a locally stable steady state and a nonlinear periodic cycle between viral particles and tumour cells, in agreement with dynamics that have been experimentally observed. Their model has also allowed them to suggest strategies that alter the amplitude of oscillations and drive tumour size to low levels. Bajzer \textit{et al.} \cite{bajzer2008modeling} introduced a mathematical model that also exhibits stable oscillations between virus and tumour. Crivelli \textit{et al.} \cite{crivelli2012mathematical} instead derived a cell-cycle-specific, age-structured model for virotherapy in which the cell-cycle-specific activity of viruses is investigated. Through analysis and simulation of the model, the authors have described how varying the minimum cycling time and aspects of the viral dynamics may lead to complete eradication of the tumour. Other authors have focused their modelling on the delay occurring between the initial virus infection of tumour cells and the second wave of infections released when infected cells burst \cite{cassidy2018mathematical,jenner2018heter}. This phenomenon has also been accounted for by a delay differential equation model for the lapse in the second generation of viral infections \cite{kim2018hopf}.
From the experimental point of view, particularly relevant to the present work are the findings by Kim \textit{et al.} \cite{KimPH2011}. These authors have developed an oncolytic virus, modified with the immunogenic polymer polyethylene-glycol (PEG) and the monoclonal antibody Herceptin, which has exhibited potent anti-tumour behaviour in murine models. While the treatment appears unable to eliminate the tumour completely, it significantly reduces the growth rate of cancer cells with respect to the case of an untreated tumour. Aiming to explain why this treatment fails to fully eradicate a tumour, we have recently proposed a mathematical model calibrated to these experimental results \cite{jenner2018mathematical}. Analysis of common treatment protocols has unveiled some drawbacks of existing strategies and suggested optimal scheduling to maximise therapeutic benefits.
In this work, we conduct a further investigation of this mathematical model with a particular emphasis on the biologically relevant parameters that control virus-tumour dynamics. After a discussion of the basis for our approach, we introduce a non-dimensional version of the system, which allows us to conduct a local stability analysis and analytically determine which parameters can lead to incomplete tumour eradication, as often observed in experimental settings. Then, we present a bifurcation analysis of the model, leading to some nontrivial and, in some cases, counterintuitive findings about the viral characteristics that drive complete tumour eradication. By examining perturbations of the viral dosage and their effect on different dynamical regions, we show how treatment protocols can be applied more effectively. In particular, the dynamical states displayed by the model when therapies are administered strongly affect the final outcomes. Finally, a discussion of the advantages and limitations of our approach concludes the paper.
\section{Model development}
The dynamics of a tumour treatment administered via a PEG-modified adenovirus conjugated with Herceptin can be captured using a minimal mathematical framework, as explained in Ref.~\cite{jenner2018mathematical}. Assuming that the immune response is negligible and need not be incorporated in the equations, the following model can be proposed:
\begin{align}
\frac{dU}{d\tau} & = r\ln \left(\frac{K}{U}\right)U -\frac{\beta U\hat{V}}{U+I},\label{AEqs1}\\
\frac{dI}{d\tau} & = \frac{\beta U\hat{V}}{U+I}-d_II,\\
\frac{d\hat{V}}{d\tau} & = -d_V\hat{V}+\alpha d_I I, \label{AEqs4}
\end{align}
\noindent where $\tau$ is time, $\hat{V}$ represents the density of virus particles at the tumour site, $U$ is the density of susceptible but virus-uninfected tumour cells, $I$ is the density of virus-infected tumour cells and the term $U+I$ corresponds to the total tumour cell population.
Tumour growth is controlled by nutrients and spatial limitations and is described by a Gompertz function, i.e. $g(U) = r\ln(K/U)U$. Here, $K$ represents the carrying capacity of the tumour and $r$ is its growth rate. This type of expression is well-known to reproduce the experimentally observed evolution of a number of proliferating tumours quite accurately~\cite{laird1964dynamics}. In our framework, the likelihood of a virus infecting a tumour cell is assumed to depend on the number of tumour cells available to infect. To model this, a frequency-dependent function, rather than a simple mass-action term, is introduced: virus particles at the tumour site infect susceptible tumour cells according to the expression $\beta U\hat{V}/(U+I)$, where $\beta$ is the infectivity rate. This also differentiates this model from existing, well-known virus dynamics models, such as systems used for influenza or HIV modelling~\cite{DeLeenheer20031313, Wang200644}.
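To make the contrast concrete, the following sketch (our own illustration, with arbitrary values for $\beta$ and $V$; not code from the study) compares the frequency-dependent infection term with a mass-action one:

```python
# Sketch contrasting the frequency-dependent infection term,
# beta*U*V/(U+I), with a mass-action term beta*U*V.
# beta and V are arbitrary illustrative values, not fitted parameters.
beta, V = 1.0, 10.0

def freq_dep(U, I):
    # The infection rate depends on the *fraction* of susceptible cells
    # in the tumour, not on their absolute number.
    return beta * U * V / (U + I)

def mass_action(U, I):
    # I is unused here; the signature is kept for symmetry.
    return beta * U * V

# Doubling the susceptible population (with I = 0) doubles the
# mass-action rate but leaves the frequency-dependent rate unchanged.
print(freq_dep(50.0, 0.0), freq_dep(100.0, 0.0))        # 10.0 10.0
print(mass_action(50.0, 0.0), mass_action(100.0, 0.0))  # 500.0 1000.0
```

This saturation is what prevents infections from growing without bound as the tumour approaches its carrying capacity.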
\begin{figure}[h!]
\centering
\includegraphics[width=80mm]{Schem11}
\caption{Flow diagram for the interaction between a population of uninfected tumour cells, $U$; virus-infected tumour cells, $I$; and virus particles, $V$. The diagram lists parameters relating to the original model, Eqs.~(\ref{AEqs1})-(\ref{AEqs4}), in grey boxes, and parameters relating to the non-dimensional form of the model, Eqs.~(\ref{E4})-(\ref{E6}), in blue boxes.}
\label{Schem11}
\end{figure}
After the initial injection, any virus subsequently produced via replication within tumour cells will not carry the PEG or Herceptin modifications. To account for this, a single average infectivity rate $\beta$ and decay rate $d_V$ are used for the combined population of original and replicated virus, noting that this population is dominated by naked (replicated) virus over the majority of the time course of the experiments. Fig.~\ref{Schem11} depicts the flow diagram of the three populations described in the equations, and we refer to the original study \cite{jenner2018mathematical} for a discussion of the biologically relevant ranges of the parameter values.
To proceed with our mathematical analysis, an appropriate change of variables detailed in Appendix A is used to scale the above equations into dimensionless form. The final result is given as follows:
\begin{align}
\frac{dU}{dt} &= m\ln\left(\frac{K}{U}\right)U-\frac{UV}{U+I},\label{E4}\\
\frac{dI}{dt} &= \frac{UV}{U+I}-\xi I,\\
\frac{dV}{dt} &= -\gamma V+\xi I\label{E6}
\end{align}
\noindent where $m = \displaystyle\frac{r}{\beta}, \xi = \displaystyle\frac{d_I}{\beta}, \gamma =\displaystyle \frac{d_V}{\beta}$ and $\hat{\beta} = \displaystyle\beta \alpha$ are dimensionless parameters, and $t$ represents a dimensionless ``time''. This model still follows the schematic given in Fig.~\ref{Schem11} and is the object of the present study. The three parameters $m, \xi$ and $\gamma$ that regulate the behaviour of the system represent the tumour growth rate, the viral potency (or infectivity) and the viral death rate, respectively. As a result of the non-dimensionalisation process, where all parameters are scaled by the infectivity rate (see \ref{sec:appendix}), the rate of conversion of uninfected cells $U$ to infected cells $I$ due to the viral load $V$, i.e. the term $\pm\displaystyle \frac{UV}{U+I}$, is not affected by any parameter.
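The dimensionless system is straightforward to explore numerically. The sketch below (our own illustration, using a classical fixed-step RK4 integrator; the step size and time horizon are arbitrary choices) integrates Eqs.~(\ref{E4})-(\ref{E6}) with parameter values $m=0.1$, $\xi=0.01$, $\gamma=0.1$, $K=100$ and the initial conditions used later in Fig.~\ref{Fdyn}:

```python
import math

# Illustrative fixed-step RK4 integration of the dimensionless model.
# Parameters match the coexistence regime discussed later in the paper;
# step size and horizon are our own (non-prescribed) choices.
m, xi, gamma, K = 0.1, 0.01, 0.1, 100.0

def rhs(U, I, V):
    U = max(U, 1e-12)                    # guard the logarithm as U -> 0
    infect = U * V / (U + I)             # frequency-dependent infection
    return (m * math.log(K / U) * U - infect,
            infect - xi * I,
            -gamma * V + xi * I)

def rk4_step(y, h):
    def shift(a, k):                     # y + a*k, componentwise
        return tuple(y[i] + a * k[i] for i in range(3))
    k1 = rhs(*y)
    k2 = rhs(*shift(0.5 * h, k1))
    k3 = rhs(*shift(0.5 * h, k2))
    k4 = rhs(*shift(h, k3))
    return tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

y, h = (50.0, 10.0, 10.0), 0.05          # initial conditions from Fig. 3
for _ in range(int(3000.0 / h)):
    y = rk4_step(y, h)
# U should approach U* = K exp(xi (gamma-1)/(m gamma)) = 100 e^{-0.9} ~ 40.66
print(y)
```

The long-time state matches the coexistence equilibrium $(U^*, I^*, V^*)$ derived in the next section.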
\section{Local stability analysis}
A local stability analysis of Eqs.~(\ref{E4})-(\ref{E6}) shows a number of interesting features. Of particular relevance is the existence of a stable equilibrium corresponding to eradication, which is characterised by a singular Jacobian matrix. This solution can coexist with other equilibria, for example a stable spiral or a stable node, which instead corresponds to incomplete eradication of the tumour. As we will show shortly, this occurrence can give rise to bistability for some biologically relevant parameter ranges.
\subsection{Equilibrium solutions}
Setting the right-hand side of Eqs.~(\ref{E4})-(\ref{E6}) to zero, three equilibria are found: (a) a solution with the uninfected cell population at the carrying capacity, indicating a treatment with no effect; (b) a non-zero solution representing incomplete eradication, characterised by a quiescent tumour coexisting with a constant, non-zero viral load; and (c) an equilibrium at the origin, corresponding to complete eradication of the tumour. The populations corresponding to these cases are
\begin{align*}
(a)& \quad U= K, \ \ I=0, \ \ \ V=0; \\
(b)& \quad U= K\exp\left(\displaystyle\frac{\xi}{m\gamma}(\gamma-1) \right)=U^*, \ \ I= \frac{K}{\gamma}(1-\gamma)\exp\left(\displaystyle\frac{\xi}{m\gamma}(\gamma-1) \right) =I^*, \\
&\quad V = \frac{K\xi}{\gamma^2}(1-\gamma)\exp\left(\displaystyle\frac{\xi}{m\gamma}(\gamma-1) \right)= V^*;\\
(c)& \quad U = 0, \ \ \ I=0, \ \ \ V=0.\\~\label{nonzeroequil}
\end{align*}
\noindent The Jacobian of the system is given by
\begin{align}
J = \left(\begin{array}{ccc}
m\ln\left(\displaystyle\frac{K}{U}\right)-m-\displaystyle\frac{VI}{(U+I)^2} & \displaystyle\frac{UV}{(U+I)^2} & -\displaystyle\frac{U}{(U+I)}\\[8pt]
\displaystyle\frac{VI}{(U+I)^2} & -\xi-\displaystyle\frac{UV}{(U+I)^2} &\displaystyle\frac{U}{U+I}\\[8pt]
0 & \xi& -\gamma
\end{array}\right),
\end{align}
and we discuss the character of the eigenvalues for the above equilibria below.
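As a sanity check (our own sketch, not part of the original analysis; the evaluation point and parameter values are arbitrary), the analytic entries of $J$ can be compared with a central finite-difference approximation of the right-hand side of Eqs.~(\ref{E4})-(\ref{E6}):

```python
import numpy as np

# Finite-difference verification of the analytic Jacobian; the
# evaluation point and parameters are illustrative choices.
m, xi, gamma, K = 0.1, 0.06, 0.1, 100.0

def f(y):
    U, I, V = y
    infect = U * V / (U + I)
    return np.array([m * np.log(K / U) * U - infect,
                     infect - xi * I,
                     -gamma * V + xi * I])

def jacobian(y):
    U, I, V = y
    S = U + I
    return np.array([
        [m * np.log(K / U) - m - V * I / S**2, U * V / S**2, -U / S],
        [V * I / S**2, -xi - U * V / S**2, U / S],
        [0.0, xi, -gamma],
    ])

y0 = np.array([30.0, 5.0, 2.0])
h = 1e-5
J_fd = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    J_fd[:, j] = (f(y0 + e) - f(y0 - e)) / (2 * h)

print(np.max(np.abs(J_fd - jacobian(y0))))  # small: ~1e-9 or below
```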
\subsection{Stability of ineffective treatment equilibrium: $U=K$, $I=0$, $V=0$}
The first equilibrium (a) corresponds to a failed treatment where uninfected tumour cells $U$ grow to the system's carrying capacity $K$ and no viral particle survives. Evaluating the Jacobian at this point gives
\begin{align*}
J = \left(\begin{array}{ccc}
-m & 0 & -1\\
0 & -\xi &1\\
0 & \xi& -\gamma
\end{array}\right),
\end{align*}
which gives rise to the following characteristic equation:
\begin{equation}\label{ch-eq-partial}
\rho(\lambda; m, \gamma, \xi) = -(\lambda+m)\left(\lambda^2+(\xi+\gamma)\lambda+\xi(\gamma-1)\right).
\end{equation}
The overall stability of this equilibrium depends on the roots $\lambda_2$ and $\lambda_3$ of the quadratic factor, because the root $\lambda_1 = -m$ of the linear factor is negative, since the growth rate $m>0$. After calculating $\lambda_2$ and $\lambda_3$, we find that the equilibrium is either a stable node or stable focus when $\xi+\gamma>0$ and $\xi(\gamma-1)>0$. Since the parameter values in this model are considered to be always positive, the first condition holds. The second condition implies that, if $\gamma<1$, the equilibrium is unstable, and vice versa for $\gamma>1$. As we will show shortly, a one-parameter continuation in $\gamma$ shows that at $\gamma = 1$ a branch point is present and the treatment is always ineffective for a decay rate $\gamma>1$. Intuitively, if the virus dies too quickly, no infection can occur.
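This threshold behaviour at $\gamma=1$ is easy to confirm numerically. The following sketch (our own illustration; the values of $m$ and $\xi$ are arbitrary, only the sign of $\gamma-1$ matters) evaluates the eigenvalues of the Jacobian at $(K,0,0)$ on either side of the threshold:

```python
import numpy as np

# Jacobian at the ineffective-treatment equilibrium (U, I, V) = (K, 0, 0).
def eigs_at_capacity(m, xi, gamma):
    J = np.array([[-m, 0.0, -1.0],
                  [0.0, -xi, 1.0],
                  [0.0, xi, -gamma]])
    return np.linalg.eigvals(J)

# gamma > 1: all eigenvalues in the left half-plane (stable, treatment fails)
print(np.max(eigs_at_capacity(0.1, 0.1, 1.5).real))  # negative
# gamma < 1: one eigenvalue crosses into the right half-plane (unstable)
print(np.max(eigs_at_capacity(0.1, 0.1, 0.5).real))  # positive
```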
\begin{figure}[h!]
\centering
\includegraphics[width=0.44\textwidth]{new1second}
\includegraphics[width=0.44\textwidth]{new2second}\\[5pt]
\includegraphics[width=0.44\textwidth]{new3second}
\includegraphics[width=0.44\textwidth]{paramspaceasfuncofUstar2}
\caption{Regions representing the stability of the nonzero equilibrium, (a)-(c), and the influence of system parameters on tumour cell numbers at the equilibrium value $U^*$. In (a), the section of parameter space where the non-zero equilibrium is stable is shown. Note that (b) represents the volume in $(\xi,m,\gamma)$ giving rise to a stable node solution for the equilibrium $(U^*, I^*, V^*)$, whereas (c) is the section for a stable spiral. Combining the regions in (b) and (c) gives the volume in (a). Plot (d) is the stable parameter space for different values of $U^*$, within the following intervals: orange for $0.2<U^*<0.25$, yellow for $0.35<U^*<0.4$, green for $0.5<U^* < 0.55$ and blue for $0.65<U^*<0.7$. Note that these ``slices'' are almost symmetrical.}\label{nsregion}
\end{figure}
\subsection{Stability of partial eradication solution: $U=U^*$, $I=I^*$, $V=V^*$}
The model admits a second, non-zero equilibrium where a small tumour mass coexists with virus particles. The characteristic equation for this solution, after substituting $U^*,I^*,V^*$ in the Jacobian, is given by
\begin{equation}\label{characeqn}
\rho(\lambda) = -\lambda^3-\lambda^2(\gamma+m+\xi)+\lambda\left(\gamma m(\xi-1)+\frac{\xi^2}{\gamma}-\xi(2m+\xi)\right)+\gamma m \xi (\gamma-1).
\end{equation}
\noindent For this cubic, the Routh-Hurwitz criterion is used to deduce the parameter values that produce three roots with negative real part. For a cubic of the form $\rho(\lambda)=a_0\lambda^3+a_1\lambda^2+a_2\lambda+a_3$ with $a_0<0$ and $a_1<0$, as is the case here, two conditions need to be met simultaneously for all roots to have negative real parts, i.e.
\begin{equation*} (a)~\frac{a_1a_2-a_0a_3}{a_1}<0 \qquad \text{and} \qquad (b)~a_3<0
\end{equation*}
with, in our case, $a_0 = -1$, $a_1=-(\gamma+m+\xi)$, $a_2= \left(\gamma m(\xi-1)+\frac{\xi^2}{\gamma}-\xi(2m+\xi)\right)$ and $a_3=\gamma m \xi (\gamma-1)$.
Condition (b) is easily satisfied for $0<\gamma<1$, given that all parameters are assumed to be positive. Condition (a) requires that $a_1a_2>a_0a_3$, since $a_1<0$. The region in the $\xi,m,\gamma$ parameter space that satisfies this condition can be numerically computed and is depicted in Fig.~\ref{nsregion}(a). Using the discriminant of Eq.~(\ref{characeqn}) and imposing the appropriate conditions, subsections of that region corresponding to a stable node or stable spiral are illustrated in Fig.~\ref{nsregion}(b,c). Note that all regions are smooth and connected.
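As an illustration (our own sketch; the parameter values correspond to the stable and oscillatory regimes discussed later), the Routh-Hurwitz conditions can be cross-checked against a direct computation of the roots of Eq.~(\ref{characeqn}):

```python
import numpy as np

# Coefficients of the characteristic cubic for the coexistence equilibrium.
def coeffs(m, xi, gamma):
    a0 = -1.0
    a1 = -(gamma + m + xi)
    a2 = gamma * m * (xi - 1.0) + xi**2 / gamma - xi * (2.0 * m + xi)
    a3 = gamma * m * xi * (gamma - 1.0)
    return a0, a1, a2, a3

def routh_hurwitz_stable(m, xi, gamma):
    a0, a1, a2, a3 = coeffs(m, xi, gamma)
    return a1 * a2 > a0 * a3 and a3 < 0.0   # conditions (a) and (b)

# xi = 0.01: stable coexistence; xi = 0.06: unstable (oscillatory regime).
for xi in (0.01, 0.06):
    stable = routh_hurwitz_stable(0.1, xi, 0.1)
    max_re = np.max(np.roots(coeffs(0.1, xi, 0.1)).real)
    print(xi, stable, max_re)   # stable exactly when max Re(lambda) < 0
```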
It is also interesting to consider which parameter regimes result in a low tumour burden, below a given threshold $U_T$. To visualise how the value of the equilibrium $U^*$ changes as a function of parameter values, we can compute the regions of parameter space satisfying the following equality for a given threshold $U_T$:
\begin{equation}
\xi = \frac{m\gamma}{\gamma-1}\ln\left(\frac{U_T}{K}\right).\label{Ustarcontoursurfaces}
\end{equation}
\noindent Plots for four different $U_T$, varying within intervals, are shown in Fig.~\ref{nsregion}(d). The regions are roughly symmetric, with parameter $\gamma$ being the major contributor to changes in $U^*$ values. For example, when $\gamma \lessapprox 0.5$, there is a set of $\xi$ and $m$ values resulting in $0.20\lessapprox U^*\lessapprox 0.25$. Since $m$ represents the growth rate of tumours and $U^*$ is almost insensitive to its variations, our analysis indicates that a value of $\xi$ can always be chosen to decrease the volume of the tumour, as long as the decay rate $\gamma$ is low (i.e. the virus does not decay too quickly).
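This tuning can be made explicit by inverting the equilibrium expression $U^* = K\exp\left(\frac{\xi}{m\gamma}(\gamma-1)\right)$ for $\xi$. The sketch below (our own illustration; the target burden and parameter values are arbitrary) recovers a prescribed $U_T$:

```python
import math

# Choose the potency xi that places the coexistence equilibrium U* at a
# target burden U_T; K, m, gamma and U_T are illustrative values.
K = 100.0

def U_star(m, xi, gamma):
    return K * math.exp(xi * (gamma - 1.0) / (m * gamma))

def xi_for_target(m, gamma, U_T):
    # Inversion of U*(xi) = U_T (valid for 0 < gamma < 1 and U_T < K,
    # which makes the resulting xi positive).
    return m * gamma / (gamma - 1.0) * math.log(U_T / K)

m, gamma, U_T = 0.1, 0.1, 20.0
xi = xi_for_target(m, gamma, U_T)
print(xi, U_star(m, xi, gamma))   # U* lands on the target U_T = 20
```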
\subsection{Stability of full eradication solution: $U=0$, $I=0$, $V=0$}
The last equilibrium of the model represents the case of complete eradication, where all variables are zero. As anticipated, the Jacobian is singular due to the presence of logarithmic and rational terms in $U$ and $(U+I)$ respectively. An analytical treatment is not possible and, in particular, the presence of logarithmic terms $m\ln(K/U)$ or its source in Eq.~(\ref{E4}), i.e. $m\ln(K/U)U$, is not treatable with straightforward expansions for $U\to 0$. A different approach based on numerical integration and computation of eigenvalues under specific assumptions on $U$, $I$ and $V$ is instead used and will be discussed in detail in the next section.
As far as the equilibrium's stability is concerned, it turns out that the eradication solution can be stable or unstable, depending on the values of the model parameters. As a general rule, we observe that parameter sets where $\xi$ is high, corresponding to a potent viral load, tend to yield a stable equilibrium as long as the decay rate $\gamma$ is not excessive. This suggests that the engineered virus has to be both potent and sufficiently resilient: one characteristic alone is not sufficient. If, for example, the virus has a high potency $\xi$ but dies too fast, then the equilibrium turns into an unstable point and no eradication is possible. A clear picture of how eradication depends on viral characteristics will emerge with the aid of bifurcation plots, which are discussed in the next section.
\section{Characteristic dynamical regimes}
\label{section:4}
The model supports a number of dynamical regimes that are interesting from both the biological and the mathematical point of view. In Fig.~\ref{Fdyn}, four distinctive behaviours associated with the equilibria previously described are presented. Case (1) is an example of an equilibrium solution where the virus co-exists with uninfected and infected tumour cells, i.e. the case $U=U^*$, $V=V^*$ and $I=I^*$. The time series is for an attracting node, but similar long-term dynamics exist for the case of an attracting spiral, with the only difference being an initial, oscillatory transient that then damps down to a plateau. Note how the uninfected cells $U$ are the first to reach the equilibrium $U^* = K\exp(\frac{\xi}{m\gamma}(\gamma-1))$, which corresponds, for the chosen parameters, to $U^* \approx 40.65$.

Case (2) corresponds to stable oscillations, characterised by an almost quiescent phase, where the system variables are close to zero, alternating with periods of growth and decay of cells and virus. Generally, we observe that this ``refractory'' state tends to last longer than the active phase. Also in this case, the uninfected cells $U$ are the first to grow, with a subsequent increase in the infected cells $I$ and then in the viral load $V$.

As we will see shortly with a bifurcation analysis, the duration of the ``rest'' and ``active'' phases of the oscillations depends on the system parameters and changes continuously from case (2) to the limiting case (3). This is an extreme scenario where the system oscillates between two long plateaus of quasi-complete eradication (i.e. $U=I=V\approx 0$) and quasi-ineffective treatment (i.e. $U\approx K=100, ~I=V\approx 0$). The inset shows the almost square-wave appearance of the system's trajectories on a long time scale, whereas the switch between the two states is illustrated in the main figure, showing how the growth in $I$ and $V$ causes the uninfected cell numbers to decrease.
It is important to note that the system cannot stabilise at either equilibrium because, for the chosen parameters and as will be evident shortly from the bifurcation results, both equilibria are unstable.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{panel1final}
\includegraphics[width=0.48\textwidth]{panel2final}\\[6pt]
\includegraphics[width=0.48\textwidth]{panel3final}
\includegraphics[width=0.48\textwidth]{panel4final}
\caption{Numerical simulations of Eqs.~(\ref{E4})-(\ref{E6}) demonstrating different types of dynamics, for initial conditions $U(0)=50, I(0) = 10, V(0)= 10$ and fixed parameters $m = 0.1, \gamma = 0.1$, whereas values for $\xi$ are increasing from case (1) to (4). Type (1) corresponds to a stable co-existence of virus and tumour due to incomplete eradication, occurring at $\xi = 0.01$, (2) depicts a stable oscillatory solution for $\xi = 0.06$, (3) shows stable long period oscillations of almost ``square wave'' shape for $\xi = 0.097$ and (4) is a case of complete eradication for $\xi=0.12$. Note that the carrying capacity is chosen as $K=100$.}\label{Fdyn}
\end{figure}
Finally, a complete eradication solution is depicted in case (4). Although for the chosen initial conditions and parameters the model shows a monotonic decline to zero for $U$, other examples have been found where $U$ first shows a maximum, followed by an exponential decrease. Also in this final case, as for the other three scenarios just discussed, we observe that $U$ is the fastest to reach its equilibrium value, with $I$ and $V$ following.
To appreciate where these regimes occur and how the parameters influence their existence, two bifurcation plots with respect to system variables $\xi$ and $\gamma$ versus $U$ are presented in Fig.~\ref{Fbif}. In both plots, stable branches are indicated with continuous lines, whereas unstable ones are dashed. The two black branches at $U=0$ and $U=K=100$ indicate the full eradication and failed treatment solutions, respectively. The red line indicates the partial eradication case, where a non-zero value for the tumour volume and the viral load is present. Numbers point to areas where the typical dynamics just discussed can be found.
For the case of a codimension-one plot with respect to $\xi$ (Fig.~\ref{Fbif}(a)), two branch points are present: one at $U=100$ and $\xi=0$, where the partial eradication solution coalesces with the failed treatment case, and a second at $U=100$ and $\xi \approx 0.098$, where the oscillatory, stable branch (green line) terminates. This branch originates from a supercritical Hopf bifurcation (HB), which causes the initial partial eradication branch to lose its stability. Note how, at this value of $\xi$, a change in the stability of the eradication solution $U=0$ (black line) also happens, with a saddle-node bifurcation (SN) occurring and a stable, fully eradicating regime appearing for $\xi>\xi_{SN} \approx 0.098$. This eradication solution branch regains its stability at $\xi =0$ through another saddle-node bifurcation (SN). Note also that the partial and full eradication branches (i.e. red and black lines, respectively) do not intersect. Finally, let us remind the reader that solutions for negative parameter values have no biological meaning.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{bif_A}
\includegraphics[width=0.49\textwidth]{bif_B}
\caption{Examples of typical one-parameter bifurcation plots for the model, with continuation in $\xi$ in (a) and in $\gamma$ in (b), both versus $U$. Numbers correspond to the dynamical regimes illustrated in Fig.~\ref{Fdyn} and, for the periodic orbits originating from a Hopf bifurcation, only the maximum value of $U$ is shown. For (a), the other model parameters are $m=0.1$, $\gamma = 0.1$. Note that the switch to regime $4$ (complete eradication) occurs when the branch of periodic orbits (in green) ceases to exist, at a value $\xi \approx 0.098$. Similar results for a continuation in $\gamma$ are shown in (b), with the switch to case $(4)$ dynamics also occurring in correspondence with a branch point for the periodic orbit, at $\gamma \approx 0.0103$. An inset magnifies the area showing the richest dynamical variability. The values of the other, fixed parameters are in this case $m=0.1$, $\xi = 0.01$. In both cases, solutions for negative $\xi$ and $\gamma$ have been included for reasons of consistency, but do not correspond to any biologically meaningful state.}\label{Fbif}
\end{figure}
It is worth noting that, along the red branch of coexisting solutions, $U$ can span a large range of values, with $U$ increasing as the viral potency $\xi$ decreases. For example, close to the HB, which occurs at $\xi_{HB}\approx 0.042$, a partial eradication solution for $\xi=0.04$ gives a tumour burden $U\approx 2$. Note also the extension of the plateau of the periodic branch (green) close to the $U=100$ unstable equilibrium, before the branch point. This indicates that a ``square wave'' type of oscillations can be present for a moderately extended parameter interval in $\xi$.
Although not shown in the diagram, the switch between the node and the spiral equilibrium typical of the partial eradication solution takes place along the red branch. For the chosen parameters in Fig.~\ref{Fbif}(a), this happens at $\hat{\xi} \approx 0.01675$, with spirals existing for a value $\xi$ such that $\hat{\xi}<\xi <\xi_{HB}$. Generally speaking and as shown in Fig.~\ref{nsregion}(b)-(c), the value at which the equilibrium type changes depends also on the other parameters $m,\gamma$ of the model.
The system's behaviour also shows a strong, nonlinear dependence on the viral death rate $\gamma$, as illustrated in Fig.~\ref{Fbif}(b). Compared with the case of $\xi$, the sensitivity of the model to $\gamma$ is somewhat reversed: intuitively, a surge in potency should act on the model in a similar way as a reduction in death rate and vice versa. For example, the branch of oscillatory solutions (green) emerging from the supercritical Hopf bifurcation (HB) shows an increasing maximum in $U$ as $\gamma$ decreases, opposite to what happens for $\xi$ (see the inset, in particular).
The stable, partial eradication solution branch (red) shows higher tumour volumes with increasing $\gamma$, and coalesces with the unstable $U=100$ branch (in black) at $\gamma = 1$. For $\gamma>1$, the ineffective treatment solution is stable, as previously found from the analysis of the characteristic equation corresponding to this solution, i.e. Eq.~(\ref{ch-eq-partial}): a virus with a decay rate $\gamma > 1$ has no effect on the tumour. It is important to note that a mechanism identical to what we observe in the bifurcation plot for $\xi$ allows the existence of case (4) solutions, i.e. complete eradication. At a value of $\gamma \approx 0.0103$, the inset shows the termination of the oscillatory solutions (in green) and the occurrence of a saddle-node point in the full eradication branch, making complete destruction of the tumour possible. From the biological perspective, this indicates that the right balance between the potency of the virus and its mortality must be achieved for eradication to occur, depending on the growth rate $m$ of the tumour. In particular, as $\gamma$ is increased from zero, the model goes from full eradication to oscillations whose amplitude decreases with $\gamma$, and subsequently to incomplete eradication up until $\gamma = 1$.
As previously mentioned, the full eradication solution gives rise to a singular Jacobian, making a purely numerical approach to continuation impossible. For solutions where $U\neq 0$, results have been obtained using the software packages AUTO~\cite{AUTO07} and XPPAUT~\cite{Bard2002}. For solutions occurring at $U=0$, a combination of numerical methods and scaling arguments has been employed. We assume that $U < V < I$, as exemplified by case (4) shown in Fig.~\ref{Fdyn}. If $\epsilon > 0$ is small and we impose $U\to \epsilon^n$, $V\to \epsilon^m$ and $I \to \epsilon^l$ with $n>m>l$, then the eigenvalues of the Jacobian $J$ can be numerically computed with increasing accuracy as $n,m$ and $l$ grow.
For example, in determining the stability of the full eradication branch in Fig.~\ref{Fbif}(a), we consider $U = 10^{-7}$, $V = 10^{-5}$ and $I = 10^{-4}$, substitute these values in the Jacobian and numerically evaluate the eigenvalues. For $\xi>\xi_{SN}\approx 0.0975$, all three eigenvalues turn out to be negative and real, whereas for $\xi<\xi_{SN}$ two are negative and one is positive. For example, choosing $\xi = 0.095$ gives eigenvalues $\lambda_1 \approx -0.15$, $\lambda_2 \approx -0.06$ and $\lambda_3 \approx 8\cdot 10^{-5}$. For the case $\xi = 0.099$, the first two eigenvalues are almost unchanged, but the last one changes sign and is $\lambda_3 \approx -2 \cdot 10^{-3}$. Similar results hold for the SN on the eradication branch for continuation in $\gamma$ (see Fig.~\ref{Fbif}(b)), and the method is consistent for all the parameters $m,\gamma$ and $\xi$ we have tested (not all shown here). These results have also been checked by integrating the equations of motion with XPPAUT, confirming that the solution is indeed attracting when stable or repelling when unstable, as reported in the bifurcation diagrams.
One important feature of the model is that it does not support stable oscillations for all biologically meaningful combinations of parameters. For some choices, a different structure of bifurcation plots emerges, with significant consequences from the biological perspective. In this sense, a typical example for a continuation in $\xi$ is illustrated in Fig.~\ref{Fbista}(a). An unstable periodic branch (green) originates from a subcritical Hopf bifurcation (HB) and maintains its unstable character until it collapses onto the $U=K=100$ (black) branch. For this diagram, the viral death rate is fixed at $\gamma=0.1$, as before, but a value of $m=0.5$ (moderately high growth rate) is chosen, whereas both previous diagrams have been obtained with $m=0.1$ (moderate growth rate). A more aggressive tumour, assuming that the potency of the virus is the same, does not engage in oscillatory behaviour with the virus: only partial or full eradication is possible (i.e. red and black lines).
It is interesting to stress that in this case, as shown in Fig.~\ref{Fbista}(b), the saddle-node (SN) on the full eradication $U=0$ branch (in black) occurs for a value $\xi_{SN}$ that is less than the value $\xi_{HB}$ at which the subcritical Hopf (HB) originates. This occurrence is due to the fact that the periodic branch shows increasing values of max $U$ for decreasing values of $\xi$ when it is unstable. This is the opposite of what happens for the stable periodic branch described in Fig.~\ref{Fbif}(a), where $\xi_{HB} < \xi_{SN}$ and the stability of the eradicated solution does not switch in this way.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{unst_1}
\includegraphics[width=0.49\textwidth]{unst_2}\\[5pt]
\includegraphics[width=0.49\textwidth]{bi1_sp}
\includegraphics[width=0.49\textwidth]{bi1_er}
\caption{Bifurcation plots and bistable solutions for fixed parameter values $m=0.5$, $\gamma=0.1$. The rectangle in (b) shows the area where two solutions of different nature coexist, delimited by $\xi_{SN} \approx 0.1359$ and $\xi_{HB} \approx 0.1388$. A spiralling solution converging to incomplete eradication is shown in (c) and occurs for initial conditions $U(0)=60$, $I(0) = 10$, $V(0)=40$, for a parameter $\xi_{SN} < \xi = 0.136 < \xi_{HB}$. A fully eradicated solution is shown in (d) and instead occurs for $U(0)=40$, $I(0) = 10$, $V(0)=5$, for the same value $\xi = 0.136$. Nullclines, i.e. the loci of points corresponding to $U'=0$ and $V'=0$, are in red and green.}\label{Fbista}
\end{figure}
The change in the order in which the SN and HB emerge as $\xi$ is increased is responsible for the generation of a region of bistability, where two separate and distinct equilibria exist over an interval of potency values. For values of $\xi$ in this region, different initial conditions can lead to different outcomes, as shown in Fig.~\ref{Fbista}(c)-(d): the initial dosage of viral load and the numbers of infected and uninfected tumour cells can strongly influence the final fate of the system and, as will become clear shortly, lead to somewhat unexpected results. In the first case (Fig.~\ref{Fbista}(c)), a spiralling solution achieves an incomplete eradication, which belongs to the red branch in Fig.~\ref{Fbista}(b). Conversely, the second case shows a complete eradication of the tumour, after traversing two maxima, in $U$ and $V$ respectively, corresponding to the black branch in Fig.~\ref{Fbista}(b). A small variation in the initial conditions can hence result in the therapy being effective or instead giving rise to only a partial eradication.
The existence of this area of bistability is associated with the presence of a subcritical Hopf bifurcation, whose loci of points in $\xi$ and $m$, for different values of $\gamma$, are plotted in Fig.~\ref{2par}(a). Generalised Hopf points (GH) separate subcritical Hopf points (dashed lines) from supercritical Hopf bifurcations (continuous lines). Note that, if the growth rate $m$ is sufficiently small, no Hopf bifurcation can be present and the system does not support oscillations, either stable or unstable. For example, as a result of the interruption of the Hopf branches shown in the inset of Fig.~\ref{2par}(a), any one-parameter bifurcation plot in $\xi$ for a fixed $\gamma =0.1$ and values of $m \lessapprox 0.008$ does not contain a stable or unstable oscillatory branch, since no Hopf point exists for such values. Biologically this indicates that there is a lower bound on the tumour growth rate for oscillations (stable or unstable) to exist, implying that a very slow growth in general leads to a complete eradication for a sufficiently potent virus, as long as its death rate is not excessively pronounced.
A numerical analysis of the model for a range of $\xi,\gamma$ and $m$ values shows that limit cycle amplitudes for $U$ do not follow a clear pattern, as captured by Fig.~\ref{2par}(b). Oscillations of different amplitudes can be achieved by the system and, as observed previously, depending on the growth rate of the tumour they can be enhanced by increases in $\xi$ and decreases in $\gamma$.
\begin{figure}[h!]
\centering
\includegraphics[width=.45\textwidth]{2p_collections}
\includegraphics[width=.45\textwidth]{limitcycamp2}
\caption{(a) Two-parameter continuations in $m$ and $\xi$ of branches of Hopf bifurcations, for different values of $\gamma$. Branches of supercritical Hopf bifurcations are shown as continuous lines, whereas those of subcritical bifurcations are dashed. Generalised Hopf points are indicated by GH. Note that the branches cease to exist for low values of $(m, \xi)$, indicating that the system cannot support either stable or unstable oscillations when the parameters are sufficiently small (see the inset). (b) Amplitude of stable limit cycles for points in the $(\xi,m,\gamma)$ parameter space. The colour of each point corresponds to the maximal value of the amplitude of the limit cycle in $U$.}\label{2par}
\end{figure}
\section{The effect of dosage applications and their optimisation}
As shown, there are large sections of parameter space that give rise to regimes with dormant tumours or tumour-virus oscillations, which can lead to different outcomes when coupled with clinical therapies. For the purpose of the present study, one of the simplest possible therapeutic practices is considered: the administration of constant dosages of viral loads via external injections at given time intervals. If the treatment consists of $n$ injections with $\kappa$ days between injections, the virus injection protocol $u_V(t)$ can be summarised by the following generic schedule:
\begin{equation}
u_V(t) = \left\{\begin{array}{cc} \displaystyle\frac{D_0}{n} \qquad & \qquad t = (i-1)\kappa, \hspace{0.5cm}
\text{where }i = 1, \ldots, n, \\[6pt] 0\qquad & \text{otherwise.} \end{array} \right.\label{Eq5}
\end{equation}
\noindent
Given this simple scheme, let us consider how dosage perturbations affect regions of the bifurcation diagrams and if they can result in either tumour eradication or a stable tumour size below a given threshold. The two typical scenarios we consider are oscillations and bistability.
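The schedule of Eq.~(\ref{Eq5}) amounts to adding $D_0/n$ to the virus compartment at times $t = (i-1)\kappa$ during the numerical integration of the model. A minimal sketch of such a pulsed protocol follows; since the model equations themselves are given earlier in the paper, the right-hand side used here is only a generic stand-in for a three-compartment virotherapy system (uninfected cells $U$, infected cells $I$, virus $V$), and all rates, the infection coefficient and the carrying capacity are illustrative assumptions, not the values behind the figures.

```python
import math

# Pulsed injection protocol of Eq. (5) applied to a generic stand-in
# virotherapy model.  NOTE: the right-hand side and every parameter
# value below are illustrative placeholders, not the model of the paper.

def rhs(state, m=0.5, xi=0.136, gamma=0.1, beta=0.01, delta=1.0, K=100.0):
    """Toy dynamics: Gompertz-like growth, infection, lysis, viral decay."""
    U, I, V = state
    dU = m * U * math.log(K / max(U + I, 1e-9)) - beta * U * V
    dI = beta * U * V - delta * I
    dV = xi * delta * I - gamma * V          # burst proportional to potency xi
    return (dU, dI, dV)

def rk4_step(f, y, dt):
    k1 = f(y)
    k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(y, k1)))
    k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(y, k2)))
    k4 = f(tuple(a + dt * b for a, b in zip(y, k3)))
    return tuple(a + dt / 6.0 * (p + 2 * q + 2 * r + s)
                 for a, p, q, r, s in zip(y, k1, k2, k3, k4))

def simulate(y0, t_end, dt, D0=0.0, n_inj=1, kappa=0.0):
    """Integrate the model, adding D0/n_inj to V at t = (i-1)*kappa."""
    y, t = y0, 0.0
    pending = [i * kappa for i in range(n_inj)]
    traj = [(t, y)]
    while t < t_end:
        while pending and pending[0] <= t + 1e-12:   # apply scheduled dose
            U, I, V = y
            y = (U, I, V + D0 / n_inj)
            pending.pop(0)
        y = rk4_step(rhs, y, dt)
        t += dt
        traj.append((t, y))
    return traj

# two injections (n = 2) separated by kappa = 20 days:
traj = simulate((60.0, 10.0, 40.0), t_end=50.0, dt=0.01,
                D0=40.0, n_inj=2, kappa=20.0)
```

The same routine can be used to reproduce the perturbation experiments of the next subsections, by varying $D_0$, $n$ and $\kappa$ and recording the extrema of $U$ along the trajectory.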
\subsection{Effects of injections on a stable, oscillatory trajectory}
After exploring different areas of the parameter space that give rise to oscillations, the main finding reported in this study is that simple therapies given by Eq.~(\ref{Eq5}) do not alter the long term behaviour of the model, regardless of the amplitude or period of oscillations. If an oscillatory, stable state exists between virus and tumour, an increment in the viral load through injections does not achieve complete eradication. From the dynamical point of view, an increase in viral load via external perturbation cannot force the system out of the basin of attraction of the stable limit cycle. Nonetheless, transient phenomena do exist and are worth discussing.
Let us consider two injections, i.e. $n=2$, for a system already in a stable oscillatory state. The number of days $\kappa$ between injections alters the size of the tumour and virus populations as the system returns to its stable state. In Fig.~\ref{D0sims}, a single period of two different stable limit cycles of the model is shown, with arrows representing the instants at which injections that increase the viral load have been administered. The corresponding maximum and minimum tumour size, along with the maximum virus count reached, is also presented.
\begin{figure}[h!]
\includegraphics[scale=.45]{injectionmap}
\includegraphics[scale=.45]{injectionmap2}\\[5pt]
\includegraphics[scale=.45]{injectionmap3}
\includegraphics[scale=.45]{injectionmap4}
\caption{Perturbations in the days between two treatments $\kappa$. Two different limit cycle regimes have been plotted for $\gamma = 0.1, m = 0.2$ (a) $\xi = 0.06915$ and (c) $\xi = 0.06993$. The maximum and minimum uninfected cell number is plotted in (b) and (d) for the corresponding value of $\kappa$ represented by an upward arrow in (a) and (c). Note the different scales, since the oscillations considered in (a) and (c) have different amplitudes.}
\label{D0sims}
\end{figure}
Injections that occur at different phases of the cycles have different outcomes. In particular, for both large and small oscillations, as can be seen in Fig.~\ref{D0sims}(b)-(d) for the red and magenta curves around $t\approx 62$, dosing the virus close to the minimum in tumour population produces a typical outcome: the tumour initially responds to the injection by undergoing the lowest resulting minimum, but this is followed by a rebound that causes $U$ to reach the highest value (max $U$ in the plot) of all tested injections. Note that, in some cases and for sufficiently high dosages, the minima achieved by $U$ can be pushed to values so low as to become experimentally undetectable. Injections at other instants within one oscillation period yield rebounds proportional to the original amplitude of the limit cycle, with best results occurring for the lowest amplitudes.
Perturbing the number of days between injections $\kappa$, the total injection amount $D_0$ or the number of injections $n$ does not affect the asymptotic dynamics (not shown here), which remain oscillatory in the long term.
\subsection{Effects of injections on a trajectory in the bistable region}
For a solution in a bistable region, the final outcome of any injection is highly dependent on the initial tumour size and viral load. In particular, due to the complex structure of the basin of attraction of the two competing solutions, i.e. full eradication and an incomplete quiescent state, doses that are higher than a specific threshold, which is in turn highly dependent on the system parameters, can lead to a partial eradication rather than a complete one.
As typical scenarios, consider the administration of single injections of increasing dosage, as depicted in Fig.~\ref{bistabsim}. Depending on the initial uninfected tumour $U(0)$, injections can lead to different outcomes or even have no effect on the final state. For a large initial tumour (Fig.~\ref{bistabsim}(a), $U(0) = 100$), different dosages always result in final eradication. Some dosages can lead to transient oscillations in the $U-V$ plane, but eventually eradication is achieved for all plotted trajectories. If, instead, the initial tumour size is smaller (Fig.~\ref{bistabsim}(b), $U(0) = 50$), full eradication is obtained only if the dose is either sufficiently low or sufficiently high, whereas there is a considerable interval of possible doses that push the system to a stable spiral corresponding to a dormant state, where eradication is not complete. Note that the first two low-dosage injections, i.e. injections 1 and 2 in Fig.~\ref{bistabsim}(b), also lead to a final eradication state after a few oscillations in the $U-V$ plane.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{bi-dose1}
\includegraphics[width=0.48\textwidth]{bi-dose2}
\caption{Typical cases of dependence on injected viral dosage $D_0$ for a system in a bistable scenario. Examples of two injections with increasing dosage (i.e. injections 1 and 2) are also sketched. The effect of these injections is to push the starting point to larger values of $V(0)$, depending on the dose that is administered. For the same initial tumour size, different dosages result in either tumour eradication or tumour stabilisation. Initial fixed conditions in (a) are given by $U(0)=100$, $I(0)=10$ and by $U(0)=50$, $I(0)=10$ for (b). In both cases, $V(0)$ varies from a minimum of $20$ to a maximum of $120$ in constant steps and the parameters are $m = 0.5$, $\gamma =0.1$ and $\xi = 0.138$.} \label{bistabsim}
\end{figure}
This result is interesting, as it suggests that, for given initial tumour size and characteristics of the virus, there can be a unique interval of dosage sizes that does not result in treatment success. Boosting the amount of virus does not always guarantee a successful outcome.
\section{Discussion}
The model proposed in this work shows a number of interesting features, both from the mathematical and the biological points of view. Firstly, a range of possible dynamical outcomes, based on the value of the model parameters and, in some cases, of the initial conditions, have been found. A number of nontrivial bifurcation scenarios have also emerged, with the presence of an important system equilibrium (i.e. full tumour eradication) that is characterised by a singular Jacobian. This occurrence has required the use of a hybrid combination of numerical continuation, symmetry considerations and integration of the model to map out the dynamics as a function of relevant model parameters.
The model provides a few insights into the interactions between an oncolytic virus and a tumour growing with a realistic proliferation law. One of the main limitations of the present approach is the endless influx of viral load that occurs in the model: once the viral cycle is set into motion, and unless the viral death rate is excessive (i.e. $\gamma > 1$), there is no natural mechanism to stop viral infections, which continue endlessly. This simplification is, for example, responsible for the appearance of dormant, partially eradicated tumours, which, after an initial transient, perpetually coexist with a constant viral load. These dynamics are common for models with unlimited reservoirs of populations~\cite{Wilkie2013201}.
Another important constraint is represented by the limited number of parameters used and their inherent inability to fully account for tumour-virus dynamics in detail. For example, we have introduced two general terms $\xi$ and $\gamma$ that aim to capture the potency and death rate of the virus and depend on the virus infectivity rate, $\beta$. These parameters are meant to encapsulate a large variety of different viral characteristics and can be associated with features as diverse as burst size, reproduction rate, spreading ability, and diffusivity. A similar observation must be made for the growth rate $m$: this value condenses a large number of often independent and highly variable features of tumour growth, which are highly sensitive to nutrients, vascularisation, extracellular matrix characteristics, and so on.
Notwithstanding these limits, the model shows that, for a given rate of growth, a tumour responds in different ways to viral particles that have different, generic characteristics. As shown in Fig.~\ref{Fbif}, an increase in viral potency $\xi$ or a decrease in viral death rate $\gamma$ drives the system through similar stages of typical dynamics, from partial eradication to tumour-virus oscillations. At sufficiently large values of $\xi$ and $\gamma$, for instance $\gamma > 1$, the scenarios are instead opposite, with full eradication and inefficient treatment, respectively.
A metastable regime that appears somewhat counterintuitive is represented by the so-called ``square-wave'' oscillations, which are observed in a small interval of biologically relevant parameters (see Fig.~\ref{Fdyn}(c)). Given the limitations of the model proposed here and the size of the parameter space where these dynamics take place, it may be unlikely that such extreme tumour expansions can be directly observed in a clinical setting. Nonetheless, the switch between a quasi-eradicated and a quasi-ineffective treatment regime points to the importance of achieving a complete wipe-out of the tumour if a sudden resurgence is to be avoided.
The existence of an extended area of the parameter space where oscillations among system variables arise is also worth noticing. These regimes, which also tend to respond nonlinearly to external injections (see Fig.~\ref{bistabsim}), have been known for quite some time in clinical settings. One major finding for this model is that virotherapy can prevent oscillations from occurring if the potency is sufficiently strong or, alternatively, if the virus survives for sufficiently long times in the infected population. Furthermore, and this is particularly interesting, oscillations tend to have larger amplitudes and periods for increasing $\xi$ (or decreasing $\gamma$), before they disappear completely for sufficiently high (or low) values. This is worth reflecting on, especially from the clinical perspective. Designing a potent virus that is not sufficiently resilient may turn out to be a riskier strategy, since it could trigger larger fluctuations in the tumour population. These oscillations also occur at relatively distant time intervals from each other, and long periods of tumour inactivity may be misinterpreted as successful eradication. Looking at Fig.~\ref{Fbif}(a) and assuming that a low value of uninfected tumour cells $U$ represents a good outcome, a less potent virus, say with $\xi \approx 0.04$, results in a quiescent tumour of a smaller size than the amplitude of oscillations caused by a highly potent virus with, for instance, $\xi \approx 0.08$ (i.e. twice as potent). This is also true from the point of view of resilience, see in particular the inset of Fig.~\ref{Fbif}(b): a virus that remains active for longer, say $\gamma \approx 0.015$, produces oscillations with very high values of $U$, whereas a virus decaying twice as fast, say with $\gamma \approx 0.03$, produces a stable, silent tumour of a smaller size.
All this shows that therapeutic strategies must be chosen carefully and thoughtfully, and that optimal design of an oncolytic virus must be targeted on the tumour characteristics, in particularly its proliferation rate. It could be quite interesting to test these theoretical findings in vitro.
Note also that, even when external interventions with extra viral dosages are taken into account, the answers provided by our analysis do not appear trivial. Firstly, the existence of bistability and the dependence on initial conditions have important effects. As seen in Fig.~\ref{bistabsim}, different initial viral loads and dosages can result in different outcomes, often in unpredictable ways. It is not true that a larger initial viral load always results in eradication: there is a large interval of dosage values for which eradication is not possible and, quite interestingly, the system privileges either a sufficiently high or a sufficiently low viral load for eradication. Starting at a smaller viral load is successful because it first allows the tumour to grow to a larger size, which thus elicits a stronger viral response. This response can wipe out the tumour completely, with no risk of ending in a dormant phase. Although this feature has been observed previously, for example in simpler systems in tumour-immune dynamics and predator-prey models~\cite{Davis1962, frascoli2013}, it is, as far as the authors are aware, the first time it has been noted in virotherapy modelling. Clearly, the fact that our model hypothesises that the virus can penetrate the tumour and diffuse within its cells without hindrance has to be taken into consideration, and is one of the drivers of this effect. Notwithstanding this, the result points to the existence, for some values of the system parameters, of a preferred dosage regime in which a limited quantity of viral load is preferable over a larger amount. Although strategies resulting even in a controlled and partial growth of a tumour have to be evaluated and considered with extreme care, the fact that a low viral load can still produce positive outcomes should be investigated further in laboratory and clinical settings.
We remind the reader that the present model does not allow for a thorough description of the dynamics of virus penetration and diffusion, which certainly play a fundamental role in the success of virotherapy.
Secondly, therapies that couple with external injections of viral loads can have very different outcomes depending on the state of the system. Not only, as we have just highlighted, can they perturb a trajectory that would otherwise lead to full eradication into a dormant state, but, as shown for oscillations in Fig.~\ref{D0sims}, they can have a transient, often negative effect on the whole system. If administered when the system resides on a stable oscillating state, these injections, depending on when in the cycle they are provided, tend to increase the amplitude of a few cycles of oscillations before the system returns to its original fluctuations, with no ability to drive the model out of this phase. There is generally no relevant positive effect in reducing the magnitude of periodic behaviour in the long term. Strategies that instead optimise the quality of the oncolytic virus seem preferable, as (see Fig.~\ref{Fbif}) oscillations can be reduced or damped to zero either by increasing the potency or the life span of the virus by the right amount.
In this sense and as also Fig.~\ref{2par}(a) explains, the finding that oscillations that exist for different values of the parameters are suppressed when the growth rate $m$ is sufficiently small is very relevant and informative for therapeutic choices. Rather than complex injection schedules or larger amounts of externally provided virus, this model seems to promote pharmacological interventions that aim at blocking or reducing the growth of the tumour. It will also be interesting, for further studies, to establish whether combination therapies or interventions specifically targeted at boosting the immune response (not modelled here) could also improve outcomes, and how changes in the diffusion and penetration efficiency of infection waves may change the trends observed in the present model.
\section*{Acknowledgements}
ALJ, FF and PSK gratefully acknowledge support for this work through the Australian Research Council Discovery Project DP180101512, ``Dynamical systems theory and mathematical modelling of viral infections''.
\section{Introduction}
The generalized Chaplygin gas (GCG) model has lately drawn some
attention, mainly because it allows for a unified description of
dark matter and dark energy, the former dominating at early times
and gradually transferring energy to the dark energy component
\cite{Kamenshchik, Bertolami1}. The GCG model is consistent with
various classes of cosmological tests, such as the Cosmic
Microwave Background Radiation \cite{Bertolami2}, supernovae
\cite{Bertolami3}, gravitational lensing \cite{Bertolami4} and
gamma-ray bursts \cite{Bertolami5}. As with other competing
candidates to explain the overwhelming energy density of the
present Universe, GCG is naturally constrained through
cosmological observables.
It is quite interesting that the GCG equation of state is that of
a polytropic gas \cite{Bhatia}, although one with a negative
polytropic index. This hints that one could look for astrophysical
implications of the model, and hence hope for yet another approach
to the problem of constraining the allowed space for its
parameters (see, e.g. Ref. \cite{Paramos}). In this work we argue
that a GCG dark star may arise from a density fluctuation in the
cosmological GCG background. In what follows we shall characterize
these objects, look at their evolution and account for their
initial probability of appearance within the GCG background.
\section{The generalized Chaplygin gas}
The GCG model is based on the equation of state
\begin{equation} P_{ch} = -{A \over \rho_{ch}^\alpha}~~, \label{state} \end{equation}
\noindent where $A$ and $\alpha$ are positive constants and $0 \leq
\alpha \leq 1$ (see however Ref. \cite{Bertolami7} for reasons to
consider $\alpha >1$); the negative pressure hints that the GCG is
related to a cosmological constant. The case $\alpha=1$ corresponds
to the Chaplygin gas \cite{Chaplygin}. In a
Friedmann-Robertson-Walker cosmology, the relativistic energy
conservation yields
\begin{equation} \rho_{ch} = \left[ A + {B \over a^{3(1+\alpha)} } \right]^{1
\over 1 + \alpha}~~, \label{rhoch} \end{equation}
\noindent where $a$ is the scale factor of the Universe and $B$ a
positive integration constant. This result shows a striking
property of the GCG, namely that at early times it behaves as
non-relativistic dark matter ($\rho_{ch} \propto a^{-3}$), while
at late times it acts as a cosmological constant ($\rho_{ch}
\simeq const.$). One can algebraically decompose the GCG into a
dark matter and a dark energy component, which evolve such that
the transfer of energy occurs from the former to the latter
\cite{Bertolami6}. This can be used to show that, while the dark
energy component is spatially homogeneous, the dark matter
component allows for structure formation at early times, when it
dominates \cite{Bertolami1,Bertolami6,Bilic}.
For convenience, one defines the parameter $ A_s \equiv A /
\rho_{ch0}^{1+\alpha}$, where $\Omega_{de0}$ is the dark energy density,
$\rho_{cr0}$ the critical density and $\rho_{ch0}$ the GCG energy
density, all at the present. Assuming, as observations suggest,
the condition $\Omega_{dm0} + \Omega_{de0} = 1$, where $\Omega_{dm0}$ is
the dark matter density (dropping the small baryon contribution),
and taking $a_0=1$, yields the constraint $ B = \Omega_{dm0}
\rho_{cr0} \rho_{ch0}^\alpha$.
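Both limits of Eq.~(\ref{rhoch}) are easy to verify numerically. In the snippet below the values of $\alpha$ and $A_s$ are illustrative only, and units are chosen so that $\rho_{ch0} = 1$ at $a_0 = 1$, which fixes $A = A_s$ and $B = 1 - A_s$:

```python
# Two limits of the GCG density of Eq. (2): dust-like behaviour,
# rho_ch ~ a^{-3}, at early times and a cosmological-constant-like
# plateau, rho_ch -> A^{1/(1+alpha)}, at late times.
# alpha and A_s are illustrative; units set rho_ch0 = 1 at a0 = 1.

alpha = 0.2
A_s = 0.7
A = A_s            # A_s = A / rho_ch0^(1+alpha) with rho_ch0 = 1
B = 1.0 - A_s      # from rho_ch(a0 = 1) = 1

def rho_ch(a):
    return (A + B / a ** (3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# early times: halving the scale factor multiplies the density by 2^3
early_ratio = rho_ch(1e-4) / rho_ch(2e-4)
print(early_ratio)                      # ~ 8, as for pressureless matter

# late times: the density flattens to the constant A^(1/(1+alpha))
late = rho_ch(1e4)
print(late, A ** (1.0 / (1.0 + alpha)))
```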
\section{Polytropic stars}
In order to deal with stellar structure in general relativity, one
considers that the spherical body behaves as a perfect fluid,
characterized by the energy-momentum tensor
\begin{equation} T^{\mu \nu} = (\rho + P)u^\mu u^\nu + P g^{\mu \nu} ~~,\end{equation}
\noindent where $g^{\mu\nu}$ is a spherically symmetric Birkhoff
metric. The Bianchi identity implies that $\nabla_\mu T^{\mu \nu}
= 0$, from which follows the relativistic
Tolman-Oppenheimer-Volkov equation \cite{Bhatia},
\begin{equation} {d P \over d r } = -{G( P + \rho) \over r^2} \left[m+4 \pi
r^3 P \right]\left[1 - {2G m\over r} \right]^{-1}~~, \label{TOV}
\end{equation}
\noindent where $ m(r) = 4 \pi \int_0^r \rho(r') r'^2 dr'$. This
equation collapses to the classical Newtonian hydrostatic
equilibrium equation
\begin{equation} {d P \over dr } = -{G m(r) \rho \over r^2} ~~,
\label{hydro} \end{equation}
\noindent if the following conditions are satisfied:
\begin{eqnarray} && G m(r) /r \ll 1~~, \label{newta} \\ && 4 \pi r^3 P(r)
\ll m(r) ~~, \label{newtb} \\ && P(r) \ll \rho(r)~~. \label{newtc}
\end{eqnarray}
The polytropic gas model for stellar like structure assumes an
equation of state of the form $P = K \rho^{(n+1)/n}$, where $n$ is
the polytropic index, which defines intermediate cases between
isothermic and adiabatic thermodynamical processes, and $K$ is the
polytropic constant, defined as
\begin{equation} K = N_n G M^{(n-1)/n} R^{(3-n)/n} ~~ \label{k} \end{equation}
\noindent with
\begin{equation} N_n = \left[{n+1 \over (4\pi)^{1/n}} \xi^{(3-n)/n}
\left(-\xi^2 {d \theta \over d \xi}
\right)^{(n-1)/n}\right]^{-1}_{\xi_1}~~,\end{equation}
\noindent where $R$ is the star's radius, $M$ its mass and
$\xi_1$, defined by $\theta(\xi_1) \equiv 0$, corresponds to the
surface of the star (cf. below). Actually, this definition states
that all quantities tend to zero as one approaches the surface.
This assumption leads to several scaling laws for the relevant
thermodynamical quantities,
\begin{eqnarray} \rho & = & \rho_c \theta^n(\xi)~~,~~~~ \label{defrho} \\
T & = & T_c \theta(\xi)~~,~~~~ \\ P & = & P_c \theta(\xi)^{n+1}~~,~~~~
\label{defP} \end{eqnarray}
\noindent where $\rho_c$, $T_c$ and $P_c$ are the density,
temperature and pressure at the center of the star \cite{Bhatia}.
Notice that the scaling law for temperature requires the
assumption that the gas behaves as an ideal one. This is not the
case for the GCG.
The function $\theta$ is a dimensionless function of the
dimensionless variable $\xi$, related to the physical distance to
the star's center by $r = \beta \xi$, where
\begin{equation} \beta = \left[{(n+1)K \over 4 \pi
G}\rho_c^{(1-n)/n}\right]^{1/2} ~~. \label{beta} \end{equation}
\noindent The function $\theta(\xi)$ obeys a differential equation
arising from the equilibrium condition of Eq.(\ref{hydro}), the
Lane-Emden equation:
\begin{equation} {1 \over \xi^2} {\partial \over \partial \xi} \left(\xi^2
{\partial \theta \over \partial \xi} \right) = -\theta^n~~. \label{lec}
\end{equation}
\noindent Notice that the physical radius and mass of the
spherical body appear only in the polytropic constant, and hence
the behavior of the scaling function $\theta(\xi)$ is unaffected by
these. Therefore, the stability of a star is independent of its
size or mass, and different types of stars correspond to different
polytropic indices $n$. This scale-independence manifests in the
symmetry of the Lane-Emden equation (\ref{lec}) under homology
transformations. The first solar model ever considered, developed
by Eddington in 1926, was that of an $n=3$ polytropic star.
Although somewhat incomplete, this simplified model gives rise to
relevant constraints on the physical quantities.
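The Lane-Emden equation~(\ref{lec}) has closed-form solutions only for $n = 0$, $1$ and $5$; Eddington's $n=3$ model, like most indices, requires numerical integration. The sketch below integrates the equivalent first-order system with a Runge-Kutta scheme, starting from the series expansion $\theta \simeq 1 - \xi^2/6$ near the centre; the exact $n=1$ solution $\theta = \sin\xi/\xi$, with $\xi_1 = \pi$, provides a check.

```python
import math

# Numerical integration of the classical Lane-Emden equation (11),
# written as theta' = phi, phi' = -theta^n - 2 phi / xi, with
# theta(0) = 1, phi(0) = 0.  Integration starts at a small xi using
# the series expansion theta ~ 1 - xi^2/6 to avoid the 1/xi singularity.

def lane_emden_xi1(n, dxi=1e-4, xi_max=20.0):
    """Return xi_1, the first zero of theta (the stellar surface)."""
    def f(x, th, ph):
        return ph, -max(th, 0.0) ** n - 2.0 * ph / x
    xi = dxi
    theta = 1.0 - xi * xi / 6.0
    phi = -xi / 3.0
    while xi < xi_max and theta > 0.0:
        k1 = f(xi, theta, phi)
        k2 = f(xi + dxi / 2, theta + dxi / 2 * k1[0], phi + dxi / 2 * k1[1])
        k3 = f(xi + dxi / 2, theta + dxi / 2 * k2[0], phi + dxi / 2 * k2[1])
        k4 = f(xi + dxi, theta + dxi * k3[0], phi + dxi * k3[1])
        theta += dxi / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        phi += dxi / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += dxi
    return xi

xi1_n1 = lane_emden_xi1(1)   # exact solution sin(xi)/xi gives xi_1 = pi
xi1_n3 = lane_emden_xi1(3)   # Eddington's model: xi_1 ~ 6.897
print(xi1_n1, xi1_n3)
```

With $\xi_1$ and $-\xi^2 \theta'|_{\xi_1}$ in hand, the constant $N_n$, and hence $K$, follow from the expressions above.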
In what follows, we shall use the Lane-Emden equation to derive
the properties of a generalized Chaplygin dark star, given that
conditions (\ref{newta})-(\ref{newtc}) are shown to be fulfilled.
This enables the use of the Newtonian approximation (\ref{hydro}),
which asides its simplicity allows for a prompt interpretation of
the GCG as a polytropic gas subject to the Lane-Emden equation of
motion. The generality of this procedure can be used in various
cases of physical interest, as for instance, when studying the
effect of scalar fields on the stellar structure \cite{Paramos}.
\section{The generalized Chaplygin dark star}
As already discussed, the GCG model is cosmological in nature, and
most bounds on the parameters $\alpha$ and $A_s$ are derived from
cosmological tests. However, a quick look at the GCG equation of
state (\ref{state}) indicates that it corresponds to a
polytrope with a negative polytropic constant and a negative
pressure. At first glance, this seems to indicate that no valid
analysis of a GCG at an astrophysical context can proceed, since a
spherical body constituted by such exotic gas would experience an
outward pressure that would prevent it from being stable. However,
the following argument shows that such an objection is
circumvented by the presence of the cosmological GCG background.
The first logical step for the construction of a symmetric body
with the GCG equation of state should be, as for all polytropes,
the solution of the related Lane-Emden equation. Firstly one notes
that this stems from the hydrostatic equation (\ref{hydro}). As
already seen, this is directly derived from the general relativity
equations, assuming the energy-momentum tensor of a perfect fluid:
no assumption whatsoever is made concerning the pressure or
density, nor the equation of state relating these quantities.
Hence, one can use this equation and, through the usual
derivation, the related Lane-Emden equation; as stated before the
only concern is whether one can neglect the higher-order relativistic
terms present in Eq.(\ref{TOV}), thus working in the Newtonian
limit. This will be explicitly shown in the case under
investigation.
The polytropic equation of state $P = K \rho^{(n+1)/n}$ shows that the
GCG can be assigned a negative polytropic index $n = -1/(1 +
\alpha)$. Next, a comparison with Eq. (\ref{state}) yields $K = - A$,
since the pressure is negative. This requires some caution:
indeed, the direct application of the coordinate transformation
between the physical radial coordinate $r$ and the coordinate
$\xi$ given by $r = \beta \xi$, with $\beta$ defined in Eq.
(\ref{beta}) yields
\begin{equation} \beta \equiv D \left[ ( 1 + n) K \right]^{1/2} = D \left( {\alpha
\over 1 + \alpha} K \right)^{1/2} ~~, \end{equation}
\noindent where $D=[\rho_c^{(1-n)/n} / 4 \pi G]^{1/2}$; since $\alpha
> 0$ and $K < 0$, the above quantity is imaginary. To avoid this, we
define the coordinate $\xi$ through the same equation $r = \beta
\xi$, but with $K$ replaced by $|K| = A > 0$ in Eq. (\ref{beta}),
obtaining
\begin{equation} \beta = \left[{A \over 4 \pi G } {\alpha \over 1 + \alpha}
\right]^{1/2} \rho_c^{-(1+\alpha/2)}~~. \label{betag} \end{equation}
The negative sign of $K$ will, of course, manifest itself in the
terms of the Lane-Emden equation; explicitly, one gets
\begin{equation} {1 \over \xi^2} {\partial \over \partial \xi} \left(\xi^2
{\partial \theta \over \partial \xi} \right) = \theta^{n}~~. \label{leg}
\end{equation}
As in the usual Lane-Emden equation, one has as boundary
conditions $\theta(0)=1$ and $\theta'(0)=0$. This gives rise to a
positive derivative for $\xi>0$, indicating that $\theta$ is a
smoothly increasing function. This is related not to the negative
polytropic index, but to the negative pressure of the GCG; as a
consequence, Eqs. (\ref{defrho}) and (\ref{defP}) indicate that
the pressure inside the dark star increases (in absolute value),
while the density decreases. This is key to our study, since it
shows that a GCG spherical body accretes, as expected for a star.
In the usual Lane-Emden equation, the criterion concerning the size
of a star is given by $\theta(\xi_1) \equiv 0$, corresponding to a
surface of zero density, pressure and temperature. In a GCG dark
star, the question is more convoluted: a vanishing density yields
infinite pressure (and conversely), which are rather unphysical
choices for the boundary of any astrophysical object. Furthermore,
since the function $\theta$ is increasing, the density $ \rho \propto
\theta^{n}$ vanishes at an infinite distance, while the pressure $P
\propto - \theta^{1+n}$ does not vanish at all.
\begin{figure}
\epsfysize=6.8cm \epsffile{figure3.eps} \caption{The function
$\theta(\xi) / \delta $ for a relative density $\delta = 10$, $100$, $1000$
(dashed, dot-dashed and full lines respectively) as a function of
$\xi / \delta$, assuming $\alpha=0.2$ and $A_s=0.7$.} \label{fig}
\end{figure}
As a solution for this issue, one recalls that the GCG object is
embedded in a cosmological background. Hence, a GCG dark star
should not be taken as an isolated body, but rather as a spike on
the overall cosmological background of density $\rho_{ch}$ and
negative pressure $P_{ch}$. Therefore, its boundary should be
signalled by the matching of the inner and outer pressures and
densities, as indicated in Fig. \ref{fig}. Both conditions are, of
course, equivalent, given the GCG equation of state (\ref{state}).
From Eqs. (\ref{defrho}) and (\ref{defP}), this equates to
\begin{equation} \theta(\xi_1) \equiv \delta^{-1/n} = \delta^{1+\alpha} ~~, \label{x1}
\end{equation}
\noindent where $\delta \equiv \rho_c / \rho_{ch}$ is the ratio
between the central and the background density. Hence, one
gets a correspondence between the central density of the dark
star, that is, the height of the energy density fluctuation, and
its radius. This argument shows that the Chaplygin dark star
cannot be taken merely as an isolated body having a common
equation of state with the GCG, but must be viewed instead as a
perturbation to the flat GCG background. This is advantageous,
since one can use the constraints available on the GCG model to
ascertain its properties. The only constraint affecting this
quantity is $\delta \gg 1$, since we assume that the density
perturbation must be large enough so we can view it as a physical
object, not merely a small fluctuation on the GCG background.
Notice that, since the scaling function $\theta(\xi)$ is completely
specified by the two boundary conditions $\theta(0)=1$ and
$\theta'(0)=0$, $\rho_c$ and $\rho_{ch}$ do not affect each other:
$\rho_c$ is ``scaled out'' of the problem through the definitions
(\ref{defrho}) and (\ref{defP}), and $\rho_{ch}$ merely sets the
criterion for the surface of the star, through Eq. (\ref{x1}).
Hence, although $\rho_{ch}$ varies with time, there is no
contradiction in assuming a constant central density $\rho_c$;
this simplifies our study, since Eq. (\ref{betag}) then yields a
constant coordinate scaling coefficient $\beta$. Furthermore, given
that the cosmological background density $\rho_{ch}$ is
decreasing, a constant central density $\rho_c$ indicates that the
density ratio $\delta$ increases and, therefore, the boundary $\xi_1$
grows towards its final value, corresponding to a final radius
$r_{1 \infty} = \beta \xi_{1 \infty}$; the star acquires mass in
the process. One can argue that, due to energy
conservation, the background density should decrease in order to
compensate this, but this is a minor effect that can be neglected.
Also, since the Chaplygin dark star is not an isolated object, it
is reasonable to assume that neither its mass nor radius should be
held constant, but instead must vary as it dilutes itself on the
overall cosmological background; instead, the central density
$\rho_c$ arises as the natural candidate for distinguishing
between these objects.
The absolute magnitude of the perturbation results from the
dynamics ruling the generation of a perturbation, via a
probability law arising from the fundamental physics underlying
the GCG model; this, of course, should naturally disfavor very
large perturbations. Furthermore, since any relative perturbation
to the homogeneous energy density profile is local, its occurrence
should not depend on cosmological quantities and the probability
distribution should depend only on the relative perturbation
$\delta$, and not explicitly on the scale factor $a$. A putative
candidate could be a normal probability distribution given by
\begin{equation} f(\delta) = f_0 \exp\left[-g \delta^2 \right]~~, \label{prob} \end{equation}
\noindent where $f_0$ is a normalization factor and $g$ a parameter
depending on $A_s$ and $\alpha$. Of course, more complicated
expressions for $f(\delta)$ are possible, depending on the inner
workings of the fundamental physics behind the GCG model.
The expansion velocity of a dark star can be shown to be given by
\begin{eqnarray} v_e & \equiv & \dot{r}_1 = {3 (1+\alpha) \beta B \over
\theta'(\xi_1) } \left[{\rho_c \over \rho_{ch}^2} \right]^{1+\alpha} {H
\over a^{3(1 + \alpha)}}~~, \label{vexp} \end{eqnarray}
\noindent where the prime denotes differentiation with respect to
$\xi$ and $H=\dot{a}/a$ is the rate of expansion of the
cosmological background. The Friedmann equation allows one to
write the latter as
\begin{equation} H^2 = H_0^2 \left[ \Omega_{de0} + \Omega_{dm0} a^{-3(1+\alpha)}
\right]^{1 \over 1 + \alpha}~~. \label{exprate} \end{equation}
\noindent One can see that, as $\rho_{ch}$ approaches a constant
value at late times, the expansion velocity tends to zero. Also, one can
derive the dependence of $v_e$ on $\delta$ by noticing that the
scaling coefficient $\beta$ runs with $\delta^{-(1+\alpha/2)}$ and the
term in brackets in Eq. (\ref{vexp}) scales with $\delta$, while
$\theta'(\xi_1)$ is found numerically to always be of order unity.
Hence, one concludes that the expansion velocity depends weakly on
$\delta$, $v_e \propto \delta^{-\alpha/2} \sim \delta^{-0.1}$, given the
chosen value of $\alpha=0.2$ \cite{Bertolami6,Bertolami8}. By the
same token, since numerically one finds that $\xi_1 \propto \delta$,
it can be concluded that $r_1 = \beta \xi_1 \propto \delta^{-\alpha/2}
\sim \delta^{-0.1}$.
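The expansion rate (\ref{exprate}) entering Eq. (\ref{vexp}) is straightforward to evaluate. The sketch below assumes illustrative density parameters $\Omega_{de0}=0.7$ and $\Omega_{dm0}=0.3$, which are assumptions for illustration rather than values fixed in the text:

```python
def H_over_H0(z, alpha=0.2, om_de0=0.7, om_dm0=0.3):
    """Expansion rate H/H0 from the GCG Friedmann equation,
    H^2 = H0^2 [om_de0 + om_dm0 a^{-3(1+alpha)}]^{1/(1+alpha)},
    with a = 1/(1+z). Density parameters are illustrative assumptions."""
    a = 1.0 / (1.0 + z)
    bracket = om_de0 + om_dm0 * a ** (-3.0 * (1.0 + alpha))
    return bracket ** (0.5 / (1.0 + alpha))
```

At early times the bracket is dominated by the matter-like term, so $H \propto a^{-3/2}$ independently of $\alpha$; at late times $H$ tends to the constant $H_0\,\Omega_{de0}^{1/2(1+\alpha)}$, in agreement with the limits discussed above.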
Given that the GCG tends to a smooth distribution over space, most
density perturbations tend to be flattened within a timescale
related to their initial size and the characteristic speed of
sound $v_s = (
\partial P /
\partial \rho )^{1/2}$. Inside the dark star, the equation of
state (\ref{state}) and the definitions (\ref{defrho}) and
(\ref{defP}) yield
\begin{equation} v_{s, in} = \sqrt{\alpha A \theta(\xi) \over \rho_c^{1 + \alpha}}
\equiv \sqrt{\alpha A_s {\theta(\xi) \over \theta(\xi_1)}}
\left({\rho_{ch0} \over \rho_{ch}}\right)^{(1+\alpha)/2} ~~, \end{equation}
\noindent which at the surface amounts to
\begin{equation} v_{s, ch} = \sqrt{\alpha A \over \rho_{ch}^{1 + \alpha}} \equiv
\sqrt{\alpha A_s} \left({\rho_{ch0} \over
\rho_{ch}}\right)^{(1+\alpha)/2}~~. \label{vsurf} \end{equation}
\noindent One sees that the maximum sound velocity occurs at the
surface of the star; since $\alpha \lesssim 0.6$ and $ 0.6 \leq A_s
\leq 0.8$ (see first references in \cite{Bertolami2}), this should
be smaller than the present value of $v_{s, max} = 0.693$ (in
units of $c$). A plausible criterion for the survival of an initial
perturbation is given by $ v_{s, ch}< v_{e, 0}$, where $v_{e, 0}$
is the initial expansion velocity. Equating Eqs. (\ref{vexp}) and
(\ref{vsurf}) yields, after a little algebra
\begin{equation} \theta'(\xi_1) \delta^{-\alpha /2} < {3 \over 2} \sqrt{1 + \alpha \over
\pi G \rho_{ch} } \left[ H_0 \over H \right]^{1+ 2\alpha} {\Omega_{dm0}
H_0 \over a^{3(1+\alpha)} } ~~. \label{cond} \end{equation}
\noindent Numerically, one finds that the left-hand side of Eq.
(\ref{cond}) is approximately constant. At early times, the GCG
behaves as cold dark matter, with $\rho_{ch} \propto a^{-3}$ and
$H \propto a^{-3/2}$, and the right-hand side of condition
(\ref{cond}) is constant. At late times, when the GCG acts as a
cosmological constant, both $\rho_{ch}$ and the expansion rate
$H$ are constant, and the right-hand side then scales as
$a^{-3(1+\alpha)}$, and thus decreases with cosmic time. Thus, most
dark stars are created at early times; this is consistent with the
usual interpretation of the GCG as a model where dark matter
dominates at early times, allowing for structure formation, while
at late times the dark energy component takes over.
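The present-day surface value of Eq. (\ref{vsurf}) is simply $v_{s,ch} = \sqrt{\alpha A_s}$ (in units of $c$), so the quoted maximum over the allowed parameter ranges can be checked directly:

```python
import math

def v_sound_surface_today(alpha, A_s):
    """Present-day surface sound speed sqrt(alpha * A_s), in units of c,
    from Eq. (vsurf) evaluated at rho_ch = rho_ch0."""
    return math.sqrt(alpha * A_s)

# extreme allowed parameters alpha = 0.6, A_s = 0.8
v_max = v_sound_surface_today(0.6, 0.8)
```

This gives $v_{max} \approx 0.693$, while the ``typical'' parameters $\alpha=0.2$, $A_s=0.7$ used throughout yield a considerably smaller value.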
Eq. (\ref{vsurf}) and the GCG equation of state (\ref{state})
allow one to rewrite condition (\ref{newtc}) as
\begin{eqnarray} & & \left|{P(r) \over \rho(r)}\right| = {A \over
\rho(r)^{1+\alpha}} = A_s \left(\rho_{ch0} \over \rho(r)
\right)^{1+\alpha} \\ \nonumber && < A_s \left(\rho_{ch0} \over
\rho_{ch} \right)^{1+\alpha} < 1 ~~, \end{eqnarray}
\noindent where we have used $\rho(r) > \rho_{ch} >
\rho_{ch\infty}$ for any redshift $z$, with $\rho_{ch\infty} =
A^{1/(1+\alpha)}$ the limit for the background cosmological density
when $a \rightarrow \infty$. Relativistic corrections could be
important if the pressure and density are of the same order of
magnitude. However, one can use the bound
\begin{equation} \left|{P(r) \over \rho(r)}\right| < A_s \left(\rho_{ch0}
\over \rho_{ch} \right)^{1+\alpha} ~~, \end{equation}
\noindent to ascertain that, for redshifts typical for structure
formation, $z=z_c=15$ and a set of GCG parameters $\alpha=0.2$,
$A_s=0.7$, one gets $|P(r) / \rho(r)| < 10^{-4}$; higher redshifts
provide an even lower upper bound. For the much more recent $z = 1$, the
same set of parameters yields $|P(r) / \rho(r)| < 0.188$, which
still validates the Newtonian approximation, although to a lesser
extent. This is not troublesome, since most dark stars are assumed
to nucleate at an early age. Also, since the above condition only
provides an upper limit for $P(r) / \rho(r)$, a more complete
calculation can still validate the Newtonian approximation,
depending on the value of $\delta$.
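These estimates can be reproduced assuming the standard GCG background evolution $\rho_{ch}(z) = \rho_{ch0}\,[A_s + (1-A_s)(1+z)^{3(1+\alpha)}]^{1/(1+\alpha)}$, an assumption about the normalization which may differ in detail from the one used in the text:

```python
def p_over_rho_bound(z, alpha=0.2, A_s=0.7):
    """Upper bound A_s (rho_ch0 / rho_ch)^{1+alpha} on |P/rho| at redshift z,
    assuming the standard GCG background evolution
    rho_ch = rho_ch0 [A_s + (1 - A_s)(1+z)^{3(1+alpha)}]^{1/(1+alpha)}."""
    return A_s / (A_s + (1.0 - A_s) * (1.0 + z) ** (3.0 * (1.0 + alpha)))
```

For $z=15$ this gives a bound of order $10^{-4}$, falling rapidly at higher redshift, which supports the Newtonian treatment of early-nucleating dark stars.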
Assuming that all dark stars have expanded up to their final size
(which follows from the stabilization of the GCG as a dark
energy), one can write the mass contribution of those created when
the Universe had a size $a(t)$:
\begin{eqnarray} && {M(a) \over 4 \pi} = \int_0^{\rho_n} \int_0
^{r_1(\rho_c)} \rho(r,\rho_c) f(\rho_c) r^2 dr ~d\rho_c
\\ \nonumber && = \beta^3 \int_0^{\rho_n} \int_0
^{\xi_1(\rho_c)} \rho_c \theta^n(\xi,\rho_c) f(\rho_c) \xi^2 d \xi
~d\rho_c ~~, \end{eqnarray}
\noindent where $r_1 = \beta \xi_1$ and the dependence on the
integration variables is made explicit. Integrating over time one
gets the mass contribution of all generations of dark stars, $
M_{DS} = \int_0 ^{a_0} M'(a) da $, where $M'(a) \equiv dM/da$.
A comparison between known observational bounds and numerical
integration of the above results can then be used to constrain
the GCG parameters $A_s$ and $\alpha$, namely through supernovae
data, gravitational lensing results and other dark matter
searches. This will be considered elsewhere.
\section{Numerical Results}
In order to substantiate our arguments, in this section we present
some numerical examples; we shall study the proposed scenario for
the ``typical'' values of the GCG model; one takes $\alpha=0.2$ and
$A_s=0.7$. A future study based on these results could embrace a
wider range of parameters and probe the creation of dark stars at
early stages of the Universe, providing further refinement to the
already known bounds (see, e.g. Ref. \cite{Bertolami8} for a
summary of the existing constraints).
A numerical integration of the modified Lane-Emden equation
(\ref{leg}) produces the results plotted in Fig. \ref{fig}. In
Table I we present different scenarios, in order to ascertain the
dimensions of dark stars nucleated at different ages of the
Universe. One can see that at a redshift of $z=z_c=15$, presumably
typical for structure formation, even a small perturbation $\delta=5$
produces an overwhelmingly large object, with about $3000$ times
the mass and $20$ times the diameter of the Milky Way. Since these
dimensions scale with $\beta$, which decreases with $\rho_c$, one
probes higher redshifts in order to obtain smaller dark stars.
Therefore, at $z=50$ and $\delta=10$, one obtains an object with
approximately the size and double the mass of our galaxy. A larger
perturbation $\delta = 100$ yields approximately the same size, but a
ten-fold increase in mass.
Going further back in time, a dark star born at $z=100$ with
$\delta=10$ ($100$) has about one-hundredth (one-tenth) the mass of
the Milky Way and one-tenth its diameter. Finally, $\delta=100$ and
an extremely high redshift of $z=500$, deep within the so-called
``dark ages'', yield a dark star with $1.6 \times 10^6$ solar
masses and a radius of $7.8~pc$, dimensions similar to those
ascribed to super-massive black holes in active galactic nuclei.
The above discussion is by no means definitive, and only serves to
illustrate the concept developed in this study. Nevertheless, one
might be surprised by the unphysically large size of a dark star
hypothetically nucleated at the redshift typical for structure
formation, $z_c=15$. However, notice that this describes the
condensation of baryonic matter interacting gravitationally with
dark matter. The dark star scenario poses quite a different
mechanism, where the GCG (in an era where its dark matter
component dominates) is the sole constituent of the spherical
body. Hence, it is reasonable to assume that bodies of
astrophysical dimensions can arise much earlier in the history of
the Universe. A precise description would of course require the
nucleation probability distribution $f(\delta)$, since this is the
fundamental quantity ruling the onset of perturbation nucleation
on the GCG background.
To ascertain the validity of the Newtonian limit from which the
Lane-Emden equation is derived, conditions (\ref{newta}) and
(\ref{newtb}) can now be checked with a simple calculation. An
inspection of Table I shows that $\xi_1$ is of the order of
$\delta$. Also, Fig. \ref{fig} shows that while $\theta'(\xi_1)$ slowly
increases and is of order unity, the scaling function $\theta(\xi)$
grows regularly; hence, one can use the power-law approximation
$\theta(\xi) \sim a \xi^b $, where $a$ and $b$ are coefficients of
order unity, since it does not introduce large deviations from a
full numerical calculation. Using Eq. (\ref{defrho}), this yields
\begin{equation} m(r) \sim {4 \pi \over 3 + n b } \rho(r) r^3 ~~, \end{equation}
\noindent and condition (\ref{newta}) becomes
\begin{equation} {4 \pi \over 3 + n b } G \rho(r) r^2 \ll 1~~.
\label{newtaapprox} \end{equation}
\noindent Since $\rho(r) \propto r^{nb}$, the left-hand side of
Eq. (\ref{newtaapprox}) scales with $r^{2+n b}$ and, since $|n b|
\sim 1$, is an increasing function. Therefore, it is bounded above by
its value at $r=r_1$, amounting to $ \sim 4 \pi G \rho_{ch} r_1^2
$. Using Table I one finds that it attains a maximum value of
$5.12 \times 10^{-5}$ for $\delta=5$, $z=z_c=15$, thus concluding
that condition (\ref{newta}) is verified. Notice that, for a fixed
redshift, this ratio is approximately independent of $\delta$. This
is due to the very weak scaling of the physical radius with the
relative perturbation, $r_1 \propto \delta^{-\alpha/2} \sim \delta^{-0.1}$,
for the chosen value $\alpha=0.2$.
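The quoted value of the relativistic parameter can be cross-checked directly from the Table I entries for $\delta=5$, $z=z_c=15$, using $\rho_{ch} = \rho_c/\delta$:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
pc = 3.086e16      # parsec in metres

# Table I entry: delta = 5, z = 15
rho_c, delta = 6.11e-23, 5.0          # central density (kg/m^3) and ratio
rho_ch = rho_c / delta                # background density at nucleation
r1 = 6.86e5 * pc                      # dark-star radius in metres

ratio = 4.0 * math.pi * G * rho_ch * r1 ** 2 / c ** 2
```

This gives ratio $\simeq 5.1 \times 10^{-5}$, matching the tabulated $5.12 \times 10^{-5}$ and confirming that condition (\ref{newta}) is comfortably satisfied.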
In a similar fashion, the power-law approximation allows one to
write condition (\ref{newtb}) as
\begin{equation} P(r) \ll \rho(r) {1 \over 3 + n b}~~, \end{equation}
\noindent which, since $|n b | \sim 1 $, is equivalent to
condition (\ref{newtc}) and hence also satisfied at early ages.
Thus, the Newtonian approximation implicit in the Lane-Emden
equation is valid for the cases studied and, given the smallness
of the values encountered in its evaluation, it is also applicable
in a broader range of the nucleation redshift $z$ and relative
densities $\delta$.
Given the indicated values for the initial expansion velocity
$v_{e0}$ and the surface sound velocity $v_s$ (which depends
only on the redshift), the inequality (\ref{cond}) holds for the
chosen values, and thus the corresponding dark stars do not
collapse at birth.
\begin{widetext}
\begin{table}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
& $ \delta = 5~,~ z=15 $ & & $\delta = 10~,~z=50$ & & $\delta = 100~,~z=50$ & & $\delta = 10~,~z=100$ & & $\delta = 100~,~z=100$ & & $\delta = 100~,~z=500$ \\
\hline
$ \xi_1 $ & $ 9.04 $ & & $ 19.1$ & & $ 239 $ & & $ 19.1 $ & & $ 239 $ & & $ 239 $ \\
$ \theta'(\xi_1) $ & $ 0.850 $ & & $ 0.916 $ & & $ 1.14 $ & & $ 0.916 $ & & $ 1.14 $ & & $ 1.14 $ \\
$ \rho_c~({\rm kg\,m^{-3}}) $ & $ 6.11\times 10^{-23 }$ & & $ 3.96\times 10^{-21 }$ & & $ 3.96\times 10^{-20 }$ & & $ 3.07\times 10^{-20 }$ & & $ 3.07\times 10^{-19 }$ & & $ 3.75\times 10^{-17 }$ \\
$ \beta~ (pc) $ & $ 7.59 \times 10^4 $ & & $ 772 $ & & $ 61.4 $ & & $ 81.0 $ & & $ 6.44 $ & & $ 3.26 \times 10^{-2 }$ \\
$ r_1 ~(pc)$ & $ 6.86 \times 10^5 $ & & $ 1.48\times 10^4 $ & & $ 1.47\times 10^4 $ & & $ 1.55\times 10^3 $ & & $ 1.54\times 10^3 $ & & $ 7.81 $ \\
$ M / M_\odot$ & $ 1.73\times 10^{ 15 }$ & & $ 1.13\times 10^{ 12 }$ & & $ 1.11\times 10^{ 13 }$ & & $ 1.02\times 10^{ 10 }$ & & $ 9.97\times 10^{ 10 }$ & & $ 1.58\times 10^6 $ \\
$ 4 \pi G \rho_{ch} r_1^2 /c^2 $ & $ 5.12\times 10^{-5 }$ & & $ 7.66\times 10^{-7 }$ & & $ 7.58\times 10^{-7 }$ & & $ 6.55\times 10^{-8 }$ & & $ 6.48\times 10^{-8 }$ & & $ 2.03\times 10^{-10 }$ \\
$ v_e /c $ & $ 1.89\times 10^{-2 }$ & & $ 2.33\times 10^{-3 }$ & & $ 2.35\times 10^{-3 }$ & & $ 6.81\times 10^{-4 }$ & & $ 6.86\times 10^{-4 }$ & & $ 3.84\times 10^{-5 }$ \\
$ v_s/c$ & $ 5.09\times 10^{-3 }$ & & $ 6.32\times 10^{-4 }$ & & $ 6.32\times 10^{-4 }$ & & $ 1.85\times 10^{-4 }$ & & $ 1.85\times 10^{-4 }$ & & $ 1.03\times 10^{-5 }$ \\
$ \theta'(\xi_1) \delta^{-\alpha/2} $ & $ 0.724 $ & & $ 0.728 $ & & $ 0.722
$ & & $ 0.728 $ & & $ 0.722 $ & & $ 0.722 $
\label{table}
\end{tabular}
\caption{Numerical results for the quantities $\xi_1$,
$\theta'(\xi_1)$, $\rho_c$, $\beta$, $r_1$, $M$, $ 4 \pi G \rho_{ch}
r_1^2 /c^2 $, $v_{e0}$, $v_s$ and $\theta'(\xi_1) \delta^{-\alpha/2}$, for
$\delta=5,~10,~100$ and redshifts $z=15,~50,~100,~500$.}
\end{ruledtabular}
\end{table}
\end{widetext}
\section{Conclusions}
In this study we have analyzed the properties of spherical bodies
with a polytropic equation of state of negative index, in the
context of the GCG dark energy/dark matter unification model. We
have considered the associated Lane-Emden equation and looked at
the qualitative behavior of its solution; amongst the results we
find the conditions for fluctuations to be attenuated or to
develop as dark stars. Our criterion is based on the condition that
the sound velocity does not exceed the expansion velocity of the
dark star when it nucleates, $v_{s, ch} < v_{e, 0}$. This enables
the computation of the mass contribution of the dark stars at
present times, which can then be used to constrain the GCG
parameters $A_s$ and $\alpha$, providing another testing ground for
this fascinating model.
\vskip 0.2cm
{\bf Note added:} While finalizing this work we became aware of
the study of stable dark energy objects \cite{Lobo} and halos of
$k$-essence \cite{Lim}. Even though the motivations of both works
are somewhat similar, our approaches are quite different.
\begin{acknowledgments}
\noindent JP is sponsored by the Funda\c{c}\~{a}o para a
Ci\^{e}ncia e Tecnologia under the grant BD~6207/2001.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
The theory of gauge fields is based on symmetry principles and the
hypothesis of locality of fields. The principle of local gauge
invariance determines all the forms of the interactions and allows the
geometrical description of the interactions \cite{Utiy56}.
However, the quantization of gauge fields leads to difficulties due
to the constraints arising from the gauge symmetry.
These difficulties of the quantization of constrained systems can be
circumvented by the extension of phase space including the anticommuting
ghost variables \cite{Fadd67}.
In this approach, the original gauge symmetry is
transformed into the so-called BRST symmetry in the extended phase
space \cite{BRST,Henn92}. The BRST symmetry will determine all the forms
of the interactions and the algebraic and topological properties
of the fields in the quantum theory \cite{Baul85}.
The question that comes naturally to mind is how to recover the
original gauge invariant space, consisting of only physical degrees of
freedom, from the extended phase space
with ghosts \cite{Henn92,Baul85,Naka90},
and what the physical spectrum with its group invariant structure is.
In order to study the algebraic and topological structures of
gauge theories, we follow the
point of view of Ref. \cite{Bono83} about the ghost fields and the BRST
transformation. That is, we identify the ghost field with the Cartan-Maurer
form on an infinite-dimensional Lie group $G_{\infty}$ - the group
of gauge transformation - and the BRST generator $Q$
with the coboundary operator $s$ on its Lie algebra ${\cal G}$.
Through these identifications,
we have the natural framework to construct the Lie algebra
cohomology induced by the BRST generator $Q$.
This Lie algebra cohomology will be related to the group invariants
of the configuration space of gauge fields and matter fields.
The organization of this paper is as follows.
In Sec. II, we construct the cochain complex on ${\cal G}$
with values in a ${\cal G}$-module \cite{Cartan,Gold,Choq89}.
With the pairing between
Lie algebra ${\cal G}$ and its dual space ${\cal G}^*$,
we define a chain as an element of the dual space to the cochain
and a dual operation $s_*$ of $s$.
We define a positive-definite inner product and construct an
adjoint operator $s^{\dagger}$ of $s$ using the Hodge duality operation.
We obtain the Hodge decomposition theorem,
Poincar\'{e} duality, and K\"{u}nneth formula analogous
to the de Rham cohomology \cite{Spanier}.
In Sec. III, we show that the adjoint of the coboundary operator can
be identified with the BRST adjoint generator $Q^{\dagger}$ for
the Lie algebra cohomology induced by BRST generator $Q$
and each cohomology class on a polynomial space
is characterized by the gauge invariant polynomials with a particular
group invariant structure imposed on the cochain (or chain) space.
We discuss the physical implications of the Lie algebra cohomology in the
contexts of gauge anomaly and the effective action with the symmetry
group $G$ spontaneously broken to a subgroup $H$.
The Lie algebra cohomology allows an algebraic and topological
characterization of them and provides an interesting
duality relation - Poincar\'{e} duality - between them.
In Sec. IV, we apply this cohomology to QED and QCD.
In order to consider the consistent embedding of the BRST adjoint
generator $Q^{\dagger}$ into the relativistic phase space,
we introduce the nonminimal sector of BRST generator \cite{Henn92}.
Through this procedure, we find the BRST-like N\"{o}ther charge
$Q^{\dagger}$ corresponding to the adjoint of the BRST generator $Q$,
which generates a new kind of noncovariant symmetry in QED
in Refs. \cite{Lave93,Yang1}.
Section V contains discussion and some comments.
\section{Lie algebra cohomology}
\label{sec:coho}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\begin{array}{\begin{array}}
\def\end{array}{\end{array}}
Let $P$ be a principal bundle with a structure group $G$ (a compact Lie
group with the invariant inner product defined on its Lie algebra
${\it g}$) over a differentiable manifold $M$ (flat Minkowski space or
Euclidean space ${\bf R}^n$).
The gauge transformation group $G_{\infty}$ - an automorphism of $P$ -
and its Lie algebra ${\cal G}$ can be identified with
the set of $C^{\infty}$-functions
on $M$ taking values in the structure group $G$ and
its Lie algebra ${\it g}$, respectively.
One defines the dual spaces
${\it g}^*$ of ${\it g}$ and ${\cal G^*}$ of ${\cal G}$
as follows \cite{Choq89}:
\begin{equation}
<\;x,\;X>=\sum_{a=1}^{dimG} X^a x_a,\;\;
\mbox{for}\; X \in {\it g}\;\;({\cal G}),\;\;
x \in {\it g}^*\;\;({\cal G^*}).\label{digg}
\end{equation}
The spacetime dependence of the elements of $G_{\infty}$, ${\cal G}$,
and ${\cal G}^*$
will be suppressed unless otherwise explicitly indicated and an $L^2$-norm
will be assumed in the inner product (\ref{digg}) between ${\cal G}$ and
${\cal G}^*$ \cite{McMu87}.
Using the pairing between
Lie algebra ${\it g}$ (or ${\cal G}$) and its dual space
${\it g}^*$ (or ${\cal G}^*$), the coadjoint action of $G$ (or $G_{\infty}$)
on ${\it g}^*$ (or ${\cal G^*}$) is defined by
\begin{equation}
<\;X,\;Ad_g^* x>=<\;Ad_{g^{-1}}X,\;x>
\mbox{for}\; g \in G\;\;(G_{\infty}),\;\; x \in {\it g}^*\;\;({\cal G^*}).
\label{dadg}
\end{equation}
Consider a $p$-cochain $w^p$, an element of $C^p({\cal G};R)$, where
$C^p$ is an antisymmetric $p$-linear map on ${\cal G}$ with values
in a left ${\cal G}$-module $R$
with the ring structure \cite{Cartan,Gold,Choq89,Spanier}.
The space of cochains on ${\cal G}$
is the direct sum of the spaces of $p$-cochains:
\begin{equation}
C^*=\oplus_{p=0}^{dimG} C^p.
\end{equation}
We introduce on $C^*$ the operators $i(\vartheta)(x)$ and
$\epsilon(\vartheta^*)(x)$ on a point $x\in M$ defined as follows:
\[ i(\vartheta):C^p \rightarrow C^{p-1},\;\;\;\;\;
\forall \vartheta \in {\cal G}\]
by
\begin{equation}
(i(\vartheta)(x) w^p)(\vartheta_1,\cdots,\vartheta_{p-1})(y)=
w^p(\vartheta,\vartheta_1,\cdots,\vartheta_{p-1})(y)
\delta(x-y),\;\;\;w^p\in C^p;\label{i}
\end{equation}
and
\[ \epsilon(\vartheta^*) :C^p \rightarrow C^{p+1}, \;\;\;\;\;
\forall \vartheta^* \in {\cal G}^*\]
by
\begin{equation}
(\epsilon(\vartheta^*)(x)w^p)(\vartheta_1,\cdots,\vartheta_{p+1})(y)=
\sum_{l=1}^{p+1} (-1)^{l+1} <\;{\vartheta}^*(x),\; {\vartheta}_l(y) >
w^p(\vartheta_1,\cdots,\hat{\vartheta}_l,\cdots,\vartheta_{p+1})(y),
\label{epsilon}
\end{equation}
where $\;\hat{ }\;$ indicates omission.
Denote by $\{ {\theta}_a \}, a=1,\cdots,N \equiv dimG$,
a basis of ${\cal G}$
and by $\{ {\theta}^{*a} \}$ the basis of ${\cal G}^*$ such that
\begin{equation}
<\;{\theta}^{*a}(x),\; {\theta}_b(y) >=\delta^a_b\delta(x-y).\label{d}
\end{equation}
Then straightforward calculations using the definitions $(\ref{i})$
and $(\ref{epsilon})$ lead to the following relations \cite{Choq89}
\begin{eqnarray}
\begin{array}{l}
\{i(\theta_a),i(\theta_b)\}\equiv i(\theta_a) \circ i(\theta_b)+
i(\theta_b) \circ i(\theta_a)=0,\\
\{\epsilon(\theta^{*a}),\epsilon(\theta^{*b})\}=0,\\
\{\epsilon(\theta^{*a}),i(\theta_b)\}=
<\;{\theta}^{*a},\; {\theta}_b >{\bf 1}=\delta^a_b{\bf 1},\label{ie}
\end{array}
\end{eqnarray}
where $\;\circ\;$ denotes the map composition.
Then, for example, the $p$-cochain $w^p \in C^p$ can be constructed using
the operator $\epsilon(\theta^{*})$ as follows
\begin{equation}
w^{p}=\sum\frac{1}{p!}\underbrace{\epsilon(\theta^{*a}) \circ
\epsilon(\theta^{*b})\circ \cdots \circ \epsilon(\theta^{*c})}
_{p\;\;elements} \phi^{(p)}_{ab\cdots c}, \;\;\;\;\mbox{where}
\;\;\;\phi^{(p)}_{ab\cdots c} \in R. \label{p-cochain}
\end{equation}
It must be kept in mind that the operations in
Eqs. (\ref{i})-(\ref{p-cochain}) must be understood as defined on
a point $x \in M$, and we have omitted the delta function on $M$
in Eq. (\ref{ie}).
This shorthand notation will be used throughout this paper if it
raises no confusion.
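In the finite-dimensional analogue (working with the structure algebra ${\it g}$ and dropping the delta functions), the operators $i(\theta_a)$ and $\epsilon(\theta^{*a})$ act on the exterior algebra exactly like fermionic annihilation and creation operators. The following illustrative sketch builds their matrices for small $N$, so that the relations (\ref{ie}) can be verified explicitly:

```python
from itertools import combinations

def basis(N):
    """Ordered basis of the exterior algebra: increasing index subsets."""
    return [s for p in range(N + 1) for s in combinations(range(N), p)]

def eps_op(a, N):
    """Matrix of exterior multiplication theta^{*a} wedge ( . )."""
    B = basis(N)
    idx = {s: k for k, s in enumerate(B)}
    M = [[0] * len(B) for _ in B]
    for s in B:
        if a not in s:
            t = tuple(sorted(s + (a,)))
            M[idx[t]][idx[s]] = (-1) ** t.index(a)   # sign from reordering
    return M

def i_op(a, N):
    """Matrix of the interior product i(theta_a): insertion in the first slot."""
    B = basis(N)
    idx = {s: k for k, s in enumerate(B)}
    M = [[0] * len(B) for _ in B]
    for s in B:
        if a in s:
            t = tuple(x for x in s if x != a)
            M[idx[t]][idx[s]] = (-1) ** s.index(a)
    return M

def anticommutator(A, B):
    n = len(A)
    mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(n)] for i in range(n)]
```

For any $N$ one then checks that $\{\epsilon(\theta^{*a}),\epsilon(\theta^{*b})\} = \{i(\theta_a),i(\theta_b)\} = 0$ and $\{\epsilon(\theta^{*a}),i(\theta_b)\} = \delta^a_b {\bf 1}$, the matrix form of Eq. (\ref{ie}).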
Let $s:C^p \rightarrow C^{p+1}$ be
the coboundary operator, i.e., $s^2=0$ \cite{Bono83,Cartan,Gold,Choq89}
defined on $C^*({\cal G};R)$ by
\begin{eqnarray}
(sw^{p})(\theta_1,&\cdots&,\theta_{p+1})(x)=
\sum_{l=1}^{p+1} (-1)^{l+1} \theta_l \cdot
w^p(\theta_1,\cdots,\hat{\theta}_l,\cdots,\theta_{p+1})(x)\nonumber\\
&+& \sum_{l<n} (-1)^{l+n} w^p([\theta_l,\theta_n],\theta_1,
\cdots,\hat{\theta}_l,\cdots,\hat{\theta}_n,
\cdots,\theta_{p+1})(x),\label{s}
\end{eqnarray}
where a dot means the linear transformation of $R$ defined by an
element of ${\cal G}$. The coboundary operator $s$ can then be expressed
in terms of $\epsilon(\theta^*)$ and $i(\theta)$ as follows
\begin{equation}
s=\sum_{a=1}^N \int_M\theta_a \cdot \epsilon(\theta^{*a})-
\sum_{a<b}^N \int\int_M i([\theta_a,\theta_b])
\circ\epsilon(\theta^{*a})\circ\epsilon(\theta^{*b}),\label{os}
\end{equation}
where the integrations are defined over $M$.
Now we define a chain complex $C$ as the dual space of the cochain
complex $C^*$ using the duality (\ref{digg}) \cite{Gold,Spanier}, namely,
\[ <\;,\;>:C^p\times C_p \rightarrow R\]
by
\begin{equation}
(w^p,\;v_p)\mapsto <\;w^p,\;v_p>=\int_{v_p}\;w^p, \;\;\;\;\;
w^p\in C^p\;\mbox{and}\;v_p\in C_p,\label{chain}
\end{equation}
where we set $<\;w^p,\;v_q>=0$ if $p\neq q$, and $C^*$ and $C$ are
augmented complexes, that is, $C^p=C_p=0$ for $p<0$ \cite{Cartan,Spanier}.
The duality (\ref{chain}) allows us to define an operator
$s_*:C_p({\cal G}^*;R)\rightarrow C_{p-1}({\cal G}^*;R)$ dual to $s$:
\begin{equation}
<\;sw^{p-1},\;v_p>=<\;w^{p-1},\;s_* v_p>, \;\;\;\;\;
w^{p-1}\in C^{p-1}\;\mbox{and}\;v_p\in C_p.\label{duals}
\end{equation}
Obviously, Eq. (\ref{duals}) shows that $s^2=0$ implies $s_*^2=0$.
Thus we will identify $s_*$ with the boundary operator acting on
the chains $\{v_p\}$. Of course, the above procedure defining the chain
complex is completely analogous
to the ordinary homology theory \cite{Cartan,Gold,Spanier}.
Let us introduce the Hodge star duality operation whose action on the
cochain space is defined as follows
\begin{equation}
\ast:C^p \rightarrow C^{N-p}
\end{equation}
by
\begin{equation}
(*w^p)(\theta_{a_{p+1}},\cdots,\theta_{a_N})=
\sum \frac{1}{p!} w^p(\theta_{b_1},\cdots,\theta_{b_p})\;
\varepsilon_{\;\;\;\;\;\;\;\;a_{p+1}\cdots a_N}^{b_1 \cdots b_p}.\label{hodd}
\end{equation}
As in the de Rham cohomology, we want to define the adjoint
operator $s^{\dagger}$ of $s$ \cite{Gold,Eguchi}
under the new nondegenerate inner product
defined by
\begin{equation}
(w_1,\;w_2)=\int_{u_N}\;w_1 \wedge *w_2 \label{adjoint}
\end{equation}
with the $N$-chain $u_N$ satisfying $s_* u_N=0$. Then
\begin{equation}
(sw_1,\;w_2)=(w_1,\;s^{\dagger}w_2),\label{daggers}
\end{equation}
and $s^{\dagger}:C^p \rightarrow C^{p-1}$ is given by
\begin{equation}
s^{\dagger}=(-1)^{Np+N+1} *\circ s \circ *.\label{dagger}
\end{equation}
For convenience, we have taken the Cartan-Killing metric $g_{ab}$
of the semi-simple Lie subalgebra as positive definite:
\[g_{ab}=-\frac{1}{2}c^{l}_{ad}c^{d}_{bl}=\delta_{ab},\]
where $[\theta_a(x), \theta_b(y)]=c_{ab}^l\theta_l(x) \delta(x-y)$.
The operator $s^{\dagger}$ is nilpotent since $s^{\dagger 2}
\propto * s^2 *=0$. Using the definitions in Eqs. (\ref{dagger}), (\ref{s}),
and (\ref{hodd}), one can determine the action of $s^{\dagger}$ on
a $p$-cochain $w^p$:
\begin{eqnarray}
(s^{\dagger}w^{p})(\theta_1,&\cdots&,\theta_{p-1})(x)=-\sum_{l=p}^{N}
\theta_l \cdot w^p(\theta_l,\theta_1,\cdots,\theta_{p-1})(x)\nonumber\\
&-& \sum_{l=1}^{p-1}\sum_{a<b}\;(-1)^{l+1} c_{ab}^l
w^p(\theta_a,\theta_b,\theta_1,\cdots,
\hat{\theta}_l,\cdots,\theta_{p-1})(x).\label{sdagger}
\end{eqnarray}
Similarly, the adjoint operator $s^{\dagger}$ can be expressed in
terms of $\epsilon(\theta^*)$ and $i(\theta)$ as follows
\begin{equation}
s^{\dagger}=-\sum_{a=1}^N \int_M \theta_a \cdot i(\theta_{a})+
\sum_{a<b}^N \int_M c_{ab}^{\;\;\;c}\; \epsilon(\theta^{*c})
\circ i(\theta_{a})\circ i(\theta_{b}).
\end{equation}
Let us define an operator $\delta\equiv s\circ s^{\dagger}+
s^{\dagger}\circ s$ corresponding to the Laplacian,
which clearly takes $p$-cochains back into $p$-cochains as
\[ \delta:C^p \rightarrow C^{p}.\]
The straightforward calculation using the Eq. (\ref{ie})
and the Jacobi identity for $c_{ab}^{c}$ leads to the following
expression for the Laplacian $\delta$
\begin{equation}
\delta=-\int_M (\sum \theta_a \cdot \theta_{a}+
\sum c_{ab}^{c} \theta_a \cdot \epsilon(\theta^{*c})
\circ i(\theta_{b})+\frac{1}{2}\sum c_{ab}^{c}c_{ae}^{d}
\epsilon(\theta^{*c}) \circ i(\theta_{b})\circ
\epsilon(\theta^{*d})\circ i(\theta_{e})).\label{dlap}
\end{equation}
Considering the formal resemblance to the de Rham cohomology, it will
be sufficient to state, without proof, only the important results
which are necessary for later applications. For mathematical details of
homology and cohomology theory, see Refs. \cite{Cartan,Gold,Spanier}.
We define the $p$-th cohomology group of the Lie algebra ${\cal G}$
by the equivalence class of the $p$-cochains $C^p({\cal G};R)$,
that is, the kernel of $s$ modulo its image:
\begin{equation}
H^p ({\cal G};R)\equiv Ker^p s/Im^p s, \;\;\; p=0,\cdots,N.\label{cohom}
\end{equation}
Then the nondegenerate inner product (\ref{chain}) provides a natural
pairing between $p$-th cohomology group $H^p ({\cal G};R)$ and
$p$-th homology group $H_p ({\cal G}^*;R)$
\[ H^p ({\cal G};R)\otimes H_p ({\cal G}^*;R) \rightarrow R, \]
so that {\it the inner product (\ref{chain}) establishes the duality of
the vector spaces $H^p ({\cal G};R)$ and $H_p ({\cal G}^*;R)$},
the de Rham theorem \cite{Spanier}.
The following result is the direct consequence of
the positive definiteness of the inner product (\ref{adjoint}):\\
{\it A $p$-cochain $w^p$ is harmonic, $w^p\in Harm({\cal G};R)$, i.e.,
$\delta w^p=0$, if and only if it is closed, i.e., $s w^p=0$,
and co-closed, i.e., $s^{\dagger}w^p=0$.}
The adjointness of the operators $s$ and $s^{\dagger}$ under the
nondegenerate inner product (\ref{adjoint}) and their nilpotency lead to
the so-called Hodge decomposition theorem in the cochain
space \cite{Gold,Eguchi}:\\
{\it Any $p$-cochain $w^p$ can be uniquely decomposed as a sum of exact,
co-exact, and harmonic forms}, i.e.,
\begin{equation}
w^p=w^p_H\oplus sw^{p-1}\oplus s^{\dagger}w^{p+1},
\;\;\;p=0,\cdots,N,\label{hodc}
\end{equation}
where $w^p_H$ is a harmonic $p$-cochain.
The Hodge decomposition theorem (\ref{hodc}) implies {\it the isomorphism
between the $p$-th cohomology space $H^p ({\cal G};R)$ and the $p$-th
harmonic space $Harm^p ({\cal G};R)$}.
The Hodge star operator $*$ maps $C^p \rightarrow C^{N-p}$ and commutes
with the Laplacian $\delta$. Thus $*$ induces an isomorphism
\[ Harm^p ({\cal G};R) \approx Harm^{N-p} ({\cal G};R).\]
Consequently, {\it $H^{N-p} ({\cal G};R)$ and $H^p ({\cal G};R)$ are
isomorphic as vector spaces},
\begin{equation}
H^{N-p} ({\cal G};R) \approx H^p ({\cal G};R). \label{poind}
\end{equation}
This is just the Poincar\'{e} duality \cite{Spanier}.
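As a simple illustration, take ${\cal G}=su(2)$, for which $N=3$:
the only nontrivial cohomology spaces are $H^0 ({\cal G};R)$ and
$H^3 ({\cal G};R)$ (see Sec. III), and these are identified with each
other by the duality (\ref{poind}),
\[ H^0 ({\cal G};R) \approx H^{3} ({\cal G};R). \]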
If the Lie algebra ${\cal G}$ is a direct sum of semi-simple
Lie algebras and/or Abelian $u(1)$ algebras, that is,
${\cal G}={\cal G}_1\oplus {\cal G}_{2}$, so that each of these algebras
${\cal G}_{\alpha}$ is an ideal of ${\cal G}$, then the total $p$-cochain
space $C^p$ will be a sum of tensor products of cochains
corresponding to each Lie algebra ${\cal G}_{\alpha}$
\[ C^p=\oplus_{q+r=p}\;C_1^q\otimes C_2^r \]
and $w^p\in C^p$ will be given by
\[ w^p=\sum_{q=0}^p w_1^q \times w_2^{p-q},\;\;\; w_1^q\in C_1^q,
\;\;w_2^{p-q}\in C_2^{p-q}.\]
The map $w^p\in C^p$ on ${\cal G}$ is defined by
\[ w^p(\theta_1,\cdots,\theta_{q};\xi_1,\cdots,\xi_{p-q})=
w_1^q(\theta_1,\cdots,\theta_{q})
w_2^{p-q}(\xi_1,\cdots,\xi_{p-q}),\;\;\;
\theta_i\in {\cal G}_1,\;\;\xi_i\in {\cal G}_2.\]
Then {\it $H^p ({\cal G};R)$ can be decomposed into a sum of
a product of each $H^q ({\cal G}_{1};R)$ and $H^{p-q}({\cal G}_{2};R)$}:
\begin{equation}
H^p ({\cal G};R)= \oplus_{q=0}^p [H^q ({\cal G}_1;R)\otimes
H^{p-q}({\cal G}_2;R)].\label{kunn}
\end{equation}
This is known as the K\"{u}nneth formula
for a product space (in our case, a product group
$G_1 \times G_2$) \cite{Spanier,Eguchi}.
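For example, for ${\cal G}=u(1)\oplus su(2)$ the formula (\ref{kunn}) gives
\[ H^1 ({\cal G};R)=[H^1 (u(1);R)\otimes H^0 (su(2);R)]\oplus
[H^0 (u(1);R)\otimes H^1 (su(2);R)], \]
and since $H^1$ of the semi-simple factor $su(2)$ vanishes, the whole
first cohomology comes from the Abelian $u(1)$ sector; this will be
relevant for the gauge groups with $U(1)$ factors discussed in Sec. III.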
\section{Group structure of gauge theories}
\label{sec:group}
In this section we will show that the group invariant structure
of a constrained system can be described by the Lie algebra cohomology
induced by the BRST generator $Q$ in the algebra of invariant
polynomials on ${\cal G}$ with the generalized Poisson bracket \cite{Henn92},
in complete correspondence with the results of Sec. II.
This provides an algebraic and topological characterization
of group invariant structures in gauge theory.
Consider any physical system with gauge transformation group
$G_{\infty}$ and its compact Lie algebra ${\cal G}$ with $N$ generators
$G_a, a=1,\cdots,N$, satisfying the following Lie algebra:
\begin{equation}
[G_a(x), G_b(y)]=g f^{c}_{ab} G_c(x)\delta(x-y),
\;\;a,b,c=1,\cdots,N. \label{liea1}
\end{equation}
Corresponding to each generator, we introduce a ghost $\eta^a(x)$ and an
antighost $\rho_a(x)$ which satisfy the following
Poisson bracket relations
\begin{equation}
\{\eta^a, \eta^b\} = \{\rho_a, \rho_b\} = 0,\;\;
\{\eta^a(x), \rho_b(y)\} = \delta^{a}_{b}\,\delta(x-y).\label{gha1}
\end{equation}
Then we can construct the nilpotent BRST generator \cite{Henn92}
\begin{equation}
Q=\int_M G_a\eta^a -\frac{1}{2}g\int_M f^{c}_{ab}
\rho_c\eta^a\eta^b,\label{brq1}
\end{equation}
and its nilpotency
\begin{equation}
Q^2 = 0\label{nilq1}
\end{equation}
follows from the Lie algebra (\ref{liea1}) together with the Jacobi identity.
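Explicitly, in $Q^2=\frac{1}{2}\{Q,Q\}$ the contribution
$\frac{1}{2}g f^{c}_{ab}G_c\eta^a\eta^b$ coming from the bracket of
$\int_M G_a\eta^a$ with itself is canceled, by virtue of the Lie algebra
(\ref{liea1}), against the cross terms with the ghost piece of $Q$,
while the remaining purely ghost contribution is proportional to
\[ f^{e}_{m[a}f^{m}_{bc]}\,\rho_e\eta^a\eta^b\eta^c \]
and vanishes by the Jacobi identity for the structure constants.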
If one identifies the operators $\epsilon(\theta^{*a})(x)$ and
$i(\theta_a)(x)$ in Sec. II with the ghost $\eta^a(x)$ and
the antighost $\rho_a(x)$, respectively \cite{Choq89},
the expression (\ref{os}) for the coboundary operator $s$ exactly
agrees with the BRST generator $Q$, with structure constants $c_{ab}^l=
g f^{l}_{ab}$ and with $G_a$ any representation of $\theta_a$.
Rewrite the BRST generator as
\begin{equation}
Q=\int_M (J_a\eta^a -\frac{1}{2}\tau_a\eta^a), \label{brq2}
\end{equation}
where $J_a=G_a+\tau_a$ and $\tau_a=g\rho_m f^{m}_{al}\eta^l$; the $\tau_a$
satisfy the same algebra as the $G_a$ and commute with them.
Then the BRST $s$-transformation of a field ${\cal F}(x)$ is
defined by
\begin{equation}
s{\cal F}(x)=[Q,{\cal F}(x)\},\label{stran}
\end{equation}
where the symbol $[\;,\;\}$ is the generalized Poisson bracket.
Thus the $s$-transformations with respect to the ghost fields $\eta$
and $\rho$ by $Q$ are
\begin{equation}
s\eta^a=-\frac{1}{2}g f^{a}_{bc}\eta^b\eta^c,\;\;\;
s\rho_a=J_a. \label{cme}
\end{equation}
Following Ref. \cite{Bono83}, we identify the ghost field
$\eta(x)$ with a left-invariant Cartan-Maurer form on the group $G_{\infty}$.
With this interpretation of the ghost field $\eta(x)$, the first equation
in Eq. (\ref{cme}) is just the Cartan-Maurer equation with respect to
``exterior derivative'' $s$ for forms $\eta(x)$ on $G_{\infty}$.
It is also obvious that the adjoint operator $s^{\dagger}$ of $s$ introduced
in Sec. II can be constructed in terms of $\eta$ and $\rho$. We define
the corresponding generator by $Q^{\dagger}$ and it is given by
\begin{eqnarray}
Q^{\dagger}&=&-\int_M(G^a\rho_a -\frac{1}{2}g f^{ab}_{\;\;\;c}
\eta^c\rho_a\rho_b),\nonumber\\
&=&-\int_M(J^a\rho_a -\frac{1}{2}\tau^a\rho_a). \label{cbrq1}
\end{eqnarray}
One can easily check that this generator is also nilpotent, i.e.
$Q^{\dagger 2}= 0$, as stated in Sec. II.
The generator $Q^{\dagger}$ first appeared in Ref. \cite{Gerv86}
to find the gauge invariant interactions in string theory and then
in Ref. \cite{Holt90} to construct the BRST complex and
the cohomology of a compact Lie algebra. The Lie algebra
cohomology in this paper is quite different from
the BRST cohomology constructed
in the paper \cite{Yang2}, so we use the nomenclature
``Lie algebra cohomology'' in order to avoid confusion with
the BRST cohomology, since these two cohomologies have often been confused
in the literature.
In fact, the cohomology of Ref. \cite{Holt90}
corresponds to the Lie algebra cohomology in this paper
as long as the spacetime dependences of the Lie group
$G_{\infty}$ and the Lie algebra ${\cal G}$ are fixed.
However, it is necessary to consider the infinite-dimensional
Lie group and Lie algebra in order that
the BRST generator may be viewed as the coboundary operator
for the Lie algebra cohomology \cite{Bono83}.
The $s^{\dagger}$-transformation with respect to a field
${\cal F}(x)$ is defined by
\begin{equation}
s^{\dagger}{\cal F}(x)=[Q^{\dagger},{\cal F}(x)\}.\label{s^*tran}
\end{equation}
Then the $s^{\dagger}$-transformations with respect to the ghost fields
$\eta$ and $\rho$ are
\begin{equation}
s^{\dagger}\eta^a=-\;J^a,\;\;\;
s^{\dagger}\rho_a=\frac{1}{2}g f_{a}^{\;\;bc}\rho_b\rho_c.\label{*cme}
\end{equation}
The above equations show that one can identify the antighost $\rho_a$ with
the Cartan-Maurer form with respect to the ``exterior derivative''
$s^{\dagger}$ as well.
Since $Q$ and $Q^{\dagger}$ are nilpotent, it follows that $Q$ and
$Q^{\dagger}$ are invariant under $G_{\infty}$, i.e.
\begin{equation}
[Q, J_a]=0,\;\;\; [Q^{\dagger}, J_a]=0.\label{brt2}
\end{equation}
One finds that $Q$ and $Q^{\dagger}$ satisfy the supersymmetry-like algebra
that closes into a Laplacian generator $\Delta$
\begin{equation}
\{Q,Q^{\dagger}\} = -\Delta,\;\; [\Delta, Q]=0,\;\; [\Delta,Q^{\dagger}]=0,
\label{bra}
\end{equation}
where the Laplacian $\Delta$ can be computed in
terms of the Casimir generators \cite{Gerv86}
\begin{equation}
\Delta=\frac{1}{2}\int_M(J^aJ_a+G^aG_a).\label{lap}
\end{equation}
The operator $\delta:C^p \rightarrow C^{p}$ in Sec. II corresponds
to this generator and has exactly the same expression as $\Delta$
when rewritten in terms of Casimir operators.
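Since the Laplacian (\ref{lap}) is a sum of squares of generators,
the positivity of the inner product gives, schematically,
\[ \langle w^p,\Delta w^p\rangle=\frac{1}{2}\int_M
\left(\|\,[J_a,w^p]\,\|^2+\|\,[G_a,w^p]\,\|^2\right), \]
so that a polynomial is harmonic if and only if it is annihilated by all
the $J_a$ and all the $G_a$ separately, or equivalently, since
$J_a=G_a+\tau_a$, by all the $G_a$ and all the $\tau_a$.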
Following the same scheme as in Refs. \cite{Viol85,Dubo92},
we construct the cochains on ${\cal G}$ spanned by the
polynomials $\omega_{(p)}=Tr\;\eta^p$,
where $\eta=\eta^a T_a$ and $T_a$ is a generator of ${\cal G}$.
That is, a $p$-dimensional cochain $C^p({\cal G};R)$ corresponding to
Eq. (\ref{p-cochain}) is spanned by elements
of the space of $w^p=\wedge^r\omega_{(p_r)} \cdot \phi \;(\sum p_r=p),$
where $\phi$ is an element of $R$, i.e.
${\cal G}$-module of symmetric polynomials
on ${\cal G}$ without (anti-)ghosts.
Then $\omega_{(p)}=0$ if $p$ is even, and $\omega_{(p)}$ is a
``closed'' $p$-form, i.e. a $p$-cocycle:
$s\omega_{(p)}=0$ by Eq. (\ref{cme}).
Notice, for semi-simple groups $G$, $\omega_{(1)}=0$ \cite{Eguchi}.
Let us reexpress the $p$-cochain $w^p$ as the following form:
\begin{equation}
w^{p}=\sum\frac{1}{p!}\eta^{a_1}\eta^{a_2}\cdots\eta^{a_p}\cdot
\phi^{(p)}_{a_1a_2\cdots a_p}.\label{cochain}
\end{equation}
Note that the results such as Hodge decomposition theorem,
Poincar\'{e} duality, and K\"{u}nneth formula in Sec. II
will be reproduced here in the same manner as well.
In Sec. II, we stated the isomorphism between the $p$-th cohomology space
$H^p ({\cal G};R)$ and the $p$-th harmonic polynomial space
$Harm^p ({\cal G};R)$. Therefore, the BRST invariant polynomial space
can be summarized as the {\it harmonic} polynomial space $\delta w^{p}=0$,
whose solutions are represented by
\begin{equation}
[G_a, w^p]=0,\label{gsing}
\end{equation}
and
\begin{equation}
[\tau_a, w^p]=[g\rho_m f^{m}_{al}\eta^l, w^p]=0.\label{ginvs1}
\end{equation}
The second condition reads, in components,
\begin{equation}
f^{m}_{a[a_1}\phi^{(p)}_{a_2\cdots a_p]m}=0,\label{ginvs2}
\end{equation}
where the square bracket denotes complete antisymmetrization over the
enclosed indices \cite{Holt90}.
The first condition (\ref{gsing}) imposes $G$-invariance
(the $G$-singlet condition) on the polynomial, and the second one imposes
very important constraints on the group invariant structures.
For $p=0$ and $p=N$, the condition (\ref{ginvs1}) is always satisfied
trivially as long as they are associated with
$G$-invariant polynomials,
which leads to the conclusion that the zeroth and the $N$-th
cohomology spaces require only the space of $G$-singlets.
For semi-simple groups $G$, there are no solutions
satisfying the condition (\ref{ginvs1}) for $p=1,\;2,\;4$ since there is
no cohomology basis $\wedge^r\omega_{(p_r)}$ to be closed
and for $p=N-1,\;N-2,\;N-4$ by Poincar\'{e} duality (\ref{poind}),
so that their cohomologies $H^p({\cal G};R)$ vanish.
Note that the gauge group $SU(2)$ is cohomologically trivial, so
that the group invariant structure in $SU(2)$ gauge theory is
similar to electrodynamics. In this respect, we would like to refer to
the interesting analysis of Ref. \cite{Prokh}, which arrives at the same
conclusion via a different approach.
If one $U(1)$ factor is present
(for example, $SU(2) \times U(1),\; U(2)$, etc.),
then $H^1 ({\cal G};R)$ is
non-trivial since $\omega_{(1)}$ is nonzero \cite{Eguchi,Viol85}.
For $G=SU(N),\;N\geq3$, there exist nontrivial cohomologies
$H^3 ({\cal G};R)$ and $H^5 ({\cal G};R)$ whenever
the symmetric polynomials $\phi^{(3)}$ and $\phi^{(5)}$ are proportional
to the structure constants as follows, respectively:
\begin{equation}
\phi^{(3)}_{abc}=f_{abc}\cdot\phi,\;\;\;\phi^{(5)}_{abcde}=
d_{amn}f_{mbc}f_{nde}\cdot\phi,\label{harm3}
\end{equation}
where $d_{abc}=\frac{1}{2}Tr T_a \{T_b,T_c\}$
and $\phi$ is any $G$-singlet.
These follow directly from the expansion
(\ref{cochain}) \cite{Bono83,Band86} or the Eq. (\ref{ginvs2})
with the Jacobi identity.
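For the third cohomology this can be checked immediately: inserting
$\phi^{(3)}_{abc}=f_{abc}\cdot\phi$ into the condition (\ref{ginvs2}) gives
\[ f^{m}_{a[a_1}\phi^{(3)}_{a_2a_3]m}=
f^{m}_{a[a_1}f_{a_2a_3]m}\cdot\phi=0, \]
which, using the total antisymmetry of $f_{abc}$ for a compact group, is
precisely the Jacobi identity for the structure constants.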
It is worth mentioning that, for $G=SU(3)$, the nontrivial cohomologies
$H^3 ({\cal G};R)$ and $H^5 ({\cal G};R)$ are related to each other by
Poincar\'{e} duality (\ref{poind}).
The solution of the descent equations corresponding to the Wess-Zumino
consistency conditions in gauge theories \cite{Treiman} shows that
the polynomials $\omega_{(3)}$
and $\omega_{(5)}$ corresponding to the third and the fifth cohomologies
(\ref{harm3}) respectively generate the two dimensional and the four
dimensional gauge anomaly \cite{Bono83,Viol85}
(see also recent analysis \cite{Sore93} by Sorella,
where the cohomology basis $\omega_{(3)}$ and $\omega_{(5)}$
have a fundamental importance on solving the descent equations).
Thus, from the results of this literature, we can
conclude that the two- and four-dimensional $SU(3)$ anomalies are related
to each other by Poincar\'{e} duality; in other words,
the gauge anomaly in two-dimensional QCD implies the anomaly in
four-dimensional QCD as long as the $d$-cohomology
is trivial \cite{Bono83,Viol85,Dubo92}.
This observation is also applied to the problem yielding the general
$G$-invariant effective action \cite{Wein94}
with the symmetry group $G$ spontaneously broken to the
subgroup $H$ since the $G$-invariant effective actions for homogeneous
spaces $G/H$ can be understood as the Lie algebra cohomology problem
of the manifold $G/H$. For example, in the case for $SU(3) \times SU(3)$
spontaneously broken to the subgroup $SU(3)$, the two dimensional
correspondence of the Wess-Zumino-Witten term in four dimensional theory
is the Goldstone-Wilczek topological current \cite{Gold81}.
\section{Cohomology in QED and QCD}
\label{sec:qcd}
In this section, we want to see whether it is possible to find a
corresponding adjoint generator $Q^{\dagger}$ of
the nilpotent Noether charge $Q$ in relativistic theories and
what the role of the adjoint $Q^{\dagger}$ is in the Lagrangian
formulation. That is, we want to find out how to
embed the adjoint $Q^{\dagger}$ of $Q$ into the relativistic phase space.
We showed in Ref. \cite{Yang1} that a consistent
nilpotent Noether charge $Q^{\dagger}$ exists for Abelian gauge theories
and that the generator $Q^{\dagger}$ generates a new noncovariant symmetry
and imposes strong constraints on the state space.
In order to consider the consistent embedding of the BRST adjoint
generator $Q^{\dagger}$ into the relativistic phase space,
it is necessary to introduce
the nonminimal sector of BRST generator \cite{Henn92,Bata75}.
First, consider the BRST (and anti-BRST) invariant effective QED Lagrangian.
(Our BRST treatments are parallel with those of Baulieu's paper
\cite{Baul85}.)
\begin{eqnarray}
{\cal L}_{eff} = &-&\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+ {\bar\psi}
(i\gamma^{\mu}D_{\mu}-m)\psi-\frac{1}{2}
{\bar s}s(A_{\mu}^2+\alpha{\bar c}c) \nonumber\\
= &-&\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+ {\bar\psi}
(i\gamma^{\mu}D_{\mu}-m)\psi + A_{\mu}\partial^{\mu}b+\frac{\alpha}{2}b^2
-\partial_{\mu}{\bar c}\partial^{\mu}c,\label{qedl}
\end{eqnarray}
where $D_{\mu}= \partial_{\mu}+ieA_{\mu}$ is the covariant derivative with
the metric $g_{\mu\nu}={\rm diag}(1,-1,-1,-1).$
The explicit BRST transformations are
\begin{eqnarray}
\begin{array}{ll}
sA_{\mu}=\partial_{\mu}c,\;\; & sc=0,\\
s{\bar c}=b,\;\; & sb=0, \\ \label{qedt}
s\psi=-iec\psi.
\end{array}
\end{eqnarray}
We introduced an auxiliary field $b$ to achieve off-shell nilpotency of the
BRST (and the anti-BRST) transformation.
Then the nilpotent Noether charge generated by the BRST
symmetry reads
\begin{equation}
Q = \int d^3x \{(\partial_i F^{io}-J_0)c+b\dot{c}\}, \label{qedq}
\end{equation}
where $J_0$ is a charge density defined by
\begin{equation}
J_0=e {\bar \psi}\gamma_0 \psi.\label{qedch}
\end{equation}
The constraint functions $G^i$ consist of two commuting groups,
$G^i=(\Phi, b),\;i=1,2$, where $\Phi=\partial_i F^{io}-J_0$
is the Gauss law constraint in
the theory and $b$ is the momentum canonically conjugate to the Lagrange
multiplier $A_0$, so that it generates a gauge
transformation $\delta A_0$. Thus, adding the nonminimal sector to
the BRST generator, the Lie algebra ${\cal G}$ is
composed of a direct sum of two Abelian ideals ${\cal G}_1$ and
${\cal G}_2$ corresponding to the $u(1)$ generators $\Phi$ and $b$,
respectively. In a similar fashion, let the ghost fields split as follows:
\begin{equation}
\eta^i=(c,\;\; \pi_{{\bar c}}=\dot {c}), \;\;
\rho^i=(\pi_c=-\dot{{\bar c}},\;\; {\bar c}).\label{ghsp1}
\end{equation}
Then the BRST charge $Q$ can be written as
\begin{equation}
Q=G^i\alpha_{ij}\eta^j,\label{qedbrst}
\end{equation}
where $\alpha_{ij}=\left(\begin{array}{cc}1 & 0\\0 & 1\end{array}\right)$.
Since the constraints in the relativistic phase space for Abelian gauge
theories impartially generate $u(1)$ Lie algebras and the K\"{u}nneth
formula (\ref{kunn}) shows $H({\cal G};R)$ is the product of each
$H({\cal G}_{1};R)$ and $H({\cal G}_{2};R)$, we expect it is trivial to
embed the adjoint $Q^{\dagger}$ of the BRST generator $Q$
corresponding to the total Lie algebra ${\cal G}$
including the nonminimal sector into the relativistic phase space.
According to the Eq. (\ref{cbrq1}),
one can guess the form of the generator
$Q^{\dagger}$ must be the following: $Q^{\dagger}=G^i\beta_{ij}\rho^j$.
Note that we have a degree of freedom, to the extent of a
multiplicative factor, in defining the BRST generator $Q$
or its adjoint generator $Q^{\dagger}$
for a given Lie algebra ${\cal G}$, as long as it does not affect
the nilpotency of $Q$ or $Q^{\dagger}$. Using this degree of freedom
either in the Lie algebra ${\cal G}_{2}$ or in ${\cal G}_{1}$ sector
in defining the adjoint generator $Q^{\dagger}$, we take
the following choices for the matrix $\beta_{ij}$ which will allow the
well-defined canonical mass dimension for $Q^{\dagger}$:
$\beta_{ij}=\left(\begin{array}{cc}1 & 0\\0 &
-\nabla^2\end{array}\right)$
or $\left(\begin{array}{cc}\nabla^{-2} & 0\\0 & -1\end{array}\right)$.
These choices make the BRST adjoint $Q^{\dagger}$ the
symmetry generator of the Lagrangian (\ref{qedl}) and
so complete the consistent embedding of $Q^{\dagger}$
into the relativistic phase space.
The former type corresponds to the generator in Ref. \cite{Yang1} and the
latter to the generator in Ref. \cite{Lave93}.
The explicit form of the BRST adjoint generator $Q^{\dagger}$ for the
former type is
\begin{equation}
Q^{\dagger} = \int d^{3}x \{(\partial_i F^{io}-J_0)\dot{{\bar c}}
+b\nabla^2 {\bar c}\}.\label{qedcbrch}
\end{equation}
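One can verify the nilpotency of this generator directly: the two
constraint functions $\Phi$ and $b$ have vanishing brackets with each
other and with themselves, and the antighost combinations $\dot{{\bar c}}$
and ${\bar c}$ likewise have vanishing brackets among themselves, so that
every term in the bracket of $Q^{\dagger}$ with itself vanishes,
\[ Q^{\dagger 2}=\frac{1}{2}\{Q^{\dagger},Q^{\dagger}\}=0. \]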
Then the explicit transformations defined by (\ref{s^*tran}) are
\begin{eqnarray}
\begin{array}{ll}
s^{\dagger}A_{0}=-\nabla^2 {\bar c}, \;\;
& s^{\dagger}A_i=-\partial_0\partial_{i}{\bar c},\\
s^{\dagger} c=(\partial_i F^{io}-J_0),\;\;
& s^{\dagger}{\bar c}=0, \\ \label{qedct}
s^{\dagger}\psi=ie\dot{\bar c}\psi, \;\;
& s^{\dagger}b=0.
\end{array}
\end{eqnarray}
In Ref. \cite{Yang1}, it was shown that this noncovariant transformation
is a symmetry of the Lagrangian (\ref{qedl}) and that the same kind of
symmetry also exists in the Landau-Ginzburg and the
Chern-Simons theories.
As discussed in Ref. \cite{Yang1}, the symmetry generated by
$Q^{\dagger}$ is realized in a quite different way compared to the
BRST symmetry: while the gauge-fixing term in the effective QED
Lagrangian (\ref{qedl}), i.e. $A_{\mu}\partial^{\mu}b+
\frac{\alpha}{2}b^2 \rightarrow -\frac{1}{2\alpha}(\partial_{\mu}A^{\mu})^2$,
remains invariant under the transformation (\ref{qedct}),
the variation from the ghost term is canceled up to the total
derivative by the variation
from the original gauge-invariant classical Lagrangian which
remains invariant under the BRST transformation (\ref{qedt}).
These differences in the way of realizing the symmetries imply
that the BRST adjoint
symmetry can give the different superselection sector
from the BRST symmetry \cite{Lave95} (as it is also seen from
the Hodge decomposition theorem (\ref{hodc})
which is a canonical decomposition into a direct sum of linearly
independent subspaces) unlike the recent comment \cite{Rive95}.
If we choose, instead, the matrix $\beta_{ij}=\left(\begin{array}{cc}
\frac{1}{\nabla^2} & 0\\ 0 & -1\end{array}\right)$ in Eq. (\ref{qedcbrch}),
we will obtain the nonlocal symmetry of Ref. \cite{Lave93}. Of course,
in this case we must impose suitable boundary conditions on the fields.
But there is no reason to introduce the nonlocality, and it seems
unnatural since the generator $Q^{\dagger}$ must be the adjoint of
the generator $Q$ of the {\it local} gauge transformation.
The adjoint generator in the configuration space can be understood as the
generator of transformations consistent with the gauge fixing condition
\cite{Lave93,Yang1}. Thus, in the configuration space,
there may not exist a global expression for the adjoint generator
$Q^{\dagger}$ of non-Abelian gauge theory compatible with the gauge
fixing condition, on account of topological obstructions
such as the Gribov ambiguity \cite{Grib78}.
But this does not imply that a local expression
for $Q^{\dagger}$ cannot exist, because the difficulty posed by
the Gribov ambiguity can be avoided \cite{Singer}
by finding a local cross section on a finite local
covering and using the Faddeev-Popov trick locally.
Nevertheless, it seems a nontrivial problem to find the solution
for the consistent embedding into the relativistic phase space
for a non-Abelian gauge theory such as QCD.
This problem remains for future work.
We now want to focus our attention on the
construction of the $su(3)$ Lie algebra cohomology in QCD.
Consider the BRST (and anti-BRST) invariant effective QCD Lagrangian:
\begin{eqnarray}
{\cal L}_{eff} = &-&\frac{1}{4}F^{a}_{\mu\nu}F^{a\mu\nu}+ {\bar\Psi}
(i\gamma^{\mu}D_{\mu}-M)\Psi-\frac{1}{2}
{\bar s}s(A^a_{\mu}A^{a\mu}+\alpha{\bar C}^aC^a) \nonumber\\
= &-&\frac{1}{4}F^2_{\mu\nu}+ {\bar\Psi}
(i\gamma^{\mu}D_{\mu}-M)\Psi
+ A_{\mu}\partial^{\mu}B+\frac{\alpha}{2}B^2+
\frac{\alpha}{2}gB[C,{\bar C}]\nonumber\\
&-&\partial_{\mu}{\bar C}D^{\mu}C+\frac{\alpha}{2}g^2[{\bar C},C]^2,
\label{qcdl}
\end{eqnarray}
where the quark fields $\Psi$ are taken to transform according to the
fundamental $SU(3)$ representation, while
the Yang-Mills vector potential $A_{\mu}$, a pair of anticommuting
ghosts $C$, ${\bar C}$, and
the auxiliary field $B$ take values in the adjoint
representation of the $SU(3)$ Lie group. The QCD Lagrangian (\ref{qcdl})
is invariant with respect to the following
BRST transformations \cite{Baul85}:
\begin{eqnarray}
\begin{array}{ll}
sA_{\mu}=D_{\mu}C,\;\;\; & sC=-\frac{g}{2} [C,C],\\
s{\bar C}=B, \;\;\; & sB=0,\\ \label{qcdt}
s\Psi=-gC\Psi.
\end{array}
\end{eqnarray}
$D_{\mu}$ defines the covariant derivatives of $SU(3)$ Yang-Mills
symmetry group.
The corresponding conserved nilpotent BRST generator is
given by
\begin{equation}
Q = \int d^3x \{(D_i F^{io}-J_0+g[\dot{\bar C},C])^aC^a
+B^a(D_0C)^a -\frac{1}{2}g [\dot{\bar C},C]^aC^a\}, \label{qcdq1}
\end{equation}
where $J_0^a$ is a matter color charge density defined by
\begin{equation}
J_0^a=-ig {\bar \Psi}\gamma_0 T^a \Psi.\label{qcdcch}
\end{equation}
The constraint functions $G^A$ are composed of two commuting groups,
$G^A=(\Phi^a, B^a)$, where $\Phi^a=(D_i F^{io}-J_0)^a$ is the original
Gauss-law constraints
in the theory generating $su(3)$ Lie algebra:
\begin{equation}
[\Phi_{a}, \Phi_{b}]=g f^{c}_{ab} \Phi_{c},\label{qcdlie}
\end{equation}
and $B^a$s are the momenta
canonically conjugate to the Lagrange multipliers $A^a_0$ and generate
$u(1)$ Lie algebras. In a similar fashion to QED, one can split the ghosts
as follows:
\begin{equation}
\eta^A=(C^a,\;\; \Pi^a_{{\bar C}}=(D_0 C)^a), \;\;
\rho^A=(\Pi^a_C=-\dot{\bar C^a}, \;\;{\bar C}^a).
\label{qcdghsp}
\end{equation}
Note that $s\Pi^a_{{\bar C}}=0$, so that we can identify the ghost
$\Pi^a_{{\bar C}}$ with the Cartan-Maurer form on $U(1)$ group.
Of course, the BRST generator $Q$ in Eq. (\ref{qcdq1}) has exactly
the same form as Eq. (\ref{brq1}).
Let us rewrite the BRST generator $Q$ as the form of
the Eq. (\ref{brq2})
\begin{equation}
Q = \int d^3x \{J_aC^a+B_a\Pi^a_{{\bar C}}
-\frac{1}{2}\tau_a C^a\},\label{qcdq2}
\end{equation}
where the generator $J^a$ and the generator of the ghost
representation $\tau_a$ \cite{Gerv86} are given by
\begin{equation}
J^{a}=(D_i F^{io}-J_0+g[\dot{\bar C},C])^a=\Phi^a+\tau^a,\;\;\;
\tau^a=g[\dot{\bar C},C]^a.\label{qcdj}
\end{equation}
The generators $J_a$ and $\tau_a$ satisfy the same $su(3)$ algebra:
\begin{equation}
[J_{a}, J_{b}]=g f^{c}_{ab} J_{c},\;\;\;
[\tau_{a}, \tau_{b}]=g f^{c}_{ab} \tau_{c}. \label{qcdtau}
\end{equation}
Since the two groups of the constraint functions $G^A=(\Phi^a, B^a)$ commute
with each other, the total Lie algebra ${\cal G}$
including the nonminimal sectors $B^a$ is composed of the $su(3)$
non-Abelian ideal and the eight $u(1)$ Abelian ideals:
\begin{equation}
{\cal G}=su(3)\oplus\bigl(\oplus^{8}_{\alpha=1} u(1)_{\alpha}\bigr). \label{qcdg}
\end{equation}
In order to construct only the cohomology of
the color $su(3)$ Lie algebra for the reason explained above,
we drop the Abelian sectors from the BRST generator $Q$ through the
direct restriction on the cochain space (\ref{cochain}), in other words,
considering only $su(3)$ sub-cochain complex.
The BRST adjoint $Q^{\dagger}$ defined on the cochain
$C^*(su(3);R)$ is equal to
\begin{equation}
Q^{\dagger}=-\int d^3x \{J^a\Pi^a_C -\frac{1}{2}\tau^a \Pi^a_C \}.
\label{qcdcq}
\end{equation}
Then the Laplacian $\Delta$ of the $su(3)$ subalgebra sector
can be represented in terms of the generators $J_a$
and the original constraints $\Phi_a$
\begin{equation}
\Delta=\frac{1}{2} \int d^3x \{J^aJ_a+ \Phi^a\Phi_a\}, \label{qcdlap}
\end{equation}
which is equal to the expression given by Eq. (\ref{lap})
for $su(3)$ cohomology.
Thus the harmonic polynomials of the $su(3)$ algebra sector
must satisfy the following conditions,
\begin{equation}
[\Phi^a, w^p]= [(D_i F^{io}-J_0)^a, w^p]=0,\;\;
a=1,\cdots,8,\label{qcdgsing}
\end{equation}
and
\begin{equation}
[\tau^a, w^p]=[gf^a_{bc}\dot{\bar C^b}C^c, w^p]=0,\;\;
a=1,\cdots,8.\label{qcdginvs}
\end{equation}
From the arguments in Sec. III, we see that the solutions of
Eqs. (\ref{qcdgsing}) and (\ref{qcdginvs}) exist
trivially for $p$=0 and $p$=8 as long as they are given by
the gauge invariant polynomials because they are singlets
under the adjoint representation of the $su(3)$ Lie algebra.
But the cohomologies $H^p(su(3);R)$ for $p=1,\;2,\;4,\;6$, and $7$ vanish.
For $p$=3 and 5, there always exist non-trivial cohomologies
$H^3(su(3);R)$ and $H^5(su(3);R)$ whose structures
are given by Eq. (\ref{harm3}) and
they are related with each other
by the Poincar\'{e} duality (\ref{poind}).
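These results can be summarized compactly: the nonvanishing cohomologies
of $su(3)$ occur at $p=0,\,3,\,5,\,8$ and are generated, over the space of
$G$-singlets, by $1$, $\omega_{(3)}$, $\omega_{(5)}$, and the product
$\omega_{(3)}\omega_{(5)}$, respectively, so that the duality pairings are
\[ H^0(su(3);R)\approx H^8(su(3);R),\;\;\;
H^3(su(3);R)\approx H^5(su(3);R). \]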
Since the Lie algebra cohomology probes the nontrivial properties
of group invariant structures, the nonvanishing Lie algebra
cohomologies $H^p(su(3);R)$ can be related to the
gauge invariants in $SU(3)$ gauge theory.
It remains to investigate the deep relation between the gauge
invariant configuration of gauge and matter fields in the
spacetime and the Lie algebra cohomology.
\section{Discussion}
\label{sec:disc}
We have constructed the Lie algebra cohomology of the group of gauge
transformation and obtained the Hodge decomposition theorem
and the Poincar\'{e} duality.
As long as a Lie algebra has a nondegenerate Cartan-Killing
metric so that the underlying manifold is orientable,
we can always define a unique (up to a multiplicative factor)
adjoint of the coboundary operator
under a nondegenerate inner product using a Hodge duality.
However, for Lie algebras such as the Virasoro algebra,
for which no Cartan-Killing metric exists, the adjoint cannot
be unique. Indeed, for the Virasoro algebra, the adjoint of the BRST
generator defined by Niemi \cite{Niem} is different from ours and
from that in Ref. \cite{Gerv86}.
We also considered the consistent extension of
the Lie algebra cohomology into the relativistic phase space
in order to obtain the Lagrangian formulation.
In order to do that, we extended the Lie algebra
by including the nonminimal sector of BRST generator.
The adjoint $Q^{\dagger}$ constructed through this
procedure generates the noncovariant local or nonlocal symmetry
in QED in Refs. \cite{Lave93,Yang1}.
We have pointed out that there is no reason to
necessarily introduce nonlocality,
and it seems unnatural since the generator $Q^{\dagger}$ must be
the adjoint of the BRST generator $Q$ generating local gauge
transformations.
But, in the configuration space, the adjoint $Q^{\dagger}$
compatible with the gauge fixing condition cannot exist globally
for a non-Abelian gauge theory due to topological obstructions
such as the Gribov ambiguity. As explained in Sec. IV,
the adjoint $Q^{\dagger}$ in the non-Abelian gauge theory
can exist locally (or perturbatively), so that it can generate
new symmetry at least locally (or perturbatively).
So it will be interesting to study the role
of the symmetry transformation generated by the generator $Q^{\dagger}$
and the Ward identity of this symmetry
in the local (or perturbative) sense.
Note that the Lie algebra cohomology constructed here
is quite different from the BRST cohomology
in Refs. \cite{Naka90,Yang2,Spie87}.
In the two cohomologies, the role of the ghost fields is
quite different, and the inner products used to obtain the Hodge theory
are defined by quite different schemes. It can be shown \cite{Yang3} that
there is no paired singlet in the BRST cohomology,
so that the higher cohomologies
with nonzero ghost number vanish as long as asymptotic completeness is
assumed. Therefore the ghost number characterizing the cohomology classes
in this paper has a different meaning from the ghost number of the state
space.
The distinction between the BRST cohomology and the Lie algebra
cohomology will be further clarified \cite{Yang3}.
In QCD, there are nontrivial cohomologies $H^p(su(3);R)$ for $p=0,\;8$
and $p=3,\;5$ and they are, respectively,
related to each other by the Poincar\'{e} duality. Since the Lie
algebra cohomology probes the nontrivial properties of group invariant
structures, the nonvanishing Lie algebra cohomologies $H^p(su(3);R)$
may be deeply related to the colorless combinations
of $SU(3)$ color charges which satisfy the $su(3)$ Lie algebra.
Then it will be very interesting to investigate the relation between
the color confinement and the $su(3)$ cohomology.
\section*{ACKNOWLEDGEMENTS}
This work was supported by the Korean Science and Engineering Foundation
(94-1400-04-01-3 and Center for Theoretical Physics) and
by the Korean Ministry of Education (BSRI-94-2441).
\section{Introduction}
In numerous low-mass X-ray binaries quasi-periodic oscillations (QPOs)
between 300 and 1200 Hz have been discovered (the kHz QPOs; see van
der Klis 1997 for a recent review on kHz QPOs). Here we present the
search for kHz QPOs in the atoll sources GX 9$+$1 and GX 9$+$9.
\section{Observations and analysis}
We observed GX 9$+$1 on 1996 Feb 29, Apr 21, May 29, and 1997 Feb 10
and Mar 9, and GX 9$+$9 on 1996 Aug 12, Oct 16, and Oct 30 with the
RXTE satellite. We obtained a total of 23.3 ksec (GX 9$+$1) and 15.2
ksec (GX\,9$+$9) of data. The X-ray hardness-intensity diagrams
(HIDs) were made using the {\it Standard 2} data. Due to gain changes
the GX 9$+$1 HID for the 1996 Feb 29 observation cannot be directly
compared with those of the other observations. The power density
spectra were made using the 250$\mu$s time resolution data. We
calculated rms amplitude upper limits (95\% confidence) on QPOs with a
FWHM of 150 Hz in the frequency range 100--1500 Hz.
\section{Results}
The HIDs for GX 9$+$1 and GX 9$+$9 are shown in Figure 1. According to
the HID (Fig. 1a) and the high-frequency noise in the power spectrum
GX 9$+$1 was on the lower banana during the 1996 Feb 29
observation. During the other observations GX 9$+$1 moved along the
banana branch (Fig. 1b). The power spectrum and the HID of GX\,9$+$9
suggest that this source was on the banana branch during the
observations. We find for GX 9$+$1 upper limits of 1.6\% (the 1996
Feb 29 observation; energy range 2--60 keV) and 1.3\% (all other
observations combined; energy range 2--18.2 keV), and for GX 9$+$9 an
upper limit of 1.8\% (all data; energy range 2--18.2 keV). We divided
the GX 9$+$1 banana in Figure 1b into different regions. For each
region we calculated the rms upper limit on kHz QPOs. We find rms
amplitude upper limits of 3.2\%, 1.3\%, 1.9\%, 2.7\%, and 3.4\%
(energy range 2--18.2 keV), for region 1, 2, 3, 4, and 5,
respectively.
\begin{figure}
\psfig{figure=HID.ps,width=12.5cm}
\caption{The HIDs of GX 9$+$1 ({\it a} and {\it b}) and GX 9$+$9 ({\it
c}). The data of 1996 Feb 29 of GX 9$+$1 ({\it a}) were taken with a
different PCA gain compared to the data of the other observations
({\it b} and {\it c}). The intensity is the count rate in the photon
energy range 2.0--15.9 keV ({\it a}) or 2.1--16.0 keV ({\it b} and
{\it c}); the hard colour is the count rate ratio between 9.7--15.9 keV
and 6.5--9.7 keV in {\it a}, and between 9.7--16.0 keV and 6.4--9.7
keV in {\it b} and {\it c}. All points are 16s averages. The count
rates are background subtracted, but not dead-time corrected. The
regions in {\it b} have been used to calculate the upper limits.}
\end{figure}
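The HID construction described in the caption (16\,s averages of background-subtracted count rates, with the hard colour taken as a band ratio) can be sketched as follows; the array names and band assignments are illustrative and not part of any RXTE analysis software.

```python
import numpy as np

def hid_points(t, rate_total, rate_soft, rate_hard, bin_s=16.0):
    """Average background-subtracted count rates into bin_s-second bins
    and return (intensity, hard colour) pairs for an HID.  The hard
    colour is the ratio of the hard-band to the soft-band count rate
    (e.g. 9.7--16.0 keV over 6.4--9.7 keV)."""
    edges = np.arange(t[0], t[-1] + bin_s, bin_s)
    idx = np.digitize(t, edges) - 1
    n_bins = idx.max() + 1
    mean = lambda r, i: r[idx == i].mean()
    intensity = np.array([mean(rate_total, i) for i in range(n_bins)])
    soft = np.array([mean(rate_soft, i) for i in range(n_bins)])
    hard = np.array([mean(rate_hard, i) for i in range(n_bins)])
    return intensity, hard / soft
```

Each returned pair corresponds to one 16\,s point in the diagrams of Figure 1.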
\section{Discussion}
The non-detection of kHz QPOs in GX 9$+$1 and GX 9$+$9 is consistent
with the predictions of the sonic-point model proposed to explain the
kHz QPOs (Miller et al. 1997). It is known from other atoll sources
(e.g. 4U 1636$-$53: Wijnands et al. 1997; 4U 1820$-$30: Smale et
al. 1997) that when they are in the upper banana branch the kHz QPOs
are not detected. Thus, it remains possible that kHz QPOs will be
detected in GX 9$+$1 and GX 9$+$9 once these sources are observed for
longer on the lower banana, or even in the island state.
\section{Introduction}
Extrasolar planets reveal orbital eccentricities much higher than those found among the planets of the Solar System, a deviation that was at first considered so strange that it even led some people to doubt whether the radial velocity exoplanet measurements actually showed real planets. In the present study we will show that the eccentricities of the Solar System planets actually follow the same trend as all other known planetary systems, but belong to the tail of a continuous distribution.
When searching for extraterrestrial life we often focus on Earth-like planets and Solar System-like systems, and so low eccentricities are included in our search criteria. But exactly how the habitability of a planet might be affected by the eccentricity of its orbit is as yet unknown. A planet on a high-eccentricity orbit can undergo drastic seasonal changes in surface temperature due to the difference in stellar radiation from perihelion to aphelion. These seasonal changes could lead to periods of time without liquid water on the surface, which would greatly limit the habitability of the planet \citep{bolmont2016habitability}. However, a series of studies (reviewed in \cite{2019arXiv191104441K}) have found that often the atmosphere and oceans of a planet can act like a buffer to the temperature variations, in which case the surface climate will be determined by the average stellar radiation rather than the seasonal extremes. In other cases large seasonal variability was found to expand the habitable zone of the planet, by allowing water to remain liquid at larger semi-major axes \citep{linsenmeier2015climate}. Since it is still uncertain how orbital eccentricities affect the habitability of a planet, it is critical for us to study and understand the eccentricities in the existing exoplanet sample and how they might deviate from those in the Solar System.
From previous investigations \citep{2007DDA....38.1501C,2008ApJ...686..621F,2008ApJ...686..603J,2008ASPC..398..295J, 2019A&A...629L...7C}, planet-planet interaction has been suggested as the dominating mechanism determining orbital eccentricities of planets, either through dynamical relaxation or planet-planet scattering. The dynamical interactions of planetary systems are reviewed in \cite{2014prpl.conf..787D}. As a consequence, an anti-correlation between orbital eccentricity and multiplicity (number of planets) is predicted. This prediction has been tested empirically by \citet{2015PNAS..112...20L} based on 403 exoplanets detected by the radial velocity method (RV) and listed in \textit{exoplanets.org}. A strong anti-correlation between eccentricity (e) and multiplicity (M) was found, and for multiplicities above two the correlation could be described by a power law: $e(M)\approx 0.584\cdot M^{-1.20}$. The eccentricity-multiplicity correlation was later investigated by \cite{2017A&A...605L...4Z}, who found a similar correlation for multiplicities above one based on 258 selected RV and transit planets from NASA Exoplanet Archive. Both of the previous investigations have based their analyses on individual planets rather than treating the systems as units.
The main motivation for this article is to further the investigations by \citet{2015PNAS..112...20L} and \cite{2017A&A...605L...4Z} using the expanded planet sample known to date, comparing search methods, population groups, and databases, and aiming to set the results in perspective to our own Solar System and habitability. Our planet sample contains planets found by several detection methods including RV, transiting planet (transit), microlensing (ML) and others. By including all planets, regardless of detection method, we will be able to comment on whether there is an observational bias related to the specific methods, and the large dataset available today makes it possible to exclude more planets that might potentially introduce unwanted bias into the correlation. Unlike the previous investigations we will treat each system as a unit by conducting the analysis based on the average orbital eccentricities in the systems rather than the eccentricity of each individual planet. This is done since both the multiplicity and potential planet-planet interactions are properties of the planetary system as a whole rather than the individual planets. \\
From the resulting eccentricity-multiplicity correlation an estimate of the mean multiplicity of a planetary system can be obtained, in addition to a probability distribution of the multiplicity of planetary systems. From this we wish to set our Solar System in perspective against a ``standard'' planetary system.
We envision that planetesimals are formed in relatively circular orbits, then gravitationally scatter one another into higher eccentricities, before they over longer timescales collide to build up solid planets or the planetary cores of giants. After the evaporation of the gas disk, planet-planet interaction would be the dominating mechanism determining the final eccentricities, in such a way that the more planets there end up being in the system the more circular the orbits become. This is a plausible scenario providing an image of the physical process behind the correlation we investigate in the present paper, but we stress that this is only an image that helps us (and hopefully the reader, too) to imagine the process. Our study is empirical, and hence has no a priori assumptions about which exact mechanisms cause the correlation. In order to further the development of the theoretical understanding, we take advantage of the large sample now available to also analyze whether different populations of exoplanets show different correlations.\\
A major concern when investigating extrasolar planets is that we are highly constrained by limitations in our detection methods. When using RV the detection probability of a planet is biased towards large masses, and when using transit it is biased towards ultra-short periods. That leaves a large parameter space where planets go mainly undetected, thereby biasing conclusions about standard planetary systems drawn from the limited sample. Today the two most abundant detection methods (RV and transit) have basically shown us that exoplanetary systems very different from our own Solar System are abundant. Direct observational estimates of how abundant exoplanetary systems resembling our own Solar System are, may most likely come from future extensive microlensing surveys from space (perhaps from a dedicated microlensing satellite \citep{Bennett_2002} or from WFIRST \citep{penny2019predictions}) or from the ground (perhaps from GravityCam-like instruments \citep{mackay2018gravitycam}), and they will give us the full set of orbital parameters of solar-system-like exoplanets \citep{gaudi2012microlensing,2018AJ....155...40R}, as opposed to today where orbital eccentricity has been obtained for only one microlensing exoplanet \citep{gaudi2008discovery}. Until then it can be useful to look at indirect evidence for what a standard exoplanetary system looks like. A motivation for this article is to go beyond the data sample by finding a general theory for all systems (including those with planets yet undetected), and from this estimate the characteristics of standard planetary systems. This may give us some insight into the standard formation mechanism of planetary systems and how they develop into the most common configurations of planets, give hints about what to look for and thereby which instruments to develop, and maybe contribute to giving us a more realistic view on how abundant truly Earth-like exoplanets might be. 
One such indirect method is the study of the eccentricity distribution among known exoplanets, as presented here.\\
In Sect.\, \ref{sec:Data} the dataset is discussed.
In Sect.\, \ref{sec:e(M)} the correlation between eccentricity and multiplicity is examined, both for the full data samples from two different databases, for subsamples sorted for detection methods and for population groups, and for a high-eccentricity subsample in which we attempt to exclude most systems containing undiscovered planets. Based on the correlation a power law is found. In Sect.\, \ref{sec:meanM} some of the potential implications of the power law correlation are explored. A probability distribution of the multiplicity is found, and from this a mean multiplicity of planetary systems is estimated. In Sect.\, \ref{sec:Dis} the results and theories are discussed. Finally in Sect.\, \ref{sec:Con} the conclusions are summarized.
\section{The Dataset} \label{sec:Data}
Our data from \textit{exoplanet.eu} were retrieved in August 2019. All confirmed planets regardless of detection method are included. We are aware that \textit{exoplanet.eu}, like most other databases, might be subject to errors in their data listing. For the sake of this study we mostly try not to question the validity of the data found on the website. Planets without listed eccentricities or where the eccentricity is falsely listed as zero (i.e. without listed uncertainties) are excluded from the sample. Of the 4103 planets listed on \textit{exoplanet.eu}, a total of 1171 planets remain in the sample: 2932 are excluded due to unknown eccentricities, 60 of which have eccentricities listed as zero with unknown uncertainties. In Table \ref{tab:M}\footnote{All planets and systems with a multiplicity of X will henceforth be referred to as MX-planets or MX-systems} the number of planets sorted by multiplicity can be seen for each of the included detection methods.
Because no multiplicities are listed on \textit{exoplanet.eu}, each planet has been given a multiplicity based on the number of confirmed planets orbiting the same star listed in the database. Since some of the systems might contain yet undiscovered planets, the known companions in these systems will initially be sorted into the wrong multiplicity bins, and the actual distribution might differ from Table \ref{tab:M}. Due to the small number of systems with high multiplicities, all systems with more than 5 known planets have been combined in one bin. The multiplicity of this bin is calculated as the mean multiplicity of the included systems.
Note that the number of planets in each bin is not necessarily a multiple of the multipliciticy. This is caused by the fact that not all planets in each system are included, mainly because their eccentricities are unknown. Our dataset is three to four times larger than any of the previous analyses (1171 in this study, compared to 403 in \cite{2015PNAS..112...20L} and 258 in \cite{2017A&A...605L...4Z}). We have not accounted for the uncertainties listed for each of the eccentricities in the database in this analysis, which will be discussed further in Sec. \ref{sec:Dis}.
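The selection and multiplicity assignment described above can be sketched as follows; the record format (dictionaries with keys \texttt{star}, \texttt{e} and \texttt{e\_err}) is a hypothetical stand-in for the database fields, not the actual \textit{exoplanet.eu} schema.

```python
from collections import defaultdict

def select_planets(catalogue):
    """Keep only planets with a usable eccentricity: drop entries where
    e is missing, or where e is exactly zero with no quoted uncertainty
    (treated as 'falsely listed as zero')."""
    return [p for p in catalogue
            if p['e'] is not None
            and not (p['e'] == 0.0 and p['e_err'] is None)]

def assign_multiplicity(selected, catalogue):
    """Multiplicity = number of confirmed planets listed for the host
    star, counted over the full catalogue (not just the selected
    sample), which is why a bin need not hold a multiple of M planets."""
    counts = defaultdict(int)
    for p in catalogue:
        counts[p['star']] += 1
    return [dict(p, M=counts[p['star']]) for p in selected]
```

Counting multiplicity over the full catalogue, while eccentricity cuts apply only to the sample, reproduces the bookkeeping noted above.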
\begin{table}
\caption[]{Planets included in data samples. Retrieved from \textit{exoplanet.eu}. Planets are sorted by detection method. The rightmost column shows the number of systems present in each multiplicity bin, whereas columns 2-5 show numbers of individual planets.}
\label{tab:M}
$$
\begin{array}{p{0.2\linewidth} l l l l | l}
\hline
\noalign{\smallskip}
Multiplicity & Total & RV & Transit & Other & Systems \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
M1 & 667 & 408 & 234 & 23 & 667 \\
M2 & 274 & 215 & 52 & 5 & 151 \\
M3 & 121 & 65 & 50 & 6 & 45 \\
M4 & 63 & 43 & 17 & 3 & 20 \\
${\geq}$M5 & 46 & 34 & 10 & 2 & 12\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Total & 1171 & 765 & 363 & 39 & 895\\
\noalign{\smallskip}
\hline
\end{array}
$$
\end{table}
\section{Eccentricity and multiplicity} \label{sec:e(M)}
Each system is assigned an eccentricity found as the mean eccentricity of the planets in the system. This differs from previous studies, where the planets were not sorted into systems, and the authors looked at the eccentricities of the individual planets. The final results from the two methods do not differ greatly, but we find that sorting the planets into systems is more meaningful, since the effects we observe might be caused by planet-planet interactions within the systems and will change the system as a whole.
These assigned system eccentricities are then used to calculate overall mean and median eccentricities within each multiplicity bin.
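The grouping of planets into systems and the per-bin statistics can be sketched as follows, assuming a hypothetical list of (star, multiplicity, eccentricity) records; this is an illustration, not the analysis code of the paper.

```python
import numpy as np
from collections import defaultdict

def system_eccentricities(planets):
    """Assign each system the mean eccentricity of its planets.
    `planets` is a list of (star, multiplicity, e) tuples; returns a
    list of (multiplicity, system mean e) pairs, one per system."""
    per_star, mult = defaultdict(list), {}
    for star, M, e in planets:
        per_star[star].append(e)
        mult[star] = M
    return [(mult[s], float(np.mean(es))) for s, es in per_star.items()]

def bin_statistics(systems):
    """Overall mean and median of the system eccentricities in each
    multiplicity bin, as plotted in the figure."""
    bins = defaultdict(list)
    for M, e in systems:
        bins[M].append(e)
    return {M: (float(np.mean(es)), float(np.median(es)))
            for M, es in bins.items()}
```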
In Fig.\, \ref{e(M)} mean and median values of the system eccentricities are plotted for each of the multiplicity bins, together with our Solar System with a multiplicity of eight.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{BachMoellerFig1.eps}
\caption{ Mean and median values of the eccentricity for each multiplicity. The mean eccentricity of the Solar System is plotted with a black $\times$. The multiplicity of the ${\geq}M5$ multiplicity-bin is plotted as $M=5.7$.}
\label{e(M)}
\end{figure}
The errors are calculated as follows: for the mean, as the standard deviation of the system means obtained with the bootstrap method; for the median, from the one-third and two-thirds quantiles of the cumulative distribution function, divided by $\sqrt{N-1}$, where $N$ is the number of systems in the multiplicity bin. Notice that the errors indicate the uncertainties of the mean and median eccentricities of each multiplicity bin, and not the spread of the eccentricities among the individual planets, which is significantly larger than the errors shown.
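A minimal sketch of the two error estimates for a single multiplicity bin; the median prescription implemented here (half the distance between the two quantiles, scaled by $\sqrt{N-1}$) is one possible reading of the description above.

```python
import numpy as np

def bootstrap_err_mean(e, n_boot=2000, seed=0):
    """Standard deviation of bootstrap-resampled means of the system
    eccentricities in one multiplicity bin."""
    rng = np.random.default_rng(seed)
    e = np.asarray(e)
    means = rng.choice(e, size=(n_boot, e.size), replace=True).mean(axis=1)
    return float(means.std())

def median_err(e):
    """Error on the median from the 1/3 and 2/3 quantiles of the
    empirical distribution, scaled by sqrt(N - 1).  This is one
    interpretation of the prescription in the text."""
    e = np.asarray(e)
    q1, q2 = np.quantile(e, [1.0 / 3.0, 2.0 / 3.0])
    return float((q2 - q1) / (2.0 * np.sqrt(e.size - 1)))
```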
Fig.\, \ref{e(M)} suggests a trend of decreasing eccentricity with increasing multiplicity. As can be seen, the Solar System too follows this trend, indicating that our system does not deviate from the norm.
An exception to this trend is the M1 systems. Whereas the other data points seem to approximately follow a power law (appearing linear because of the logarithmic axes), the eccentricities for M1 deviate from the trend by being too low to follow the power law. This deviation will be discussed later.
\subsection{Planet populations} \label{sec:pop}
A potential uncertainty related to the study of an eccentricity-multiplicity correlation is the dependence of the correlation on factors such as planet mass and semi major axis. \citet{turrini2020normalized} and \citet{laskar2017amd} therefore looked at the correlation of multiplicity and angular momentum deficit (AMD), rather than multiplicity and eccentricity. The AMD does depend on the eccentricity, but also on the semi major axis and the mass of the planets, and \citet{turrini2020normalized} found an anticorrelation between the normalized angular momentum deficit (NAMD) and the multiplicity. \citet{turrini2020normalized} argues that the eccentricity-multiplicity correlation found by other studies is a reflection of the underlying NAMD-multiplicity correlation. The study of the NAMD-multiplicity correlation is complicated by the fact that few planets have their masses, eccentricities and semi-major axes all well determined, and as such the dataset is smaller. The larger sample in our data set compared to previous data sets allows us to study directly the correlation of eccentricity and multiplicity for a number of different subsamples, in order to test how the planet mass ($m_p$) and semi major axis (or period, $P$) might affect the eccentricity-multiplicity correlation.\\
To test the impact of mass and period, we have divided the systems into three different populations: 1) Systems containing a hot-Jupiter ($m_p > 0.1 M_J$ and $P < 100$ days). 2) Systems containing a cold-Jupiter ($m_p > 0.1 M_J$ and $P > 100$ days), and no hot-Jupiters. 3) Systems dominated by super-Earths ($m_p < 0.1 M_J$) with no giant planets. In order to increase the data sample, planets with no listed mass in the database have been sorted based on their $m\sin(i)$ value, when this is known, and a total of 849 systems are sorted into the population categories. The distribution of systems in each population category can be seen in Table \ref{tab:pop}. It should be noted that the observed planet sample does not represent the true planet population, since some planet types are more easily observed than others, but the differences between the populations, as shown here, might still give us an insight into the uncertainties of the eccentricity-multiplicity correlation. Research into the actual occurrence rate of different planet types is reviewed in e.g. \cite{winn2015occurrence}.
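The population sorting can be sketched as follows; masses are in Jupiter masses and periods in days, and the substitution of $m\sin(i)$ for the mass is assumed to happen upstream when the planet list is built.

```python
def classify_system(planets, m_cut_mj=0.1, p_cut_d=100.0):
    """Sort a system into the three populations used in the text.
    `planets` is a list of (mass_MJ, period_days) pairs; where only
    m*sin(i) is known, it stands in for the mass.  Returns 'HJ' if the
    system hosts a hot-Jupiter, 'CJ' if it hosts a cold-Jupiter and no
    hot-Jupiter, and 'SE' if all planets are below the giant-mass cut."""
    has_hj = any(m > m_cut_mj and p < p_cut_d for m, p in planets)
    has_cj = any(m > m_cut_mj and p >= p_cut_d for m, p in planets)
    if has_hj:
        return 'HJ'
    if has_cj:
        return 'CJ'
    return 'SE'
```

A system hosting both a hot- and a cold-Jupiter falls in the hot-Jupiter category, matching the definition of the cold-Jupiter population as containing no hot-Jupiters.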
Table \ref{tab:pop} shows that different multiplicities are dominated by different populations of planets, such that most of the M1 systems are giant-planet systems, whereas the larger multiplicity systems are dominated by super-Earths. A priori one could expect that since the cold-Jupiters dominate the M1 systems, we could seek the explanation for the deviation from the power law followed by the $M>1$ systems in the cold-Jupiter population. However, we find that this is not the case, when we look at the mean eccentricities plotted as a function of multiplicity in Fig.\,\ref{fig:pop}.
\begin{table}
\caption[]{Distribution of the systems in Table~\ref{tab:M} where, in addition to the eccentricity, also the mass or $m\sin(i)$ is known, such that they can be divided into the groups: hot-Jupiters (HJ), cold-Jupiters (CJ), super-Earths (SE), and plotted in Fig.~\ref{fig:pop}. The last column shows the number of systems (Number). A total of 849 systems are included.}
\label{tab:pop}
$$
\begin{array}{p {0.2\linewidth} l l l | l }
\hline
\noalign{\smallskip}
Multiplicity & HJ \hspace{0.08\linewidth}& CJ \hspace{0.08\linewidth}& SE \hspace{0.1\linewidth}& Number \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
M1 & 39.2\% & 51.3\% & 9.4\% & 637 \\
M2 & 22.9\% & 57.6\% & 19.4\% & 144 \\
M3 & 25.6\% & 20.5\% & 53.8\% & 39 \\
M4 & 21.1\% & 26.3\% & 52.6\% & 19 \\
${\geq}$M5 & 10.0\% & 30.0\% & 60.0\% & 10 \\
\hline
\end{array}
$$
\end{table}
Fig.\,\ref{fig:pop} shows the mean eccentricities plotted for the full sample (equivalent to the mean values from Fig.\, \ref{e(M)}) together with the three different populations introduced above. A power law has been fitted to all samples for multiplicities above one, not including the Solar System, i.e. $1<M<8$. The power law has been fitted to the overall mean eccentricities for all systems in each multiplicity bin, corresponding to the data points seen in the figure. Due to the small sample of Jupiter-systems with four or more planets, the $M4$ and ${\geq}M5$ bins have been combined for the hot-Jupiter and cold-Jupiter systems. The multiplicities for these bins are the mean multiplicities among the systems combined in the bins.
The main conclusion from Fig.\,\ref{fig:pop} is that all three populations follow similar power law trends to the one for the full sample (although of course with larger scatter of the individual points due to the smaller data sample). We notice that the cold-Jupiter population is not the cause of the low eccentricities of the M1 systems, but on the contrary displays the highest eccentricities of the M1 systems among all populations.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{BachMoellerFig5.eps}
\caption{ Mean values of eccentricities for each multiplicity for four subsamples. Full red line: The full sample from \textit{exoplanet.eu} identical to the mean values from Fig.\, \ref{e(M)}. Dashed: Subsample of systems containing a hot-Jupiter (HJ). Dotted: Subsample of systems containing a cold-Jupiter (CJ). Dot-dashed: Subsample of systems only containing smaller planets. Mean value of the Solar System (SS) is plotted in black. Power laws (PL) have been fitted to all four samples for multiplicities above one; this is discussed in Sect.\, \ref{sec:pop}.}
\label{fig:pop}
\end{figure}
\subsection{The undiscovered planets in the systems}
To get further understanding of the uncertainties of the power law correlation, Fig.\,\ref{e(M)full75LT} shows the mean eccentricities plotted as a function of multiplicity for three additional subsamples. Besides the full system sample from \textit{exoplanet.eu}, we show a high-eccentricity subsample consisting of only the 75\% of systems with the highest eccentricities, a subsample consisting of RV planets listed on \textit{exoplanets.org} before 2014 (L\&T) equivalent to the sample used by \citet{2015PNAS..112...20L}, and a full sample of the 704 planets with known eccentricities from the database \textit{exoplanets.org}. Power laws have been fitted to all samples for multiplicities above one. \\
The high-eccentricity subsample has been created to exclude systems containing undiscovered planets.
According to the trend visible in Fig.\, \ref{e(M)}, larger systems have lower eccentricities, and systems with additional, undiscovered planets should therefore have eccentricities below what is appropriate for their assigned multiplicity. We might therefore expect that the systems showing the lowest orbital eccentricities could have extra undiscovered planets.
Removing these systems from the fit does change the relation slightly (obviously shifting the line to somewhat higher eccentricities), but it keeps the same trend: a good linear fit to the systems with $M>1$ and a substantially lower average eccentricity for the M1 systems than expected from the power law.
Since both of the dominating detection methods (the radial velocity method and the transit method) depend on the size of the planets, smaller planets are more difficult to detect, and only a few planets with a size comparable to Mercury or Mars have been found. Mars and Mercury represent one fourth of the (known) planets in the Solar System, and following this line of argument a first qualified guess at a typical number of undetected planets could be that a minimum of 25\% of the planets in exoplanet systems remain undiscovered. By removing the 25\% of systems with the lowest eccentricities in each multiplicity bin we hope to lower the bias in the correlations caused by ``contamination'' from systems with unknown planets.
No systems are removed from the M8 bin, since it consists only of the Solar System. We see from Fig.\, \ref{e(M)full75LT} that the high-multiplicity systems are less affected than the low-multiplicity systems when removing the 25\% lowest-eccentricity systems, indicating that high-multiplicity systems could be more completely surveyed.\\
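The construction of the high-eccentricity subsample can be sketched as follows (a sketch of the selection rule, not the code used for the paper).

```python
from collections import defaultdict

def high_ecc_subsample(systems, drop_frac=0.25):
    """Within each multiplicity bin, discard the fraction of systems
    with the lowest mean eccentricities (the candidates for hosting
    undiscovered planets) and keep the rest.  `systems` is a list of
    (multiplicity, system mean e) pairs."""
    bins = defaultdict(list)
    for M, e in systems:
        bins[M].append(e)
    kept = []
    for M, es in bins.items():
        es.sort()
        n_drop = int(round(drop_frac * len(es)))
        kept.extend((M, e) for e in es[n_drop:])
    return kept
```

Rounding the number of systems to drop leaves one-system bins, such as the M8 bin containing only the Solar System, untouched.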
\begin{figure}
\centering
\includegraphics[width=\hsize]{BachMoellerFig2.eps}
\caption{Mean values of eccentricities for each multiplicity for four subsamples. Full red line: The full sample from \textit{exoplanet.eu} identical to the mean values from Fig.\, \ref{e(M)}. Dashed: High-eccentricity subsample consisting of 75\% systems with highest eccentricities. Dotted: Subsample of RV planets detected before 2014 equivalent to the sample used by \citet{2015PNAS..112...20L}. Dot-dashed: Full sample from \textit{exoplanets.org}. Mean value of the Solar System (SS) is plotted in black. Power laws (PL) have been fitted to all samples for multiplicities above one; this will be discussed in Sect.\, \ref{sec:meanM}.}
\label{e(M)full75LT}
\end{figure}
The L\&T subsample has been plotted to compare the power law correlation found in this study with one found using a data sample similar to the one used in the original study by \citet{2015PNAS..112...20L}. Notice that whereas the mean eccentricities for the full, high-eccentricity, and \textit{exoplanets.org} subsamples are found as the mean of the system eccentricities for each multiplicity, the mean eccentricities of the L\&T subsample are found as the mean of all planets in each multiplicity-bin (to stay consistent with the analysis methods used by \citet{2015PNAS..112...20L} as explained previously). \\
In order to further constrain potential uncertainties related to our data, we repeated the entire analysis using data from the database \textit{exoplanets.org}. It should be remembered that our main database, \textit{exoplanet.eu}, is more complete and up to date than \textit{exoplanets.org}, but that the planets listed on \textit{exoplanets.org} have undergone a more strict selection process in regard to peer-review (\cite{2014PASP..126..827H,schneider2011defining}, and personal communication with Jason Wright and Françoise Roques). Although the two databases therefore will not contain the exact same data sample, comparison of the results based on both databases gives a clearer impression of the uncertainties. \\
Fig.\, \ref{e(M)full75LT} shows that all the subsamples display the same general tendency of a power law correlation between eccentricity and multiplicity for $M>1$ as the full sample, and a lower eccentricity of the M1 systems not following the power law trend of the higher multiplicity systems. The slopes, however, vary for the different samples.
\begin{figure}
\includegraphics[width=\hsize]{BachMoellerFig3.eps}
\caption{Mean values of eccentricities for each multiplicity for three subsamples. Full red line: The full sample. Dashed: Subsample consisting of planets discovered by the transit method. Dotted: Subsample consisting of planets discovered by RV. Mean value of the Solar System (SS) is plotted in black. Power laws (PL) have been fitted to all samples for multiplicities above one.}
\label{e(M)met}
\end{figure}
\subsection{Detection methods}
Whereas the L\&T subsample consists only of RV planets our sample contains planets found by all detection methods. To test how this difference might affect the eccentricity-multiplicity correlation, and to better understand whether the behaviour of the correlation could be dominated by a bias effect related to the detection method, a plot for the transit and RV subsamples together with the full sample can be seen in Fig.\, \ref{e(M)met}. It should be noted that the eccentricities listed for planets discovered with the transit method are often determined from followup RV observations, so the two populations are not completely separated.
Fig.\, \ref{e(M)met} shows that both the transit and the RV subsamples have eccentricity-multiplicity correlations similar to that of the full sample, and the trend of the M1 systems falling below the $M>1$ relation is identical.
We also see that the transit systems show lower eccentricities at all multiplicities compared to the RV systems. This bias, that transit planets generally have lower eccentricities, is consistent with a study by \citet{van2015eccentricity} who found high-multiplicity Kepler planets to generally have lower eccentricities than the RV planet sample. This tendency might be caused by the fact that there are more low-mass planets in the transit subsample than in the RV sample, and that lower mass planets are more easily circularised by planet-planet interaction \citep{kane2012exoplanet}. We see a hint of the same tendency in Fig.\,\ref{fig:pop}, where the super-Earth subsample shows lower eccentricities than the full sample, and the important conclusion is that independent of the shift and its potential explanation in an observational bias, the same tendencies discussed above apply to both of the subsamples.
It is also possible that planet-planet scattering could cause a spread in the orbital inclinations \citep{2007DDA....38.1501C} in addition to lowering the multiplicity of the system. The spread in inclination could lead to a higher number of undiscovered planets in the transit systems and thereby a higher number of systems with eccentricities too low to fit their assigned multiplicity. This trend would be strongest for low-multiplicity systems, as seen in Fig.\, \ref{e(M)met}, if these are formed through severe planet-planet scattering.
It can be seen from the errorbars given in Fig.\, \ref{e(M)met} that the listed eccentricities of the transit planets have a greater variation than the RV planets, possibly caused by a larger uncertainty in their determination \citep{kane2012exoplanet,van2015eccentricity}.
\subsection{Kolmogorov-Smirnov test}
To statistically test the correlation between multiplicity and eccentricity, a two-sample Kolmogorov-Smirnov test is conducted on the full system sample. The test compares the multiplicity bins pairwise to test the difference in the eccentricity distributions of the systems. The test results can be seen in Table \ref{tab:KS}. Notice that the distribution of eccentricities for the individual \textit{planets} is used for the Solar System, whereas the distributions of the \textit{systems} are used for the rest.\\
It can be seen that most of the multiplicity combinations show significant differences in their eccentricity distributions, at a 5\% significance level. This indicates that the difference in eccentricity for systems of different multiplicity is caused by a connection between the two factors and not by coincidence. The higher p-values seen for high-multiplicity combinations might be caused by the small number of systems in these multiplicity bins.
Altogether the statistical test supports that there is a correlation between multiplicity and eccentricity. \\
\begin{table}
\caption[]{Test result for Kolmogorov-Smirnov test.}
\label{tab:KS}
$$
\begin{array}{p{0.15\linewidth} l l l l l l}
\hline
\noalign{\smallskip}
& M1 & M2 & M3 & M4 & {\geq}M5 & M8 (SS) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
M1 & 1 & & & & & \\
M2 & <0.01 & 1 & & & & \\
M3 & 0.04 & 0.01 & 1 & & & \\
M4 & 0.01 & <0.01 & 0.15 & 1 & & \\
$\geq$M5 & 0.04 & <0.01 & 0.06 & 0.31 & 1 & \\
M8 (SS) & 0.01 & <0.01 & 0.05 & 0.38 & 0.65 & 1 \\
\noalign{\smallskip}
\hline
\end{array}
$$
\end{table}
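The pairwise test behind Table \ref{tab:KS} can be sketched with \texttt{scipy.stats.ks\_2samp}; the bin labels and contents below are illustrative.

```python
from scipy.stats import ks_2samp

def ks_matrix(bins):
    """Pairwise two-sample Kolmogorov-Smirnov p-values between the
    eccentricity distributions of the multiplicity bins.  `bins` maps a
    bin label to the list of system eccentricities (for the Solar
    System, the individual planet eccentricities are used instead)."""
    labels = sorted(bins)
    return {(a, b): float(ks_2samp(bins[a], bins[b]).pvalue)
            for i, a in enumerate(labels) for b in labels[i + 1:]}
```

Identical distributions give a p-value near one; clearly separated distributions give p-values far below the 5\% level, as in the table.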
\subsection{Quantification of the multiplicity-eccentricity correlation}
In the standard core-accretion model for the formation of planetary systems, the dust component of the disk relatively quickly clumps together (via simple condensation or even faster via streaming instability) to form many objects of planetesimal sizes \citep{johansen2017forming}. Over a longer timescale the planetesimals then excite one another’s orbits by gravitational interaction, leading to collisions and hence growth to planet size. After the dissipation of the protoplanetary disk the orbits of the planets are largely determined by planet-planet interactions, indicating a correlation between the orbital eccentricity and the number of interactions and hence planets. The numerical simulations by \cite{2007DDA....38.1501C} and \cite{2008ASPC..398..295J} confirm that this expectation is correct, by showing that the final architecture of a system is almost independent of the assumed starting conditions of planetesimals, and suggesting that planet-planet interaction is the dominating mechanism for changing the average orbital eccentricity. The simulations do not in themselves predict a specific analytical correspondence between eccentricity and multiplicity, which, however, can be done by fitting the corresponding observational data. In Fig.\, \ref{e(M)full75LT} it was indicated that the high-multiplicity systems seemed to have fewer undiscovered planets, and in Figs.\, \ref{fig:pop}, \ref{e(M)full75LT} and \ref{e(M)met} we quantified the relation by fitting the mean eccentricities for $M>1$ to a power law. Our best fit to the full set of data (as shown in red in the figures) can be expressed as:
\begin{equation}
e(M)=0.429\cdot M^{-0.93}
\label{eq:e(M)}
\end{equation}
where $e$ is the eccentricity and $M$ is the multiplicity. Figs.\, \ref{fig:pop}-\ref{e(M)full75LT} and \ref{e(M)met} further demonstrate that this fit also agrees with the Solar System, despite the fact that the $M=8$ system was not included in the fit. This adds confidence that the quantification is universal, and the two fits, with and without the Solar System, gave the following correlation coefficients: $R^2=0.98$ for $M=[2;7]$ and $R^2=0.99$ for $M=[2;8]$. \\
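The agreement with the Solar System can be checked directly: evaluating the fit at $M=8$ and comparing with the mean orbital eccentricity of the eight Solar System planets. The sketch below (Python, purely illustrative; the planetary eccentricities are standard J2000 values) performs this comparison.

```python
# Evaluate the power-law fit e(M) = 0.429 * M^(-0.93) at M = 8 and compare
# with the mean orbital eccentricity of the eight Solar System planets.

def e_fit(M, a=0.429, b=-0.93):
    """Mean eccentricity predicted by the fitted power law."""
    return a * M ** b

# Orbital eccentricities of the Solar System planets (Mercury .. Neptune).
solar_e = [0.2056, 0.0068, 0.0167, 0.0934, 0.0484, 0.0539, 0.0473, 0.0086]
mean_solar = sum(solar_e) / len(solar_e)

print(f"fit at M=8:   {e_fit(8):.4f}")   # ~0.062
print(f"Solar System: {mean_solar:.4f}")  # ~0.060
```

The two numbers agree to within a few per cent, consistent with the Solar System point falling on the fitted curve in the figures.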
Since the physical cause behind the relation is thought to be planet-planet gravitational interaction, one should expect the decreasing tendency to range all the way from M1 systems to a maximum number of planets, $M_{\rm max}$, for which the systems can still remain stable, \citep{2001MNRAS.325..221P,2008ApJ...686..603J},
with the M1 systems having the largest average eccentricity. Observationally, however, the M1 planets do not show the high eccentricity expected from the correlation, and therefore the observed M1 systems must be affected differently from the multi-planet populations. In the following section, Sect.\, \ref{sec:meanM}, we will elaborate on one potential explanation for the deviation of the M1 systems from the trend, namely the idea that the low M1 eccentricity is caused by a combination of mechanisms other than the general planet-planet interaction, lowering the eccentricities, plus an observational bias. When correcting for these two effects, the remaining M1 systems are made to follow the same trend as the rest of the systems, and potential implications for the trend are explored.
An alternative explanation for the discrepancy between the M1 and multi-planet systems could be that they are dominated by different planet populations. To analyze if any specific population dominates the lowering of the M1 eccentricities, we investigated, in Sect.\, \ref{sec:pop}, whether the population of large planets (which observationally dominates the M1 and M2 systems) and the population of smaller planets (that have a more dominating role in the higher multiplicity systems), show different observational trends. We concluded that all of the populations follow the same general trend between eccentricity and multiplicity, indicating that the same general mechanism is responsible for all the observed populations of exoplanets from M1 to M8 (and is likely to be planet-planet interaction with some correction for the M1 systems).
In all cases, it is obvious from Fig.\, \ref{e(M)}-\ref{e(M)met} that the observed M1 systems do not follow the trend expressed in Eq.\, \ref{eq:e(M)}. If a reasonable transformation from the observed abundance of M1 systems to intrinsic M1 system abundances can be obtained, it will be possible from Eq.\,\ref{eq:e(M)} to give an estimate of the true probability distribution of multiplicities among the observed systems.
\section{Perspective and implications: Conversion of observed multiplicity distribution to actual distribution} \label{sec:meanM}
Figs.\, \ref{e(M)}-\ref{e(M)met} demonstrate that the observed average eccentricity of one-planet systems (M1) falls below the relation for multi-planet systems. The main assumption in this further analysis is that the M1 systems intrinsically follow the same eccentricity correlation as the other multiplicities. This assumption is supported by a series of studies by \cite{he2019architectures,he2020architectures}, who recreated the multiplicity distribution of the Kepler observations by forward-modelling multi-planet systems at the AMD-stability limit (introduced in \citealt{laskar2017amd,petit2017amd}). \cite{he2020architectures} found that all multiplicities from one to ten followed the same eccentricity-multiplicity power law correlation, with the intrinsic M1 systems having higher eccentricities than the multi-planet systems, and they found that most observed M1 systems contain yet undiscovered planets. In this section we will try to identify these systems with undiscovered planets, and redistribute them to the multiplicity bins appropriate to their estimated multiplicities.\\
We will first investigate whether some of the low-eccentricity M1 planets may have acquired their low eccentricities through mechanisms other than the general planet-planet interaction assumed to be responsible for Eq.\,\ref{eq:e(M)}. \\
Exoplanets in ultra small orbits are often tidally locked to the host star, which could lead to circularisation of the planetary orbit \citep{2008IAUS..249..187J}. By looking at the eccentricity damping timescale \citep{2014ARA&A..52..171O}, the eccentricity damping from these planet-star interactions can be approximated by:
\begin{equation}
\dot{e} \propto \frac{m_*}{m_p}\frac{1}{a^5}
\label{eq:edamp}
\end{equation}
where $\dot{e}$ is the change in eccentricity, $a$ is the semi major axis of the planet, and $m_p$ and $m_*$ are the masses of the planet and the star respectively.
In order to distinguish systems that have low eccentricities due to planet-star interactions from those that may have low eccentricities for other reasons, all planets for which the value from Eq.\, \ref{eq:edamp} exceeds a certain threshold are excluded. The threshold was chosen to $6.77\times10^5$, and 191 M1 planets, and 100 planets among the other multiplicities, were excluded on this basis. These planets will be excluded in the following probability analysis, but were not excluded in the making of Eq. \ref{eq:e(M)} (which would have very small effect as described below). The chosen threshold is the value of Mercury, and even though Mercury is far from being circularised (it holds the highest eccentricity in the Solar System), it is "almost" tidally locked (in a 2/3 orbital/rotational resonance), and is the planet in the Solar System that has the highest potential for tidal circularisation. In an analysis of hot-Jupiters with known obliquities, \cite{Hjortphd} was able to divide the planets into two distinct groups, with 15\% of the planets having extremely low obliquity (and hence low eccentricity) and 85\% having a continuous obliquity distribution. \cite{Hjortphd} ascribed the former group to planet migration in the disk and the latter to migration due to planet-planet interaction (scattering). It is therefore likely that also a fraction of the M1 systems will have much lower eccentricities than expected from Eq.\, \ref{eq:e(M)} due to disk-migration.\\
Next, we pursue the idea that some of the remaining systems may contain yet undiscovered planets, and that these systems will lower the mean eccentricity of their multiplicity bins, since systems with more planets are expected to have lower eccentricities. Those of the observed systems that have had their eccentricity determined by planet-planet interactions (as opposed to the systems excluded above due to a potential star-planet circularisation) are to first approximation expected to follow the planet-planet eccentricity relation expressed in Eq.\,\ref{eq:e(M)}. We align the mean eccentricities of the multiplicity bins with the power law correlation by moving the lowest-eccentricity systems of the multiplicity bins to an M corresponding to their observed eccentricity (i.e. assuming undiscovered planets in those systems). During this exercise it was found that the best alignment occurred when 55\% of the M1 systems and all of the $M>1$ systems were assumed \textit{not} to contain undiscovered planets, and the rest had new multiplicities estimated based on their eccentricity.
Of the M1 systems, 50 (i.e. roughly 10\% of the $667-191=476$ M1 systems that remained after the exclusion of planets that might have experienced planet-star circularisation) have such low eccentricities that they should be moved to multiplicities that might exceed M$_{\rm max}$ (in some cases more than 50 planets). It was therefore assumed that these systems might not contain undiscovered planets, but that other physical mechanisms were responsible for circularizing these 10\%.
For the following estimates, bear in mind that the effect of keeping these 50 planets would be to slightly increase the estimated abundance of M1 systems and correspondingly decrease the abundance of high-multiplicity systems like our own Solar System.
Non-planet-planet-interaction mechanisms that could be responsible for the circularization of a fraction of the M1 systems of this size could include migration of a single large planet to a small orbit while a substantial amount of the protoplanetary disk was still in place \citep{Hjortphd}.
For the remaining group of ($667-191-50 = 426$) M1 systems, whose eccentricities could potentially be attributed to yet undiscovered planets, we attempted a redistribution of the systems by artificially counting them as belonging to higher values of M. The new multiplicity, $M_{\rm new}$, was determined from the eccentricity of the planet using Eq.\,\ref{eq:e(M)}. A total of 164 M1 systems were redistributed, and the new multiplicity distribution can be seen in Table \ref{tab:newM}.
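Operationally, this redistribution amounts to inverting Eq.\,\ref{eq:e(M)}: given a system's observed eccentricity, solve for the multiplicity that would place it on the power law. A minimal sketch (the rounding-to-nearest-integer convention is our assumption, used here only for illustration):

```python
# Invert the power law e(M) = 0.429 * M^(-0.93) to estimate a new multiplicity
# M_new from an observed eccentricity e (rounding convention is our choice).

def m_new(e, a=0.429, b=-0.93):
    return round((e / a) ** (1.0 / b))

print(m_new(0.429))  # eccentricity at the M=1 point of the law -> 1
print(m_new(0.154))  # roughly e(3) -> 3
print(m_new(0.062))  # roughly e(8), Solar-System-like -> 8
```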
\begin{table*}
\caption[]{Redistribution of systems. Left: the observed multiplicity distribution of systems from \textit{exoplanet.eu}. Right: the multiplicity distribution of systems after the M1 systems have been redistributed according to their eccentricities as described in the text. The rightmost column indicates the probability of a system having a given multiplicity according to Eq.\, \ref{eq:prob2}.}
\label{tab:newM}
$$
\begin{array}{l l l l l l}
\hline
\noalign{\smallskip}
Multiplicity & \multicolumn{2}{l}{Observed\: distribution} & \multicolumn{3}{l}{Redistribution} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
& Number \: of \: Systems & Percentage & Number \: of \: Systems & Percentage & Probability \\
M1 & 667 & 75\% & 262 & 41\% & 41\% \\
M2 & 151 & 17\% & 149 & 24\% & 24\% \\
M3 & 45 & 5\% & 90 & 14\% & 14\% \\
M4 & 20 & 2\% & 53 & 8\% & 8\% \\
M5 & 4 & $<1$\% & 25 & 4\% & 5\% \\
M6 & 6 & $<1$\% & 21 & 3\% & 3\% \\
M7 & 1 & $<1$\% & 12 & 2\% & 2\% \\
M8 & 1 & $<1$\% & 7 & 1\% & 1\% \\
M9 & & & 5 & $<1$\% & $<1$\% \\
M10 & & & 9 & 2\% & $<1$\% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Total & 895 & & 633 & & \\
\noalign{\smallskip}
\hline
\end{array}
$$
\end{table*}
In addition to the number of systems within each multiplicity bin, Table \ref{tab:newM} also shows the percentage and probability distributions for the redistributed planets. The probability distribution is found by fitting an exponential function to the percentage distribution, as shown in Fig.\,\ref{fig:pop} and explained later.
The redistribution of the M1 systems has been made such that the mean eccentricity of the remaining 262 systems falls on the same relation as the rest of the multiplicity systems described by Eq.\,\ref{eq:e(M)}. For the sake of this experiment, we assume that these remaining M1 systems constitute the intrinsic M1 population among the observed systems, with no additional undiscovered planets and with eccentricities determined by the same planet-planet interactions as the multi-planet systems. In this sense one can think of the relation given by Eq.\,\ref{eq:e(M)}, applied to all the systems from M1 to M$_{\rm max}$, as giving a minimum abundance of M1 systems and a corresponding maximum abundance of high-multiplicity systems. We stress this fact because it might seem intuitively surprising (for example based on the anthropic principle) that our Solar System belongs to such a relatively rare type of planetary system as predicted from Eq.\,\ref{eq:e(M)} and shown in Fig.\,\ref{prob}; without the redistribution suggested above, the Solar System would be predicted to be an even rarer type of planetary system.
We therefore suggest that the $M_{\rm new}$ distribution in Table \ref{tab:newM} is a reasonable first qualified guess of the relative distribution of the number of planets in planetary systems whose average eccentricity distribution is determined by planet-planet interactions. This probability distribution is shown in Fig.\, \ref{prob} and has been fitted to an exponential function described as:
\begin{equation}
P(M)=0.72\cdot \mathrm{e}^{-0.54M}
\label{eq:prob2}
\end{equation}
where $P(M)$ indicates the probability of a system having $M$ planets. This relation has been found by normalizing the exponential fit seen in Fig.\, \ref{prob}, such that $\sum^{10}_{M=1}{P(M)}=1$.
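The normalization and the summary numbers quoted below can be checked directly from Eq.\,\ref{eq:prob2}; a short sketch (Python, illustrative):

```python
import math

def P(M, A=0.72, k=0.54):
    """Probability of a system hosting M planets, Eq. (prob2)."""
    return A * math.exp(-k * M)

total = sum(P(M) for M in range(1, 11))       # ~1 when normalized over M = 1..10
mean_M = sum(M * P(M) for M in range(1, 11))  # mean multiplicity of the fitted curve
p8 = P(8)                                     # probability of an 8-planet system, ~1%

print(f"sum P = {total:.3f}, <M> = {mean_M:.2f}, P(8) = {p8:.3%}")
```

The mean computed from the fitted curve comes out slightly below the value of 2.48 obtained from the tabulated distribution itself, as expected since the exponential fit smooths the table.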
\begin{figure}
\centering
\includegraphics[width=\hsize]{BachMoellerFig4.eps}
\caption{Percentage of systems with given multiplicity, corresponding to values from Table \ref{tab:newM}. Probability function found as exponential fit. Mean multiplicity estimated to $\sim 2.5$.}
\label{prob}
\end{figure}
The average number of planets in planetary systems, according to the distribution in Table \ref{tab:newM}, is $\langle M \rangle = 2.48$, and is marked by a diamond in the figure.
Based on the discrete probability distribution in Eq.\, \ref{eq:prob2}, the probability of a system having eight planets is $P(8) \approx 1 \%$, indicating that systems the size of the Solar System are rare but not exceptionally so. In this interpretation the Solar System is in the tail of a distribution of multiplicity, and corresponding orbital eccentricities, near the maximum possible from stability considerations. We have in the above summation assumed that the maximum cut-off is near $M_{\rm max} = 10$ planets, but we note that the exact value of $M_{\rm max}$ is unimportant for the conclusion, since the integral of Eq.\,\ref{eq:prob2} from 8 to infinity is very small. Note also that the number 1\% refers to the fraction of the systems that have had their eccentricities determined by planet-planet interaction, or a similar process that is responsible for Eq.\,\ref{eq:e(M)} and Eq.\, \ref{eq:prob2}. If one also counts the M1 planets that were excluded in deriving Eq.\,\ref{eq:prob2}, the probability of finding 8 planets would be slightly lower. It should be noted that all results in this analysis rely on the assumption that the power law in Eq.\, \ref{eq:e(M)} describes the true intrinsic correlation between eccentricity and multiplicity. The redistribution was based on a correlation fitted to the observed multi-planet systems. The fact that some of these multi-planet systems might host yet undiscovered planets could therefore pose an uncertainty in the analysis. However, theoretical studies have found that the observed M1 population is the only one that differs greatly from the theoretical predictions \citep{johansen2012can}. As mentioned previously, some have suggested that this is caused by the fact that the observed M1 systems are especially prone to containing undiscovered planets \citep{he2020architectures}. As such, the analysis should not be affected greatly by undiscovered planets in the multi-planet systems.
As mentioned previously, planet-star interaction was not taken into account when making Eq.\, \ref{eq:e(M)}. If the planets from the multi-planet systems that might have experienced planet-star interaction had also been excluded from Eq.\, \ref{eq:e(M)}, the mean multiplicity would have been 2.6 rather than 2.5 planets per system. \\
It is encouraging to note that \cite{2008ApJ...686..603J} found the average number of planets to be between 1.8 and 3.0 planets per system from a series of individual simulations with different initial planetesimal conditions, and that \citet{raymond2018solar} found the probability of forming planetary systems with a number of planets similar to our own to be $\sim$ 1\% based on dynamical arguments. Both results are very similar to our result but based on completely different and independent arguments. \\
\section{Discussion} \label{sec:Dis}
We find an anti-correlation between orbital eccentricity and multiplicity of known exoplanet systems, similar to the reports by previous studies \citep{2015PNAS..112...20L,2017A&A...605L...4Z}. Our planet sample and method differ from the investigation by \citet{2015PNAS..112...20L} by including planets discovered by all detection methods, not just RV, and from both studies by including a much larger dataset and by comparing the results obtained from different databases with different selection criteria. In addition, we have chosen to consider systems as units, unlike the previous studies, which treated each planet separately.
When comparing our investigation to the previous ones, it should be noted that we, of course, share a great part of our data sample, and although the larger dataset in our analysis has allowed for a more restrictive debiasing process, all our analyses are affected by the basic limitations of the RV technique (biased towards larger planets) and the transit technique (biased towards ultra-small orbits).
The fact that we include all planets regardless of detection method has shown us that similar eccentricity-multiplicity correlations can be found for the full sample and for the RV and transit subsamples respectively, though with slightly different fits as discussed above. Explicitly, we also studied the eccentricity-multiplicity correlation separately for subsamples of hot-Jupiters, cold-Jupiters, and super-Earths, and found that these subsamples also followed the same general tendency. This shows that the correlation is not solely caused by the giant planets that currently dominate our observations, or by planets at very short periods, but might also apply to the population of planets that have yet to be observed, with smaller masses and at larger periods.\\
A correlation between orbit eccentricity and multiplicity is supported by several other studies.
Surveys conducted by \citet{howard2013observed} and \citet{wright2009ten} found lower orbit eccentricities among planets in multi-planet systems compared to single planets. \citet{howard2013observed} suggests that the trend could be due to severe planet-planet scattering in the existing single-planet systems, where giant planets have excited the eccentricities of their previous companions before ejecting them. Multi-planet systems have had fewer scattering events (otherwise they would no longer be multi-planet), and have thereby been allowed to stay in stable low-eccentricity orbits. \citet{wright2009ten} argues that multi-planet systems will naturally favour low-eccentricity orbits because of the need for high orbital stability in the system. The stability of multi-planet systems was studied further by \cite{2017AJ....153..210H}, who found that a single outer high-eccentricity giant planet would greatly affect the stability of an inner system, by reducing the multiplicity and exciting the eccentricities of the remaining planets.
Both \citet{wright2009ten} and \citet{2007DDA....38.1501C} support the theory of single high-eccentricity planets as the result of ejected companions. The ejection of planets from planetary systems has been confirmed by \citet{mroz2017no}, who, from an analysis of short-duration events in 6 years of microlensing data, found free-floating planets of both Jupiter and Earth size, although they also conclude that the abundance of free-floating planets is small and can therefore only account for the eccentricity of a small fraction of the M1 systems. A study by \cite{xie2016exoplanet} has also reported lower eccentricities in multi-planet systems. This study measured the eccentricity distribution of a sample of transit planets using transit duration statistics, and found that single planets in general show eccentricities of $e\approx 0.3$, whereas their multi-planet counterparts have average eccentricities of $e\approx0.04$. \cite{xie2016exoplanet} found all planets from multi-planet systems to follow similar eccentricity distributions, and so found no general correlation between eccentricity and multiplicity. \\
Several studies have suggested that the correlation between eccentricity and multiplicity originates in an underlying correlation between multiplicity and the stability of the system, or the angular momentum deficit (AMD) \citep{laskar2017amd,turrini2020normalized,he2020architectures}. In their study, \citet{he2020architectures} recreate the multiplicity distribution observed in the Kepler data, using a forward model, by looking at the AMD-stability limit. They find that the median eccentricities as a function of multiplicity follow a power law correlation for all multiplicities from one to ten. Their model predicts that intrinsic single-planet systems have higher eccentricities than multi-planet systems, whereas most observed single-planet systems contain yet undiscovered planets, similar to our assumptions in Sect.\, \ref{sec:meanM}. Like previous studies, \citet{he2020architectures} argued that the correlation between intrinsic multiplicity and the eccentricity of the systems is caused by the fact that the AMD-stability criterion puts strong demands on the total system AMD and minimum system period ratio, in order for no planet orbits to cross and thereby destabilize the system.
The eccentricity-multiplicity anti-correlation is opposed by \citet{bryan2016statistics} and \citet{dong2013warm}, who found lower eccentricities among single planets compared to planets with outer companions. Both surveys mainly focus on jovian planets, \citet{dong2013warm} solely on warm-Jupiters with jovian companions. \cite{dong2013warm} suggest that their results indicate that planet-planet interactions are not the dominating mechanism for creating short-period jovian planets, as opposed to the suggestions by several other studies \citep{rasio1996dynamical,marzari2002eccentric,2007DDA....38.1501C,nagasawa2008formation}.
As argued by \citet{bryan2016statistics}, a significant uncertainty is involved in the investigation by \citet{2015PNAS..112...20L}, and some of this applies to our study as well. Many of the included planets have small semi-major axes (the majority within 1 AU), and the low eccentricities found in high-multiplicity systems might reflect the fact that systems this closely packed would not be able to remain stable at higher eccentricities. With our larger data sample we have found similar correlations for the RV and transit subsamples, which lowers the probability that the correlation is caused by observational biases. \cite{bryan2016statistics} further emphasize the uncertainty related to the fact that \cite{2015PNAS..112...20L} do not account for the individual errors of each of the listed eccentricities, which could also pose an uncertainty for this study.
Since we have not included the listed uncertainties of the eccentricities of each individual planet, we have not accounted for the uncertainty involved in the estimation of the orbital eccentricities of exoplanets. In addition, previous studies have found that many eccentricities are systematically overestimated \citep{shen2008eccentricity}, and that some seemingly high-eccentricity single planets can turn out to have an unknown outer companion that artificially increases their estimated eccentricity \citep{fischer2001planetary}. The latter fits our eccentricity-multiplicity correlation, with a decrease in eccentricity for an increasing number of known planets; it does, however, represent an uncertainty in our calculated model.
Unlike this study, the study by \cite{2017A&A...605L...4Z} did account for the uncertainties related to the eccentricity measurements. When calculating the mean eccentricities, \cite{2017A&A...605L...4Z} weighted their data with one over the listed uncertainties, which resulted in a steeper curve for the power law correlation compared to unweighted data. Especially the M2 systems seemed to differ between the weighted and unweighted data by having a significantly higher mean eccentricity in the weighted sample. They did not give an explanation as to why the low-eccentricity M2 planets should have generally higher uncertainties. In this study we find that the M2 systems have eccentricities that fit the general eccentricity-multiplicity correlation for $M > 1$ without correction for the uncertainties. In our analysis only the M1 systems fall substantially below the power law fit, but since no M1 systems were included in the analysis by \cite{2017A&A...605L...4Z} we are not able to compare this trend to their results.
\section{Conclusion} \label{sec:Con}
During this study we have investigated the correlation between orbital eccentricity and multiplicity for 1171 planets distributed in 895 systems listed in the database \textit{exoplanet.eu}.
We found a strong correlation between average eccentricity and multiplicity for all systems with two or more planets, which could be expressed as $e(M)=0.429\cdot M^{-0.93}$ (Eq.\,\ref{eq:e(M)}). The Solar System fits this trend, without being included in the making of the power law, whereas the average eccentricity of the observed M1 systems was markedly lower than predicted from Eq.\,\ref{eq:e(M)}. It is not unexpected from standard core-accretion theory that the M2 to M$_{\rm max}$ systems fit the same power law distribution, but it is surprising that the M1 systems fall substantially below the correlation.
The eccentricity-multiplicity correlation was investigated for a number of different subsamples, in order to explore the stability of the power law correlation and investigate possible explanations for the deviating M1 average eccentricity. All subsamples show the same general pattern, with all multiplicities fitting a power law correlation well, except for the M1 systems, which have consistently lower eccentricities. The analyzed subsamples include different planet populations (hot-Jupiter, cold-Jupiter, and super-Earth systems) and planets detected by the RV and transit methods respectively, among others.
In order to investigate some of the implications of the power law trend, we speculated on the potential consequences if the trend found for $M>1$ in reality applies to all multiplicities. Following the idea that Eq.\, \ref{eq:e(M)} describes the true eccentricity-multiplicity correlation, we assumed that the seemingly low eccentricities of the M1 systems were caused by a combination of some systems having been circularized through planet-star interactions, and others containing yet undiscovered planets. Correcting for these assumptions, a probability distribution over the different multiplicities was expressed by Eq.\, \ref{eq:prob2}, and based on this the mean multiplicity among the observed systems was estimated to be $\langle M \rangle \approx 2.5$, while the probability of a system having eight planets was $\sim 1\%$.
It is not surprising that the probability of finding high-multiplicity systems comes out this low; after all, there are very few known exoplanetary systems with more than 6 planets. It is, however, reassuring that the average number of planets in a ``standard'' exoplanet system in our Galaxy comes out very close to the number predicted independently from numerical simulations of planetesimal collisions \citep{2008ApJ...686..603J}, and that the probability of finding Solar System-like multi-planet systems comes out close to recent independent predictions from dynamical simulations \citep{raymond2018solar}.
This indicates that the orbit eccentricities of the Solar System planets are not unusually low when the multiplicity of the system is taken into account, but rather that the number of planets in our Solar System is unusually high. The rarity of the large number of planets in our Solar System, and the corresponding low value of the orbital eccentricities, raises the simple and central, but speculative, question: ``Is there a connection between the high number of planets in our Solar System and the fact that we are here?''\\
\section*{Acknowledgments}
This research has made use of The Extrasolar Planets Encyclopaedia at \textit{exoplanet.eu} and the Exoplanet Orbit Database and the Exoplanet Data Explorer at \textit{exoplanets.org} .
We are thankful for clarifying discussions with F. Roques about the selection criteria used by \textit{exoplanet.eu} and with J. Wright about the selection criteria used by \textit{exoplanets.org}. We acknowledge funding from the European Union H2020-MSCA-ITN-2019 under Grant no. 860470 (CHAMELEON) and from the Novo Nordisk Foundation Interdisciplinary Synergy Program grant no. NNF19OC0057374.
We are grateful to an anonymous referee, whose valuable input improved the analyses and argumentation throughout the paper.
\section*{Data Availability}
The data underlying this article are available in The Extrasolar Planets Encyclopaedia, at \url{http://exoplanet.eu/catalog/}.
\bibliographystyle{mnras}
\section{Introduction}
Our object is a superposition of fundamental solutions for the \(p\)-Laplace Equation
\begin{equation}
\Delta_p u := \di \left(|\nabla u|^{p-2}\nabla u\right) = 0.
\label{int.eq}
\end{equation}
Although the equation is non-linear, the function
\[V(x) = \int_{\mathbb{R}^n}\frac{\rho(y)}{|x-y|^\frac{n-p}{p-1}}\dd y,\qquad \rho\geq 0,\quad 2\leq p < n\]
is a supersolution in \(\mathbb{R}^n\), i.e. \(\Delta_p V\leq 0\) in the sense of distributions. It is a so-called \(p\)-superharmonic function -- see Definition \ref{def.psup} on page \pageref{def.psup} -- according to which it has to obey the comparison principle.
The case \(p=2\) reduces to the Laplace Equation \(\Delta u = 0\) with the Newtonian potential
\[V(x) = \int_{\mathbb{R}^n}\frac{\rho(y)}{|x-y|^{n-2}}\dd y,\]
which is a superharmonic function.
M. Crandall and J. Zhang discovered in \cite{crandall2003} that the sum
\[\sum_{i=1}^N\frac{a_i}{|x-y_i|^\frac{n-p}{p-1}},\qquad a_i>0\]
of fundamental solutions is a \(p\)-superharmonic function. Their proof was written in terms of viscosity supersolutions. A different proof was given in \cite{lindqvist2008}. The purpose of our note is a \emph{simple} proof of the following theorem:
\newpage
\begin{theorem}\label{int.mainthm}
Let \(2\leq p < n\). For an arbitrary concave function \(K\),
\begin{equation}
W(x) := \sum_{i=1}^\infty \frac{a_i}{|x-y_i|^\frac{n-p}{p-1}} + K(x),\qquad y_i\in\mathbb{R}^n,\, a_i \geq 0,
\label{int.main}
\end{equation}
is \(p\)-superharmonic in \(\mathbb{R}^n\), provided the series converges at some point.
\end{theorem}
Through Riemann sums one can also include potentials like
\[\int_{\mathbb{R}^n}\frac{\rho(y)}{|x-y|^\frac{n-p}{p-1}}\dd y + K(x),\qquad \rho\geq 0.\]
Similar results are given for the cases \(p=n\) and \(p>n\) and, so far as we know, the extra concave term \(K(x)\) is a new feature. The key aspect of the proof is the explicit formula \eqref{sup.sign} for the \(p\)-Laplacian of the superposition. Although the formula is easily obtained, it seems to have escaped attention up until now.
Finally, we mention that in \cite{garofalo2010} the superposition of fundamental solutions has been extended to the \(p\)-Laplace Equation in the Heisenberg group. (Here one of the variables is discriminated.) In passing, we show in Section \ref{ep} that similar results are \emph{not} valid for the evolutionary equations
\[\frac{\partial}{\partial t}u = \Delta_p u\qquad\text{and}\qquad \frac{\partial}{\partial t}(|u|^{p-2}u) = \Delta_p u\]
where \(u = u(x,t)\).
We are able to bypass a lengthy calculation in our counterexamples.
\section{The fundamental solution}
Consider a radial function, say
\[f(x) = v(|x|)\]
where we assume that \(v\in C^2(0,\infty)\). By differentiation
\begin{align}\label{rad.formulas}
\nabla f &= \frac{v'}{|x|}x^T, & |\nabla f| &= |v'|,\\ \notag
\mathcal{H}f &= v''\frac{xx^T}{|x|^2} + \frac{v'}{|x|}\left(I - \frac{xx^T}{|x|^2}\right), & \Delta f &= v'' + (n-1)\frac{v'}{|x|},
\end{align}
when \(x\neq0\).
The Rayleigh quotient formed by the Hessian matrix \(\mathcal{H}f = \left[\frac{\partial^2f}{\partial x_i\partial x_j}\right]\) above will play a central role. Notice that for any non-zero \(z\in\mathbb{R}^n\), we have that
\[\frac{z^T}{|z|}\frac{xx^T}{|x|^2}\frac{z}{|z|} = \cos^2\theta\]
where \(\theta\) is the angle between the two vectors \(x\) and \(z\). This yields the expedient formula
\begin{equation}
\frac{z^T(\mathcal{H}f) z}{|z|^2} = v''\cos^2\theta + \frac{v'}{|x|}\sin^2\theta,\qquad x,z\neq 0.
\label{obs.ray}
\end{equation}
Since the gradient of a radial function is parallel to \(x\), the Rayleigh quotient in the identity
\begin{equation}
\di \left(|\nabla f|^{p-2}\nabla f\right) = |\nabla f|^{p-2}\left((p-2)\frac{\nabla f (\mathcal{H}f) \nabla f^T}{|\nabla f|^2} + \Delta f\right)
\label{int.id}
\end{equation}
reduces to \(v''\). The vanishing of the whole expression is then equivalent to
\begin{equation}
(p-1)v'' + (n-1)\frac{v'}{|x|} = 0
\label{fund.lap}
\end{equation}
which, after rewriting as \(\frac{v''}{v'} = -\frac{n-1}{p-1}\frac{1}{|x|}\) and integrating once, implies that a radially decreasing solution \(w\) is of the form
\begin{equation}
w(x) = v(|x|)\qquad \text{where}\qquad v'(|x|) = -c|x|^\frac{1-n}{p-1}.
\label{fund.sol}
\end{equation}
The constant \(c = c_{n,p}>0\) can now be chosen so that
\[\Delta_p w + \delta = 0\]
in the sense of distributions. Thus
\begin{equation}
w(x) =
\begin{cases}
-c_{n,p}\frac{p-1}{p-n}|x|^\frac{p-n}{p-1}, &\text{when } p\neq n,\\
-c_{n,n}\ln|x|, &\text{when } p = n
\end{cases}
\label{fund.sol2}
\end{equation}
is the \textbf{fundamental solution} to the \(p\)-Laplace Equation \eqref{int.eq}.
\section{Superposition of fundamental solutions}\label{sup}
We now form a superposition of translates of the fundamental solution and compute its \(p\)-Laplacian. To avoid convergence issues all sums are, for the moment, assumed finite.
\begin{lemma}\label{sup.thm}
Let \(w\) be the fundamental solution to the \(p\)-Laplace equation. Define the function \(V\) as
\begin{equation}
V(x) := \sum_{i=1}^N a_i w(x-y_i),\qquad a_i>0,\;y_i\in\mathbb{R}^n.
\label{sup.lincomb}
\end{equation}
Then, in any dimension and for any \(p\neq1\)
\footnote{When \(p=1\) there are no non-constant radial solutions of \eqref{int.eq}. Instead we get the zero mean curvature equation in which a solution's level sets are minimal surfaces.},
\(\Delta_pV\) is of the same sign wherever it is defined in \(\mathbb{R}^n\). Furthermore, the dependence of the sign on \(p\) and \(n\) is as indicated in figure \ref{supfig}.
\end{lemma}
\begin{figure}[ht]
\includegraphics{supfig}
\caption{\textcolor{blue}{\(\Delta_p V\leq0\)}, \textcolor{green}{\(\Delta_p V=0\)}, \textcolor{red}{\(\Delta_p V\geq0\)}}
\label{supfig}
\end{figure}
\begin{proof}
We simplify the notation by letting \(w_i\) and \(v_i\) denote that the functions \(w\) and \(v\) are to be evaluated at \(x-y_i\) and \(|x-y_i|\), respectively.
First, the linearity of the Hessian and the Laplacian enable us to write
\begin{align*}
\Delta_p V &= |\nabla V|^{p-2}\left((p-2)\frac{\nabla V(\mathcal{H} V)\nabla^T V}{|\nabla V|^2} + \Delta V\right)\\
&= |\nabla V|^{p-2}\sum_{i=1}^N a_i\left((p-2)\frac{\nabla V(\mathcal{H}w_i)\nabla^T V}{|\nabla V|^2} + \Delta w_i\right).\\
\intertext{Secondly, by \eqref{rad.formulas} and \eqref{obs.ray} this is}
&= |\nabla V|^{p-2}\sum_{i=1}^N a_i \left((p-2)\Big(v_i''\cos^2\theta_i + \frac{v_i'}{|x-y_i|}\sin^2\theta_i\Big)\right.\\
& \qquad\qquad\qquad\qquad{}+ \left.v_i'' + (n-1)\frac{v_i'}{|x-y_i|}\right)\\
&= |\nabla V|^{p-2}\sum_{i=1}^N a_i\left((p-2)\Big(\frac{v_i'}{|x-y_i|} - v_i''\Big)\sin^2\theta_i\right.\\
& \qquad\qquad\qquad\qquad{}+ \left.(p-1)v_i'' + (n-1)\frac{v_i'}{|x-y_i|}\right)\\
\intertext{where \(\theta_i\) is the angle between \(x-y_i\) and \(\nabla V(x)\). And finally, as \(w\) is a fundamental solution, the last two terms disappear by \eqref{fund.lap}. We get }
\Delta_p V &= (p-2)|\nabla V|^{p-2}\sum_{i=1}^N a_i\left(\frac{v_i'}{|x-y_i|} - v_i''\right)\sin^2\theta_i.
\end{align*}
It only remains to use the formula \eqref{fund.sol} for \(v'_i\) to compute that
\[\frac{v_i'}{|x-y_i|} - v_i'' = -c_{n,p}\frac{p+n-2}{p-1}|x-y_i|^\frac{2-n-p}{p-1}\]
and the sign of \(\Delta_p V\) can easily be read off the final identity
\begin{equation}
\Delta_p V(x) = -c_{n,p}\tfrac{(p-2)(p+n-2)}{p-1}|\nabla V|^{p-2}\sum_{i=1}^N a_i\frac{\sin^2\theta_i}{|x-y_i|^\frac{p+n-2}{p-1}}.
\label{sup.sign}
\end{equation}
\end{proof}
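The differentiation behind \eqref{sup.sign} is elementary but easy to get wrong. The following short independent check of the identity for \(v_i'/|x-y_i| - v_i''\), with a few concrete values of \(n\) and \(p\) chosen by us, confirms the computation:

```python
import sympy as sp

r, c = sp.symbols('r c', positive=True)

# Verify v'/r - v'' = -c*(p+n-2)/(p-1) * r**((2-n-p)/(p-1))
# with v'(r) = -c*r**((1-n)/(p-1)), for a few concrete (n, p).
for n, p in [(3, sp.Rational(5, 2)), (4, sp.Integer(3)), (2, sp.Integer(4))]:
    vp = -c * r**((1 - n) / (p - 1))
    vpp = sp.diff(vp, r)
    target = -c * (p + n - 2) / (p - 1) * r**((2 - n - p) / (p - 1))
    assert sp.simplify(vp / r - vpp - target) == 0
```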
\begin{remark}
The three green lines in figure \ref{supfig} deserve some attention. The line \(p=2\) is obvious since the equation becomes linear. So is the line \(n=1\) as the ``angle'' between two numbers is 0 or \(\pi\). The little surprise, perhaps, is the case \(p+n = 2\). Then the terms in \( V\) will be of the form \(a_i|x-y_i|^2\) and it all reduces to the rather unexciting explanation that a linear combination of quadratics is again a quadratic.
\end{remark}
\section{Adding more terms}\label{add}
We will now examine what will happen to the sign of the \(p\)-Laplace operator when an extra term, \(K(x)\), is added to the linear combination \eqref{sup.lincomb}. We will from now on only consider \(p>2\). Restricted to this case, the factor \(C_{n,p} := c_{n,p}\frac{(p-2)(p+n-2)}{p-1}\) in \eqref{sup.sign} stays positive.
Let \( V\) be as in Lemma \ref{sup.thm} and let \(K\in C^2\). For efficient notation, write \(\xi = \xi(x) := \nabla V(x) + \nabla K(x)\). Then
\begin{align*}
\Delta_p( V + K) &= |\xi|^{p-2}\left((p-2)\frac{\xi\mathcal{H}( V + K)\xi^T}{|\xi|^2} + \Delta(V + K)\right)\\
&= |\xi|^{p-2}\left((p-2)\frac{\xi(\mathcal{H} V)\xi^T}{|\xi|^2} + \Delta V\right)\\
&\quad{}+ |\xi|^{p-2}\left((p-2)\frac{\xi(\mathcal{H}K)\xi^T}{|\xi|^2} + \Delta K\right).
\end{align*}
Now, the second to last term equals
\[-C_{n,p}|\xi|^{p-2}\sum_ia_i|x-y_i|^\frac{2-n-p}{p-1}\sin^2\alpha_i \leq 0\]
where \(\alpha_i\) is the angle between \(x-y_i\) and \(\nabla V(x) + \nabla K(x)\). Thus it suffices to ensure that the last term is also non-positive in order for the \(p\)-Laplacian to keep its sign.
Lemma \ref{add.thm} presents a sufficient condition.
\begin{lemma}\label{add.thm}
Let \(p>2\) and define \(V\) as in \eqref{sup.lincomb}.
Then
\begin{equation}
\Delta_p( V(x) + K(x)) \leq 0
\label{eq:}
\end{equation}
for all concave functions \(K\in C^2(\mathbb{R}^n)\) wherever the left-hand side is defined.
\end{lemma}
\begin{proof}
\(z^T(\mathcal{H}K)z \leq 0\) for all \(z\in\mathbb{R}^n\) since the Hessian matrix of a concave function \(K\) is negative semi-definite. Also \(K\) is superharmonic since the eigenvalues of \(\mathcal{H}K\) are all non-positive, i.e. \(\Delta K \leq 0\). Therefore,
\[\Delta_p( V(x) + K(x)) \leq |\xi|^{p-2}\left((p-2)\frac{\xi(\mathcal{H}K)\xi^T}{|\xi|^2} + \Delta K\right) \leq 0.\]
\end{proof}
\begin{remark}
Though \(K\in C^2\) being concave is sufficient, it is not necessary. A counter example is provided by the quadratic form
\[K(x) = \frac{1}{2}x^TAx, \qquad\text{where } A=\diag(1-m,1,\dots,1),\; m=p+n-2.\]
Then \(K\) is not concave, but a calculation will confirm that \((p-2)\frac{\xi(\mathcal{H}K)\xi^T}{|\xi|^2} + \Delta K \leq 0\) and hence \(\Delta_p(V+K)\leq 0\). In fact, a stronger result than Lemma \ref{add.thm} is possible: Let \(f_i\) be \(C^2\) at \(x\) for \(i=1,\dots,N\) and let
\[\lambda_1^i \leq \lambda_2^i \leq \cdots \leq \lambda_n^i\]
be the eigenvalues of the Hessian matrix \(\mathcal{H}f_i(x)\). If
\[\lambda_1^i + \cdots + \lambda_{n-1}^i + (p-1)\lambda_n^i \leq 0\qquad \forall\, i,\]
then \(\Delta_p\left(\sum_i f_i\right) \leq 0\) at \(x\).
\end{remark}
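The inequality claimed for the quadratic-form counter example is also easy to probe numerically. The following quick random test (with \(n=4\) and \(p=3.5\), an illustrative choice of ours) checks that \((p-2)\frac{\xi(\mathcal{H}K)\xi^T}{|\xi|^2} + \Delta K \leq 0\):

```python
import numpy as np

n, p = 4, 3.5                            # illustrative choice
m = p + n - 2
A = np.diag([1 - m] + [1.0] * (n - 1))   # Hessian of K(x) = x^T A x / 2

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(10_000):
    xi = rng.standard_normal(n)          # stand-in for the gradient xi
    q = xi @ A @ xi / (xi @ xi)          # Rayleigh quotient of H K
    worst = max(worst, (p - 2) * q + np.trace(A))
assert worst <= 1e-9                     # (p-2)*q + Delta K <= 0
```

Note that \(A\) has a positive eigenvalue, so \(K\) is indeed not concave, yet the tested quantity never becomes positive.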
\section{\(p\)-superharmonicity}\label{psup}
We now prove that
\[W(x) :=\sum_{i=1}^\infty a_i w(x-y_i) + K(x),\qquad a_i \geq 0,\, y_i\in\mathbb{R}^n,\quad K \text{ concave}\]
is a \(p\)-superharmonic function in \(\mathbb{R}^n\).
The three cases \(2<p<n\), \(p=n\) and \(p>n\) are different and an additional assumption, \eqref{sup.assumption}, seems to be needed when \(p\geq n\). In the first case, only convergence at one point is assumed. We start with the relevant definitions and a useful Dini-type lemma.
\begin{definition}
Let \(\Omega\) be a domain in \(\mathbb{R}^n\). A continuous function \(h\in W^{1,p}_{loc}(\Omega)\) is \textbf{\(p\)-harmonic} if
\begin{equation}
\int|\nabla h|^{p-2}\nabla h\nabla\phi^T\dd x = 0
\label{psup.pharmdef}
\end{equation}
for each \(\phi\in C_0^\infty(\Omega)\).
\end{definition}
\begin{definition}\label{def.psup}
A function \( u \colon \Omega \rightarrow (-\infty,\infty]\) is \textbf{\(p\)-superharmonic} in \(\Omega\) if
\begin{enumerate}[i)]
\item \( u \not\equiv \infty\).
\item \( u\) is lower semi-continuous in \(\Omega\).
\item If \(D\subset\subset\Omega\) and \(h\in C(\overline{D})\) is \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq u\big|_{\partial D}\), then \(h\leq u\) in \(D\).
\end{enumerate}
\end{definition}
Furthermore, if \(u\in C^2(\Omega)\), it is a standard result that \(u\) is \(p\)-harmonic if and only if \(\Delta_p u = 0\) and \(u\) is \(p\)-superharmonic if and only if \(\Delta_p u\leq0\).
Also, a function \(u\) in \(C(\mathbb{R}^n)\cap W^{1,p}_{loc}(\mathbb{R}^n)\) is \(p\)-superharmonic if
\begin{equation}
\int_{\mathbb{R}^n}|\nabla u|^{p-2}\nabla u \nabla \phi^T\dd x \geq 0
\label{psup.inteq}
\end{equation}
for all \(0\leq \phi\in C^\infty_0(\mathbb{R}^n)\). See \cite{Lindqvist1986}.
\begin{lemma}\label{dini}
Let \((f_N)\) be an increasing sequence of lower semi-continuous (l.s.c.) functions defined on a compact set \(C\) converging point-wise to a function \(f\geq 0\). Then, given any \(\epsilon > 0\) there is an \(N_\epsilon\in\mathbb{N}\) such that
\[f_N(x) > - \epsilon\]
for all \(x\in C\) and all \(N\geq N_\epsilon\).
\end{lemma}
The standard proof is omitted.
In the following, \(K\) is any concave function in \(\mathbb{R}^n\). We let \(K_\delta,\;\delta>0\) denote the smooth convolution \(\phi_\delta * K\) with some mollifier \(\phi_\delta\). One can show that
\(K_\delta\) is concave and
\[K_\delta \to K\]
locally uniformly on \(\mathbb{R}^n\) as \(\delta\to 0^+\).
\subsection{The case 2\(<\)p\(<\)n}
Let \(\delta>0\). If \(y_i\in\mathbb{R}^n\) and \(a_i>0\), the function
\[W^\delta_N(x) := \sum_{i=1}^N\frac{a_i}{|x-y_i|^\frac{n-p}{p-1}} + K_\delta(x)\]
is \(p\)-superharmonic except possibly at the poles \(y_i\) (Lemma \ref{add.thm}).
Defining \(W_N^\delta(y_i) := \infty\), \textbf{we claim that \(W^\delta_N\) is \(p\)-superharmonic in the whole \(\mathbb{R}^n\).}
We have to verify Def. \ref{def.psup}. Clearly, i) and ii) are valid. For the comparison principle in iii) we select \(D\subset\subset\mathbb{R}^n\) (i.e. \(D\) is bounded) and let \(h\in C(\overline{D})\) be \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq W^\delta_N\big|_{\partial D}\).
If any, isolate the points \(y_i\) in \(\overline{D}\) with \(\epsilon\)-balls \(B_i := B(y_i,\epsilon)\) where \(\epsilon>0\) is so small that \(W^\delta_N\big|_{B_i} \geq \max_{\overline{D}}h\).
This is possible because \(h\) is bounded and because \(\lim_{x\to y_i} W^\delta_N(x) = \infty\). Then \(W^\delta_N\) is \(C^2\) on \(D\setminus \cup B_i\) so, by Lemma~\ref{add.thm}, \(\Delta_p W^\delta_N\leq0\) on this set. Also, \(h\big|_{\partial(D\setminus\cup B_i)} \leq W^\delta_N\big|_{\partial(D\setminus\cup B_i)}\) by the construction of the \(\epsilon\)-balls, so \(h\leq W^\delta_N\) on this set since \( W^\delta_N\) is \(p\)-superharmonic there. Naturally, \(h\leq W^\delta_N\) on \(\cup B_i\), so the inequality will hold in the whole domain \(D\). This proves the claim.
Now \(N\to\infty\). Assume that the limit function
\[W^\delta(x) := \sum_{i=1}^\infty\frac{a_i}{|x-y_i|^\frac{n-p}{p-1}} + K_\delta(x)\]
is finite at least at one point in \(\mathbb{R}^n\). \textbf{We claim that \(W^\delta\) is \(p\)-superharmonic.}
By assumption \( W^\delta\not\equiv\infty\) and it is a standard result that the limit of an increasing sequence of l.s.c. functions is l.s.c.
Part iii). Suppose that \(D\subset\subset\mathbb{R}^n\) and \(h\in C(\overline{D})\) is \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq W^\delta\big|_{\partial D}\). Then \((W^\delta_N - h)\) is an increasing sequence of l.s.c. functions on the compact set \(\partial D\) with point-wise limit \((W^\delta-h)\big|_{\partial D}\geq 0\). If \(\epsilon>0\), then \((W^\delta_N - h)\big|_{\partial D} > - \epsilon\) for a sufficiently big \(N\) by Lemma \ref{dini}. That is
\[(h-\epsilon)\big|_{\partial D} < W^\delta_N\big|_{\partial D}\]
so \((h-\epsilon)\big|_{D} \leq W^\delta_N\big|_{D}\) since \(h-\epsilon\) is \(p\)-harmonic and \(W^\delta_N\) is \(p\)-superharmonic. Finally, since \(W^\delta_N\leq W^\delta\) we get
\[(h-\epsilon)\big|_D \leq W^\delta\big|_D\]
and as \(\epsilon\) was arbitrary, the required inequality \(h\leq W^\delta\) in \(D\) is obtained and the claim is proved.
Let \(\delta\to 0\) and set
\[W(x) := \sum_{i=1}^\infty\frac{a_i}{|x-y_i|^\frac{n-p}{p-1}} + K(x).\]
\textbf{We claim that \(W\) is \(p\)-superharmonic.}
Part i) and ii) are immediate. For part iii), assume \(D\subset\subset\mathbb{R}^n\) and \(h\in C(\overline{D})\) is \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq W\big|_{\partial D}\). Let \(\epsilon>0\). Then there is a \(\delta>0\) such that
\[|K(x)-K_\delta(x)| < \epsilon\]
at every \(x\in\overline{D}\). We have
\[W^\delta = W + K_\delta - K > W - \epsilon \geq h - \epsilon\]
on \(\partial D\).
And again, since \(h-\epsilon\) is \(p\)-harmonic and \(W^\delta\) is \(p\)-superharmonic, we get \(W^\delta \geq h-\epsilon\) in \(D\). Thus
\[W\big|_D \geq W^\delta\big|_D-\epsilon \geq h\big|_D - 2\epsilon.\]
This proves the claim, settles the case \(2<p<n\) and completes the proof of Theorem \ref{int.mainthm}.
\bigskip
We now turn to the situation \(p\geq n\) and introduce the assumption
\begin{equation}
A := \sum_{i=1}^\infty a_i<\infty.
\label{sup.assumption}
\end{equation}
\subsection{The case p=n}
Let \(\delta>0\). The partial sums
\[W^\delta_N(x) := -\sum_{i=1}^N a_i\ln|x-y_i| + K_\delta(x)\]
are \(p\)-superharmonic in \(\mathbb{R}^n\) by the same argument as in the case \(2<p<n\).
Let \(N\to\infty\).
\textbf{We claim that}
\[W^\delta(x) := -\sum_{i=1}^\infty a_i\ln|x-y_i| + K_\delta(x)\]
\textbf{is \(p\)-superharmonic in \(\mathbb{R}^n\)} provided the sum converges absolutely\footnote{Conditional convergence is not sufficient. A counter example is \(a_i = ~1/i^2\), \(|y_i| = ~\exp((-1)^i i)\), yielding \(W^\delta(x) = -\infty\) for all \(y_i\neq x\neq0\). } at least at one point.
Assume for the moment that, given a radius \(R>0\), it is possible to find numbers \(C_i\) so that
\begin{equation}
\begin{gathered}
\ln|x-y_i| \leq C_i \text{ for all } x \in B_R := B(0,R), \text {and}\\
\text{the series } \sum_{i=1}^\infty a_i C_i =: S_R \text{ converges.}
\end{gathered}
\label{sup.c}
\end{equation}
Define the sequence \((f_N)\) in \(B_R\) by
\[f_N(x) := \sum_{i=1}^N\big(-a_i\ln|x-y_i| + a_iC_i\big) + K_\delta(x),\qquad f(x) := \lim_{N\to\infty}f_N(x).\]
Then \((f_N)\) is an increasing sequence of l.s.c. functions implying that \(f\) is l.s.c. in \(B_R\) and that
\[W^\delta = f - S_R\]
is as well. Since \(R\) can be arbitrarily big, we conclude that \(W^\delta\) does not take the value \(-\infty\) and is l.s.c. in \(\mathbb{R}^n\).
For part iii) we show that \(f\) obeys the comparison principle. Assume \(D\subset\subset B_R\) and \(h\in C(\overline{D})\) is \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq f\big|_{\partial D}\).
Then \((f_N - h)\) is an increasing sequence of l.s.c. functions on the compact set \(\partial D\) with point-wise limit
\[(f-h)\big|_{\partial D} \geq 0.\]
If \(\epsilon>0\), then \((f_N - h)\big|_{\partial D} > - \epsilon\) for a sufficiently big \(N\) by Lemma \ref{dini}. That is
\[(h-\epsilon)\big|_{\partial D} < f_N\big|_{\partial D}\]
so \((h-\epsilon)\big|_{D} \leq f_N\big|_{D}\) since \(h-\epsilon\) is \(p\)-harmonic and \(f_N\) is \(p\)-superharmonic. Finally, since \(f_N\leq f\) we get
\[(h-\epsilon)\big|_D \leq f\big|_D \]
and as \(\epsilon\) was arbitrary, the required inequality \(h\leq f\) in \(D\) is obtained. Hence \(W^\delta(x) = f(x) - S_R\) is a \(p\)-superharmonic function in any ball \(B_R\).
The claim is now proved if we can establish the existence of the numbers \(C_i\) satisfying \eqref{sup.c}.
By a change of variables we may assume that the convergence is at the origin. That is
\[L := \sum_{i=1}^\infty a_i|\ln|y_i|| < \infty.\]
We have
\begin{align*}
\ln|x-y_i| &\leq \ln(|x| + |y_i|)\\
&\leq \ln(2\max\{|x|,|y_i|\})\\
&= \max\{\ln|x|,\ln|y_i|\} + \ln 2,
\end{align*}
so
\[C_i := \max\{\ln R,\ln|y_i|\} + \ln 2\]
will do since (for \(R>1/2\)) the sequence of partial sums \(\sum_{i=1}^N a_i C_i\) is increasing and bounded by \(A\ln 2R + L\).
The final limit \(\delta\to 0\) causes no extra problems.
\[W(x) := -\sum_{i=1}^\infty a_i\ln|x-y_i| + K(x)\]
is \(p\)-superharmonic in \(\mathbb{R}^n\).
This settles the case \(p=n\).
\subsection{The case p\(>\)n}
Let \(\delta>0\). Consider again the partial sums
\[W^\delta_N(x) := -\sum_{i=1}^N a_i|x-y_i|^\frac{p-n}{p-1} + K_\delta(x).\]
As before \textbf{\(W^\delta_N\) is \(p\)-superharmonic in \(\mathbb{R}^n\)}, but now a different approach is required for the proof.
For ease of notation, write
\[u(x) := -\sum_{i=1}^N a_i|x-y_i|^\alpha + K(x),\qquad 0<\alpha := \frac{p-n}{p-1}<1,\]
where \(K\in C^\infty(\mathbb{R}^n)\) is concave.
We will show that \(u\) satisfies the integral inequality \eqref{psup.inteq}.
Clearly, \(u\) is continuous and \(\int_\Omega |u|^p\dd x < \infty\) on any bounded domain \(\Omega\). Also,
\[|\nabla (|x|^\alpha)|^p = \left|\alpha\frac{x^T}{|x|^{2-\alpha}}\right|^p \propto \frac{1}{|x|^{(1-\alpha)p}}\]
where, since \(1-\alpha = \frac{n-1}{p-1}\),
\[(1-\alpha)p = \frac{(n-1)p}{p-1} < n\]
precisely because \(p>n\).
Thus \(\int|\nabla u|^p\dd x < \infty\) locally so \(u \in C(\mathbb{R}^n)\cap W^{1,p}_{loc}(\mathbb{R}^n)\).
Let \(0\leq \phi\in C^\infty_0(\mathbb{R}^n)\) and write
\begin{align*}
\int_{\mathbb{R}^n}|\nabla u|^{p-2}\nabla u \nabla \phi^T\dd x &= \left(\int_{\mathbb{R}^n\setminus\cup_j B_j} + \int_{\cup_j B_j}\right)|\nabla u|^{p-2}\nabla u \nabla \phi^T\dd x\\
&=: I_\epsilon + J_\epsilon
\end{align*}
where \(B_j := B(y_j,\epsilon)\) and where \(\epsilon>0\) is so small that the balls are disjoint. Obviously, \(J_\epsilon\to 0\) as \(\epsilon\to 0\) but
\begin{align*}
I_\epsilon &= \int_{\partial(\mathbb{R}^n\setminus\cup_j B_j)}\phi |\nabla u|^{p-2}\nabla u\, \nu\dd\sigma - \int_{\mathbb{R}^n\setminus\cup_j B_j} \phi\Delta_p u\dd x\\
&\geq \int_{\cup_j\partial B_j}\phi |\nabla u|^{p-2}\nabla u\, \nu\dd\sigma
\end{align*}
since \(\Delta_p u\leq 0\) on \(\mathbb{R}^n\setminus\cup_j B_j\) by Lemma \ref{add.thm}. Here, \(\nu\) is a sphere's \emph{inward} pointing normal so, for \(x\in\partial B_i\),
\begin{align*}
\nabla u(x)\nu &= \nabla u(x)\frac{y_i-x}{\epsilon}\\
&= \left(-\alpha\sum_{j=1}^N a_j\frac{(x-y_j)^T}{|x-y_j|^{2-\alpha}} + \nabla K(x)\right)\frac{y_i-x}{\epsilon}\\
&= \frac{\alpha a_i}{\epsilon^{1-\alpha}} + \alpha \sum_{j\neq i} a_j\frac{(x-y_j)^T}{|x-y_j|^{2-\alpha}}\frac{x-y_i}{\epsilon} + \nabla K(x)\frac{y_i-x}{\epsilon}\\
&> \frac{\alpha a_i}{\epsilon^{1-\alpha}} - \frac{\alpha}{(d_i/2)^{1-\alpha}}\sum_{j\neq i} a_j - C_K,\qquad d_i := \min_{j\neq i}|y_j-y_i|\\
&> 0
\end{align*}
for \(\epsilon\) sufficiently small.
That is,
\[\int_{\mathbb{R}^n}|\nabla u|^{p-2}\nabla u \nabla \phi^T\dd x \geq 0\]
for all non-negative test-functions. The partial sums are therefore \(p\)-superharmonic functions.
Let \(N\to \infty\) and set
\[W^\delta(x) := -\sum_{i=1}^\infty a_i|x-y_i|^\alpha + K_\delta(x), \qquad \alpha := \frac{p-n}{p-1}\]
remembering the assumption \eqref{sup.assumption}.
This function is automatically \emph{upper} semi-continuous but as the definition of \(p\)-superharmonicity requires \emph{lower} semi-continuity, \emph{continuity} has to be shown.
\textbf{We claim that \(W^\delta\) is \(p\)-superharmonic in ~\(\mathbb{R}^n\)} provided the series converges at least at some point.
Again we may assume that the convergence is at the origin. That is \(\sum_{i=1}^\infty a_i|y_i|^\alpha < \infty\). Since \(0<\alpha<1\), we get
\begin{align*}
|x-y_i|^\alpha &\leq (|x| + |y_i|)^\alpha\\
&\leq |x|^\alpha + |y_i|^\alpha
\end{align*}
so since
\[\sum_{i=1}^\infty a_i|x-y_i|^\alpha \leq |x|^\alpha \sum_{i=1}^\infty a_i + \sum_{i=1}^\infty a_i|y_i|^\alpha < \infty\]
we see that \(W^\delta_N\to W^\delta\) locally uniformly in \(\mathbb{R}^n\). We infer that \(W^\delta\) is continuous in \(\mathbb{R}^n\).
For part iii), assume \(D\subset\subset\mathbb{R}^n\) and \(h\in C(\overline{D})\) is \(p\)-harmonic in \(D\) with \(h\big|_{\partial D}\leq W^\delta\big|_{\partial D}\). Since \(W^\delta\leq W^\delta_N\) and \(W^\delta_N\) is \(p\)-superharmonic we get \(h\big|_D\leq W^\delta_N\big|_D\) for all \(N\). So given any \(\epsilon>0\)
\[h\big|_D\leq W^\delta\big|_D + \epsilon\]
by uniformity on the bounded set \(D\). This proves the claim.
Next, let \(\delta\to 0\). \textbf{Then}
\[W(x) := -\sum_{i=1}^\infty a_i|x-y_i|^\frac{p-n}{p-1} + K(x)\]
\textbf{is \(p\)-superharmonic in \(\mathbb{R}^n\)}
by the same argument as when \(2<p<n\). This settles the case \(p>n\).
\section{Epilogue: Evolutionary superposition.}\label{ep}
The superposition of fundamental solutions has been extended to \(p\)-Laplace equations in the Heisenberg group, see \cite{garofalo2010}. When it comes to further extensions, a natural question is whether such a superposition is valid for the evolutionary \(p\)-Laplace equation
\begin{align}
u_t &= \Delta_p u,\\
\intertext{or for the homogeneous equation}
\frac{\partial}{\partial t}(|u|^{p-2}u) &= \Delta_p u.
\end{align}
The following shows that it is not.
In both cases \(p>2\) and \(u = u(x,t)\) where \(x\in\mathbb{R}^n\) and \(t>0\). The fundamental solutions to these equations
are given by
\[\mathcal{B}(x,t) := \frac{1}{t^{n\beta}}\left(C - \frac{p-2}{p}\beta^\frac{1}{p-1}\left(\frac{|x|}{t^\beta}\right)^\frac{p}{p-1}\right)_+^\frac{p-1}{p-2},\qquad \beta := \frac{1}{n(p-2) + p}\]
and
\[\mathcal{W}(x,t) := \frac{c}{t^{\frac{n}{p(p-1)}}}\exp\left(-\frac{p-1}{p}(1/p)^\frac{1}{p-1}\left(\frac{|x|}{t^{1/p}}\right)^\frac{p}{p-1}\right)\]
respectively, where the subscript \(+\) in the so-called Barenblatt solution \(\mathcal{B}(x,t)\) means \((\cdot)_+ = \max\{\cdot,0\}\). The \(C\) and \(c\) are positive constants chosen so that the solutions satisfy certain conservation properties.
For any fixed positive time the functions are \(C^2\) away from the origin and, in the case of \(\mathcal{B}\), away from the boundary of its support. We also notice that \(\mathcal{W}>0\) on \(\mathbb{R}^n\times(0,\infty)\) while \(\mathcal{B}\geq 0\) has compact support for any finite \(t\).
In some ways these functions are similar to the heat kernel. In particular, one can show that for any fixed \(0\neq y\in\mathbb{R}^n\) \emph{there is a time when the time derivatives} \(\mathcal{W}_t(y,t)\) and \(\mathcal{B}_t(y,t)\) change sign. In fact, a calculation will confirm that
\[\Delta_p(a\mathcal{B}) - (a\mathcal{B})_t = (a^{p-1} - a)\mathcal{B}_t,\qquad 0<a\neq 1\]
changes sign at \(y\) when
\[|y| = (Cpn)^\frac{p-1}{p}\beta^\frac{p-2}{p}t^\beta\]
showing that not even the simple superposition \(\mathcal{B} + \mathcal{B}\) holds.
This counter example relies on the fact that \(a\mathcal{B}\) is no longer a solution when \(a\neq 1\), and the same argument does not work when applied to \(\mathcal{W}\).
Although the \(p\)-Laplacian
\[\Delta_p u = |\nabla u|^{p-2}\left((p-2)\frac{\nabla u(\mathcal{H} u)\nabla u^T}{|\nabla u|^2} + \Delta u\right),\qquad p>2,\]
is not well defined at \(x_0\) if \(\nabla u(x_0)=0\), it can be continuously extended to zero if \(u\) is \(C^2\) at the critical point. We will thus write \(\Delta_p u(x_0)=0\) in those cases.
Fix a non-zero \(y\in\mathbb{R}^n\) and define the linear combination \(V\) as
\begin{equation}
V(x,t) := \mathcal{W}(x+y,t) + \mathcal{W}(x-y,t).
\label{comb}
\end{equation}
Since \(\mathcal{W}(x,t) =: f(|x|,t)\) is radial in \(x\), the gradient can be written as
\[\nabla \mathcal{W}(x,t) = f_1(|x|,t)\frac{x^T}{|x|}\]
and
\[V(0,t) = \mathcal{W}(y,t) + \mathcal{W}(-y,t) = 2\mathcal{W}(y,t).\]
Thus \(V\) is \(C^2\) at the origin and
\[\nabla V(0,t) = \left[ f_1(|x+y|,t)\frac{(x+y)^T}{|x+y|} + f_1(|x-y|,t)\frac{(x-y)^T}{|x-y|}\right]_{x=0} = 0\]
for all \(t>0\). So, at \(x=0\) we get
\begin{align*}
\frac{\partial}{\partial t}\left(|V|^{p-2}V\right) - \Delta_p V
&= (p-1)V^{p-2}V_t - 0\\
&= 2(p-1)(2\mathcal{W}(y,t))^{p-2}\mathcal{W}_t(y,t)
\end{align*}
which has the aforementioned change of sign at some time \(t\).
Thus the sum of the two fundamental solutions \(\mathcal{W}(x\pm y,t)\) cannot be a supersolution nor a subsolution.
\bibliographystyle{alpha}
\section{Introduction: T2K and the near detector}
T2K is a neutrino oscillation experiment that spans 295\,km across Japan \cite{t2kNIM}; the baseline is optimised to measure $\theta_{13}$ through electron neutrino ($\nu_e$) appearance in a predominantly muon neutrino ($\nu_{\mu}$) beam. The neutrino beam peak energy, $\sim 0.6 \,$GeV, coincides with the first $\nu_e$ appearance probability maximum and enables T2K to exclude $\theta_{13} = 0$ with an impressive significance of 7.3$\sigma$ \cite{t2knueapp2014}.
The far detector, Super-Kamiokande (SK), is a Cherenkov light detector situated at an off-axis angle of $2.5^{\circ}$ relative to the beam direction. Positioned 280\,m from the source, along the same axis as SK, is the near detector, ND280. This measures the flux, interaction rates and flavour content of the beam to constrain predictions at SK.
\paragraph{Electron neutrinos at T2K \\}
For $\nu_{\mu} \rightarrow \nu_e$ oscillation searches, the signal at SK is $\nu_e$ and the biggest background comes from the intrinsic $\nu_e$ component of the beam itself. The precision with which the $\nu_e$ cross-sections and intrinsic flux at SK are modelled consequently play a significant role in reducing the systematic errors of T2K oscillation results. Previously the $\nu_e\,$CC-inclusive cross section was measured with ND280 data \cite{ben_nueCC_xs}.
Charged current quasi elastic (CCQE) interactions dominate at the T2K beam peak energy. However, only particles that exit the nucleus may be detected, and since particles undergo final state interactions in the nucleus, it is only possible to measure events that appear CCQE-like. Pions, for example, may undergo scattering, absorption and charge exchange. For this reason we define our signal in terms of particles exiting the nucleus, and focus here on $\nu_e $ charged current (CC) events with no pions in the final state, $\nu_e \,$CC$\,0\pi$. The remaining CC events are labelled $\nu_e \,$CC-other; these form a significant part of the background since the separation is limited by detector reconstruction efficiency.
\paragraph{The near detector\\}
ND280 comprises multiple sub-detectors, as depicted in Figure \ref{fig:nd280} where `downstream' (`upstream') is defined as the +z (-z) direction. Fine Grained scintillator Detectors (FGDs) provide an active target mass, and their size is optimised such that there is a good chance the lepton will travel through the adjacent Time Projection Chamber (TPC) and possibly the Electromagnetic Calorimeters (ECals). One FGD has water layers between the scintillator to enable measurements on water, the target at SK. The 3D reconstructed TPC tracks are used to calculate the momentum and charge of particles as they travel in the magnetic field. Furthermore, the energy deposited as a function of distance gives excellent particle identification (PID) capabilities. In the ECal, distribution of charge is used for PID where track-like (muon) and shower-like (electron) objects are distinguished. FGDs/TPCs are numbered in the downstream direction, and upstream of these ND280 contains a $\pi^0$ detector (P$\emptyset$D). The magnet is instrumented with a side muon range detector (SMRD).
\begin{figure}[!ht]
\begin{center}
\begin{minipage}{0.40\textwidth}
\includegraphics[width=1\columnwidth]{nd28022.png}
\caption{Schematic of ND280}
\label{fig:nd280}
\end{minipage}
\hspace{24pt}
\begin{minipage}{0.36\textwidth}
\centering
\includegraphics[width=0.72\columnwidth]{ccqe_proton_recon.png}
\includegraphics[width=0.72\columnwidth]{cc_other3.png}
\caption{ND280 display of simulated $\nu_e$ CC0$\pi$ (top) and $\nu_e$ CC-other (bottom) events}
\label{fig:events}
\end{minipage}
\end{center}
\end{figure}
\section{ $\nu_e \,$CC$\,0\pi$ event selection}
\label{sec:sigSel}
The $\nu_e \,$CC$\,0\pi$ selection process is detailed below and the final distribution is displayed as a function of momentum in the left of Figure \ref{fig:Sel}. NEUT, a Monte Carlo event generator for neutrino interactions, is used to optimise the selection and produce plots.
\vspace{4pt}
\\
\textbf{0) Event Quality} - Data quality and time compatibility checks are performed.
\vspace{4pt}
\\
\textbf{ 1) Track selection} - The highest momentum negative track originating from FGD1 with a good quality TPC track is selected. This is the lepton candidate.
\vspace{4pt}
\\
\textbf{2) PID} - To identify the selected track, the TPC uses (momentum dependent) energy deposited over the distance, and the ECal PID looks at charge distribution.
\vspace{4pt}
\\
\textbf{3) Momentum cut} - Only events with a reconstructed momentum greater than $200\,$MeV are accepted. Below this the selection is dominated by $\gamma$-background.
\vspace{4pt}
\\
\textbf{4) Gamma Veto} - Events where the selected track is an $e^-$ from $\gamma \rightarrow e^- e^+ $ interactions are targeted; this is done by cutting on the invariant mass between the selected track and a second track that is positive and has an electron-like TPC track.
\vspace{4pt}
\\
\textbf{5) Upstream Vetoes} - Events with upstream activity in the P$\emptyset$D, ECal or TPC are removed; this indicates that the initial interaction occurred outside of the FGD.\vspace{4pt}
\\
\textbf{6) No Michel electrons} - Events with Michel electron candidates are rejected.
\vspace{4pt}
\\
\textbf{7) Track multiplicity} - Events with additional FGD tracks are rejected. In the case of only one extra track, events pass only if it forms a proton-like track in the TPC.
\begin{figure}[htb]
\centering
\includegraphics[width=0.384\linewidth]{cc0pi_mom_proper2.png}
\includegraphics[width=0.384\linewidth]{ccother_mom_proper2.png}
\caption{Momentum distribution of $\nu_e \,$CC$\,0\pi$ (left) and $\nu_e \,$CC-other (right) samples.}
\label{fig:Sel}
\end{figure}
\section{Backgrounds and control samples}
The signal selection is estimated to be $\sim 53\,\%$ pure, with the two largest backgrounds due to $\nu_e\,$CC-other and the `$\gamma$-background' (due to $e^-$ coming from $\gamma \rightarrow e^- e^+ $). These backgrounds, along with $\nu_{\mu}\,$CC$\,0\pi$, are constrained with dedicated control samples.
\paragraph{ $\nu_e \,$CC-other control sample \\}
Cuts $0-5$ in section \ref{sec:sigSel} are designed to select $\nu_e\,$CC interactions and therefore apply also to the $\nu_e \,$CC-other sample. Events are then distinguished by the presence of Michel electrons or extra tracks. There is an upper bound on the $\nu_e$ CC-other track multiplicity to reflect events that may enter the signal selection due to detector reconstruction failure. The resulting selection is shown in the right plot of Figure \ref{fig:Sel}.
\paragraph{$\gamma$-background control sample\\}
Photons can travel through the detector unidentified and although the $\gamma \rightarrow e^- e^+ $ conversion happens inside the FGD, it is quite possible, and indeed quite frequent, that the neutrino interaction vertex occurred outside it. Consequently, $\gamma$ coming from outside the FGD have additional uncertainty compared to those that originate and convert in the FGD. This is due to modelling of interactions on a wider variety of elements and possible mis-modelling of `dead' material (cables, joining material etc.). A sample of $\gamma$-events is obtained using the inverse of the requirements described in cut 4 of section \ref{sec:sigSel}; the resulting sample is displayed in the left plot of Figure \ref{fig:bkgs}. Upstream activity is used to specifically target those coming from outside the FGD.
\paragraph{$\nu_{\mu} \,$CC$\,0\pi$ control sample\\}
Despite the good PID capabilities at ND280, the beam is $\sim 99\,\%$ pure in $\nu_{\mu}$ and sometimes a muon enters the signal selection. To constrain this, a $\sim 70\,\%$ pure sample of $\nu_{\mu} \,$CC$\,0\pi$ interactions is selected (right plot of Figure \ref{fig:bkgs}) by identifying muons and requiring events with no Michel electrons and no pion tracks.
\begin{figure}[htb]
\centering
\includegraphics[width=0.384\linewidth]{OOFGDg_mom2.png}
\includegraphics[width=0.384\linewidth]{numuCC0pi_proper_mom2.png}
\caption{The $\gamma$-background (left) and $\nu_{\mu}\,$CC$\,0\pi$ (right) control samples}
\label{fig:bkgs}
\end{figure}
\section{Towards a $\nu_e \,$CC$\,0\pi$ cross section measurement}
This analysis is working towards a $\nu_e \,$CC$\,0\pi$ cross section measurement on carbon. Control samples constrain the background, and Bayesian unfolding evaluates the momentum, energy, angular and $Q^2$ dependence, in addition to a flux averaged result.
To account for the $\gamma$-background in the $\nu_e \,$CC-other sample, the normalisation fits are performed across the control samples simultaneously. Initial fake data studies show that the small signal component in the $\nu_e \,$CC-other control sample has negligible effect in comparison to the statistical and systematic uncertainties, even when the signal prediction is significantly wrong. Systematic and statistical uncertainties are expected to be $\sim 20\,\%$ and $\sim 15\,\%$ respectively.
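The simultaneous normalisation fit across control samples can be illustrated with a minimal sketch (all sample compositions, event counts and scale factors below are invented for illustration; the real analysis involves binned distributions and systematic uncertainties): one floating scale factor per background component is fitted so that the predicted totals match the observed counts in all control samples at once.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative MC templates (events per control sample per floating
# component); all numbers are invented. Columns: gamma, nu_e CC-other,
# numu CC0pi.
templates = {
    "gamma_cs":    np.array([800.0,  60.0,  20.0]),
    "cc_other_cs": np.array([150.0, 500.0,  40.0]),
    "numu_cs":     np.array([ 30.0,  50.0, 700.0]),
}
fixed = {"gamma_cs": 50.0, "cc_other_cs": 80.0, "numu_cs": 120.0}  # held fixed

def predict(scales, sample):
    # Predicted total: scaled floating components plus fixed contribution.
    return templates[sample] @ scales + fixed[sample]

def chi2(scales, data):
    # Simultaneous chi-square over all control samples (Neyman form).
    return sum((predict(scales, s) - data[s]) ** 2 / data[s]
               for s in templates)

# Pseudo-data generated with known scale factors.
true = np.array([1.2, 0.9, 1.05])
data = {s: predict(true, s) for s in templates}

res = minimize(chi2, x0=np.ones(3), args=(data,), method="Nelder-Mead")
print(res.x)  # close to the injected scale factors
```

Because the three control samples are each dominated by a different component, the fit disentangles the normalisations, including the $\gamma$-background contamination of the $\nu_e\,$CC-other sample.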
\section{Summary}
The process for selecting $\nu_e \,$CC$\,0\pi$ events in ND280 is finalised, and the main backgrounds and sources of uncertainty have been identified. Dedicated control samples constrain the backgrounds due to $\nu_e\,$CC-other, the $\gamma$-background and $\nu_{\mu}\,$CC$\,0\pi$ events, and details of the cross section measurement on carbon are being finalised.
\section{Introduction}
Relation extraction (RE) aims to identify the semantic relations between named entities in text. While previous work \cite{zeng2014relation,zhang2015bidirectional,zhang2018graph} focuses on extracting relations within a sentence, a.k.a.~\emph{sentence}-level RE, recent studies \cite{verga2018simultaneously,christopoulou2019connecting,sahu2019inter,yao2019docred} have extended it to the \emph{document} level, since a large amount of relations between entities usually span across multiple sentences in the real world. According to an analysis of the Wikipedia corpus \cite{yao2019docred}, at least 40.7\% of relations can only be extracted at the document level.
Compared with sentence-level RE, document-level RE requires more complex reasoning, such as logical reasoning, coreference reasoning and common-sense reasoning. A document often contains many entities, and some entities have multiple mentions, possibly under different aliases. To identify the relations between entities appearing in different sentences, document-level RE models must be capable of modeling the complex interactions between multiple entities and synthesizing the context information of multiple mentions.
Figure~\ref{fig:example} shows an example of document-level RE. Assume that one wants to extract the relation between \textit{``Surfers Riverwalk"} in S11 and \textit{``Queensland"} in S1. One has to find that \textit{``Surfers Riverwalk"} contains \textit{``Pacific Fair"} (from S11), and \textit{``Pacific Fair"} (coreference) is located in \textit{``Queensland"} (from S1). This chain of interactions helps infer the inter-sentential relation \textit{``located in"} between \textit{``Surfers Riverwalk"} and \textit{``Queensland"}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{example.pdf}
\caption{An example of document-level RE excerpted from the DocRED dataset~\cite{yao2019docred}. Arrows denote intra/inter-sentential relations.}
\label{fig:example}
\end{figure}
\smallskip
\noindent\textbf{State-of-the-art.} Early studies \cite{peng2017cross,quirk2017distant} confined document-level RE to short text spans (e.g., within three sentences). Some other studies \cite{nguyen2018convolutional,gupta2019neural} were restricted to handling two entity mentions in a document. We argue that they are incapable of dealing with the example in Figure~\ref{fig:example}, which needs to consider multiple mentions of entities integrally. To encode the semantic interactions of multiple entities over long distances, recent work defined document-level graphs and proposed graph-based neural network models. For example, \citet{sahu2019inter,gupta2019neural} interpreted words as nodes and constructed edges according to syntactic dependencies and sequential information. However, there is yet a big gap between word representations and relation prediction. \citet{christopoulou2019connecting} introduced the notion of document graphs with three types of nodes (mentions, entities and sentences), and proposed an edge-oriented graph neural model for RE. However, it indiscriminately integrated various information throughout the whole document, so irrelevant information would be involved as noise and damage the prediction accuracy.
\smallskip
\noindent\textbf{Our approach and contributions.} To cope with the above limitations, we propose a novel graph-based neural network model for document-level RE. Our key idea is to make full use of document semantics and predict relations by learning the representations of involved entities from both coarse-grained and fine-grained perspectives as well as other context relations. Towards this goal, we address three challenges below:
First, \emph{how to model the complex semantics of a document?} We use the pre-trained language model BERT \cite{devlin2019bert} to capture semantic features and common-sense knowledge, and build a heterogeneous graph with heuristic rules to model the complex interactions between all mentions, entities and sentences in the document.
Second, \emph{how to learn entity representations effectively?} We design a global-to-local neural network to encode coarse-grained and fine-grained semantic information of entities. Specifically, we learn entity global representations by employing R-GCN \cite{schlichtkrull2018modeling} on the created heterogeneous graph, and entity local representations by aggregating multiple mentions of specific entities with multi-head attention \cite{vaswani2017attention}.
Third, \emph{how to leverage the influence from other relations?} In addition to target relation representations, other relations imply the topic information of a document. We learn context relation representations with self-attention \cite{sorokin2017context} to make final relation prediction.
In summary, our main contribution is twofold:
\begin{compactitem}
\item We propose a novel model, called \emph{GLRE}, for document-level RE. To predict relations between entities, GLRE synthesizes entity global representations, entity local representations and context relation representations integrally. For details, please see Section~\ref{sect:model}.
\item We conducted extensive experiments on two public document-level RE datasets. Our results demonstrated the superiority of GLRE compared with many state-of-the-art competitors. Our detailed analysis further showed its advantage in extracting relations between entities of long distance and having multiple mentions. For details, please see Section~\ref{sect:exp}.
\end{compactitem}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{model.pdf}
\caption{Architecture of the proposed model.}
\label{fig:model}
\end{figure*}
\section{Related Work}
\label{sect:work}
RE has been intensively studied in a long history.
In this section, we review closely-related work.
\smallskip
\noindent\textbf{Sentence-level RE.} Conventional work addressed sentence-level RE by using carefully-designed patterns \cite{soderland1995crystal}, features \cite{kambhatla2004combining} and kernels \cite{culotta2004dependency}. Recently, deep learning-based work has advanced the state-of-the-art without heavy feature engineering. Various neural networks have been exploited, e.g., CNN \cite{zeng2014relation}, RNN \cite{zhang2015bidirectional,cai2016bidirectional} and GNN \cite{zhang2018graph}. Furthermore, to cope with the wrong labeling problem caused by distant supervision, \citet{zeng2015distant} adopted Piecewise CNN (PCNN), \citet{lin2016neural,zhang2017position} employed attention mechanisms, and \citet{zhang2019long,qu2019a} leveraged knowledge graphs as external resources. All these models are limited to extracting intra-sentential relations. They also ignore the interactions of entities outside a target entity pair.
\smallskip
\noindent\textbf{Document-level RE.} As documents often provide richer information than sentences, there has been an increasing interest in document-level RE. \citet{gu2017chemical,nguyen2018convolutional,gupta2019neural,wang2019fine} extended the sentence-level RE models to the document level. \citet{ye2020coreferential} explicitly incorporated coreference information into language representation models (e.g., BERT). \citet{zheng2018effective,tang2020hin} proposed hierarchical networks to aggregate information from the word, sentence and document levels.
\citet{quirk2017distant} proposed the notion of document-level graphs, where nodes denote words and edges incorporate both syntactic dependencies and discourse relations. Following this, \citet{peng2017cross} first split a document-level graph into two directed acyclic graphs (DAGs), then used a graph LSTM for each DAG to learn the contextual representation of each word, which was concatenated and finally fed to the relation classifier. Differently, \citet{song2018nary} kept the original graph structure and directly modeled the whole document-level graph using graph-state LSTM. These models only predict the relation of a single mention pair in a document at a time, and ignore multiple mentions of a target entity pair as well as other entities.
Several models predict the relation of a target entity pair by aggregating the scores of all mention pairs with multi-instance learning. \citet{verga2018simultaneously} proposed a Transformer-based model. Later, \citet{sahu2019inter} switched Transformer to GCN. The two models only consider one target entity pair per document, and construct the document-level graphs relying on external syntactic analysis tools. \citet{christopoulou2019connecting} built a document graph with heterogeneous types of nodes and edges, and proposed an edge-oriented model to obtain global representations for relation classification. Our model differs in further learning entity local representations to reduce the influence of irrelevant information and considering other relations in the document to refine the prediction. Recently, \citet{nan2020lsr} defined a document graph as a latent variable and induced it based on the structured attention. Unlike our work, it improves the performance of document-level RE models by optimizing the structure of the document graph.
Besides, a few models \cite{levy2017zero,qiu2018qa4ie} borrowed the reading comprehension techniques to document-level RE. However, they require domain knowledge to design question templates, and may perform poorly in zero-answer and multi-answers scenarios \cite{liu2019neural}, which are very common for RE.
\section{Proposed Model}
\label{sect:model}
We model document-level RE as a \emph{classification} problem. Given a document annotated with entities and their corresponding textual mentions, the objective of document-level RE is to identify the relations of all entity pairs in the document.
Figure~\ref{fig:model} depicts the architecture of our model, named GLRE. It receives an entire document with annotations as input. First, in (a) \emph{encoding layer}, it uses a pre-trained language model such as BERT \cite{devlin2019bert} to encode the document. Then, in (b) \emph{global representation layer}, it constructs a global heterogeneous graph with different types of nodes and edges, and encodes the graph using a stacked R-GCN \cite{schlichtkrull2018modeling} to capture entity global representations. Next, in (c) \emph{local representation layer}, it aggregates multiple mentions of specific entities using multi-head attention \cite{vaswani2017attention} to obtain entity local representations. Finally, in (d) \emph{classifier layer}, it combines the context relation representations obtained with self-attention \cite{sorokin2017context} to make final relation prediction. Please see the rest of this section for technical details.
\subsection{Encoding Layer}
Let $\mathcal{D}=[w_1,w_2,\ldots,w_k]$ be an input document, where $w_j$ ($1\leq j\leq k$) is the $j^\textrm{th}$ word in it. We use BERT to encode $\mathcal{D}$ as follows:
\begin{align}
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\mathbf{H} = [\mathbf{h}_1,\mathbf{h}_2,\ldots,\mathbf{h}_k] = \textrm{BERT}([w_1,w_2,\ldots,w_k]),
\end{aligned}$}
\end{align}
where $\mathbf{h}_j\in\mathbb{R}^{d_w}$ is the hidden state of $w_j$ at the output of the last layer of BERT. Limited by the input length of BERT, we encode a long document sequentially in the form of short paragraphs.
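The sequential encoding of a long document can be sketched as follows (a toy stand-in encoder replaces BERT, and splitting on fixed-length windows rather than paragraph boundaries is an illustrative simplification):

```python
from typing import Callable, List

def encode_long_document(tokens: List[str],
                         encode_chunk: Callable[[List[str]], List[List[float]]],
                         max_len: int = 512) -> List[List[float]]:
    """Encode a document longer than the encoder's input limit by
    encoding consecutive chunks and concatenating the per-token
    hidden states, yielding one state per word as in the equation above."""
    hidden = []
    for start in range(0, len(tokens), max_len):
        hidden.extend(encode_chunk(tokens[start:start + max_len]))
    return hidden

# Stand-in encoder: one 2-dim "hidden state" per token (length feature).
mock_encoder = lambda chunk: [[float(len(t)), 0.0] for t in chunk]
doc = ["tok%d" % i for i in range(1200)]
states = encode_long_document(doc, mock_encoder, max_len=512)
assert len(states) == 1200  # one hidden state per word
```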
\subsection{Global Representation Layer}
Based on $\mathbf{H}$, we construct a \emph{global heterogeneous graph}, with different types of nodes and edges to capture different dependencies (e.g., co-occurrence dependencies, coreference dependencies and order dependencies), inspired by \citet{christopoulou2019connecting}.
Specifically, there are three types of nodes:
\begin{compactitem}
\item \emph{Mention nodes,} which model different mentions of entities in $\mathcal{D}$. The representation of a mention node $m_i$ is defined by averaging the representations of contained words. To distinguish node types, we concatenate a node type representation $\mathbf{t}_m\in\mathbb{R}^{d_t}$. Thus, the representation of $m_i$ is $\mathbf{n}_{m_i} = [\mathrm{avg}_{w_j\in m_i} (\mathbf{h}_j); \mathbf{t}_m]$, where $[\,;]$ is the concatenation operator.
\item \emph{Entity nodes,} which represent entities in $\mathcal{D}$. The representation of an entity node $e_i$ is defined by averaging the representations of the mention nodes that refer to it, together with a node type representation $\mathbf{t}_e\in\mathbb{R}^{d_t}$. Therefore, the representation of $e_i$ is $\mathbf{n}_{e_i} = [\mathrm{avg}_{m_j\in e_i} (\mathbf{n}_{m_j}); \mathbf{t}_e]$.
\item \emph{Sentence nodes,} which encode sentences in $\mathcal{D}$. Similar to mention nodes, the representation of a sentence node $s_i$ is formalized as $\mathbf{n}_{s_i} = [\mathrm{avg}_{w_j\in s_i} (\mathbf{h}_j); \mathbf{t}_s]$, where $\mathbf{t}_s\in\mathbb{R}^{d_t}$.
\end{compactitem}
Then, we define five types of edges to model the interactions between the nodes:
\begin{compactitem}
\item \emph{Mention-mention edges.} We add an edge for any two mention nodes in the same sentence.
\item \emph{Mention-entity edges.} We add an edge between a mention node and an entity node if the mention refers to the entity.
\item \emph{Mention-sentence edges.} We add an edge between a mention node and a sentence node if the mention appears in the sentence.
\item \emph{Entity-sentence edges.} We create an edge between an entity node and a sentence node if at least one mention of the entity appears in the sentence.
\item \emph{Sentence-sentence edges.} We connect all sentence nodes pairwise to model non-sequential information (i.e., regardless of the sentence order).
\end{compactitem}
Note that there are no entity-entity edges, because they form the relations to be predicted.
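Assuming each mention is annotated with its entity and sentence, the five edge-construction rules can be sketched as follows (node features and the subsequent graph convolution are omitted):

```python
from itertools import combinations

def build_graph(mentions):
    """mentions: list of (mention_id, entity_id, sent_id) triples.
    Returns the set of typed, undirected edges following the five
    heuristic rules; note that no entity-entity edges are created."""
    edges = set()
    sentences = {}  # sent_id -> set of mention ids in that sentence
    for m, e, s in mentions:
        edges.add(("ME", ("m%d" % m, "e%d" % e)))   # mention-entity
        edges.add(("MS", ("m%d" % m, "s%d" % s)))   # mention-sentence
        edges.add(("ES", ("e%d" % e, "s%d" % s)))   # entity-sentence
        sentences.setdefault(s, set()).add(m)
    for ms in sentences.values():                   # mention-mention
        for a, b in combinations(sorted(ms), 2):    # (same sentence only)
            edges.add(("MM", ("m%d" % a, "m%d" % b)))
    for a, b in combinations(sorted(sentences), 2): # sentence-sentence
        edges.add(("SS", ("s%d" % a, "s%d" % b)))   # (all pairs)
    return edges

# Toy document: entity 0 mentioned in sentences 0 and 1, entity 1 in sentence 1.
g = build_graph([(0, 0, 0), (1, 0, 1), (2, 1, 1)])
assert ("MM", ("m1", "m2")) in g          # co-occurring mentions
assert not any(t == "EE" for t, _ in g)   # no entity-entity edges
```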
Finally, we employ an $L$-layer stacked R-GCN \cite{schlichtkrull2018modeling} to convolute the global heterogeneous graph. Different from GCN, R-GCN considers various types of edges and can better model multi-relational graphs. Specifically, its node forward-pass update for the $(l+1)^\textrm{th}$ layer is defined as follows:
\begin{align}
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\mathbf{n}^{l+1}_i = \sigma\Big( \sum_{x \in\mathcal{X}} \sum_{j \in\mathcal{N}^x_i} \frac{1}{|\mathcal{N}_i^x|} \mathbf{W}^l_x\mathbf{n}^l_j + \mathbf{W}^l_0\mathbf{n}^l_i \Big),
\end{aligned}$}
\end{align}
where $\sigma(\cdot)$ is the activation function. $\mathcal{N}^x_i$ denotes the set of neighbors of node $i$ linked with edge $x$, and $\mathcal{X}$ denotes the set of edge types. $ \mathbf{W}^l_x,\mathbf{W}^l_0\in\mathbb{R}^{d_n \times d_n}$ are trainable parameter matrices ($d_n$ is the dimension of node representations).
We refer to the representations of entity nodes after graph convolution as \emph{entity global representations}, which encode the semantic information of entities throughout the whole document. We denote an entity global representation by $\mathbf{e}_i^\textrm{glo}$.
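The R-GCN node update above can be sketched in NumPy (a row-vector convention and a sigmoid activation are illustrative choices; the per-type neighbor sum with the $1/|\mathcal{N}_i^x|$ factor is implemented as a mean):

```python
import numpy as np

def rgcn_layer(N, neighbors, W, W0):
    """One R-GCN update: N is the (num_nodes, d_n) feature matrix,
    neighbors[x][i] lists node i's neighbors under edge type x,
    W[x] and W0 are (d_n, d_n) weight matrices (row-vector convention)."""
    out = N @ W0  # self-connection term
    for x, nbrs in neighbors.items():
        for i, js in nbrs.items():
            if js:  # mean over type-x neighbors, then transform
                out[i] += np.mean(N[js], axis=0) @ W[x]
    return 1.0 / (1.0 + np.exp(-out))  # sigma

rng = np.random.default_rng(0)
d = 4
N = rng.normal(size=(3, d))
W = {"MS": rng.normal(size=(d, d))}            # one matrix per edge type
neighbors = {"MS": {0: [1, 2], 1: [0], 2: [0]}}
H1 = rgcn_layer(N, neighbors, W, np.eye(d))
assert H1.shape == (3, d)                      # same shape, updated features
```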
\subsection{Local Representation Layer}
We learn \emph{entity local representations} for specific entity pairs by aggregating the associated mention representations with multi-head attention \cite{vaswani2017attention}. The ``local'' can be understood from two angles: (i) It aggregates the original mention information from the encoding layer. (ii) For different entity pairs, each entity has a different local representation w.r.t. the counterpart entity, whereas it has only one global representation.
Multi-head attention enables an RE model to jointly attend to the information of an entity composed of multiple mentions from different representation subspaces. Its calculation involves the sets of queries $\mathcal{Q}$ and key-value pairs $(\mathcal{K},\mathcal{V})$:
\begin{align}
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\mathrm{MHead}(\mathcal{Q},\mathcal{K},\mathcal{V}) = [\textrm{head}_1;\ldots;\textrm{head}_z] \mathbf{W}^\textrm{out},
\end{aligned}$} \\
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\textrm{head}_i = \mathrm{softmax}\Big( \frac{\mathcal{Q} \mathbf{W}^\mathcal{Q}_i {(\mathcal{K} \mathbf{W}^\mathcal{K}_i)}^\prime} {\sqrt{d_v}} \Big) \mathcal{V} \mathbf{W}^\mathcal{V}_i,
\end{aligned}$}
\end{align}
where $\mathbf{W}^\textrm{out}\in\mathbb{R}^{d_n \times d_n}$ and $\mathbf{W}^\mathcal{Q}_i, \mathbf{W}^\mathcal{K}_i, \mathbf{W}^\mathcal{V}_i\in\mathbb{R}^{d_n\times d_v}$ are trainable parameter matrices. $z$ is the number of heads satisfying that $z\times d_v = d_n$.
In this paper, $\mathcal{Q}$ is related to the entity global representations, $\mathcal{K}$ is related to the initial sentence node representations before graph convolution (i.e., the input features of sentence nodes in R-GCN), and $\mathcal{V}$ is related to the initial mention node representations.
Specifically, given an entity pair $(e_a,e_b)$, we define their local representations as follows:
\begin{align}
\resizebox{\columnwidth}{!}{$\begin{aligned}
\mathbf{e}_a^\textrm{loc} &= \mathrm{LN}\big( \mathrm{MHead}_0(\mathbf{e}_b^\textrm{glo}, \{\mathbf{n}_{s_i}\}_{s_i\in \mathcal{S}_a}, \{\mathbf{n}_{m_j}\}_{m_j\in \mathcal{M}_a}) \big), \\
\mathbf{e}_b^\textrm{loc} &= \mathrm{LN}\big( \mathrm{MHead}_1(\mathbf{e}_a^\textrm{glo}, \{\mathbf{n}_{s_i}\}_{s_i\in \mathcal{S}_b}, \{\mathbf{n}_{m_j}\}_{m_j\in \mathcal{M}_b}) \big), \\
\end{aligned}$}
\end{align}
where $\mathrm{LN}(\cdot)$ denotes layer normalization \cite{ba2016layer}. $\mathcal{M}_a$ is the corresponding mention node set of $e_a$, and $\mathcal{S}_a$ is the corresponding sentence node set in which each mention node in $\mathcal{M}_a$ is located. $\mathcal{M}_b$ and $\mathcal{S}_b$ are similarly defined for $e_b$. Note that $\mathrm{MHead}_0$ and $\mathrm{MHead}_1$ learn independent model parameters for entity local representations.
Intuitively, if a sentence contains two mentions $m_a,m_b$ corresponding to $e_a,e_b$, respectively, then the mention node representations $\mathbf{n}_{m_a},\mathbf{n}_{m_b}$ should contribute more to predicting the relation of $(e_a,e_b)$ and the attention weights should be greater in getting $\mathbf{e}_a^\textrm{loc},\mathbf{e}_b^\textrm{loc}$. More generally, a higher semantic similarity between the node representation of a sentence containing $m_a$ and $\mathbf{e}^\textrm{glo}_b$ indicates that this sentence and $m_b$ are more semantically related, and $\textbf{n}_{m_a}$ should get a higher attention weight to $\mathbf{e}^\textrm{loc}_a$.
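The computation of an entity local representation can be sketched as follows (the layer normalization is omitted, and all dimensions and inputs are toy values; the query, keys and values play the roles described above — counterpart entity's global representation, sentence node representations and mention node representations, respectively):

```python
import numpy as np

def multi_head(Q, K, V, Wq, Wk, Wv, Wout):
    """Multi-head attention: z heads, each with (d_n, d_v) projections
    Wq[i]/Wk[i]/Wv[i]; heads are concatenated and mixed by Wout."""
    heads = []
    for wq, wk, wv in zip(Wq, Wk, Wv):
        scores = (Q @ wq) @ (K @ wk).T / np.sqrt(wq.shape[1])
        attn = np.exp(scores - scores.max(-1, keepdims=True))
        attn /= attn.sum(-1, keepdims=True)   # softmax over keys
        heads.append(attn @ (V @ wv))
    return np.concatenate(heads, axis=-1) @ Wout

rng = np.random.default_rng(1)
d_n, z = 8, 2
d_v = d_n // z
proj = lambda: [rng.normal(size=(d_n, d_v)) for _ in range(z)]
Wout = rng.normal(size=(d_n, d_n))

e_b_glo = rng.normal(size=(1, d_n))  # query: counterpart entity, global rep.
S_a = rng.normal(size=(3, d_n))      # keys: sentence nodes of e_a's mentions
M_a = rng.normal(size=(3, d_n))      # values: e_a's mention nodes (aligned)
e_a_loc = multi_head(e_b_glo, S_a, M_a, proj(), proj(), proj(), Wout)
assert e_a_loc.shape == (1, d_n)     # layer normalization omitted here
```

Mentions whose sentences are semantically close to the counterpart entity receive larger attention weights, matching the intuition above.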
\subsection{Classifier Layer}
To classify the target relation $r$ for an entity pair $(e_a,e_b)$, we firstly concatenate entity global representations, entity local representations and relative distance representations to generate entity final representations:
\begin{align}
\begin{aligned}
\mathbf{\hat{e}}_a &= [\mathbf{e}_a^\textrm{glo}; \mathbf{e}_a^\textrm{loc}; \mathbf{\Delta}(\delta_{ab})], \\
\mathbf{\hat{e}}_b &= [\mathbf{e}_b^\textrm{glo}; \mathbf{e}_b^\textrm{loc}; \mathbf{\Delta}(\delta_{ba})],
\end{aligned}
\end{align}
where $\delta_{ab}$ denotes the relative distance from the first mention of $e_a$ to that of $e_b$ in the document. $\delta_{ba}$ is similarly defined. The relative distance is first divided into several bins $\{1,2,\ldots,2^b\}$. Then, each bin is associated with a trainable distance embedding. $\mathbf{\Delta}(\cdot)$ associates each $\delta$ to a bin.
Then, we concatenate the final representations of $e_a,e_b$ to form the \emph{target relation representation} $\mathbf{o}_r = [\mathbf{\hat{e}}_a; \mathbf{\hat{e}}_b]$.
Furthermore, all relations in a document implicitly indicate the topic information of the document, such as \textit{``director"} and \textit{``character"} often appear in movies. In turn, the topic information implies possible relations. Some relations under similar topics are likely to co-occur, while others under different topics are not. Thus, we use self-attention \cite{sorokin2017context} to capture \emph{context relation representations}, which reveal the topic information of the document:
\begin{align}
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\mathbf{o}_c = \sum_{i=0}^p \theta_i \mathbf{o}_i = \sum_{i=0}^p \frac{\mathrm{exp}(\mathbf{o}_i \mathbf{W} \mathbf{o}_r^\prime)}
{\sum_{j=0}^p \mathrm{exp}(\mathbf{o}_j \mathbf{W} \mathbf{o}_r^\prime)} \mathbf{o}_i,
\end{aligned}$}
\end{align}
where $\mathbf{W}\in\mathbb{R}^{d_r\times d_r}$ is a trainable parameter matrix. $d_r$ is the dimension of target relation representations. $\mathbf{o}_i$ ($\mathbf{o}_j$) is the relation representation of the $i^\textrm{th}$ ($j^\textrm{th}$) entity pair. $\theta_i$ is the attention weight for $\mathbf{o}_i$. $p$ is the number of entity pairs.
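The context relation attention above admits a short sketch (toy dimensions; every relation representation in the document, including the target's own, is scored against the target):

```python
import numpy as np

def context_relation(O, o_r, W):
    """O stacks the relation representations of all entity pairs in the
    document (one per row); o_r is the target relation representation.
    Returns the attention-weighted sum of the rows of O."""
    scores = O @ W @ o_r               # bilinear score o_i W o_r'
    theta = np.exp(scores - scores.max())
    theta /= theta.sum()               # softmax attention weights
    return theta @ O                   # context relation representation

rng = np.random.default_rng(2)
d_r, p = 6, 5
O = rng.normal(size=(p, d_r))          # one row per entity pair
o_c = context_relation(O, O[0], rng.normal(size=(d_r, d_r)))
assert o_c.shape == (d_r,)
```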
Finally, we use a feed-forward neural network (FFNN) over the target relation representation $\mathbf{o}_r$ and the context relation representation $\mathbf{o}_c$ to make the prediction. Besides, considering that an entity pair may hold several relations, we transform the multi-classification problem into multiple binary classification problems. The predicted probability distribution of $r$ over the set $\mathcal{R}$ of all relations is defined as follows:
\begin{align}
\mathbf{y}_r = \mathrm{sigmoid}(\mathrm{FFNN}([\mathbf{o}_r; \mathbf{o}_c])),
\end{align}
where $\mathbf{y}_r\in\mathbb{R}^{|\mathcal{R}|}$.
We define the loss function as follows:
\begin{align}
\resizebox{.89\columnwidth}{!}{$\begin{aligned}
\mathcal{L} = -\sum_{r\in \mathcal{R}} \Big( y_r^* \log(y_r) + (1-y_r^*) \log(1-y_r) \Big),
\end{aligned}$}
\end{align}
where $y_r^*\in\{0,1\}$ denotes the true label of $r$. We employ Adam optimizer \cite{kingma2015adam} to optimize this loss function.
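The multi-label formulation above — one sigmoid binary classifier per relation type — can be sketched as follows (the logits stand in for the FFNN outputs; values are toy numbers):

```python
import numpy as np

def multilabel_bce(logits, y_true):
    """Binary cross-entropy summed over relation types, so an entity
    pair may hold several relations at once."""
    y = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per relation
    return -np.sum(y_true * np.log(y) + (1 - y_true) * np.log(1 - y))

logits = np.array([3.0, -4.0, 2.5])    # FFNN outputs for |R| = 3 relations
y_true = np.array([1.0, 0.0, 1.0])     # this pair holds two relations
loss = multilabel_bce(logits, y_true)
# More confident, correct logits yield a smaller loss:
assert multilabel_bce(np.array([10.0, -10.0, 10.0]), y_true) < loss
```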
\section{Experiments and Results}
\label{sect:exp}
We implemented our GLRE with PyTorch 1.5. The source code and datasets are available online.\footnote{\url{https://github.com/nju-websoft/GLRE}} In this section, we report our experimental results.
\subsection{Datasets}
We evaluated GLRE on two public document-level RE datasets. Table \ref{tab:stat} lists their statistical data:
\begin{compactitem}
\item The Chemical-Disease Relations (\emph{CDR}) data set \cite{li2016biocreative} was built for the BioCreative V challenge and annotated with one relation \textit{``chemical-induced disease''} manually.
\item The \emph{DocRED} dataset \cite{yao2019docred} was built from Wikipedia and Wikidata, covering various relations related to science, art, personal life, etc. Both manually-annotated and distantly-supervised data are offered. We only used the manually-annotated data.
\end{compactitem}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|l|r|r|r|r|}
\hline \multicolumn{2}{|c|}{Datasets} & \#Doc. & \#Rel. & \#Inst. & \#N/A Inst. \\
\hline \multirow{3}{*}{CDR} & Train & 500 & 1 & 1,038 & 4,280 \\
~ & Dev. & 500 & 1 & 1,012 & 4,136 \\
~ & Test & 500 & 1 & 1,066 & 4,270 \\
\hline \multirow{3}{*}{DocRED} & Train & 3,053 & 96 & 38,269 & 1,163,035 \\
~ & Dev. & 1,000 & 96 & 12,332 & 385,263 \\
~ & Test & 1,000 & 96 & 12,842 & 379,316 \\
\hline
\end{tabular}}
\caption{Dataset statistics (Inst.: relation instances excluding N/A relation; N/A Inst.: negative examples).}
\label{tab:stat}
\end{table}
\subsection{Comparative Models}
First, we compared GLRE with five sentence-level RE models adapted to the document level:
\begin{compactitem}
\item \citet{zhang2018graph} employed GCN over pruned dependency trees.
\item \citet{yao2019docred} proposed four baseline models. The first three ones are based on CNN, LSTM and BiLSTM, respectively. The fourth context-aware model incorporates the attention mechanism into LSTM.
\end{compactitem}
We also compared GLRE with nine document-level RE models:
\begin{compactitem}
\item \citet{zhou2016exploiting} combined feature-, tree kernel- and neural network-based models.
\item \citet{gu2017chemical} leveraged CNN and maximum entropy.
\item \citet{nguyen2018convolutional} integrated character-based word representations in CNN.
\item \citet{panyam2018exploiting} exploited graph kernels.
\item \citet{verga2018simultaneously} proposed a bi-affine network with Transformer.
\item \citet{zheng2018effective} designed a hierarchical network using multiple BiLSTMs.
\item \citet{christopoulou2019connecting} put forward an edge-oriented graph neural model with multi-instance learning.
\item \citet{wang2019fine} applied BERT to encode documents, and used a bilinear layer to predict entity relations. It improved performance by two phases. First, it predicted whether a relation exists between two entities. Then, it predicted the type of the relation.
\item \citet{tang2020hin} is a sequence-based model. It also leveraged BERT and designed a hierarchical inference network to aggregate inference information from entity level to sentence level, then to document level.
\end{compactitem}
\subsection{Experiment Setup}
Due to the small size of CDR, some work \cite{zhou2016exploiting,verga2018simultaneously,zheng2018effective,christopoulou2019connecting} created a new split by merging the training and development sets, denoted by \emph{``train + dev"}. Under this setting, a model was trained on the train + dev set, while the best epoch was found on the development set. To make a comprehensive comparison, we also measured the corresponding precision, recall and F1 scores.
For consistency, we used the same experiment setting on DocRED. Additionally, the gold standard of the test set of DocRED is unknown, and only F1 scores can be obtained via an online interface. Besides, it was noted that some relation instances are present in both training and development/test sets \cite{yao2019docred}. We also measured F1 scores ignoring those duplicates, denoted by \emph{Ign F1}.
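The idea behind Ign F1 can be sketched as follows (a simplified stand-in for the official DocRED scorer, operating on (head, tail, relation) triples; triples below are invented):

```python
def f1_scores(pred, gold, train):
    """F1 and 'Ign F1' over relation facts. Ign F1 discards facts that
    already appear in the training set, so models get no credit for
    memorized duplicates (sketch of the idea, not the official scorer)."""
    def f1(p, g):
        tp = len(p & g)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return f1(pred, gold), f1(pred - train, gold - train)

gold = {("e1", "e2", "located_in"), ("e3", "e4", "director")}
pred = {("e1", "e2", "located_in"), ("e3", "e4", "country")}
train = {("e1", "e2", "located_in")}   # duplicate of a training fact
f1, ign_f1 = f1_scores(pred, gold, train)
assert f1 == 0.5 and ign_f1 == 0.0     # the memorizable duplicate is ignored
```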
For GLRE and \citet{wang2019fine}, we used different BERT models in the experiments. For CDR, we chose BioBERT-Base v1.1 \cite{lee2019biobert}, which re-trained the BERT-Base-cased model on biomedical corpora. For DocRED, we chose the BERT-Base-uncased model. For the comparative models without using BERT, we selected the PubMed pre-trained word embeddings \cite{chiu2016train} for CDR and GloVe \cite{pennington2014glove} for DocRED. For the models with source code, we used our best efforts to tune the hyperparameters. Limited by the space, we refer interested readers to the appendix for more details.
\subsection{Main Results}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\hline \multirow{2}{*}{\textbf{Models}} & \multicolumn{3}{c|}{\textbf{Train}} & \multicolumn{3}{c}{\textbf{Train\,+\,Dev}} \\
\cline{2-7} & P & R & F1 & P & R & F1 \\
\hline \citeauthor{zhang2018graph}$^\P$ & 52.3 & \underline{72.0} & 60.6 & 58.1 & \textbf{74.6} & 65.3 \\
\hline
\citeauthor{zhou2016exploiting} & \underline{64.9} & 49.3 & 56.0 & 55.6 & 68.4 & 61.3 \\
\citeauthor{gu2017chemical} & 55.7 & 68.1 & 61.3 & - & - & - \\
\citeauthor{nguyen2018convolutional} & 57.0 & 68.6 & 62.3 & - & - & - \\
\citeauthor{panyam2018exploiting} & 55.6 & 68.4 & 61.3 & - & - & - \\
\citeauthor{verga2018simultaneously} & 55.6 & 70.8 & 62.1 & 63.3 & 67.1 & 65.1 \\
\citeauthor{zheng2018effective} & 45.2 & 68.1 & 54.3 & 56.2 & 68.0 & 61.5 \\
\citeauthor{christopoulou2019connecting}$^\P$ & 62.7 & 66.3 & 64.5 & 61.5 & 73.6 & 67.0 \\
\citeauthor{wang2019fine}$^\P$ & 61.9 & 68.7 & \underline{65.1} & \underline{66.0} & 68.3 & \underline{67.1} \\
\hline GLRE (ours) & \textbf{65.1} & \textbf{72.2} & \textbf{68.5} & \textbf{70.5} & \underline{74.5} & \textbf{72.5} \\
\hline \multicolumn{7}{l}{$^\P$ denotes that we performed hyperparameter tuning. For others,} \\
\multicolumn{7}{l}{\ \ \ we reused the reported results due to the lack of source code.}
\end{tabular}}
\caption{Result comparison on CDR.}
\label{tab:cdr}
\end{table}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|cc|cc}
\hline \multirow{2}{*}{\textbf{Models}} & \multicolumn{2}{c|}{\textbf{Train}} & \multicolumn{2}{c}{\textbf{Train\,+\,Dev}} \\
\cline{2-5} & Ign F1 & F1 & Ign F1 & F1 \\
\hline \citeauthor{zhang2018graph}$^\P$ & 49.9 & 52.1 & 52.5 & 54.6 \\
\citeauthor{yao2019docred} (CNN) & 40.3 & 42.3 & - & - \\
\citeauthor{yao2019docred} (LSTM) & 47.7 & 50.1 & - & - \\
\citeauthor{yao2019docred} (BiLSTM) & 48.8 & 51.1 & - & - \\
\citeauthor{yao2019docred} (Context-aware) & 48.4 & 50.7 & - & - \\
\hline \citeauthor{christopoulou2019connecting}$^\P$ & 49.1 & 50.9 & 48.3 & 50.4 \\
\citeauthor{wang2019fine}$^\P$ & 53.1 & 55.4 & \underline{54.5} & \underline{56.5} \\
\citeauthor{tang2020hin} & \underline{53.7} & \underline{55.6} & - & - \\
\hline GLRE (ours) & \textbf{55.4} & \textbf{57.4} & \textbf{56.7} & \textbf{58.9} \\
\hline
\end{tabular}}
\caption{Result comparison on DocRED.}
\label{tab:docred}
\end{table}
Tables~\ref{tab:cdr} and \ref{tab:docred} list the results of the comparative models and GLRE on CDR and DocRED, respectively. We have four findings below:
\begin{compactenum}[(1)]
\item The sentence-level RE models \cite{zhang2018graph,yao2019docred} obtained medium performance. They still fell behind a few document-level models, indicating the difficulty of directly applying them to the document level.
\item The graph-based RE models \cite{panyam2018exploiting,verga2018simultaneously,christopoulou2019connecting} and the non-graph models \cite{zhou2016exploiting,gu2017chemical,nguyen2018convolutional,zheng2018effective} achieved comparable results, while the best graph-based model \cite{christopoulou2019connecting} outperformed the best non-graph one \cite{nguyen2018convolutional}. We attribute this to the document graph on the entity level, which can better model the semantic information in a document.
\item From the results of \citet{wang2019fine,tang2020hin}, the BERT-based models showed stronger prediction power for document-level RE. They outperformed the other comparative models on both CDR and DocRED.
\item GLRE achieved the best results among all the models. We owe it to entity global and local representations. Furthermore, BERT and context relation representations also boosted the performance. See our analysis below.
\end{compactenum}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{distance.pdf}
\caption{Results w.r.t. entity distance.}
\label{fig:dist}
\end{figure}
\subsection{Detailed Analysis}
\noindent\textbf{Entity distance.}
We examined the performance of the open-source models in terms of entity distance, which is defined as the shortest sentence distance between all mentions of two entities. Figure~\ref{fig:dist} depicts the comparison results on CDR and DocRED using the training set only. We observe that:
\begin{compactenum}[(1)]
\item GLRE achieved significant improvement in extracting the relations between entities at long distance, especially when $\textrm{distance}\geq 3$. This is because the global heterogeneous graph can effectively model the interactions of semantic information among different nodes (i.e., mentions, entities and sentences) in a document. Furthermore, entity local representations can reduce the influence of the noisy context introduced by the multiple mentions of entities at long distance.
\item According to the results on CDR, the graph-based model \cite{christopoulou2019connecting} performed better than the sentence-level model \cite{zhang2018graph} and the BERT-based model \cite{wang2019fine} in extracting inter-sentential relations. The main reason is that it leveraged heuristic rules to construct the document graph at the entity level, which can better model the semantic information across sentences and avoid the error accumulation introduced by NLP tools, e.g., the dependency parser used in \citet{zhang2018graph}.
\item On DocRED, the models \cite{wang2019fine,zhang2018graph} outperformed the model \cite{christopoulou2019connecting}, due to the power of BERT and the increasing accuracy of dependency parsing in the general domain.
\end{compactenum}
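The entity distance used in this analysis can be computed directly from mention sentence indices; a minimal sketch (the mention lists below are purely illustrative, not from either dataset):

```python
# Entity distance: the shortest sentence distance over all mention pairs
def entity_distance(mentions_a, mentions_b):
    """mentions_*: sentence indices of each mention of an entity."""
    return min(abs(i - j) for i in mentions_a for j in mentions_b)

# An entity mentioned in sentences 0 and 4 vs. one mentioned in 5 and 9:
# the closest pair is (4, 5), so the distance is 1.
assert entity_distance([0, 4], [5, 9]) == 1
assert entity_distance([2], [2]) == 0  # co-occurring in the same sentence
```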
\smallskip
\noindent\textbf{Number of entity mentions.} To assess the effectiveness of GLRE in aggregating the information of multiple entity mentions, we measured the performance in terms of the average number of mentions for each entity pair. Similar to the previous analysis, Figure~\ref{fig:mention} shows the results on CDR and DocRED using the training set only. We see that:
\begin{compactenum}[(1)]
\item GLRE achieved great improvement in extracting the relations with average number of mentions $\geq$ 2, especially $\geq$ 4. The major reason is that entity local representations aggregate the contextual information of multiple mentions selectively. As an exception, when the average number of mentions was in $[1,2)$, the performance of GLRE was slightly lower than \citet{christopoulou2019connecting} on CDR. This is because both GLRE and \citet{christopoulou2019connecting} relied on modeling the interactions between entities in the document, which made them indistinguishable in this case. In fact, the performance of all the models decreased when the average number of mentions was small, because less relevant information was provided in the document, which made the relations harder to predict. We will consider external knowledge in future work.
\item As compared with \citet{zhang2018graph} and \citet{christopoulou2019connecting}, the BERT-based model \cite{wang2019fine} performed better in general, except for one interval. When the average number of mentions was in $[1,2)$ on CDR, its performance was significantly lower than other models. The reason is twofold. On one hand, it is more difficult to capture the latent knowledge in the biomedical field. On the other hand, the model \cite{wang2019fine} only relied on the semantic information of the mentions of target entity pairs to predict the relations. When the average number was small, the prediction became more difficult. Furthermore, when the average number was large, its performance increase was not significant. The main reason is that, although BERT brought rich knowledge, the model \cite{wang2019fine} indiscriminately aggregated the information of multiple mentions and introduced much noisy context, which limited its performance.
\end{compactenum}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{mention.pdf}
\caption{Results w.r.t. number of entity mentions.}
\label{fig:mention}
\end{figure}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|ccc|cc}
\hline \multirow{2}{*}{\textbf{Models}} & \multicolumn{3}{c|}{\textbf{CDR}} & \multicolumn{2}{c}{\textbf{DocRED}} \\
\cline{2-6} & P & R & F1 & Ign F1 & F1 \\
\hline GLRE & 65.1 & 72.2 & \textbf{68.5} & \textbf{55.4} & \textbf{57.4} \\
\ \ w/o BERT & \textbf{69.6} & 66.5 & 68.0 & 51.6 & 53.6 \\
\ \ w/o Entity global rep. & 67.0 & 65.4 & 66.2 & 54.7 & 56.6 \\
\ \ w/o Entity local rep. & 60.9 & 68.5 & 64.5 & 54.6 & 56.4 \\
\ \ w/o Context rel. rep. & 60.5 & \textbf{75.1} & 67.1 & 54.6 & 56.8 \\
\hline
\end{tabular}}
\caption{Results of ablation study.}
\label{tab:ablation}
\end{table}
\smallskip
\noindent\textbf{Ablation study.} To investigate the effectiveness of each layer in GLRE, we conducted an ablation study using the training set only. Table~\ref{tab:ablation} shows the comparison results. We find that: (1) BERT had a greater influence on DocRED than CDR. This is mainly because BERT introduced valuable linguistic knowledge and common-sense knowledge to RE, but it was hard to capture latent knowledge in the biomedical field. (2) F1 scores dropped when we removed entity global representations, entity local representations, or context relation representations, which verified their usefulness in document-level RE. (3) Particularly, when we removed entity local representations, F1 scores dropped more dramatically. We found that more than 54\% and 19\% of entities on CDR and DocRED, respectively, have multiple mentions in different sentences. The local representation layer, which uses multi-head attention to selectively aggregate multiple mentions, can filter out much of the noisy context.
\smallskip
\noindent\textbf{Pre-trained language models.} To analyze the impacts of pre-trained language models on GLRE and on its performance upper bound, we replaced BERT-Base with BERT-Large, XLNet-Large \cite{yang2019xlnet} or ALBERT-xxLarge \cite{lan2020albert}. Table~\ref{tab:berts} shows the comparison results using the training set only, from which we observe that larger models boosted the performance of GLRE to some extent. When the ``train\,+\,dev" setting was used on DocRED, the Ign F1 and F1 scores of XLNet-Large even reached 58.5 and 60.5, respectively. However, due to the lack of biomedical versions, XLNet-Large and ALBERT-xxLarge did not bring improvement on CDR. We note that selecting the best pre-trained model is not our primary goal.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|ccc|cc}
\hline \multirow{2}{*}{\textbf{GLRE}} & \multicolumn{3}{c|}{\textbf{CDR}} & \multicolumn{2}{c}{\textbf{DocRED}} \\
\cline{2-6} & P & R & F1 & Ign F1 & F1 \\
\hline BERT-Base & 65.1 & 72.2 & 68.5 & 55.4 & 57.4 \\
BERT-Large & 65.3 & 72.3 & \textbf{68.6} & \textbf{56.8} & 58.9 \\
XLNet-Large & \textbf{66.1} & 70.5 & 68.2 & \textbf{56.8} & \textbf{59.0} \\
ALBERT-xxLarge & 57.5 & \textbf{80.6} & 67.1 & 56.3 & 58.3 \\
\hline
\end{tabular}}
\caption{Results w.r.t. different pre-training models.}
\label{tab:berts}
\end{table}
\smallskip
\noindent\textbf{Case study.} To aid understanding, we list a few examples from the CDR test set in Table~\ref{tab:case}. See the Appendix for more cases from DocRED.
\begin{compactenum}[(1)]
\item From Case 1, we find that logical reasoning is necessary. Predicting the relation between \textit{``rofecoxib"} and \textit{``GI bleeding"} depends on the bridge entity \textit{``non-users of aspirin"}. GLRE used R-GCN to model the document information based on the global heterogeneous graph, thus it dealt with complex inter-sentential reasoning better.
\item From Case 2, we observe that, when a sentence contained multiple entities connected by conjunctions (such as \textit{``and"}), the model \cite{wang2019fine} might miss some associations between them. GLRE solved this issue by building the global heterogeneous graph and considering the context relation information, neither of which is limited by the sequential word order.
\item Prior knowledge is required in Case 3. One must know that \textit{``fatigue"} belongs to \textit{``adverse effects"} ahead of time. Then, the relation between \textit{``bepridil"} and \textit{``dizziness"} can be identified correctly. Unfortunately, both GLRE and \citet{wang2019fine} lacked such knowledge; we leave incorporating it as future work.
\end{compactenum}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{llll}
\hline
\multicolumn{4}{|p{1.12\columnwidth}|}{... [S8] Among \textbf{\textcolor{cyan}{non-users of aspirin}}, the adjusted hazard ratios were: \textbf{\textcolor{red}{rofecoxib}} 1.27, naproxen 1.59, diclofenac 1.17 and ibuprofen 1.05. ... [S10] CONCLUSION: Among \textbf{\textcolor{cyan}{non-users of aspirin}}, naproxen seemed to carry the highest risk for AMI / \textbf{\textcolor{red}{GI bleeding}}. ...} \\
\hline
\textbf{Case 1} & Label: CID & GLRE: CID & \citeauthor{wang2019fine}: N/A \\ \\
\hline
\multicolumn{4}{|p{1.12\columnwidth}|}{... [S2] \textbf{\textcolor{cyan}{S-53482}} and \textbf{\textcolor{red}{S-23121}} are N-phenylimide herbicides and produced \textbf{\textcolor{red}{embryolethality}}, teratogenicity. ...} \\
\hline
\textbf{Case 2} & Label: CID & GLRE: CID & \citeauthor{wang2019fine}: N/A \\ \\
\hline
\multicolumn{4}{|p{1.12\columnwidth}|}{[S1] Clinical evaluation of \textbf{\textcolor{cyan}{adverse effects}} during \textbf{\textcolor{red}{bepridil}} administration for atrial fibrillation and flutter. ... [S8] There was marked QT prolongation greater than 0.55 s in 13 patients ... and general \textbf{\textcolor{red}{fatigue}} in 1 patient each. ...} \\
\hline
\textbf{Case 3} & Label: CID & GLRE: N/A & \citeauthor{wang2019fine}: N/A
\end{tabular}}
\caption{Case study on the CDR test set. CID is short for the \textit{``chemical-induced disease"} relation. \textbf{\textcolor{red}{Target entities}} and \textbf{\textcolor{cyan}{related entities}} are colored accordingly.}
\label{tab:case}
\end{table}
We analyzed all 132 inter-sentential relation instances in the CDR test set that were incorrectly predicted by GLRE. Four major error types are as follows: (1) Logical reasoning errors, which occurred when GLRE could not correctly identify the relations established indirectly by the bridge entities, account for 40.9\%. (2) Component missing errors, which happened when some component of a sentence (e.g., subject) was missing, account for 28.8\%. In this case, GLRE needed the whole document information to infer the lost component and predict the relation, which was not always accurate. (3) Prior knowledge missing errors account for 13.6\%. (4) Coreference reasoning errors, which were caused by pronouns that could not be understood correctly, account for 12.9\%.
\section{Conclusion}
\label{sect:concl}
In this paper, we proposed GLRE, a global-to-local neural network for document-level RE. Entity global representations model the semantic information of an entire document with R-GCN, and entity local representations aggregate the contextual information of mentions selectively using multi-head attention. Moreover, context relation representations encode the topic information of other relations using self-attention. Our experiments demonstrated the superiority of GLRE over many comparative models, especially the big leads in extracting relations between entities of long distance and with multiple mentions. In future work, we plan to integrate knowledge graphs and explore other document graph modeling ways (e.g., hierarchical graphs) to improve the performance.\\
\noindent\textbf{Acknowledgments.} This work is supported partially by the National Key R\&D Program of China (No. 2018YFB1004300), the National Natural Science Foundation of China (No. 61872172), and the Water Resource Science \& Technology Project of Jiangsu Province (No. 2019046).
In recent years, {\em artificial neural networks} have been applied to ever more general problems, incorporating the most diverse operators and intricate architectures. Unlike the initial definitions~\cite{rumelhart1986learning}, which could be easily formalized as directed graphs, modern neural networks do not obey a precise mathematical definition.
We use the general language of category theory to define a broad class of linear layer structures, which encompasses most classically known examples---dense and convolutional layers, as well as geometric deep learning layers.
The key ingredient is a general, categorical definition of {\em integration theory} that, combined with the notion of {\em parametric spans}, yields a flexible framework where layer-like bilinear operators can be studied.
For machine learning applications, not only the \textit{activation values} of a model are important, but also its derivatives with respect to the parameters. Reverse-mode automatic differentiation~\cite{baydin2018automatic} is a modern, popular technique to address this issue. It attempts to define rules to backpropagate dual vectors of the output to dual vectors of the parameters or of the input. The existence of such rules is a guiding principle for our framework: we will show that, for parametric span-based layers, the reverse-mode differentiation rule can be obtained by permuting the legs of the span.
\paragraph{Structure.} In \cref{sec:integration_theories}, we introduce the notion of {\em integration theory}, which generalizes Lebesgue integration to arbitrary source categories. We show how such notion can be used in tandem with parametric spans to define bilinear operator with a straightforward reverse-mode differentiation rule. In \cref{sec:submersions} we show how the category of manifolds and submersions provides a natural example of integration theory, which we use in \cref{sec:classical_architectures} to recover several well-known linear neural network layers.
\section{Integration theories}
\label{sec:integration_theories}
Our goal is to represent the {\em structure} of a linear layer of a neural network---a bilinear map from the input and parameters to the output---via a collection of maps in a familiar category. We aim to build a simple framework that is sufficiently flexible to cover most popular linear neural network layers and allow for novel generalizations. As backpropagation is crucial for deep learning, we also require that the dual of the linear layers we define can be computed effectively in our framework. These two assumptions---duality and bilinearity---naturally lead to the following definition.
\begin{definition}\label{def:integration_theory}
Let $\Cat$ be a category. Let $\Vect_K$ and $\CAlg_K$ denote the categories of vector spaces and commutative algebras over a base field $K$. An {\em integration theory} is an extranatural transformation (denoted by $\extranat$)~\cite{eilenberg1966generalization}
\begin{equation*}
\int \colon \func \otimes \meas \extranat K,
\quad \text{ where }
\func \colon\Cat\superscript{op}\rightarrow \CAlg_K
\text{ and }
\meas \colon \Cat \rightarrow \Vect_K.
\end{equation*}
\end{definition}
\begin{remark}
To introduce a notion of continuity, one could replace $\Vect_K$ with the category of normed vector spaces with the projective tensor product. Our main results---\cref{prop:bilinear_operator,prop:operator_adjoint}---can be proved diagrammatically and hold for any symmetric monoidal category.
\end{remark}
For simplicity, given a morphism $f\colon X\rightarrow Y$ in $\Cat$, we will use the pullback and pushforward notation to refer to $\func(f)$ and $\meas(f)$:
\begin{equation*}
f^* := \func(f)
\quad \text{ and } \quad
f_* := \meas(f).
\end{equation*}
In practice, the extranaturality condition on $\int$ can be rewritten as
\begin{equation}\label{eq:extranatural}
\int_X f^*y \otimes \mu = \int_Y y \otimes f_*\mu,
\end{equation}
where $y \in \func(Y)$ and $\mu \in \meas(X)$. Note that we use the letter $y$ to denote an element of $\func(Y)$ and not a point of $Y$.
To form an intuition on \cref{def:integration_theory}, it is helpful to think of $\func(X)$ as {\em functions} over $X$, and of $\meas(X)$ as {\em measures} over $X$. Integrating a function against a measure would then return a scalar value. Indeed, an important example of integration theory comes from the category of measurable spaces $\Meas$, whose objects are measurable spaces equipped with a $\sigma$-ideal of measure $0$ subsets, and whose morphisms are equivalence classes of nullset-reflecting measurable functions. There, we can consider the Lebesgue integral
\begin{equation*}
\int \colon L^\infty \otimes ba \rightarrow \R
\end{equation*}
of an essentially bounded measurable function against an element of the Banach space $ba$ of bounded and finitely additive signed measures. Extranaturality corresponds to an adjunction between the pullback of a function and the pushforward of a measure. In formulas, given a nullset-reflecting measurable function $f\colon X \rightarrow Y$, for all $y \in L^\infty(Y)$ and $\mu\in ba(X)$,
\begin{equation}\label{eq:extranatural_measure}
\int_X f^* y \, d\mu = \int_Y y \, df_*\mu.
\end{equation}
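In the finite (discrete) case, the adjunction in \cref{eq:extranatural_measure} reduces to a reindexing of sums; a minimal numeric sketch (the index array $f$ and the data are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
nX, nY = 7, 4
f = rng.integers(0, nY, size=nX)   # a map f: X -> Y between finite sets
mu = rng.random(nX)                # a (discrete) measure on X
y = rng.random(nY)                 # a function on Y

# Pullback of functions: (f^* y)(x) = y(f(x))
pullback_y = y[f]

# Pushforward of measures: (f_* mu)(j) = sum of mu over the fiber f^{-1}(j)
pushforward_mu = np.zeros(nY)
np.add.at(pushforward_mu, f, mu)

lhs = np.dot(pullback_y, mu)       # integral over X of (f^* y) d(mu)
rhs = np.dot(y, pushforward_mu)    # integral over Y of y d(f_* mu)
assert np.isclose(lhs, rhs)
```

Both sides are the same sum over the points of $X$, grouped differently, which is exactly what extranaturality asserts.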
The following definition introduces a tool that will allow us to seamlessly transition from $\func(X)$ to $\meas(X)$. It is in some sense analogous to equipping a space $X$ with a measure. The key intuition is that we need a map $\func(X) \rightarrow \meas(X)$ that behaves well with respect to the unity and multiplication of $\func(X)$.
\begin{definition}\label{def:transmutation}
Let $\int \colon \func \otimes \meas \extranat K$ be an integration theory. A {\em transmutation} on an object $X \in \ob(\Cat)$ is a $K$-linear map $\tau \colon \func(X) \rightarrow \meas(X)$ such that, for all $x_1, x_2 \in \func(X)$,
\begin{equation}\label{eq:transmutation}
\int_X x_1 \otimes \tau(x_2) = \int_X x_1x_2 \otimes \tau(1).
\end{equation}
\end{definition}
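In the discrete setting, multiplication by a fixed reference density gives a transmutation; a quick numeric check of \cref{eq:transmutation} (the density `rho` is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
rho = rng.random(n)                      # a fixed reference density on X
tau = lambda x: x * rho                  # transmutation: functions -> measures

x1, x2 = rng.random(n), rng.random(n)
lhs = np.dot(x1, tau(x2))                # pairing of x1 with tau(x2)
rhs = np.dot(x1 * x2, tau(np.ones(n)))   # pairing of x1*x2 with tau(1)
assert np.isclose(lhs, rhs)
```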
To formally describe neural network layers with locality and weight sharing constraints, we introduce the notion of {\em parametric span}---a span with an added space of parameters (or weights). It is represented by the following diagram.
\begin{diagram}\label{diagram:parametric_span}
\begin{equation}
\begin{tikzcd}
& E \arrow[swap]{ld}{s} \arrow{d}{\pi} \arrow{rd}{t} & \\
X & W & Y
\end{tikzcd}
\end{equation}
\end{diagram}
In this representation, $X$ represents the space of {\em input data}, $Y$ the space of {\em output data}, $E$ the space of {\em edges}, and $W$ the space of {\em weights}.
Intuitively, this is an abstract representation of the notions of {\em locality} and {\em weight sharing} in deep learning. The span
\begin{equation*}
\begin{tikzcd}
& E \arrow[swap]{ld}{s} \arrow{rd}{t} & \\
X & & Y
\end{tikzcd}
\end{equation*}
determines the connectivity structure of the network (which inputs are connected to which outputs).
The map
\begin{equation*}
\begin{tikzcd}
E \arrow{d}{\pi} \\
W
\end{tikzcd}
\end{equation*}
enforces weight sharing along the fibers of $\pi$.
\begin{proposition}\label{prop:bilinear_operator}
A parametric span, as in \cref{diagram:parametric_span}, together with a transmutation $\tau$ on $E$, induces a $K$-linear map
\begin{equation}\label{eq:bilinear_operator}
\begin{aligned}
\func(X) \otimes \func(W) &\rightarrow \meas(Y)\\
x \otimes w &\mapsto t_* (\tau (s^* x \cdot \pi^* w)),
\end{aligned}
\end{equation}
where $\cdot$ is the multiplication in $\func(E)$.
\end{proposition}
\begin{proof}
The above map can be obtained as
\begin{equation*}
\func(X) \otimes \func(W) \xrightarrow{s^* \, \otimes \, \pi^*}
\func(E) \otimes \func(E) \xrightarrow{\cdot}
\func(E) \xrightarrow{\tau}
\meas(E) \xrightarrow{t_*}
\meas(Y).
\end{equation*}
\end{proof}
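When $X$, $W$, $Y$ and $E$ are finite sets, \cref{eq:bilinear_operator} becomes a scatter-add over edges; a sketch with hypothetical index arrays `s`, `t`, `pi_`, taking the transmutation to be multiplication by an edge density `rho`:

```python
import numpy as np

def span_layer(x, w, s, t, pi_, rho, n_out):
    """t_*( tau( s^* x . pi^* w ) ) for finite sets, with tau = (. rho)."""
    edge_vals = x[s] * w[pi_] * rho      # tau(s^* x . pi^* w), one value per edge
    out = np.zeros(n_out)
    np.add.at(out, t, edge_vals)         # pushforward along t (sum over fibers)
    return out

rng = np.random.default_rng(2)
nX, nW, nY, nE = 5, 3, 4, 20
s = rng.integers(0, nX, nE)              # source leg  E -> X
t = rng.integers(0, nY, nE)              # target leg  E -> Y
pi_ = rng.integers(0, nW, nE)            # weight sharing  E -> W
rho = np.ones(nE)                        # trivial (counting) density
x, w = rng.random(nX), rng.random(nW)
out = span_layer(x, w, s, t, pi_, rho, nY)
```

The map is manifestly bilinear: scaling `x` or `w` scales `out` by the same factor.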
\begin{proposition}\label{prop:operator_adjoint}
For every parametric span as in \cref{diagram:parametric_span} and every transmutation $\tau$ on $E$, the following diagram commutes.
\begin{equation*}
\begin{tikzcd}
\func(X) \otimes \func(W) \otimes \func(Y)
\arrow{r}{}
\arrow{d}{}
& \meas(Y) \otimes \func(Y) \arrow{d}{}\\
\func(X) \otimes \meas(X) \arrow{r}{} & \R
\end{tikzcd}
\end{equation*}
Equivalently, in formulas, for all $x\in \func(X), \, w\in \func(W), y\in \func(Y)$,
\begin{equation}\label{eq:operator_adjoint}
\int_Y y \otimes t_* (\tau (s^* x \cdot \pi^* w)) = \int_X x \otimes s_* (\tau (t^* y \cdot \pi^* w)).
\end{equation}
\end{proposition}
\begin{proof}
\Cref{eq:operator_adjoint} can be proved via direct calculation:
\begin{align*}
\int_Y y \otimes t_* (\tau (s^* x \cdot \pi^* w))
&= \int_E t^*y \otimes \tau (s^* x \cdot \pi^* w) &&\text{ by extranaturality}\\
&= \int_E s^* x \cdot \pi^* w \cdot t^*y \otimes \tau (1) &&\text{ by \cref{eq:transmutation}}\\
&= \int_E s^* x \otimes \tau (t^*y \cdot \pi^* w) &&\text{ by \cref{eq:transmutation}}\\
&= \int_X x \otimes s_* (\tau (t^*y \cdot \pi^* w)) &&\text{ by extranaturality}.
\end{align*}
\end{proof}
\Cref{prop:operator_adjoint} is especially relevant for reverse-mode differentiation, i.e., mapping a dual vector of the output to the corresponding dual vector of the input.
If the dual vector of the output is of the form
\begin{equation*}
\int_Y y \otimes \anon
\end{equation*}
for some $y \in \func(Y)$, then thanks to \cref{eq:operator_adjoint}, the reverse-mode differentiation rule with respect to the input is
\begin{equation*}
\int_Y y \otimes t_* (\tau (s^* \anon \cdot \pi^* w)) = \int_X \anon \otimes s_* (\tau (t^* y \cdot \pi^* w)).
\end{equation*}
Thus, the dual vector represented by $y \in \func(Y)$ is mapped to the dual vector represented by $s_* (\tau (t^* y \cdot \pi^* w))\in \meas(X)$. In other words, the reverse-mode differentiation rule for the input---and, by symmetry, for the parameters---can be obtained by reordering the legs of the parametric span.
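For finite sets this leg permutation is immediate: both sides of \cref{eq:operator_adjoint} are the same sum over edges. A numeric sketch (index arrays are hypothetical; the transmutation is the trivial counting density):

```python
import numpy as np

rng = np.random.default_rng(3)
nX, nW, nY, nE = 5, 3, 4, 25
s = rng.integers(0, nX, nE)              # source leg  E -> X
t = rng.integers(0, nY, nE)              # target leg  E -> Y
pi_ = rng.integers(0, nW, nE)            # weight sharing  E -> W
x, w, y = rng.random(nX), rng.random(nW), rng.random(nY)

# Forward pass: pair y with t_*( s^* x . pi^* w )
fwd = np.zeros(nY)
np.add.at(fwd, t, x[s] * w[pi_])
lhs = np.dot(y, fwd)

# Legs permuted: pair x with s_*( t^* y . pi^* w ) -- the reverse-mode rule
bwd = np.zeros(nX)
np.add.at(bwd, s, y[t] * w[pi_])
rhs = np.dot(x, bwd)
assert np.isclose(lhs, rhs)
```

Here `bwd` is the dual vector of the input obtained by swapping the roles of the source and target legs, exactly as in the proposition.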
\section{Submersions}
\label{sec:submersions}
The aim of this section is to define an {\em integration theory} based on smooth spaces, which we will use to give practical examples of neural network layers. To proceed, we will need a few technical assumptions. Whenever we use the word manifold, we refer to smooth manifolds. Furthermore, we require manifolds to be paracompact Hausdorff spaces. We remind the reader that a submersion is a smooth map whose differential is, at every point, surjective. We denote by $\Subm$ the category of manifolds and submersions.
We can associate to a manifold its space of smooth real-valued functions $\sections{X}$.
This extends to a functor $\Subm\superscript{op} \rightarrow \CAlg_\R$ via pullback of functions (precomposition). Given a submersion $f\colon X \rightarrow Y$ and $y \in \sections{Y}$, we denote the pullback $f^*y$.
Smooth densities~\cite[Sect.~1.1]{berline2003heat} of compact support induce a covariant functor $\Subm\rightarrow\Vect_\R$, which we denote $\sectionscs{\density_X}$. Given a submersion $f\colon X \rightarrow Y$ and a density of compact support $\mu \in \sectionscs{\density{_X}}$, we denote the pushforward $f_*\mu$.
\begin{remark}
The pushforward of a smooth density is well defined and smooth for {\em proper} submersions. However, here we are working with densities of compact support, so $f$ is automatically proper on the support of $\mu$.
\end{remark}
Integrating functions against densities of compact support yields an extranatural transformation between $\sections{\anonfirst}\otimes\sectionscs{\density{_\anonsecond}}$ and the constant functor $\R$.
Analogously to \cref{eq:extranatural,eq:extranatural_measure}, extranaturality corresponds to the equality
\begin{equation}\label{eq:extranatural_smooth}
\int_X f^* y \, \mu = \int_Y y \, f_* \mu.
\end{equation}
Hence, we can conclude that
\begin{equation*}
\int\colon \sections{\anon} \otimes \sectionscs{\density{_\anonsecond}} \extranat \R
\end{equation*}
is an integration theory. In the following section we will use this particular integration theory to recover several classical neural network layers.
\section{Classical architectures}
\label{sec:classical_architectures}
Our framework encompasses radically different classical neural architectures. Roughly speaking, we will discuss discrete and continuous architectures, with or without symmetry (weight sharing).
\paragraph{Dense layer.} Multi-Layer Perceptrons (MLPs)~\cite{rumelhart1986learning} are the simplest neural networks: a discrete architecture with no symmetry, based on matrix multiplication. In this non-equivariant case, i.e., when the network does not respect any symmetries of the problem, the map $\pi$ is an isomorphism. To see this in practice, let us consider a layer with $n_i$ input nodes and $n_o$ output nodes. For ease of notation, we identify each natural number $n$ with the set $\range{0}{n-1}$. We define a discrete parametric span as follows.
\begin{diagram}\label{eq:MLP}
\begin{equation}
\begin{tikzcd}
& n_i \times n_o \arrow[swap]{ld} \arrow{d} \arrow{rd} & \\
n_i & n_i\times n_o & n_o
\end{tikzcd}
\end{equation}
\end{diagram}
Source and target maps are given by the product projections.
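Contracting along this span (with the trivial counting density) recovers the usual matrix product; a quick check of the claim:

```python
import numpy as np

rng = np.random.default_rng(4)
ni, no = 4, 3
# Edges are pairs (i, j); s and t are the projections, pi is the identity.
i_idx, j_idx = np.meshgrid(np.arange(ni), np.arange(no), indexing="ij")
s = i_idx.ravel()
t = j_idx.ravel()
pi_ = np.arange(ni * no)             # pi is an isomorphism: one weight per edge

x = rng.random(ni)
w = rng.random(ni * no)
out = np.zeros(no)
np.add.at(out, t, x[s] * w[pi_])     # out_j = sum_i x_i W_{ij}

assert np.allclose(out, x @ w.reshape(ni, no))
```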
\paragraph{Convolutional layer.} Convolutional Neural Networks (CNNs)~\cite{lecun1989backpropagation} represent a more interesting case, as they introduce spatial symmetry and locality. Let us, for simplicity, consider a purely convolutional layer with $n_i$ input channels and $n_o$ output channels. Let $S_i, S_o$ denote the shapes of the input and output images, and let $F$ denote the shape of the filter. To define the parametric span, we can proceed as in the dense layer case, with an important difference: the map $\pi$ is no longer trivial, and fibers along $\pi$ represent output image shapes.
\begin{diagram}\label{eq:conv}
\begin{equation}
\begin{tikzcd}
& n_i \times n_o \times F \times S_o \arrow[swap]{ld} \arrow{d} \arrow{rd} & \\
n_i \times S_i & n_i \times n_o \times F & n_o \times S_o
\end{tikzcd}
\end{equation}
\end{diagram}
The target morphism and the weight sharing morphism are projections, whereas the source morphism relies on a linear map
\begin{equation*}
F \times S_o \rightarrow S_i.
\end{equation*}
The coefficient of the first argument encodes the {\em dilation} of the convolutional layer, whereas the coefficient of the second argument encodes the {\em stride}.
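A one-dimensional, single-channel instance of this span (with trivial counting density) reproduces a strided, dilated convolution; the sizes and coefficients below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
Si, F, stride, dilation = 12, 3, 2, 1
So = (Si - dilation * (F - 1) - 1) // stride + 1   # valid output length

x = rng.random(Si)                 # input (n_i = n_o = 1 channel)
w = rng.random(F)                  # filter weights, shared across S_o

# Edges are pairs (f, p); the source map is the linear map
# (f, p) -> dilation * f + stride * p  into the input positions.
f_idx, p_idx = np.meshgrid(np.arange(F), np.arange(So), indexing="ij")
s = (dilation * f_idx + stride * p_idx).ravel()    # leg into the input
t = p_idx.ravel()                                  # leg into the output
pi_ = f_idx.ravel()                                # weight sharing along S_o

out = np.zeros(So)
np.add.at(out, t, x[s] * w[pi_])

# Direct strided/dilated (cross-)correlation for comparison
ref = np.array([sum(w[f] * x[dilation * f + stride * p] for f in range(F))
                for p in range(So)])
assert np.allclose(out, ref)
```

Changing the two coefficients of the source map changes the dilation and the stride, with no other modification to the span.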
\paragraph{Geometric deep learning.}
Neural networks for non-Euclidean domains, such as graphs or manifolds, share many features with CNNs and can be handled in a similar way. The formulation in~\cite{monti2017geometric} is particularly suitable to our framework for two reasons. On the one hand, it encompasses many other approaches (Geodesic CNN~\cite{masci2015geodesic}, Anisotropic CNN~\cite{boscaini2016learning}, Diffusion CNN~\cite{atwood2015diffusion}, GCN~\cite{kipf2016semi}). On the other hand, it can be directly translated into our formalism. The authors of~\cite{monti2017geometric} postulate a {\em neighborhood relation} $q \in \mathcal{N}(p)$ on a Riemannian manifold $X$, together with local $d$-dimensional coordinates $\mathbf{u}(p, q)$ on pairs of neighbors. In our framework, this translates to the following parametric span.
\begin{diagram}\label{eq:geometric}
\begin{equation}
\begin{tikzcd}
& \set{(p, q) \mid q \in \mathcal{N}(p)} \arrow[swap]{ld} \arrow{d}{\mathbf{u}} \arrow{rd} & \\
X & \R^d & X
\end{tikzcd}
\end{equation}
\end{diagram}
The source and target maps are projections, whereas the weight sharing map is given by the local coordinates $\mathbf{u}$. The Riemannian structure, on which geometric deep learning is based, naturally induces a density.
\section{Discussion}
We provide categorical foundations to the study of linear layers in deep learning. We abstract away the key ingredients to define linear layers (i.e., bilinear maps) and describe them in categorical terms. Our framework is based on two pillars: integration theories and parametric spans. Both notions are valid in arbitrary source categories, thus granting full generality to our approach.
Not only value computation (the forward pass), but also derivative computation (the backward pass) is crucial for deep learning models. Guided by this principle, we devise our framework in such a way that the backward pass has the same structure as the forward pass and, therefore, a comparable computational cost.
To examine concrete examples, we primarily explore integration theories on the category of nullset-reflecting measurable functions and on the category of smooth submersions. The latter, in particular, is a rich source of examples of linear layers. We recover dense and convolutional layers, as well as most complex structures arising in geometric deep learning. Indeed, a general approach to geometric deep learning, described in~\cite{monti2017geometric}, was an important inspiration for this work.
Describing a linear layer structure by means of smooth submersions between manifolds has unique advantages. We show that, in the case of convolutional layers, the smooth submersion determines the hyperparameters of the layer (such as stride or dilation). We envision that such smooth maps could be optimized (together with the regular parameters) during gradient descent. In our view, this is a promising, efficient alternative to the nested optimization schemes for hyperparameters proposed in~\cite{bengio2000gradient,lorraine2020optimizing}.
Describing single linear layers represents only a small fraction of a successful deep learning framework. We have been exploring in~\cite{vertechi2020parametric,vertechi2022machines} possible formalizations of the notion of {\em global neural network architecture}. First, we developed a framework, based on category theory, where neural architectures could be formally defined and implemented. Then, borrowing tools from functional analysis, we discussed the necessary assumptions to allow for backpropagation. Those works lie at the basis of the proposed single-layer framework. We believe that, in the future, it will be valuable to combine these approaches to define global architectures by means of parametric spans.
\bibliographystyle{abbrv}
Planetesimal formation is one of the unsolved issues in planet formation theory.
There are several obstacles to the planetesimal formation.
One obstacle is self-induced turbulence \citep[e.g.,][]{Sekiya1998}.
In a protoplanetary disk, sub-$\mathrm{\mu m}$-sized dust grains settle to the disk midplane as they grow.
Such dust settling induces shear instability and then turbulence.
This self-induced turbulence prevents dust grains from settling and a dense dust disk cannot form.
As a result, the gravitational instability (GI) of this dust disk, which leads to rapid planetesimal formation \citep[e.g.,][]{Goldreich1973}, does not occur.
Another obstacle is radial drift \citep[e.g.,][]{Adachi1976}.
In a protoplanetary disk, dust grains orbit around the central star with the Keplerian velocity, while gas orbits with the sub-Keplerian velocity because of the pressure gradient.
Consequently, dust grains experience gas drag, lose their angular momenta, and migrate toward the central star.
They have the fastest drift speed when their Stokes number is unity, corresponding to cm- or m-sized compact bodies with internal density $\sim 1\mathrm{\ g\ cm^{-3}}$.
For 1-m-sized compact bodies at 1 au, the radial drift timescale is around 100 years.
This is much shorter than the disk lifetime $\sim$ several Myr.
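An order-of-magnitude check of this timescale (the headwind parameter $\eta \approx 2\times10^{-3}$ is an assumed typical value, not taken from the text):

```python
import math

au = 1.496e11        # 1 au in meters
year = 3.156e7       # 1 year in seconds
M_sun = 1.989e30     # solar mass in kg
G = 6.674e-11        # gravitational constant in SI units

v_K = math.sqrt(G * M_sun / au)   # Keplerian speed at 1 au (~30 km/s)
eta = 2e-3                        # assumed pressure-gradient (headwind) parameter
v_drift = eta * v_K               # max drift speed, reached at Stokes number ~ 1

t_drift = au / v_drift / year
assert 50 < t_drift < 200         # of order 100 years, as stated
```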
Recently, it has been proposed that dust grains do not remain compact but become porous through pairwise accretion \citep[e.g.,][]{Dominik1997,Blum2000,Wada2007,Suyama2008}.
Such porous dust grains, which are called dust aggregates, have fractal dimension $\sim2.5$ \citep{Wada2008} and internal density $\sim10^{-5}$--$10^{-3}\mathrm{\ g\ cm^{-3}}$ \citep{Okuzumi2012}.
They have larger cross-sections than compact dust grains, which means that their collision timescale is shorter.
Also, they have a different law of gas drag from compact dust grains because of their porosity.
In the case of icy dust aggregates with 0.1-$\mathrm{\mu}$m-sized constituent grains, which are called monomers, they can avoid the radial drift and icy planetesimals form \citep{Kataoka2013L}.
During the growth, the GI of the layer of the icy dust aggregates may occur \citep{Michikoshi2016GI}.
However, in contrast to ice, silicate dust grains cannot stick together but, instead, they fragment when they collide \citep[e.g.,][]{Blum1993}.
Silicate dust aggregates with 0.1-$\mathrm{\mu}$m-sized monomers have the critical velocity of catastrophic disruption, which we call the collisional fragmentation velocity, $\sim6\mathrm{\ m\ s^{-1}}$, while icy dust aggregates have $\sim50\mathrm{\ m\ s^{-1}}$ \citep{Wada2009}.
\cite{Arakawa2016} proposed that silicate dust monomers can stick together if they are smaller than $\sim10$ nm because the collisional fragmentation velocity increases with a decreasing monomer radius.
Indeed, it is suggested that dust monomers in a protoplanetary disk are not pristine sub-$\mathrm{\mu}$m-sized interstellar dust grains but have experienced evaporation and condensation.
Moreover, some matrix grains in primitive meteorites and interplanetary dust particles contain nm-sized grains \citep{Toriumi1989, Keller2011}.
As nm-sized silicate dust grains grow, they become porous aggregates, which allows them to avoid the radial drift.
However, the rocky planetesimal formation mechanism is still unclear, because the stability of the layer consisting of such dust aggregates has not been investigated.
Whether the GI occurs or not is important since the formation process determines the mass and size distributions of planetesimals and thus affects the later formation from planetesimals to planets.
In this paper, we investigate the GI of a dust layer composed of porous dust aggregates of $\sim2.5$--$10$-nm-sized silicate monomers using the method of \cite{Michikoshi2016GI}, which applies the dynamical evolution of planetesimals to such porous dust aggregates.
In Section \ref{sec:model}, we describe models of protoplanetary disks and dust aggregates, and methods to evaluate the stability of the dust layer, which include how to calculate the equilibrium random velocity of dust aggregates.
We present the results in Section \ref{sec:results}.
Finally, Section \ref{sec:sum} is devoted to a summary and discussions.
\section{Models and Methods}\label{sec:model}
To evaluate the stability of the dust layer, we calculate Toomre's stability parameter $Q$ \citep{Toomre1964}, for which we need to evaluate the equilibrium random velocity of dust aggregates.
We describe models of the protoplanetary disk and dust aggregates in Section \ref{subsec:disk} and \ref{subsec:dust}, respectively.
The calculation method of the equilibrium random velocity \citep{Michikoshi2016GI} is presented in Section \ref{subsec:vel}.
Section \ref{subsec:GI} shows conditions of the GI considering the calculated equilibrium random velocity and timescales.
\subsection{Protoplanetary Disks}\label{subsec:disk}
Using the minimum mass solar nebula (MMSN) model \citep{Hayashi1981}, we define the surface densities of gas $\Sigma_\mathrm{g}$ and dust $\Sigma_\mathrm{d}$, and the temperature $T$ as
\begin{eqnarray}
\Sigma_\mathrm{g}&=&1700f_\mathrm{g}\left(\frac{a}{1\ \mathrm{au}}\right)^{-3/2}\mathrm{\ g\ cm^{-2}},\\
\Sigma_\mathrm{d}&=&f_\mathrm{d}\Sigma_\mathrm{g}=1700f_\mathrm{g}f_\mathrm{d}\left(\frac{a}{1\ \mathrm{au}}\right)^{-3/2}\mathrm{\ g\ cm^{-2}},\\
T&=&280\left(\frac{a}{1\ \mathrm{au}}\right)^{-1/2}\mathrm{\ K},
\end{eqnarray}
where $a$ is the orbital radius, $f_\mathrm{g}$ is the ratio to the MMSN model, and $f_\mathrm{d}$ is the dust-to-gas ratio within the H$_2$O snow line.
The MMSN model corresponds to $f_\mathrm{g}=1$ and $f_\mathrm{d}=0.0042$.
From this temperature profile, we can find that the H$_2$O snow line, where the disk temperature is $T=170$ K, is located at 2.7 au.
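As a quick consistency check (not part of the original derivation), the snow-line location follows directly from inverting the adopted temperature profile; a minimal Python sketch:

```python
# Snow-line location from T(a) = 280 (a/1 au)^{-1/2} K, setting T = 170 K.
def snow_line_au(T_snow=170.0, T1=280.0):
    # Invert T = T1 * a^{-1/2}  =>  a = (T1 / T_snow)^2, with a in au
    return (T1 / T_snow) ** 2

print(round(snow_line_au(), 2))  # ~2.71 au, consistent with the 2.7 au quoted above
```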
We define four disk models as shown in Table \ref{tab:diskmodels} using $f_\mathrm{g}$, $f_\mathrm{d}$, and the dimensionless turbulent strength $\alpha$ \citep[e.g.,][]{Shakura1973}.
Observationally estimated values of $\alpha$ are $10^{-4}\lesssim\alpha\lesssim0.1$ \citep{Andrews2010}, which are derived under the assumption that turbulent viscous diffusion causes the gas disk to accrete onto the central star.
Because smaller $\alpha$ is favorable for the dust growth and the GI, we adopt $10^{-4}$ as the fiducial value.
We assume that the central star has the solar mass, i.e., $M_\ast=M_\odot$.
\begin{table}[htbp]
\centering
\caption{Four disk models} \label{tab:diskmodels}
\begin{tabular}{cccc}
\tablewidth{0pt}
\hline
\hline
Name & $f_\mathrm{g}$ & $f_\mathrm{d}$ & $\alpha$\\
\hline
MMSN & 1 & 0.0042 & $10^{-4}$\\
MMSN weak turbulence & 1 & 0.0042 & $10^{-5}$\\
Massive disk & 2 & 0.0042 & $10^{-4}$\\
Dust-rich disk & 1 & 0.0084 & $10^{-4}$\\
\hline
\end{tabular}
\end{table}
Other disk parameters are as follows.
The isothermal sound speed is
\begin{equation}
c_\mathrm{s}=\sqrt{\frac{k_\mathrm{B}T}{\mu m_\mathrm{H}}}\simeq1.0\times10^5\left(\frac{a}{1\ \mathrm{au}}\right)^{-1/4}\mathrm{\ cm\ s^{-1}},
\end{equation}
where $k_\mathrm{B}$ is the Boltzmann constant, $\mu=2.34$ is the mean molecular weight, and $m_\mathrm{H}$ is the hydrogen mass.
The gas density at the disk midplane is
\begin{equation}
\rho_\mathrm{g}=\frac{\Sigma_\mathrm{g}}{\sqrt{2\pi}c_\mathrm{s}/\Omega_\mathrm{K}}\simeq1.4\times10^{-9}f_\mathrm{g}\left(\frac{a}{1\ \mathrm{au}}\right)^{-11/4}\mathrm{\ g\ cm^{-3}},\label{eq:gasdensity}
\end{equation}
where $\Omega_\mathrm{K}=\sqrt{GM_\ast/a^3}$ is the Keplerian angular velocity and $G$ is the gravitational constant.
The mean free path of gas molecules is
\begin{equation}
l=\frac{\mu m_\mathrm{H}}{\sigma_\mathrm{H_2}\rho_\mathrm{g}}\simeq1.4f_\mathrm{g}^{-1}\left(\frac{a}{1\ \mathrm{au}}\right)^{11/4}\mathrm{\ cm},
\end{equation}
where $\sigma_\mathrm{H_2}=2\times10^{-15}\mathrm{\ cm^2}$ is the collision cross-section of the hydrogen molecule.
The gas-pressure support parameter is
\begin{equation}
\eta=-\frac{1}{2}\left(\frac{c_\mathrm{s}}{a\Omega_\mathrm{K}}\right)^2\frac{\partial\ln(\rho_\mathrm{g}c_\mathrm{s}^2)}{\partial\ln a}\simeq1.8\times10^{-3}\left(\frac{a}{1\ \mathrm{au}}\right)^{1/2}.
\end{equation}
The azimuthal gas velocity is given as $(1-\eta)v_\mathrm{K}$, where $v_\mathrm{K}=a\Omega_\mathrm{K}$ is the Keplerian velocity.
The azimuthal velocity of dust that is decoupled from gas corresponds to $v_\mathrm{K}$.
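The numerical coefficients in the disk-parameter expressions above can be reproduced from the underlying definitions. The sketch below is a consistency check only, using standard CGS values for $k_\mathrm{B}$, $m_\mathrm{H}$, $G$, $M_\odot$, and the au (assumed, not taken from the paper):

```python
import math

# Assumed CGS constants (standard values)
kB = 1.3807e-16     # Boltzmann constant, erg/K
mH = 1.6726e-24     # hydrogen mass, g
G  = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
Msun = 1.989e33     # solar mass, g
au = 1.496e13       # astronomical unit, cm

def disk_params(a_au, f_g=1.0):
    a = a_au * au
    T = 280.0 * a_au ** -0.5
    cs = math.sqrt(kB * T / (2.34 * mH))        # isothermal sound speed
    Om = math.sqrt(G * Msun / a ** 3)           # Keplerian angular velocity
    Sigma_g = 1700.0 * f_g * a_au ** -1.5
    rho_g = Sigma_g / (math.sqrt(2 * math.pi) * cs / Om)  # midplane gas density
    # eta: for these MMSN power laws, d ln(rho_g cs^2)/d ln a = -13/4
    eta = 0.5 * (cs / (a * Om)) ** 2 * (13.0 / 4.0)
    return cs, rho_g, eta

cs, rho_g, eta = disk_params(1.0)
print(cs, rho_g, eta)   # ~1.0e5 cm/s, ~1.4e-9 g/cm^3, ~1.8e-3 at 1 au
```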
\subsection{Dust Aggregates}\label{subsec:dust}
We assume that dust aggregates consist of monomers with radius $r_0$ and material density $\rho_0$.
The fiducial radius $r_0=2.5\mathrm{\ nm}$ is selected because \cite{Toriumi1989} found that matrix grains in Allende CV3.2 chondrite have a size distribution with a peak at 5 nm in diameter.
We vary this monomer radius in Section \ref{subsec:resultdust} to investigate the dependence on $r_0$.
The material density of silicate is $\rho_0=3\mathrm{\ g\ cm^{-3}}$.
The static compression pressure $P$ of highly porous dust aggregates is given by \cite{Kataoka2013} as
\begin{equation}
P=\frac{E_\mathrm{roll}}{r_0^3}\left(\frac{\rho_\mathrm{int}}{\rho_0}\right)^3,
\end{equation}
where
\begin{equation}
E_\mathrm{roll}=6\pi^2\gamma r_0\xi=1.1\times10^{-11}\left(\frac{\gamma}{25\mathrm{\ erg\ cm^{-2}}}\right)\left(\frac{r_0}{2.5\mathrm{\ nm}}\right)\left(\frac{\xi}{0.3\mathrm{\ nm}}\right)\mathrm{\ erg}
\end{equation}
is the rolling energy of monomers and $\rho_\mathrm{int}$ is the mean internal density of dust aggregates.
The rolling energy $E_\mathrm{roll}$ is the energy needed to rotate a sphere around another sphere by $90^\circ$ \citep{Dominik1997}.
Following \cite{Arakawa2016}, we assume that the surface energy and the critical displacement of silicate are $\gamma=25\mathrm{\ erg\ cm^{-2}}$ and $\xi=0.3$ nm, respectively.
The theoretical critical displacement is $\xi=0.2$ nm \citep{Dominik1997}, while the experimental one is $\xi=3.2$ nm \citep{Heim1999}, and therefore we discuss the uncertainty of $E_\mathrm{roll}$ in Section \ref{sec:sum}.
The self-gravitational pressure $P_\mathrm{grav}$ is given by \cite{Kataoka2013L} as
\begin{equation}
P_\mathrm{grav}=\frac{Gm_\mathrm{d}^2/r_\mathrm{d}^2}{\pi r_\mathrm{d}^2}=\frac{Gm_\mathrm{d}^2}{\pi r_\mathrm{d}^4},
\end{equation}
where $m_\mathrm{d}$ and $r_\mathrm{d}$ are mass and radius of dust aggregates, respectively.
We assume that a dust aggregate has a spherical body, and thus the relationship among $m_\mathrm{d}$, $r_\mathrm{d}$, and $\rho_\mathrm{int}$ is given as $m_\mathrm{d}=(4/3)\pi r_\mathrm{d}^3\rho_\mathrm{int}$.
For simplicity, we do not consider the size distribution of monomers $r_0$ and assume that all dust aggregates have the identical mass $m_\mathrm{d}$ with the mean internal density $\rho_\mathrm{int}$.
We derive the equation of dust evolution via quasi-static self-gravitational compression by equating $P$ and $P_\mathrm{grav}$, which is described as
\begin{equation}
\rho_\mathrm{int}=\left(\frac{r_0^3}{E_\mathrm{roll}}\frac{Gm_\mathrm{d}^2}{\pi r_\mathrm{d}^4}\right)^{1/3}\rho_0
=0.14\left(\frac{G}{\gamma\xi}\right)^{3/5}r_0^{6/5}\rho_0^{9/5}m_\mathrm{d}^{2/5} \label{eq:evoltrack}.
\end{equation}
This self-gravitational compression dominates other compression mechanisms when $m_\mathrm{d}\gtrsim10^{13}$ g \citep{Arakawa2016}.
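For illustration, the rolling energy and the closed-form $\rho_\mathrm{int}$--$m_\mathrm{d}$ relation above can be evaluated numerically. This is a sketch with an assumed CGS value of $G$; the fiducial silicate parameters are those quoted in the text:

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2 (assumed standard value)
gamma = 25.0          # erg cm^-2, silicate surface energy (from the text)
xi = 0.3e-7           # cm (0.3 nm), critical displacement
r0 = 2.5e-7           # cm (2.5 nm), fiducial monomer radius
rho0 = 3.0            # g cm^-3, silicate material density

# Rolling energy E_roll = 6 pi^2 gamma r0 xi
E_roll = 6 * math.pi ** 2 * gamma * r0 * xi

def rho_int_selfgrav(m_d):
    # Closed-form internal density under quasi-static self-gravitational
    # compression, rho_int = 0.14 (G/(gamma xi))^{3/5} r0^{6/5} rho0^{9/5} m_d^{2/5}
    return 0.14 * (G / (gamma * xi)) ** 0.6 * r0 ** 1.2 * rho0 ** 1.8 * m_d ** 0.4

print(E_roll)                   # ~1.1e-11 erg, as quoted in the text
print(rho_int_selfgrav(1e13))   # ~4.5e-4 g/cm^3, within the quoted porous range
```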
\subsection{Random Velocity}\label{subsec:vel}
To calculate the equilibrium random velocity of dust aggregates $v$, we divide $\mathrm{d}v^2/\mathrm{d}t$ into five components, which is given as
\begin{equation}
\frac{\mathrm{d}v^2}{\mathrm{d}t}=\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{grav}+\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{col}+\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{gas,drag}+\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{turb,stir}+\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{turb,scat}=0.
\label{eq:randomvel}
\end{equation}
From left to right, the components represent gravitational scattering between dust aggregates, collisions between them, drag by the mean flow of gas, stirring by gas turbulence, and gravitational scattering by gas density fluctuation due to turbulence.
We assume that the velocity distribution is isotropic, i.e., $v_x\simeq v_y\simeq v_z\simeq v/\sqrt{3}$, where $v_x$, $v_y$, and $v_z$ are $x$, $y$, and $z$ components of $v$, respectively.
In reality, the velocity distribution is anisotropic, but the effects are not significant \citep{Michikoshi2017}.
\subsubsection{Dust-Dust Interaction}
The velocity change by gravitational scattering between dust aggregates is given as
\begin{equation}
\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{grav}=n_\mathrm{d}\pi\left(\frac{2Gm_\mathrm{d}}{v^2_\mathrm{rel}}\right)^2v_\mathrm{rel}v^2\ln\Lambda,
\label{eq:grav}
\end{equation}
which is derived from Chandrasekhar's two-body relaxation time \citep[e.g.,][]{Ida1990}.
The number density of dust aggregates is
\begin{equation}
n_\mathrm{d}\simeq\frac{\Sigma_\mathrm{d}/m_\mathrm{d}}{\sqrt{2\pi}v_z/\Omega_\mathrm{K}},
\end{equation}
the typical relative velocity between them is $v_\mathrm{rel}\simeq\sqrt{2}v$, and $\Lambda$ is defined as
\begin{equation}
\Lambda=v_\mathrm{rel}^2\frac{v_z/\Omega_\mathrm{K}+r_\mathrm{H}}{2Gm_\mathrm{d}},
\end{equation}
where
\begin{equation}
r_\mathrm{H}=\left(\frac{2m_\mathrm{d}}{3M_\ast}\right)^{1/3}a
\end{equation}
is the Hill radius \citep{Stewart2000}.
In equation (\ref{eq:grav}), $\pi(2Gm_\mathrm{d}/v_\mathrm{rel}^2)^2$ means the gravitational scattering cross-section.
The velocity change by collisions between dust aggregates is given as
\begin{equation}
\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{col}=-C_\mathrm{col}n_\mathrm{d}\pi(2r_\mathrm{d})^2\left(1+\frac{v_\mathrm{esc}^2}{v_\mathrm{rel}^2}\right)v_\mathrm{rel}v^2,
\label{eq:col}
\end{equation}
where $C_\mathrm{col}=1/2$ \citep[e.g.,][]{Inaba2001} is the rate of change of kinetic energy during a collision and $v_\mathrm{esc}=\sqrt{2Gm_\mathrm{d}/r_\mathrm{d}}$ is the surface escape velocity.
In equation (\ref{eq:col}), $\pi(2r_\mathrm{d})^2(1+v_\mathrm{esc}^2/v_\mathrm{rel}^2)$ means the collision cross-section including gravitational focusing.
We assume that all collisions result in accretion.
\subsubsection{Dust-Gas Interaction}
Drag by mean flow of gas is given as
\begin{equation}
\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{gas,drag}=-\frac{2}{t_\mathrm{s}}v^2,
\label{eq:gasdrag}
\end{equation}
where
\begin{equation}
t_\mathrm{s}=\frac{2m_\mathrm{d}}{\pi C_\mathrm{D}r_\mathrm{d}^2\rho_\mathrm{g}u}
\label{eq:stoptime}
\end{equation}
is the stopping time, $C_\mathrm{D}$ is the dimensionless drag coefficient, and $u\simeq\sqrt{v^2+\eta^2v_\mathrm{K}^2}$ is the relative velocity between dust and gas \citep[e.g.,][]{Adachi1976}.
In the case of $m_\mathrm{d}\gtrsim10^{13}$ g, we confirm that the Stokes number $\tau_\mathrm{s}=\Omega_\mathrm{K} t_\mathrm{s}\gg1$ (see Figure \ref{fig:St}), and therefore the dust aggregates are decoupled from gas.
We adopt the expression of $C_\mathrm{D}$ from \cite{Brown2003}
\begin{equation}
C_\mathrm{D}=\begin{cases}
\cfrac{8v_\mathrm{th}}{3u} & (r_\mathrm{d}<9l/4)\\
\cfrac{0.407}{1+8710/\mathrm{Re}}+\cfrac{24}{\mathrm{Re}}(1+0.150\mathrm{Re}^{0.681}) & (r_\mathrm{d}>9l/4)
\end{cases},
\label{eq:CD}
\end{equation}
where $v_\mathrm{th}=\sqrt{8/\pi}c_\mathrm{s}$ is the thermal velocity, $\mathrm{Re}=2r_\mathrm{d}u/\nu$ is the Reynolds number, and $\nu=v_\mathrm{th}l/2$ is the viscosity.
When dust aggregates are smaller than the mean free path of gas molecules $(r_\mathrm{d}<9l/4)$, drag felt by dust aggregates is the Epstein drag.
In the other regime $(r_\mathrm{d}>9l/4)$, the drag is the Stokes drag for low Reynolds numbers and the Newton drag for high Reynolds numbers.
Stirring by gas turbulence is given as
\begin{equation}
\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{turb,stir}=\frac{2\tau_\mathrm{e}v_\mathrm{t}^2\Omega_\mathrm{K}}{\tau_\mathrm{s}(\tau_\mathrm{e}+\tau_\mathrm{s})},
\label{eq:turbstir}
\end{equation}
which is derived from the equilibrium velocity by turbulent stirring $v_\mathrm{t}^2\tau_\mathrm{e}/(\tau_\mathrm{e}+\tau_\mathrm{s})$ \citep{Youdin2007}.
The dimensionless eddy turnover time is $\tau_\mathrm{e}=1$ \citep[e.g.,][]{Youdin2011} and the turbulent velocity is $v_\mathrm{t}=\sqrt{\alpha}c_\mathrm{s}$.
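To illustrate how the equilibrium random velocity is obtained, the sketch below solves a reduced two-term balance (turbulent stirring against gas drag only, with a fixed Stokes number), not the full five-term equation used in the paper. For this reduced balance the equilibrium reduces to the \cite{Youdin2007} result $v^2=v_\mathrm{t}^2\tau_\mathrm{e}/(\tau_\mathrm{e}+\tau_\mathrm{s})$, which the root finder should recover:

```python
import math

def equilibrium_v(v_t, tau_s, tau_e=1.0, Omega=1.0):
    # Reduced sketch of dv^2/dt = 0: turbulent stirring balanced by gas drag.
    # stirring: 2 tau_e v_t^2 Omega / (tau_s (tau_e + tau_s))
    # drag:    -2 v^2 Omega / tau_s      (using t_s = tau_s / Omega)
    def dv2dt(v):
        stir = 2 * tau_e * v_t ** 2 * Omega / (tau_s * (tau_e + tau_s))
        drag = -2 * v ** 2 * Omega / tau_s
        return stir + drag
    lo, hi = 0.0, 10 * v_t
    for _ in range(100):            # bisection; dv2dt decreases monotonically in v
        mid = 0.5 * (lo + hi)
        if dv2dt(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = equilibrium_v(v_t=100.0, tau_s=9.0)
print(v)   # ~100 * sqrt(1/10) ≈ 31.6, matching the closed-form equilibrium
```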
Gravitational scattering by gas density fluctuation due to turbulence changes the velocity as
\begin{equation}
\left(\frac{\mathrm{d}v^2}{\mathrm{d}t}\right)_\mathrm{turb,scat}=C_\mathrm{turb}\alpha\left(\frac{\Sigma_\mathrm{g}a^2}{M_\ast}\right)^2\Omega_\mathrm{K}^3a^2,
\label{eq:turbscat}
\end{equation}
which is derived by \cite{Okuzumi2013}.
The dimensionless coefficient determined by disk structure $C_\mathrm{turb}$ is given as
\begin{equation}
C_\mathrm{turb} = \frac{0.94{\cal{L}}}{(1+4.5H_\mathrm{res,0}/H)^2},
\end{equation}
where $\cal{L}$ is the saturation limiter, $H_\mathrm{res,0}$ is the vertical dead zone half width, and $H$ is the gas scale height.
We adopt ${\cal{L}}=1$, which means that turbulence occurs due to the magneto-rotational instability (MRI).
We use $C_\mathrm{turb}=3.1\times10^{-2}$ by assuming $H_\mathrm{res,0}=H$.
\subsection{GI Conditions}\label{subsec:GI}
\subsubsection{Toomre's $Q$}
We define the condition of the GI using Toomre's stability parameter $Q$ \citep{Toomre1964}
\begin{equation}
Q = \frac{v_x\Omega_\mathrm{K}}{3.36G\Sigma_\mathrm{d}},
\label{eq:Q}
\end{equation}
where $v_x$ is derived from equation (\ref{eq:randomvel}).
The axisymmetric mode grows when $Q<1$ \citep{Toomre1964}.
When $1\lesssim Q\lesssim2$, non-axisymmetric mode or self-gravity wakes grow \citep[e.g.,][]{Toomre1981}, and planetesimals are formed in these self-gravity wakes \citep[e.g.,][]{Michikoshi2007}.
Therefore, we define the condition of the GI as $Q<2$.
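Equation (\ref{eq:Q}) can be inverted to give the critical random velocity below which the dust layer satisfies $Q<2$. A small sketch (assumed CGS constants; $\Sigma_\mathrm{d}$ from the MMSN model at 1 au):

```python
import math

G = 6.674e-8       # assumed CGS constants
Msun = 1.989e33
au = 1.496e13

def toomre_Q(v_x, Sigma_d, a_au):
    # Q = v_x Omega_K / (3.36 G Sigma_d)
    Omega = math.sqrt(G * Msun / (a_au * au) ** 3)
    return v_x * Omega / (3.36 * G * Sigma_d)

def critical_vx(Sigma_d, a_au, Q_crit=2.0):
    # Random velocity below which Q < Q_crit
    Omega = math.sqrt(G * Msun / (a_au * au) ** 3)
    return Q_crit * 3.36 * G * Sigma_d / Omega

Sigma_d = 1700 * 0.0042   # MMSN dust surface density at 1 au (~7.1 g/cm^2)
print(critical_vx(Sigma_d, 1.0))   # a few tens of cm/s
```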
\subsubsection{Timescales}
We compare timescales of growth by pairwise accretion $t_\mathrm{grow}$, the radial drift $t_\mathrm{drift}$, and the GI $t_\mathrm{GI}$.
Substantial radial drift occurs when $t_\mathrm{grow}>(1/30)t_\mathrm{drift}$ \citep{Okuzumi2012}.
The growth timescale considering gravitational focusing is given as
\begin{equation}
t_\mathrm{grow}\equiv\frac{m_\mathrm{d}}{\mathrm{d}m_\mathrm{d}/\mathrm{d}t}=\frac{1}{n_\mathrm{d}\pi r_\mathrm{d}^2(1+v_\mathrm{esc}^2/v_\mathrm{rel}^2)v_\mathrm{rel}},\label{eq:tgrow}
\end{equation}
and the radial drift timescale due to the gas drag \citep[e.g.,][]{Adachi1976} is given as
\begin{equation}
t_\mathrm{drift}\equiv\frac{a}{\mathrm{d}a/\mathrm{d}t}=\frac{a}{2\tau_\mathrm{s}\eta v_\mathrm{K}/(1+\tau_\mathrm{s}^2)}.\label{eq:tdrift}
\end{equation}
The GI timescale \citep[e.g.,][]{Sekiya1983,Goldreich1973} is on the order of the orbital period
\begin{equation}
t_\mathrm{GI}\sim\Omega_\mathrm{K}^{-1}.
\label{eq:tGI}
\end{equation}
If the GI timescale is the shortest when $Q<2$, we conclude that the GI occurs.
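Equation (\ref{eq:tdrift}) also reproduces the well-known result, quoted in the Introduction, that drift is fastest at Stokes number unity with a timescale of order $10^2$ yr at 1 au. A sketch with assumed CGS constants and the MMSN value of $\eta$:

```python
import math

G = 6.674e-8       # assumed CGS constants
Msun = 1.989e33
au = 1.496e13
yr = 3.156e7       # seconds per year

def t_drift_years(tau_s, a_au=1.0):
    # t_drift = a (1 + tau_s^2) / (2 tau_s eta v_K)
    a = a_au * au
    Omega = math.sqrt(G * Msun / a ** 3)
    vK = a * Omega
    eta = 1.8e-3 * a_au ** 0.5     # MMSN gas-pressure support parameter
    return a * (1 + tau_s ** 2) / (2 * tau_s * eta * vK) / yr

print(t_drift_years(1.0))   # ~90 yr: drift is fastest at tau_s = 1
```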
\subsubsection{Velocity Range}
For the GI to occur in our models, we must confirm that runaway growth and fragmentation do not occur when $Q<2$.
The dust aggregates cannot grow if fragmentation occurs, and the assumption of a single aggregate mass is broken if runaway growth occurs.
Conditions of the runaway growth and the fragmentation are $v_\mathrm{rel}<v_\mathrm{esc}$ \citep{Kokubo1996} and $v_\mathrm{rel}>v_\mathrm{frag,cr}$, respectively, where
\begin{equation}
v_\mathrm{frag,cr}=6\times10^2\left(\frac{r_0}{100\mathrm{\ nm}}\right)^{-5/6}\mathrm{\ cm\ s^{-1}}
\label{eq:frag}
\end{equation}
is the critical velocity of catastrophic disruption \citep{Dominik1997,Wada2009}.
Therefore, $v_\mathrm{esc}<v_\mathrm{rel}<v_\mathrm{frag,cr}$ is required for dust aggregates to avoid runaway growth and fragmentation.
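The allowed velocity window can be evaluated directly from the expressions above. The sketch below assumes spherical aggregates and a standard CGS value of $G$; the sample values of $m_\mathrm{d}$ and $\rho_\mathrm{int}$ are illustrative only:

```python
import math

G = 6.674e-8   # assumed CGS value

def v_esc(m_d, rho_int):
    # Surface escape velocity of a spherical aggregate: sqrt(2 G m_d / r_d)
    r_d = (3 * m_d / (4 * math.pi * rho_int)) ** (1.0 / 3.0)
    return math.sqrt(2 * G * m_d / r_d)

def v_frag_cr(r0_nm):
    # Critical velocity of catastrophic disruption (cm/s), equation above
    return 6e2 * (r0_nm / 100.0) ** (-5.0 / 6.0)

def growth_window_ok(v_rel, m_d, rho_int, r0_nm):
    # Neither runaway growth (v_rel < v_esc) nor fragmentation (v_rel > v_frag,cr)
    return v_esc(m_d, rho_int) < v_rel < v_frag_cr(r0_nm)

print(v_frag_cr(2.5))   # ~1.3e4 cm/s for 2.5 nm monomers
print(growth_window_ok(100.0, 1e13, 4.5e-4, 2.5))
```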
\section{Results}\label{sec:results}
First, we investigate the stability of the dust layer for the fiducial model, whose orbital radius is 1 au and dust monomer radius is $r_0=2.5$ nm in Section \ref{subsec:resultQ}, \ref{subsec:resulttime}, and \ref{subsec:resultvel}.
We use the four disk models shown in Table \ref{tab:diskmodels}.
Next, we calculate the dependence on parameters in Section \ref{subsec:resultdep}.
\subsection{Toomre's $Q$}\label{subsec:resultQ}
We calculate Toomre's $Q$ at 1 au of four disk models shown in Table \ref{tab:diskmodels} and draw contours in the $m_\mathrm{d}$-$\rho_\mathrm{int}$ plane in Figure \ref{fig:ToomresQ}.
Also, we show the mass and internal density relation under the self-gravitational compression of dust aggregates with $r_0=2.5$ nm using equation (\ref{eq:evoltrack}).
All models show a tendency for $Q$ to decrease and then increase as dust aggregates grow.
The GI occurs in three of them: the MMSN weak turbulence, massive, and dust-rich disk models.
\begin{figure}[htbp]
\plotone{Q_out.pdf}
\caption{Toomre's $Q$ in the $m_\mathrm{d}$-$\rho_\mathrm{int}$ plane at 1 au of the MMSN (top left), MMSN weak turbulence (top right), massive (bottom left), and dust-rich disk (bottom right) models.
The dash-dotted, solid, and dash contours correspond to $Q=1$, 2, and 4, respectively.
The dotted lines and the red arrows show the mass and internal density relation under self-gravitational compression of dust aggregates with $r_0=2.5$ nm.}
\label{fig:ToomresQ}
\end{figure}
We also plot the densities of dust aggregates and gas at 1 au for four disk models in Figure \ref{fig:massdensity}, where the density of dust aggregates is given as
\begin{equation}
\rho_\mathrm{d} \simeq n_\mathrm{d}m_\mathrm{d}.
\end{equation}
Note that the dust internal density $\rho_\mathrm{int}$ is determined by self-gravitational compression.
The critical density for the GI, which is calculated from equation (\ref{eq:Q}) as
\begin{equation}
\rho_\mathrm{GI}\simeq\frac{\Omega_\mathrm{K}^2}{3.36\sqrt{2\pi}QG}\simeq3.5\times10^{-8}\left(\frac{Q}{2}\right)^{-1}\left(\frac{a}{\mathrm{1\ au}}\right)^{-3}\mathrm{\ g\ cm^{-3}},
\end{equation}
is also shown in Figure \ref{fig:massdensity}.
It is found that the dust-to-gas ratio at the disk midplane is $\sim26$ when the GI occurs.
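The critical density and the resulting midplane dust-to-gas ratio can be checked numerically. This sketch assumes standard CGS constants and uses the MMSN midplane gas density quoted in equation (\ref{eq:gasdensity}):

```python
import math

G = 6.674e-8       # assumed CGS constants
Msun = 1.989e33
au = 1.496e13

def rho_GI(a_au, Q=2.0):
    # Critical density: Omega_K^2 / (3.36 sqrt(2 pi) Q G)
    Omega2 = G * Msun / (a_au * au) ** 3
    return Omega2 / (3.36 * math.sqrt(2 * math.pi) * Q * G)

rho_g_1au = 1.4e-9   # MMSN midplane gas density at 1 au (from the text)
print(rho_GI(1.0))                # ~3.5e-8 g/cm^3
print(rho_GI(1.0) / rho_g_1au)    # midplane dust-to-gas ratio ~25-26 at GI onset
```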
\begin{figure}[htbp]
\plotone{massdensity.pdf}
\caption{Densities of dust aggregates $\rho_\mathrm{d}$ (solid) and gas $\rho_\mathrm{g}$ (dot), and the critical density for the GI $\rho_\mathrm{GI}$ (dash) against $m_\mathrm{d}$ at 1 au of the MMSN (green), MMSN weak turbulence (black), massive (red), and dust-rich (blue) disk models of dust aggregates with $r_0=2.5$ nm.}
\label{fig:massdensity}
\end{figure}
\subsection{Timescales}\label{subsec:resulttime}
Timescales of the growth $t_\mathrm{grow}$ (equation (\ref{eq:tgrow})), the radial drift $t_\mathrm{drift}$ (equation (\ref{eq:tdrift})), and the GI $t_\mathrm{GI}$ (equation (\ref{eq:tGI})) at 1 au of the three models where the GI occurs are shown in Figure \ref{fig:timescale}.
We use equation (\ref{eq:evoltrack}) with $r_0=2.5$ nm.
The growth timescale $t_\mathrm{grow}\propto m_\mathrm{d}^{3/5}$ and the radial drift timescale $t_\mathrm{drift}\propto m_\mathrm{d}^{3/5}$ increase with $m_\mathrm{d}$ monotonically, while the GI timescale $t_\mathrm{GI}$ is independent of $m_\mathrm{d}$.
At $Q=2$, the GI timescale is the shortest in all models.
\begin{figure}[htbp]
\plotone{TimeScale.pdf}
\caption{Timescales of the growth $t_\mathrm{grow}$ (dash), the radial drift $t_\mathrm{drift}$ (dot), and the GI $t_\mathrm{GI}$ (solid) against $m_\mathrm{d}$ at 1 au of the MMSN weak turbulence (black), massive (red), and dust-rich (blue) disk models of dust aggregates with $r_0=2.5$ nm.
The vertical lines show when the GI occurs.}
\label{fig:timescale}
\end{figure}
\subsection{Equilibrium Random Velocity}\label{subsec:resultvel}
To check that the runaway growth and the fragmentation do not occur, we plot $v_\mathrm{rel}\simeq\sqrt{2}v$, $v_\mathrm{esc}=\sqrt{2Gm_\mathrm{d}/r_\mathrm{d}}\propto m_\mathrm{d}^{2/5}$, and $v_\mathrm{frag,cr}$ (equation (\ref{eq:frag})) at 1 au of the three models where the GI occurs in Figure \ref{fig:velocity}.
Obviously, $v_\mathrm{esc}$ and $v_\mathrm{frag,cr}$ are independent of disk models.
For small $m_\mathrm{d}$, the condition of $v_\mathrm{frag,cr}>v_\mathrm{rel}>v_\mathrm{esc}$ is satisfied, which means that the runaway growth and the fragmentation do not occur.
When the GI occurs, $v_\mathrm{rel}$ is still larger than $v_\mathrm{esc}$.
\begin{figure}[htbp]
\plotone{Velocity.pdf}
\caption{Relative velocity between dust aggregates $v_\mathrm{rel}$ (dash), the escape velocity $v_\mathrm{esc}$ (dot), and the fragmentation velocity $v_\mathrm{frag,cr}$ (solid) against $m_\mathrm{d}$ at 1 au of the MMSN weak turbulence (black), massive (red), and dust-rich (blue) disk models of dust aggregates with $r_0=2.5$ nm.
The vertical lines show when the GI occurs.}
\label{fig:velocity}
\end{figure}
Figure \ref{fig:heat} shows the relative contribution to $\mathrm{d}v^2/\mathrm{d}t$ of each increasing mechanism, which includes gravitational scattering between dust aggregates (equation (\ref{eq:grav})), stirring by gas turbulence (equation (\ref{eq:turbstir})), and gravitational scattering by gas density fluctuation due to turbulence (equation (\ref{eq:turbscat})).
The main increasing mechanism is stirring by gas turbulence before the GI occurs.
Gravitational scattering dominates others when $m_\mathrm{d}\gtrsim10^{15}$ g.
\begin{figure}[htbp]
\plotone{HeatRate.pdf}
\caption{The relative contribution of each increasing mechanism in $\mathrm{d}v^2/\mathrm{d}t$, which includes gravitational scattering between dust aggregates (dash), stirring by gas turbulence (solid), and gravitational scattering by gas density fluctuation due to turbulence (dot) at 1 au of the MMSN weak turbulence (black), massive (red), and dust-rich (blue) disk models of dust aggregates with $r_0=2.5$ nm.
Each $\mathrm{d}v^2/\mathrm{d}t$ is divided by $\Sigma_+\mathrm{d}v^2/\mathrm{d}t=(\mathrm{d}v^2/\mathrm{d}t)_{\mathrm{grav}}+(\mathrm{d}v^2/\mathrm{d}t)_{\mathrm{turb,stir}}+(\mathrm{d}v^2/\mathrm{d}t)_{\mathrm{turb,scat}}$.
The vertical lines show when the GI occurs.}
\label{fig:heat}
\end{figure}
Figure \ref{fig:cool} is the same as Figure \ref{fig:heat} but for the decreasing mechanism, which includes collisions between dust aggregates (equation (\ref{eq:col})) and drag by mean flow of gas (equation (\ref{eq:gasdrag})).
The difference between the two decreasing mechanisms is a factor of a few, which means that they have comparable effects.
\begin{figure}[htbp]
\plotone{CoolRate.pdf}
\caption{The relative contribution of each decreasing mechanism in $\mathrm{d}v^2/\mathrm{d}t$, which includes collisions between dust aggregates (solid) and drag by mean flow of gas (dash) at 1 au of the MMSN weak turbulence (black), massive (red), and dust-rich (blue) disk models of dust aggregates with $r_0=2.5$ nm.
Each $\mathrm{d}v^2/\mathrm{d}t$ is divided by $\Sigma_-\mathrm{d}v^2/\mathrm{d}t=(\mathrm{d}v^2/\mathrm{d}t)_{\mathrm{col}}+(\mathrm{d}v^2/\mathrm{d}t)_{\mathrm{gas,drag}}$.
The vertical lines show when the GI occurs.}
\label{fig:cool}
\end{figure}
In addition, we plot the Stokes number $\tau_\mathrm{s}$ to show the effect of coupling between dust and gas in Figure \ref{fig:St}.
The MMSN weak turbulence and dust-rich disk models are not distinguishable.
The Stokes number is always much larger than unity.
\begin{figure}[htbp]
\plotone{St.pdf}
\caption{The Stokes numbers $\tau_\mathrm{s}$ at 1 au of the MMSN weak turbulence (black-dashed), massive (red-solid), and dust-rich (blue-dotted) disk models of dust aggregates with $r_0=2.5$ nm.
The vertical lines show when the GI occurs.}
\label{fig:St}
\end{figure}
\subsection{Dependence on Parameters}\label{subsec:resultdep}
\subsubsection{Disk Parameters}
Figure \ref{fig:dependalpha} shows the GI and no GI regions in the $f_\mathrm{g}$-$f_\mathrm{d}$ plane.
The orbital radius and the dust monomer radius are fixed at 1 au and $r_0=2.5$ nm, respectively.
We vary the turbulent strength $\alpha$ and draw boundaries between the two regions.
The GI is found to occur easily in the weak turbulence, massive, and dust-rich disks.
For example, the GI occurs when $\alpha\lesssim10^{-5}$ in the MMSN model at 1 au.
The reason why the GI occurs easily in the massive and/or dust-rich disk is that $Q$ decreases as the dust surface density increases in equation (\ref{eq:Q}).
In the case of the weak turbulent disk, the main increasing mechanism at $Q=2$ is stirring by gas turbulence (Figure \ref{fig:heat}).
When the effect of stirring decreases, the equilibrium random velocity of dust aggregates also decreases, and then, $Q$ decreases.
\begin{figure}[htbp]
\plotone{Dependence_alpha.pdf}
\caption{Boundaries between the GI and no GI regions in the $f_\mathrm{g}$-$f_\mathrm{d}$ plane.
The orbital radius is 1 au and the dust monomer radius is $r_0=2.5$ nm.
The red, blue, black, and green lines show boundaries for $\alpha=10^{-2}$, $10^{-3}$, $10^{-4}$, and $10^{-5}$, respectively.
The dotted lines show the MMSN model: $f_\mathrm{g}=1$ and $f_\mathrm{d}=0.0042$.}
\label{fig:dependalpha}
\end{figure}
\subsubsection{Orbital Radius}
Next, we vary the orbital radius $a$ and draw boundaries between the GI and no GI regions in Figure \ref{fig:dependau}.
The turbulent strength and the dust monomer radius are fixed at $\alpha=10^{-4}$ and $r_0=2.5$ nm, respectively.
The GI is found to occur more easily at larger orbital radii.
For example, the GI occurs when the orbital radius is larger than 2 au in the MMSN model with $\alpha=10^{-4}$.
The dependence of $Q$ on $a$ is the same as that of the equilibrium random velocity $v_x$ on $a$ because $Q\propto v_x\Omega_\mathrm{K}\Sigma_\mathrm{d}^{-1}\propto v_xa^{-3/2}a^{3/2}\propto v_x$ (equation (\ref{eq:Q})).
At the disk's outer region, the turbulent velocity decreases because of the low temperature, which leads to the weak effect of stirring by gas turbulence.
As a result, the equilibrium random velocity and $Q$ decrease because stirring dominates other increasing mechanisms at $Q=2$.
\begin{figure}[htbp]
\plotone{Dependence_au.pdf}
\caption{Boundaries between the GI and no GI regions in the $f_\mathrm{g}$-$f_\mathrm{d}$ plane.
The turbulent strength is $\alpha=10^{-4}$ and the dust monomer radius is $r_0=2.5$ nm.
The red, blue, and black lines show boundaries for $a=0.5$ au, $a=1$ au, and $a=2$ au, respectively.
The dotted lines show the MMSN model: $f_\mathrm{g}=1$ and $f_\mathrm{d}=0.0042$.}
\label{fig:dependau}
\end{figure}
\subsubsection{Dust Parameters}\label{subsec:resultdust}
Finally, we investigate the dependence of the GI region on the dust monomer radius $r_0$ in Figure \ref{fig:monomersize}.
The turbulent strength and the orbital radius are fixed at $\alpha=10^{-4}$ and $a=1$ au, respectively.
We find that the GI occurs more easily when the dust monomer radius is larger.
In the MMSN model with $\alpha=10^{-4}$ at 1 au, for example, the GI does not occur when the dust monomer radius is less than 10 nm.
However, it becomes more difficult for dust aggregates to stick and grow as the monomer radius becomes larger.
The maximum monomer radius is determined by comparing the maximum collision velocity and the critical velocity of catastrophic disruption $v_\mathrm{frag,cr}$.
Assuming that the maximum collision velocity is the same as the turbulent velocity $v_\mathrm{t}=\sqrt{\alpha}c_\mathrm{s}$, the condition for dust growth is $\sqrt{\alpha}c_\mathrm{s}<v_\mathrm{frag,cr}$.
Thus the monomer radius must be
\begin{equation}
r_0<54\left(\frac{\alpha}{10^{-4}}\right)^{-3/5}\left(\frac{a}{1\mathrm{\ au}}\right)^{3/10}\mathrm{\ nm}.
\end{equation}
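The coefficient 54 nm follows from setting $\sqrt{\alpha}c_\mathrm{s}=v_\mathrm{frag,cr}$ and solving for $r_0$; a quick check (with the $c_\mathrm{s}$ and $v_\mathrm{frag,cr}$ normalizations quoted earlier in the text):

```python
def r0_max_nm(alpha, a_au=1.0):
    # Growth condition sqrt(alpha) c_s < v_frag,cr with
    # c_s = 1.0e5 (a/au)^{-1/4} cm/s and v_frag,cr = 600 (r0/100 nm)^{-5/6} cm/s
    cs = 1.0e5 * a_au ** -0.25
    v_col = alpha ** 0.5 * cs
    return 100.0 * (v_col / 600.0) ** (-6.0 / 5.0)

print(round(r0_max_nm(1e-4)))   # ~54 nm at 1 au, as in the equation above
```

The $a^{3/10}$ scaling follows because $c_\mathrm{s}\propto a^{-1/4}$ and $r_0\propto v_\mathrm{col}^{-6/5}$.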
\begin{figure}[htbp]
\plotone{Dependence_monomerradius.pdf}
\caption{Boundaries between the GI and no GI regions in the $f_\mathrm{g}$-$f_\mathrm{d}$ plane.
The turbulent strength is $\alpha=10^{-4}$ and the orbital radius is 1 au.
The red, blue, black, and green lines show boundaries for $r_0=2.5$ nm, $r_0=5$ nm, $r_0=10$ nm, and $r_0=54$ nm, respectively.
The dotted lines show the MMSN model: $f_\mathrm{g}=1$ and $f_\mathrm{d}=0.0042$.}
\label{fig:monomersize}
\end{figure}
Figure \ref{fig:Qandmonomer} shows how the mass and internal density relation of dust aggregates changes in the $m_\mathrm{d}$-$\rho_\mathrm{int}$ plane.
Only the MMSN weak turbulence disk model is shown here because this relation is independent of disk models.
The mean internal density of the dust aggregates increases as their monomer radius increases.
This weakens the effect of stirring by gas turbulence.
Finally, the equilibrium random velocity and $Q$ decrease because the main increasing mechanism at $Q=2$ is stirring.
\begin{figure}[htbp]
\plotone{Q_out_w_r0.pdf}
\caption{Toomre's $Q$ in the $m_\mathrm{d}$-$\rho_\mathrm{int}$ plane at 1 au of the MMSN weak turbulence disk model.
The dash-dotted, solid, and dash contours correspond to $Q=1$, 2, and 4, respectively.
The red, blue, and black dotted lines show the mass and internal density relation under gravitational compression of dust aggregates with $r_0=2.5$, 5, and 10 nm, respectively.}
\label{fig:Qandmonomer}
\end{figure}
\section{Conclusions}\label{sec:sum}
We have investigated the gravitational instability (GI) of a dust layer composed of porous dust aggregates of $\sim2.5$--$10$-nm-sized silicate monomers.
To evaluate the disk stability, we calculated Toomre's stability parameter $Q$ from the equilibrium random velocity of dust aggregates.
We calculated the equilibrium random velocity considering five processes: gravitational scattering between dust aggregates, collisions between them, drag by mean flow of gas, stirring by gas turbulence, and gravitational scattering by gas density fluctuation due to turbulence.
We derived the GI condition as a function of five disk and dust parameters: disk mass, dust-to-gas ratio, turbulent strength, orbital radius, and dust monomer radius.
In the case of the minimum mass solar nebula model at 1 au, for example, the dust layer becomes gravitationally unstable when the turbulent strength $\alpha\lesssim10^{-5}$ and the monomer radius $r_0=2.5$ nm.
If the dust-to-gas ratio is increased twice, the GI occurs for $\alpha\lesssim10^{-4}$.
We found that the GI occurs more easily in the more massive and more dust-rich disks with weaker turbulence at outer regions.
The larger monomer radius is preferable to the GI.
In this paper, we only investigated the condition of $Q<2$, which leads to the growth of self-gravity wakes in a dust layer.
However, it is unknown how such wakes fragment to form planetesimals.
For the fragmentation of a gas disk, the cooling timescale should be comparable to or shorter than the orbital timescale \citep{Gammie2001}.
\cite{Michikoshi2017} pointed out the existence of a similar condition for the wake fragmentation.
However, clarifying this condition is beyond the scope of the present paper and is left for future work.
If planetesimals are formed in the self-gravity wakes, their typical mass $m_\mathrm{p}$ can be estimated by
\begin{equation}
m_\mathrm{p}\simeq\lambda_\mathrm{cr}^2\Sigma_\mathrm{d}\simeq1.6\times10^{18}f_\mathrm{g}^3\left(\frac{f_\mathrm{d}}{0.0042}\right)^3\left(\frac{a}{1 \mathrm{\ au}}\right)^{3/2}\mathrm{\ g},
\end{equation}
where $\lambda_\mathrm{cr}=4\pi^2G\Sigma_\mathrm{d}/\Omega_\mathrm{K}^2$ is the critical wavelength of the GI.
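As a quick numerical illustration (not part of the derivation), the scaling relation above can be evaluated directly; the sketch below assumes the MMSN normalization used in the text ($f_\mathrm{g}=1$, $f_\mathrm{d}=0.0042$ at $a=1$ au).

```python
def planetesimal_mass(f_g=1.0, f_d=0.0042, a_au=1.0):
    """Typical planetesimal mass m_p in grams from the self-gravity wake
    scaling m_p ~ 1.6e18 f_g^3 (f_d/0.0042)^3 (a/1 au)^(3/2) g."""
    return 1.6e18 * f_g**3 * (f_d / 0.0042)**3 * a_au**1.5

# MMSN normalization at 1 au
m_mmsn = planetesimal_mass()
# doubling the dust-to-gas ratio raises m_p by a factor of 2^3 = 8
m_double = planetesimal_mass(f_d=0.0084)
```

Because $m_\mathrm{p}\propto f_\mathrm{d}^3$, doubling the dust-to-gas ratio increases the typical planetesimal mass eightfold; the $a^{3/2}$ dependence likewise gives a factor of eight from $1$ au to $4$ au.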
In addition, the rolling energy $E_\mathrm{roll}$ of silicate monomers has a large uncertainty because the critical displacement $\xi$ is different between theoretical and experimental values.
Note that the dependence on $\xi$ is weaker than that on the monomer radius $r_0$ as suggested by equation (\ref{eq:evoltrack}).
Moreover, the assumption of a single dust aggregate radius $r_\mathrm{d}$ may not be appropriate even if runaway growth does not occur.
In reality, the dust aggregates have a size distribution due to collisional fragmentation.
The effects of the dust size distribution on the GI remain to be investigated.
\acknowledgments{
We thank Akimasa Kataoka and Tetsuo Taki for fruitful discussions.
We appreciate the careful reading and valuable comments by the anonymous referee and the editor, Judy Pipher.
}
\section{Introduction}
\label{intro}
Distributions of genealogical features such as shapes, subtrees, and clades are of interest in phylogenetic and population genetics. By comparing biological data with these distributions, which can be derived from null models such as the Yule-Harding-Kingman (YHK) model and proportional to distinguishable arrangements (PDA) model, we can obtain insights into macro-evolutionary processes underlying the data \citep{felsenstein04a,mooers97a,mooers02a,nordborg98a,nordborg01a}. For instance, phylogenetic tree statistics were used to study variation in speciation and extinction rates (see, e.g.~\citet{agapow02a,mooers97a,rogers96a}).
As a basic concept in phylogenetic studies and systematic classification of species, a clade, also known as a monophyletic group, is a subset of extant species containing all the descendants of a common ancestor. In this paper, we are interested in the distributions of clade size in a random tree generated under the null models.
Such distributions have been utilized in hypothesis testing as to whether a set of extant taxa forms a clade \citep{hudson02a,rosenberg07a}, and
are relevant to the Bayesian approach to phylogenetic reconstruction \citep{PR05,PS06}.
Two well-studied and commonly used null models in evolutionary biology are the Yule-Harding model~\citep{yule25a, harding71a} and the PDA model (also known as the uniform model) \citep{Aldous2001}. Loosely speaking, under the PDA model all rooted binary trees are chosen with equal probabilities, while under the Yule-Harding model each tree is chosen with a probability proportional to the number of total orders that can be assigned to internal nodes of the tree so that the relative (partial) order is preserved~\citep[see, e.g.~][]{semple03a}.
More precisely, the Yule-Harding model assumes a speciation process with a constant pure-birth rate \citep{Blum2006,Pinelis2003}, which generates the same probability distributions of tree topologies as Kingman's coalescent process \citep{Kingman1982}.
Therefore, we will refer to it as the Yule-Harding-Kingman (YHK) model~\citep{aldous96a}.
Both the YHK model and PDA model are used to generate prior probabilities of tree topologies in Bayesian phylogenetic analyses \citep{Li2000,Rannala1996}.
Comparison studies of various tree statistics between the YHK and PDA models have been reported in the literature. For example,
\citet{McKenzie2000} derive the asymptotic probability distributions of cherries in phylogenetic trees; \citet{Steel2012} discusses the root location in a random Yule or PDA tree; \citet{Blum2006} obtain formulas for the mean, variance, and covariance of the Sackin \citep{Sackin1972} and Colless \citep{Colless1982} indices, two popular indices used to measure the balance of phylogenetic trees.
Note that in Bayesian analyses, the output is often clade support calculated from the consensus of the approximated posterior distribution of the topologies. However, the relationships between topological priors and clade priors are often not straightforward. For instance, it is observed that the uniform topological prior, which is induced by the PDA model, leads to non-uniform clade priors \citep{PR05}. Indeed, for $n>4$, neither the PDA model nor the YHK model gives rise to a uniform prior on clades \citep{PS06}. As an attempt to further elucidate these relationships, in this paper we study the distributions of clade sizes in the PDA model, and then conduct a comparison study of these distributions with those in the YHK model. In addition, we conduct a similar study on clans, the counterpart of clades for unrooted trees.
The remainder of the paper is organized as follows. Sections 2 and 3 contain necessary notation and background used in the paper and a brief review of the YHK and PDA models. We then present in Section 4 the results concerning clade probabilities under the two null models,
and those related to clan probabilities in Section 5. Finally, we conclude in Section 6 with discussions and remarks.
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we present some basic notation and background concerning phylogenetic trees and log-convexity that will be used in this paper.
From now on, $X$ will be used to denote the leaf set, and we assume that $X$ is a finite set of size $n=|X|\geqslant 3$ unless stated otherwise.
\bigskip
\noindent
\subsection{Phylogenetic trees}
A {\em tree} is a connected acyclic graph. A vertex will be referred to as a {\em leaf} if its degree is one, and an {\em interior vertex} otherwise. An unrooted tree is {\it binary} if all interior vertices have degree three. A {\em rooted} tree is a tree that has exactly one distinguished node designated as the {\em root}, which is usually denoted by $\rho$. A rooted tree is binary if the root has degree two and all other interior vertices have degree three.
A {\em phylogenetic tree} on $X$ is a binary tree with leaves bijectively labeled by elements of $X$.
The set of rooted and unrooted phylogenetic trees on $X$ are denoted by $\mathcal{T}_X$ and $\mathcal{T}^*_X$, respectively. Two examples of phylogenetic trees on $X=\{1,\dots,7\}$, one rooted and the other unrooted, are presented in Figure \ref{fig:trees}.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.7]{Fig1a.eps} & \includegraphics[scale=0.7]{Fig1b.eps}
\end{tabular}
\caption{Example of a rooted phylogenetic tree (left) and an unrooted phylogenetic tree (right).
}
\label{fig:trees}
\end{figure}
Let $T$ be a rooted phylogenetic tree on $X$.
Given two vertices $v$ and $u$ in tree $T$, $u$ is {\em below} $v$ if $v$ is contained in the path between $u$ and the root of $T$. In this case, we also say $u$ is a {\em descendant} of $v$ if $v$ and $u$ are distinct.
A {\em clade} of $T$ is a subset of $X$ that contains precisely all the leaves below a vertex in $T$.
A clade $A$ is called trivial if $|A|=1$ or $A=X$ holds, and non-trivial otherwise.
Since $T$ has $2n-1$ vertices, it contains precisely $2n-1$ clades, including $n+1$ trivial ones. For example, the rooted phylogenetic tree on $X=\{1,\dots,7\}$ depicted in Figure~\ref{fig:trees} has 13 clades: the five non-trivial ones are $\{1,2\}, \{3,4\}, \{1,2,3,4\}, \{6,7\}$ and $\{5,6,7\}$.
Suppressing the root of a tree $T$ in $\mathcal{T}_X$, that is, removing $\rho$ and replacing the two edges incident with $\rho$ with an edge connecting the two vertices adjacent to $\rho$, results in an unrooted tree in $\mathcal{T}^*_X$, which will be denoted by $\rho^{-1}(T)$. For instance, for the rooted tree $T$ and unrooted tree $T^*$ in Figure \ref{fig:trees}, we have $T^*=\rho^{-1}(T)$. Note that for each $T^*$ in $\mathcal{T}^*_X$, there are precisely $2n-3$ rooted trees $T$ in $\mathcal{T}_X$ such that $T^*=\rho^{-1}(T)$ holds.
Recall that a {\em split} $A|B$ on $X$ is a bipartition of $X$ into two disjoint non-empty sets $A$ and $B$, that is, $A\cap B=\emptyset$ and $A\cup B=X$. Let $T^*$ be an unrooted tree in $\mathcal{T}^*_X$. Every edge $e$ of $T^*$ induces a necessarily unique split $A|B$ of $X$ obtained as the two sets of leaves separated by $e$. In other words, the path between a pair of leaves in $X$ contains $e$ if and only if one of these two leaves is in $A$ and the other one is in $B$. In this case, we say $A|B$ is a split contained in $T^*$.
A {\em clan} $A$ of $T^*$ is a subset of $X$ such that $A|(X\setminus A)$ is a split contained in $T^*$.
Since $T^*$ has $2n-3$ edges and each edge induces two distinct clans, it contains precisely $2(2n-3)$ clans.
\bigskip
\noindent
\subsection{Log-convexity}
A sequence $\{y_1,\dots,y_m\}$ of real numbers is called {\em positive} if each
number in the sequence is greater than zero.
It is called {\em log-convex} if
$y_{k-1}y_{k+1}\geqslant y_k^2$ holds for $2\leqslant k \leqslant m-1$. Clearly, a positive sequence $\{y_k\}_{1\leqslant k \leqslant m}$ is log-convex if and only if the sequence $\{y_{k+1}/y_k\}_{1\leqslant k \leqslant m-1}$ is increasing. Therefore, a log-convex sequence of positive numbers is necessarily {\em unimodal}, that is, there exists an index $1\leqslant k \leqslant m$ such that
\begin{equation}
\label{def:unimodal}
y_1\geqslant y_2 \geqslant \dots \geqslant y_k~~~\mbox{and}~~~y_k \leqslant y_{k+1} \leqslant \cdots \leqslant y_m
\end{equation}
hold.
Recall that a sequence $\{y_i\}_{1\leqslant i \leqslant m}$ is also called unimodal if
$y_1\leqslant y_2 \leqslant \dots \leqslant y_k$ and $y_k \geqslant y_{k+1} \geqslant \cdots \geqslant y_m$ hold for some $1\leqslant k\leqslant m$. However, in this paper, unimodal always refers to the situation specified in Eq.~(\ref{def:unimodal}).
For later use, we end this section with the following results concerning log-convex sequences (see, e.g.~\citet{LW}).
\begin{lemma}
\label{lem:log-convex}
If $\{y_i\}_{1\leqslant i \leqslant m}$ and $\{y'_i\}_{1\leqslant i \leqslant m}$ are two positive and log-convex sequences, then the sequences $\{y_i+y'_i\}_{1\leqslant i \leqslant m}$ and $\{y_i\cdot y'_i\}_{1\leqslant i \leqslant m}$ are positive and log-convex.
\hfill $\square$
\end{lemma}
\section{The PDA and YHK models}
In this section, we present a formal definition of the two null models investigated in this paper: the {\it proportional to distinguishable arrangements} (PDA) model and {\it Yule--Harding--Kingman} (YHK) model.
To begin with, recall that the number of rooted phylogenetic trees on a leaf set $X$ with $n=|X|$ is
$$\varphi(n):= (2n-3)!! = 1\cdot 3 \dotsb (2n-3)=\frac{(2n-2)!}{2^{n-1}(n-1)!}.$$
Here we will use the convention that $\varphi(1)=1$. Under the PDA model, each tree has the same probability to be generated, that is, we have
\begin{equation} \label{eq:rooted-pda-prob}
\mathbb{P}_{\text{PDA}}(T) = \frac{1}{\varphi(n)}
\end{equation}
for every $T$ in $\mathcal{T}_X$.
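The count $\varphi(n)$ and the resulting uniform PDA probability can be checked numerically; the following sketch (illustrative only, not part of the formal development) compares the closed form with the double factorial $(2n-3)!!$ computed directly.

```python
from math import factorial

def phi(n):
    """phi(n) = (2n-3)!! via the closed form (2n-2)! / (2^(n-1) (n-1)!),
    with the convention phi(1) = 1."""
    return factorial(2 * n - 2) // (2**(n - 1) * factorial(n - 1))

def double_factorial(m):
    """Product m * (m-2) * (m-4) * ... down to 1 (returns 1 for m <= 1)."""
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# the two expressions for phi(n) agree
for n in range(1, 10):
    assert phi(n) == double_factorial(2 * n - 3)
```

For example, $\varphi(7)=11!!=10395$, so each rooted tree on seven leaves receives PDA probability $1/10395$.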
Under the Yule--Harding model,
a rooted phylogenetic tree on $X$ is generated as follows. Beginning with a two-leaf tree, we ``grow'' it by repeatedly splitting a leaf into two new leaves. The splitting leaf is chosen uniformly at random among all leaves present in the current tree. After obtaining an unlabeled tree with $n$ leaves, we label its leaves with labels sampled uniformly at random (without replacement) from $X$. When branch lengths are ignored, the Yule--Harding model is shown by~\citet{aldous96a} to generate the same distribution of tree topologies as Kingman's coalescent process,
and so we call it the YHK model. Under this model, the probability of generating a tree $T$ in $\mathcal{T}_X$ is \citep{semple03a}:
\begin{equation} \label{eq:rooted-yule-prob}
\mathbb{P}_{\text{YHK}}(T) = \frac{2^{n-1}}{n!}\prod_{v \in \mathring{V}(T)} \frac{1}{\lambda_v},
\end{equation}
where $\mathring{V}(T)$ is the set of interior nodes of $T$,
and $\lambda_v$ is the number of interior nodes of $T$ that are below $v$.
For example, the probability of the rooted tree in Figure~\ref{fig:trees} is
$
{2^{7-1}}/{(7!\times 3\times 2\times 6)}.
$
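Both distributions can be checked by brute force on a small leaf set. The sketch below (illustrative only) represents a rooted tree as a nested pair and takes $\lambda_v$ to be the number of interior nodes in the subtree rooted at $v$, consistent with the worked example above (a cherry contributes $\lambda_v=1$); it verifies that Eqs.~(\ref{eq:rooted-pda-prob}) and (\ref{eq:rooted-yule-prob}) each define a probability distribution on $\mathcal{T}_X$ for $n=4$.

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def rooted_trees(leaves):
    """All rooted binary phylogenetic trees on `leaves`, as nested pairs."""
    leaves = tuple(leaves)
    if len(leaves) == 1:
        return [leaves[0]]
    first, rest = leaves[0], leaves[1:]
    trees = []
    # Enumerate the root clade A containing the first leaf, so that each
    # unordered root split {A, X \ A} is counted exactly once.
    for k in range(len(rest)):
        for extra in combinations(rest, k):
            A = (first,) + extra
            B = tuple(x for x in rest if x not in extra)
            trees += [(t1, t2) for t1 in rooted_trees(A)
                               for t2 in rooted_trees(B)]
    return trees

def n_interior(t):
    return 0 if not isinstance(t, tuple) else 1 + n_interior(t[0]) + n_interior(t[1])

def yhk_prob(t, n):
    """P_YHK(T) = 2^(n-1)/n! times the product of 1/lambda_v over interior v."""
    def prod(t):
        if not isinstance(t, tuple):
            return Fraction(1)
        return Fraction(1, n_interior(t)) * prod(t[0]) * prod(t[1])
    return Fraction(2**(n - 1), factorial(n)) * prod(t)

n = 4
trees = rooted_trees(range(1, n + 1))
assert len(trees) == 15  # phi(4) = 15, so PDA assigns 1/15 to each tree
assert sum(yhk_prob(t, n) for t in trees) == 1
```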
\bigskip
For an unrooted tree $T^*$ in $\mathcal{T}^*_X$, let $\rho(T^*)$ denote the set of rooted trees $T$ in $\mathcal{T}_X$ with $T^*=\rho^{-1}(T)$.
As noted previously in Section~\ref{sec:preliminaries}, $T^*$ can be obtained from each of the $2n-3$ rooted trees $T$ in $\rho(T^*)$ by removing the root of $T$. Using this correspondence scheme, a probability measure $\mathbb{P}$ on $\mathcal{T}_X$ induces a probability measure $\mathbb{P}_u$ on the set $\mathcal{T}^*_X$.
That is, we have
\begin{equation} \label{eq:unrooted-yule-prob}
\mathbb{P}_u(T^*) = \sum_{T\in \rho(T^*)} \mathbb{P}(T).
\end{equation}
In particular, let $\puy$ and $\puu$ denote the probability measures on $\mathcal{T}^*_X$ induced by $\mathbb{P}_{\text{YHK}}$ and $\mathbb{P}_{\text{PDA}}$, respectively.
Note that this implies
\begin{equation} \label{eq:unrooted-pda-prob}
\puu(T^*) = \frac{1}{\varphi(n-1)}
\end{equation}
for every $T^*$ in $\mathcal{T}^*_X$. Since the number of unrooted phylogenetic trees on $X$ is $|\mathcal{T}^*_X| =\varphi(n-1)= (2n-5)!!$, each tree in $\mathcal{T}^*_X$ has the same probability under $\puu$.
We end this section with a property of the PDA and YHK models that will play an important role in obtaining our results. Recall that a probability measure $\mathbb{P}$ on $\mathcal{T}_X$ has the {\it exchangeability property} if $\mathbb{P}$ depends only on tree shapes, that is,
if two rooted trees $T'$ and $T$ can be obtained from each other by permuting their leaves, then $\mathbb{P}(T)=\mathbb{P}(T')$ holds.
Similarly, a probability measure on $\mathcal{T}^*_X$ has the exchangeability property if it depends only on tree shapes.
It is well known that both $\mathbb{P}_{\text{YHK}}$ and $\mathbb{P}_{\text{PDA}}$, the probability measures on the set of rooted trees $\mathcal{T}_X$ induced by the YHK and PDA models, have the exchangeability property~\citep{aldous96a}.
By Eqs.~\eqref{eq:unrooted-pda-prob} and \eqref{eq:unrooted-yule-prob}, we can conclude that the probability measures $\puy$ and $\puu$ on the set of unrooted trees $\mathcal{T}^*_X$ also have the exchangeability property.
\section{Clade probabilities}
\label{sec:clade}
In this section, we shall present our main results on clade probabilities. To this end, we need some further notation and definitions. Given a rooted binary tree $T$,
let
\begin{equation}
\mathbb{I}_T(A) = \begin{cases} 1, &\text{if $A$ is a clade of $T$},\\ 0, &\text{otherwise,} \end{cases}
\end{equation}
be the `indicator' function that maps a subset $A$ of $X$ to 1 if $A$ is a clade of $T$, and 0 otherwise. Now for a subset $A$ of $X$, the probability of $A$ being a clade of a random tree sampled according to a probability distribution $\mathbb{P}$ on $\mathcal{T}_X$ is defined as
\begin{equation} \label{eq:clade-prob}
\mathbb{P}(A)= \sum_{T \in \mathcal{T}_X} \mathbb{P}(T) \mathbb{I}_T(A).
\end{equation}
Since $\sum_{A \subseteq X} \mathbb{I}_T(A) = 2n-1$ for each $T\in \mathcal{T}_X$ and
$\sum_{T \in \mathcal{T}_X} \mathbb{P}(T) = 1$, we have
\begin{equation*}
\sum_{A \subseteq X} \mathbb{P}(A) = \sum_{A \subseteq X} \sum_{T \in \mathcal{T}_X} \mathbb{P}(T) \mathbb{I}_T(A) = \sum_{T \in \mathcal{T}_X} \mathbb{P}(T) \sum_{A \subseteq X} \mathbb{I}_T(A)=2n-1.
\end{equation*}
By the last equation, we note that each probability measure $\mathbb{P}$ on $\mathcal{T}_X$ induces a measure on the set of all subsets of $X$, which can be normalized to a probability measure by a factor of $1/(2n-1)$.
\bigskip
The above definitions on a subset of $X$ can be extended to a collection
of subsets of $X$. That is, given a collection of subsets $\{A_1, \dotsc, A_m\}$ of $X$, we have
\begin{equation}
\mathbb{I}_T(A_1, \dotsc, A_m) = \mathbb{I}_T(A_1) \dotsb \mathbb{I}_T(A_m),
\end{equation}
and
\begin{equation} \label{eq:mulclade-prob}
\mathbb{P}(A_1, \dotsc, A_m) = \sum_{T \in \mathcal{T}_X} \mathbb{P}(T) \big(\mathbb{I}_T(A_1) \dotsb \mathbb{I}_T(A_m)\big).
\end{equation}
Note that $\mathbb{I}_T(A_1, \dotsc, A_m)=1$ if and only if each $A_i$ is a clade of $T$ for $1\leqslant i \leqslant m$. On the other hand, it is well known~(see, e.g.~\citet{semple03a}) that given a collection of subsets $\{A_1, \dotsc, A_m\}$ of $X$, there exists a tree $T\in \mathcal{T}_X$ with $\mathbb{I}_T(A_1, \dotsc, A_m)=1$ if and only if $\{A_1, \dotsc, A_m\}$ forms a {\em hierarchy}, that is, $A_i\cap A_j\in \{\emptyset, A_i,A_j\}$ holds for $1\leqslant i <j \leqslant m$.
\bigskip
The following result shows that if a probability measure depends only on tree shapes, then the clade probabilities derived from it are also independent of the `labeling' of the elements.
\begin{lemma}
\label{lem:set:EP}
Let $\mathbb{P}$ be a probability measure on $\mathcal{T}_X$ that has the exchangeability property. Then for each pair of subsets $A$ and $A'$ of $X$ with $|A|=|A'|$, we have
\begin{equation}
\label{eq:set:ep}
\mathbb{P}(A)=\mathbb{P}(A')~~~\mbox{and}~~~~~~\mathbb{P}(A,X\setminus A)=\mathbb{P}(A',X\setminus A').
\end{equation}
\end{lemma}
\begin{proof}
Suppose that $A$ and $A'$ are two subsets of $X$ that have the same size. Then there exists a permutation $\pi$ on $X$ such that $A'=A^{\pi}:=\{\pi(x)\mid x\in A\}$. Now for each tree $T$ in $\mathcal{T}_X$, let $T^\pi$ be the tree obtained from $T$ by relabeling the leaves of $T$ according to permutation $\pi$. Then $A$ is a clade of $T$ if and only if $A^\pi$ is a clade of $T^\pi$. Together with Eq.~(\ref{eq:clade-prob}), we have
\begin{eqnarray*}
\mathbb{P}(A)&=&\sum_{T\in \mathcal{T}_X} \mathbb{P}(T) \mathbb{I}_T(A)
=\sum_{T\in \mathcal{T}_X} \mathbb{P}(T) \mathbb{I}_{T^\pi}(A^\pi)\\
&=&\sum_{T\in \mathcal{T}_X} \mathbb{P}(T^\pi) \mathbb{I}_{T^\pi}(A^\pi)
=\sum_{T^\pi\in \mathcal{T}_X} \mathbb{P}(T^\pi) \mathbb{I}_{T^\pi}(A^\pi)=\mathbb{P}(A^\pi),
\end{eqnarray*}
where the third equality follows from the exchangeability property of $\mathbb{P}$. This shows
$\mathbb{P}(A)=\mathbb{P}(A')$, and a similar argument leads to $\mathbb{P}(A,X\setminus A)=\mathbb{P}(A',X\setminus A')$.
\hfill $\square$
\end{proof}
Since $\mathbb{P}_{\text{YHK}}$ has the exchangeability property, by Lemma~\ref{lem:set:EP} we know that $\mathbb{P}_{\text{YHK}}(A)$ is determined by the size of $A$ only. Therefore, we denote
\[
p_n(a) = \mathbb{P}_{\text{YHK}}(A),
\]
as the probability that a random tree in $\mathcal{T}_X$, where $n = |X|$, induces a specific clade $A$ of size $a$ under the YHK model. Similarly, we let
\[
q_n(a) = \mathbb{P}_{\text{PDA}}(A),
\]
be the probability that a random tree in $\mathcal{T}_X$ induces a specific clade $A$ of size $a$ under the PDA model.
In addition, we also denote
\[
p_n(a, n-a) = \mathbb{P}_{\text{YHK}}(A, X\setminus A), \quad \text{and} \quad q_n(a, n-a) = \mathbb{P}_{\text{PDA}}(A, X\setminus A),
\]
the probabilities that both $A$ and $X\setminus A$ are clades of a tree in $\mathcal{T}_X$ generated under the YHK and PDA models, respectively. Note that if both $A$ and $X \setminus A$ are clades of a tree $T$, then they are precisely the clades consisting of the leaves below the two children of the root of $T$.
\begin{corollary}
\label{cor:set:EP}
Let $\mathbb{P}$ be a probability measure on $\mathcal{T}_X$ that has the exchangeability property. For each $1\leqslant a \leqslant n$, the expected number of clades with size $a$ contained in a random tree sampled according to $\mathbb{P}$ is
$${n\choose a} \mathbb{P}(A),$$
where $A$ is an arbitrary subset of $X$ with $|A|=a$.
\end{corollary}
\begin{proof}
Denote the collection of subsets of $X$ with size $a$ by $\mathcal{X}_a$ and fix a subset $A\in \mathcal{X}_a$.
Let $Z_T(a):= \sum_{Y\in \mathcal{X}_a} \mathbb{I}_T(Y)$ be the number of clades with size $a$ contained in a tree $T$.
Then the expected number of clades with size $a$ contained in a random tree sampled according to $\mathbb{P}$ is given by
\begin{eqnarray*}
\sum_{T\in\mathcal{T}_X} \mathbb{P}(T)Z_T(a)=\sum_{T\in \mathcal{T}_X}\sum_{Y\in \mathcal{X}_a} \mathbb{P}(T)\mathbb{I}_T(Y)=\sum_{Y\in \mathcal{X}_a}\sum_{T\in \mathcal{T}_X}\mathbb{P}(T)\mathbb{I}_T(Y)
= \sum_{Y\in \mathcal{X}_a} \mathbb{P}(Y)={n\choose a} \mathbb{P}(A),
\end{eqnarray*}
where the last equality holds because by Lemma~\ref{lem:set:EP} we have $\mathbb{P}(Y)=\mathbb{P}(A)$ for all $Y\in \mathcal{X}_a$.
\hfill $\square$
\end{proof}
\subsection{Clade probabilities under the YHK model}
\label{subsec:yule-clade}
In this subsection we study the clade probabilities under the YHK model. First, we have the following theorem concerning the computation of $p_n(a)$ and $p_n(a,n-a)$, which was discovered and rediscovered several times in the literature (see, e.g.,~\cite{blum05a,brown94a, heard92a,rosenberg03a,rosenberg06a}).
\begin{theorem} \label{thm:yule-clade}
For a positive integer $a \leqslant n-1$ we have:
\begin{enumerate}
\item[{\rm (i)}] $p_n(a) = \frac{2n}{a(a+1)}\binom{n}{a}^{-1}$.
\item[{\rm (ii)}] $p_n(a,n-a) = \frac{2}{n-1}\binom{n}{a}^{-1}$.
\end{enumerate}
\end{theorem}
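The closed forms above are consistent with the global count of clades: by Corollary~\ref{cor:set:EP}, summing $\binom{n}{a}p_n(a)$ over all sizes must give $2n-1$, and since each tree has exactly one unordered root split (counted twice over ordered pairs), $\sum_{a}\binom{n}{a}p_n(a,n-a)=2$. A sketch of these checks with exact rational arithmetic (illustrative only):

```python
from fractions import Fraction
from math import comb

def p_clade(n, a):
    """p_n(a) from part (i); p_n(n) = 1 since X itself is always a clade."""
    if a == n:
        return Fraction(1)
    return Fraction(2 * n, a * (a + 1)) / comb(n, a)

def p_pair(n, a):
    """p_n(a, n - a) from part (ii)."""
    return Fraction(2, n - 1) / comb(n, a)

for n in range(3, 15):
    # expected total number of clades is 2n - 1
    assert sum(comb(n, a) * p_clade(n, a) for a in range(1, n + 1)) == 2 * n - 1
    # ordered root splits sum to 2 (each unordered split counted twice)
    assert sum(comb(n, a) * p_pair(n, a) for a in range(1, n)) == 2
```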
By the above results, we show below that clade probabilities under the YHK model form a log-convex sequence. This implies that clades of small or large size are more likely to be generated than those of intermediate size under the model.
\begin{theorem}
\label{thm:yhk:convex}
For $n\geqslant 3$, the sequences $\{p_n(a)\}_{1\leqslant a \leqslant n}$ and
$\{p_n(a,n-a)\}_{1\leqslant a < n}$ are log-convex. Moreover, let
\[
\Delta(n):=\sqrt{n+\Big(\frac{n-3}{4}\Big)^2 }+\frac{n-3}{4};
\]
then we have
\begin{enumerate}
\item[{\rm (i)}] $p_n(a)\geqslant p_n(a+1)$ for $a\leqslant \Delta(n)$, and $p_n(a) < p_n(a+1)$ for $a > \Delta(n)$, and
\item[{\rm (ii)}] $p_n(a,n-a)> p_n(a+1,n-a-1)$ for $a < (n-1)/2$ and $p_n(a,n-a)< p_n(a+1,n-a-1)$ for $a > (n-1)/2$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $y_a=\frac{2n}{a(a+1)}$ for $1\leqslant a \leqslant n-1$ and $y_n=1$, and $y'_a={n \choose a}^{-1}$ for $1\leqslant a \leqslant n$.
Since $\{y_a\}_{1\leqslant a \leqslant n}$ and $\{y'_a\}_{1\leqslant a \leqslant n}$ are both log-convex, by Lemma~\ref{lem:log-convex} and Theorem~\ref{thm:yule-clade} we can conclude that the sequence $\{p_n(a)\}_{1\leqslant a \leqslant n}$ is log-convex.
A similar argument shows that $\{p_n(a,n-a)\}_{1\leqslant a < n}$ is also log-convex.
By Theorem~\ref{thm:yule-clade}, we have
\[
\frac{p_n(a+1)}{p_n(a)} = \frac{a(a+1)\binom{n}{a}}{(a+1)(a+2)\binom{n}{a+1}}=\frac{a(a+1)}{(a+2)(n-a)},
\]
for $1 \leqslant a \leqslant n-2$. The last equation is less than or equal to $1$ if and only if
$$a(a+1) \leqslant (a+2)(n-a) \iff 2a^2 - (n-3)a - 2n \leqslant 0 .$$
Therefore, $p_n(a+1) \leqslant p_n(a)$ if and only if $a \leqslant \Delta(n)$. This establishes Part (i) of the theorem.
Part (ii) of the theorem follows from the fact that $\binom{n}{a} < \binom{n}{a+1}$ for $a < (n-1)/2 $ and $\binom{n}{a} > \binom{n}{a+1}$ for $a > (n-1)/2$.
\hfill $\square$
\end{proof}
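The turning point $\Delta(n)$ in Part (i) can be verified numerically. One can check that $2a^2-(n-3)a-2n=0$ has no integer roots (it would require $n=a(2a+3)/(a+2)$, which is never an integer for $a\geqslant 1$), so no boundary ties arise and the comparison below (a sketch, illustrative only) is exact.

```python
from fractions import Fraction
from math import comb, sqrt

def p_clade(n, a):
    return Fraction(2 * n, a * (a + 1)) / comb(n, a) if a < n else Fraction(1)

def delta(n):
    """Delta(n), the positive root of 2a^2 - (n-3)a - 2n = 0."""
    return sqrt(n + ((n - 3) / 4)**2) + (n - 3) / 4

for n in range(4, 40):
    for a in range(1, n - 1):
        if a <= delta(n):
            assert p_clade(n, a) >= p_clade(n, a + 1)
        else:
            assert p_clade(n, a) < p_clade(n, a + 1)
```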
\subsection{Clade probabilities under the PDA model}
\label{subsec:pda-clade}
Parallel to the results in Section~\ref{subsec:yule-clade}, in this subsection we derive results on clade probabilities under the PDA model.
\begin{theorem} \label{thm:pda-clade}
For a positive integer $a \leqslant n-1$ we have:
\begin{enumerate}
\item[{\rm (i)}] $q_n(a) = \frac{\varphi(a)\varphi(n-a+1)}{\varphi(n)} = \binom{n-1}{a-1} \binom{2n-2}{2a-2}^{-1}$.
\item[{\rm (ii)}] $q_n(a,n-a)=\frac{\varphi(a)\varphi(n-a)}{\varphi(n)}=\frac{1}{(2n-2a-1)}\binom{n-1}{a-1}\binom{2n-2}{2a-2}^{-1}.$
\end{enumerate}
\end{theorem}
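As with the YHK case, the two expressions in Theorem~\ref{thm:pda-clade} can be cross-checked with exact arithmetic; the sketch below (illustrative only) also verifies that the expected number of clades is again $2n-1$.

```python
from fractions import Fraction
from math import comb

def phi(n):
    """(2n-3)!!, with phi(1) = phi(2) = 1."""
    out = 1
    for k in range(3, 2 * n - 2, 2):
        out *= k
    return out

def q_clade(n, a):
    return Fraction(phi(a) * phi(n - a + 1), phi(n)) if a < n else Fraction(1)

def q_pair(n, a):
    return Fraction(phi(a) * phi(n - a), phi(n))

for n in range(3, 15):
    for a in range(1, n):
        # counting argument vs. the binomial closed form
        assert q_clade(n, a) == Fraction(comb(n - 1, a - 1), comb(2 * n - 2, 2 * a - 2))
        assert q_pair(n, a) == q_clade(n, a) / (2 * n - 2 * a - 1)
    # any probability measure on T_X yields 2n - 1 expected clades
    assert sum(comb(n, a) * q_clade(n, a) for a in range(1, n + 1)) == 2 * n - 1
```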
\begin{proof}
To derive the formula for $q_n(a)$, it suffices to show that there are $\varphi(a)\varphi(n-a+1)$ trees in $\mathcal{A}$, the subset of trees in $\mathcal{T}_X$ containing $A$ as a clade, because the probability of each tree in $\mathcal{T}_X$ is $1/\varphi(n)$. Without loss of generality, we can assume that $X=\{1,2,\cdots,n\}$ and $A=\{n-a+1,\cdots,n\}$. Let
$$X':=(X\setminus A)\cup \{n-a+1\}=\{1,2,\cdots,n-a,n-a+1\};
$$
then each tree in $\mathcal{A}$ can be generated by the following two steps: picking a tree in $\mathcal{T}_{X'}$ and replacing the leaf with label $n-a+1$ by a tree from $\mathcal{T}_{A}$. Moreover, a different choice of trees in the first step or the second step will result in a different tree in $\mathcal{A}$. Since there are $\varphi(n-a+1)$ possible choices in the first step and $\varphi(a)$ in the second, we can conclude that the number of trees in $\mathcal{A}$ is $\varphi(a)\varphi(n-a+1)$. In addition, using the fact that
$$
\varphi(m)=(2m-3)!!=\frac{(2m-2)!}{2^{m-1}(m-1)!}
$$
holds for $m\geqslant 1$, we have
$$q_n(a) = \frac{\varphi(a)\varphi(n-a+1)}{\varphi(n)} =
\frac{(2a-2)!(2n-2a)!(n-1)!}{(2n-2)!(a-1)!(n-a)!}
= \binom{n-1}{a-1} \binom{2n-2}{2a-2}^{-1}.$$
The proof of the formula for $q_n(a,n-a)$ is similar to the one for $q_n(a)$. Let $\mathcal{A}^*$ be the collection of trees in $\mathcal{T}_X$ containing both $A$ and $X\setminus A$ as clades. Then a tree in $\mathcal{A}^*$ is uniquely determined by choosing a tree in $\mathcal{T}_A$, and subsequently another tree from $\mathcal{T}_{X\setminus A}$. This implies that the number of trees in $\mathcal{A}^*$ is $\varphi(a)\varphi(n-a)$. Hence
\begin{align*}
q_n(a,n-a) &= \frac{\varphi(a)\varphi(n-a)}{\varphi(n)} = \frac{1}{(2n-2a-1)} q_n(a) \\
&= \frac{1}{(2n-2a-1)} \binom{n-1}{a-1}\binom{2n-2}{2a-2}^{-1}.
\end{align*}
\hfill $\square$
\end{proof}
Recall that Theorem~\ref{thm:yhk:convex} shows that clade probabilities under the YHK model form a log-convex sequence. Here we establish a similar result for the PDA model, which implies that the sequences $\{q_n(a)\}_{1\leqslant a <n}$ and $\{q_n(a,n-a)\}_{1\leqslant a <n}$ are also unimodal.
\begin{theorem}
\label{thm:pda:convex}
For $n\geqslant 3$, the sequences $\{q_n(a)\}_{1\leqslant a \leqslant n}$ and
$\{q_n(a,n-a)\}_{1\leqslant a <n}$ are log-convex. Moreover,
we have
\begin{enumerate}
\item[{\rm (i)}] $q_n(a+1) \geqslant q_n(a)$ when $a \geqslant n/2$, and $q_n(a+1) \leqslant q_n(a)$ when $a \leqslant n/2$.
\item[{\rm (ii)}] $q_n(a+1,n-a-1) \geqslant q_n(a,n-a)$ when $a \geqslant (n-1)/2$, and $q_n(a+1,n-a-1) \leqslant q_n(a,n-a)$ when $a \leqslant (n-1)/2$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:pda-clade} and $q_n(n)=1$, for $1\leqslant a <n$ we have
\begin{align*}
\frac{q_n(a+1)}{q_n(a)}
&= \frac{2a-1}{2n-2a-1},
\end{align*}
which is greater than or equal to $1$ when $2a-1 \geqslant 2n-2a-1$, or equivalently when $a \geqslant n/2$. Thus Part (i) follows. Moreover, we have
\begin{align*}
\frac{q_n(a+1)q_n(a-1)}{q^2_n(a)} = \Big(\frac{2a-1}{2a-3}\Big)\Big(\frac{2n-2a+1}{2n-2a-1}\Big)\geqslant 1,
\end{align*}
for $2\leqslant a <n$, and hence $\{q_n(a)\}_{1\leqslant a \leqslant n}$ is log-convex.
Similarly, we have
\[
\frac{q_n(a+1,n-a-1)}{q_n(a,n-a)} = \Big(\frac{2n-2a-1}{2n-2a-3}\Big) \Big(\frac{q_n(a+1)}{q_n(a)} \Big)= \frac{2a-1}{2n-2a-3},
\]
which is greater than or equal to $1$ when $2a-1 \geqslant 2n-2a-3$, or equivalently when $a \geqslant (n-1)/2$. Moreover, we have
\begin{align*}
\frac{q_n(a+1,n-a-1)q_n(a-1,n-a+1)}{q^2_n(a,n-a)} =
\Big( \frac{2a-1}{2a-3}\Big)\Big(\frac{2n-2a-1}{2n-2a-3}\Big)\geqslant 1,
\end{align*}
and hence $\{q_n(a,n-a)\}_{1\leqslant a < n}$ is log-convex.
\hfill $\square$
\end{proof}
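A direct numerical check of Theorem~\ref{thm:pda:convex} (a sketch, illustrative only), using exact rationals so that the log-convexity comparisons are unaffected by rounding:

```python
from fractions import Fraction

def phi(n):
    """(2n-3)!!, with phi(1) = phi(2) = 1."""
    out = 1
    for k in range(3, 2 * n - 2, 2):
        out *= k
    return out

def q_clade(n, a):
    return Fraction(phi(a) * phi(n - a + 1), phi(n)) if a < n else Fraction(1)

for n in range(4, 20):
    q = [q_clade(n, a) for a in range(1, n + 1)]   # q[i] is q_n(i + 1)
    # log-convexity: q_n(a-1) q_n(a+1) >= q_n(a)^2 for 2 <= a <= n - 1
    assert all(q[i - 1] * q[i + 1] >= q[i]**2 for i in range(1, n - 1))
    # unimodality with minimum at the middle (Part (i))
    for a in range(1, n):
        if 2 * a >= n:
            assert q_clade(n, a + 1) >= q_clade(n, a)
        if 2 * a <= n:
            assert q_clade(n, a + 1) <= q_clade(n, a)
```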
\subsection{A comparison between the PDA and YHK models}
Using the formulae for computing clade probabilities under the PDA and YHK models presented in the previous two subsections, here we investigate the differences between these two models. We begin by comparing $p_n(a)$ and $q_n(a)$, the probabilities of a specific (and fixed) clade of size $a$ under the YHK and PDA models, respectively. As an example, consider the ratio $p_n(a)/q_n(a)$ with $n=30$ as depicted in Figure~\ref{fig:pq}. It is clear that, except for $a=1$ for which both $p_n(a) = q_n(a) = 1$, the ratio is strictly decreasing and drops below $1$ once $a$ exceeds a certain value. This `phase transition' type phenomenon holds for all $n>3$, as the following theorem shows.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Fig2.eps}
\caption{Plots of the ratios $p_n(a) / q_n(a)$ and $p_n(a,n-a)/q_n(a,n-a)$, with $n=30$ and $a=1,\dotsc,29$.}
\label{fig:pq}
\end{figure}
\begin{theorem}
\label{thm:clade:comp}
For $n> 3$, there exists a number $\kappa(n)$ in $[2,n-1]$, such that $p_n(a)>q_n(a)$ for $2\leqslant a<\kappa(n)$, and $p_n(a)<q_n(a)$ for $\kappa(n)<a \leqslant n-1$.
\end{theorem}
\begin{proof}
Let
\[
g_n(a) = \frac{p_n(a)}{q_n(a)} = \frac{2n}{a(a+1)} \binom{2n-2}{2a-2} \binom{n}{a}^{-1} \binom{n-1}{a-1}^{-1}.
\]
Using the identity $\binom{m}{k+1} = \frac{m-k}{k+1}\binom{m}{k}$, we obtain
\[
\frac{g_n(a+1)}{g_n(a)} = \frac{a(a+1)(2n-2a-1)}{(a+2)(2a-1)(n-a)}.
\]
We have
\[
a(a+1)(2n-2a-1) < (a+2)(2a-1)(n-a) \iff a > \frac{2n}{n+3},
\]
and hence $g_n(a) > g_n(a+1)$ for $2n/(n+3) < a \leqslant n-2$. Since $2n/(n+3) < 2$, we have $g_n(2) > g_n(3) > \dotsb > g_n(n-1)$.
It is easy to see that for $n > 3$,
\[
g_n(2) = \frac{2(2n-3)}{3(n-1)} > 1
\]
and
\[
g_n(n-1) = \frac{2(2n-3)}{n(n-1)} < 1.
\]
This and the fact that $g_n(a)$ is strictly decreasing on $[2,n-1]$ imply the existence of the number $\kappa(n)$ in the theorem.
\hfill $\square$
\end{proof}
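The existence and uniqueness of the crossing point $\kappa(n)$ can be confirmed numerically; the sketch below (illustrative only) checks that $g_n$ is strictly decreasing on $[2,n-1]$ with $g_n(2)>1>g_n(n-1)$.

```python
from fractions import Fraction
from math import comb

def g(n, a):
    """g_n(a) = p_n(a) / q_n(a)."""
    p = Fraction(2 * n, a * (a + 1)) / comb(n, a)
    q = Fraction(comb(n - 1, a - 1), comb(2 * n - 2, 2 * a - 2))
    return p / q

for n in range(4, 40):
    vals = [g(n, a) for a in range(2, n)]
    assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
    assert vals[0] > 1 and vals[-1] < 1                # exactly one crossing
```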
Next, we consider $p_n(a,n-a)$ and $q_n(a,n-a)$.
Note that by definition, both $p_n(a,n-a)$ and $q_n(a,n-a)$ are symmetric about $n/2$, as demonstrated by the plot of the ratio $p_n(a,n-a)/q_n(a,n-a)$ with $n=30$ in~Figure~\ref{fig:pq}. In addition, the figure shows that the ratio is strictly increasing on the interval $[1, \lfloor n/2 \rfloor]$ (and by the symmetry of the ratio, it is strictly decreasing on the interval $[\lceil n/2 \rceil, n-1]$). This observation is made precise and rigorous in the following theorem.
\begin{theorem}
For $n > 3$, there exists a number $\lambda(n)$ in $[1,\lfloor n/2 \rfloor]$, such that $p_n(a,n-a)<q_n(a,n-a)$ for $1\leqslant a \leqslant \lambda(n)$, and $p_n(a,n-a)>q_n(a,n-a)$ for $\lambda(n)<a \leqslant \lfloor n/2 \rfloor$.
\end{theorem}
\begin{proof}
Let
\[
h_n(a) = \frac{p_n(a,n-a)}{q_n(a,n-a)} = \frac{2(2n-2a-1)}{n-1} \binom{2n-2}{2a-2} \binom{n}{a}^{-1} \binom{n-1}{a-1}^{-1}.
\]
Then
\[
\frac{h_n(a+1)}{h_n(a)} = \frac{(a+1)(2n-2a-3)}{(2a-1)(n-a)} > 1,
\]
where the last inequality follows from the observation that
$$
(a+1)(2n-2a-3)-(n-a) (2a-1)=3(n-2a-1) > 0
$$
holds for $1\leqslant a \leqslant \lfloor n/2 \rfloor - 1$. This implies that the function $h_n(a)$ is strictly increasing on the interval $[1, \lfloor n/2 \rfloor]$.
Thus, it now suffices to show that $h_n(1) \leqslant 1$ and $h_n(\lfloor n/2 \rfloor) \geqslant 1$ in order to demonstrate the existence of $\lambda(n)$. We have
\[
h_n(1) = \frac{p_n(1, n-1)}{q_n(1, n-1)} = \frac{2(2n-3)}{n(n-1)} < 1,
\]
if $n > 3$. Let $k = \lfloor n/2 \rfloor$. If $n$ is even (i.e., $k=n/2$), then for $k \geqslant 2$
\begin{align*}
h_{2k}(k) &= \frac{2(4k-2k-1)}{(2k-1)} \binom{4k-2}{2k-2} \binom{2k}{k}^{-1} \binom{2k-1}{k-1}^{-1} \\
&= \binom{4k-2}{2k-2} \binom{2k-1}{k-1}^{-2} > 1.
\end{align*}
The inequality in the last equation can be seen as follows. Let $A$ and $B$ be two sets, each having $(2k-1)$ elements. The number of subsets of $A \cup B$ that have $k-1$ elements from each of $A$ and $B$ is $\binom{2k-1}{k-1}^2$. On the other hand, the total number of $(2k-2)$-subsets of $A \cup B$ is $\binom{4k-2}{2k-2}$.
If $n$ is odd (i.e., $k=(n-1)/2$), then
\begin{align*}
h_{2k+1}(k)
&= \frac{2(2k+1)}{2k} \binom{4k}{2k-2}\binom{2k+1}{k}^{-1}\binom{2k}{k-1}^{-1}\\
&= \frac{2k+1}{k} \binom{4k}{2k-2} \frac{k}{2k+1} \binom{2k}{k-1}^{-2}\\
&= \binom{4k}{2k-2} \binom{2k}{k-1}^{-2}.
\end{align*}
Using the same argument as in proving $h_{2k}(k) > 1$, we also have $h_{2k+1}(k) \geqslant 1$ for $k \geqslant 1$.
\hfill $\square$
\end{proof}
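The monotonicity of $h_n(a)$ and its endpoint values can be double-checked numerically; this is our own sketch, evaluating $h_n(a)$ directly from the displayed formula with exact rationals.

```python
from fractions import Fraction
from math import comb

def h(n, a):
    # h_n(a) = p_n(a, n-a) / q_n(a, n-a), as displayed in the proof above
    return (Fraction(2 * (2 * n - 2 * a - 1), n - 1) * comb(2 * n - 2, 2 * a - 2)
            / (comb(n, a) * comb(n - 1, a - 1)))

def has_lambda(n):
    # h_n should be strictly increasing on [1, floor(n/2)],
    # with h_n(1) < 1 <= h_n(floor(n/2)), so lambda(n) exists
    vals = [h(n, a) for a in range(1, n // 2 + 1)]
    return (all(x < y for x, y in zip(vals, vals[1:]))
            and vals[0] < 1 and vals[-1] >= 1)
```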
Let $A$ be a fixed subset of $X$ with size $a$, where $1 \leqslant a \leqslant n-1$.
In the previous two theorems, we presented comparison results for $\mathbb{P}(A)$ and $\mathbb{P}(A,X\setminus A)$ under the YHK and PDA models. We end this subsection with a comparison study of $\mathbb{P}(A,X\setminus A)/\mathbb{P}(A)$, that is,
the probability that a tree $T \in \mathcal{T}_X$ sampled according to probability measure $\mathbb{P}$ contains both $A$ and $X\setminus A$ as its clades (which means that $A$ and $X\setminus A$ are the clades below the two children of the root of $T$), given that $A$ is a clade of $T$. To this end, let
\[
u_n(a) = \frac{p_n(a,n-a)}{p_n(a)}-\frac{q_n(a,n-a)}{q_n(a)}=\frac{a(a+1)}{n(n-1)}-\frac{1}{2n-2a-1}
\]
be the difference between the two conditional probabilities under the two models. We are interested in the sign changes of $u_n(a)$, as they indicate a `phase transition' between these two models.
For instance, the values of $u_n(a)$ for $n=30$ depicted in~Figure~\ref{fig:u} exhibit a unique change of sign.
Indeed, this observation holds for general $n$, as the following theorem shows.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Fig3.eps}
\caption{Plot of function $u_n(a)$
with $n=30$.
}
\label{fig:u}
\end{figure}
\begin{theorem}
For $n \geqslant 3$, there exists $\tau(n) \in [1, n-1]$ such that $u_n(a) \leqslant 0$ if $a \leqslant \tau(n)$ and $u_n(a) \geqslant 0$ if $a \geqslant \tau(n)$.
\end{theorem}
\begin{proof}
\rev{Consider the function
\[
f_n(x) = \frac{x(x+1)}{n(n-1)} - \frac{1}{2n-2x-1}, \quad x \in \mathbb{R}.
\]
Clearly $f_n(x)$ agrees with $u_n(a)$ when $x = a$. Then
\[
f_n'(x) = \frac{2x+1}{n(n-1)} - \frac{2}{(2n-2x-1)^2} = \frac{t(2n-t)^2 - 2n(n-1)}{n(n-1)(2n-t)^2},
\]
where $t = 2x+1$. The sign of $f_n'(x)$ thus depends on the sign of
\[
g_n(t) = t(2n-t)^2 - 2n(n-1).
\]
We see that $g_n(t)$ is a polynomial of $t$ of degree $3$, and hence it can have at most three (real) roots. On the other hand, for $n \geqslant 3$, we have:
\begin{align*}
g_n(0) &= -2n(n-1) < 0,\\
g_n(1) &= n^2 + (n-1)^2 > 0,\\
g_n(2n-1) &= -2n(n-2)-1 < 0,
\end{align*}
and
$$\lim_{t \to \infty} g_n(t) = \infty.$$
Therefore, $g_n(t)$ has exactly three roots $t_1 \in (0, 1)$, $t_2 \in (1, 2n-1)$, and $t_3 > 2n-1$. Note further that $g_n(n) = n^3 - 2n(n-1) = n((n-1)^2 + 1) > 0$, and hence $t_2 > n$. Setting $x_i = (t_i-1)/2$ for $1\leqslant i \leqslant3$, we have $f'_n(x)=0$ for $x\in \{x_1,x_2,x_3\}$,
$f'_n(x)<0$ for $x\in (-\infty,x_1)\cup (x_2,x_3)$, and $f'_n(x)>0$ for $x\in (x_1,x_2)\cup (x_3,\infty)$. }
\rev{Since $x_1 = (t_1-1)/2 < 0$ and $f_n(a) = u_n(a)$, the sign of $f_n'(x)$ implies that $u_n(1) < u_n(2) < \dotsb < u_n(\lfloor x_2 \rfloor)$. Similarly, we also have $u_n(\lceil x_2 \rceil) > \dotsb > u_n(n-2) > u_n(n-1).$ It is easy to see that for $n \geqslant 3$
\begin{gather*}
u_n(1) = \frac{2}{n(n-1)} - \frac{1}{2n-3} = -\frac{(n-2)(n-3)}{n(n-1)(2n-3)} \leqslant 0, \\
u_n(n-1) = \frac{n(n-1)}{n(n-1)} - \frac{1}{2n-2(n-1)-1} = 0.
\end{gather*}
}
\rev{Since $x_2 = (t_2 - 1) / 2 < n-1$ and $x_3 = (t_3 - 1)/2 > n-1$, $\lceil x_2 \rceil \leqslant n-1 < x_3$. This implies that $u_n(\lceil x_2 \rceil) > \dotsb > u_n(n-2) > u_n(n-1) = 0$. Therefore, there exists a positive number $\tau(n) \in [1, x_2]$ such that $u_n(a) \leqslant 0$ if $a \leqslant \tau(n)$ and $u_n(a) \geqslant 0$ if $a \geqslant \tau(n)$.}
\hfill $\square$
\end{proof}
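The single sign change of $u_n(a)$ can be verified exactly for moderate $n$; this check (ours) uses the displayed formula for $u_n(a)$.

```python
from fractions import Fraction

def u(n, a):
    # u_n(a) = a(a+1)/(n(n-1)) - 1/(2n - 2a - 1)
    return Fraction(a * (a + 1), n * (n - 1)) - Fraction(1, 2 * n - 2 * a - 1)

def one_sign_change(n):
    # the sign pattern over a = 1..n-1 should be: nonpositive, then nonnegative
    signs = [u(n, a) >= 0 for a in range(1, n)]
    return signs == sorted(signs) and u(n, 1) <= 0 and u(n, n - 1) == 0
```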
\subsection{Correlation results on the PDA model}
\label{sec:corr:clade:PDA}
In this section, we generalize results in Section~\ref{subsec:pda-clade} for a collection of disjoint subsets of $X$, and then show that the two indicator variables $\mathbb{I}_T(A)$ and $\mathbb{I}_T(B)$ are positively correlated.
\begin{theorem}
\label{thm:partition:pda}
Let $A_1, \dotsc, A_k$ be $k$ disjoint (nonempty) subsets of $X$.
Let $m = |A_1|+\dotsb+|A_k|$. Then we have
\begin{align*}
\mathbb{P}_{\text{PDA}}(A_1, \dotsc, A_k)
&= \frac{\varphi(n-m+k)\prod_{i=1}^k \varphi(\vert A_i \vert)}{\varphi(n)}
\end{align*}
\end{theorem}
\begin{proof}
We first compute the number of trees that have $A_1, \dotsc, A_k$ as clades. To this end, note that such a tree can be constructed in two steps:
\begin{enumerate}
\item Build a tree on $\left(X \setminus \bigcup_{i=1}^k A_i \right) \cup\{x'_1,\dotsc,x'_k\}$, where $x'_1, \dotsc, x'_k$ are leaves not in $X$ serving as ``placeholders'' used in the second step.
\item Replace each $x'_i$ with a tree in $\mathcal{T}_{A_i}$.
\end{enumerate}
There are $\varphi(n-m+k)$ different choices for a tree in the first step, and $\prod_{i=1}^k \varphi(|A_i|)$ different ways to replace $x'_1, \dotsc, x'_k$ by trees in $\mathcal{T}_{A_1}, \dotsc, \mathcal{T}_{A_k}$ in the second step.
Therefore the number of trees that have $A_1, \dotsc, A_k$ as clades is
$\varphi(n-m+k)\prod_{i=1}^k \varphi(|A_i|)$.
Together with the fact that each tree in $\mathcal{T}_X$ is chosen with probability $1/\varphi(n)$ under the PDA model, this implies the theorem.
\hfill $\square$
\end{proof}
Note that $|A_1|+\dotsb+|A_k|=n$ when $A_1, \dotsc, A_k$ form a partition of $X$. Therefore, we obtain the following result as a simple consequence of Theorem~\ref{thm:partition:pda} (see {Theorem 5.1} in~\citet{zhu11a} for a parallel result on the YHK model).
\begin{corollary}
If $A_1, \dotsc, A_k$ form a partition of $X$, then
\begin{align*}
\mathbb{P}_{\text{PDA}}(A_1, \dotsc, A_k) &= \frac{\varphi(k) \prod_{i=1}^k \varphi(|A_i|)}{\varphi(n)}.
\end{align*}
\end{corollary}
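For small $n$, Theorem~\ref{thm:partition:pda} and the corollary above can be validated against a brute-force enumeration of all rooted binary trees. This sketch (ours) represents trees as nested pairs; the enumeration counts each tree exactly once because the child containing the smallest leaf fixes the unordered split at every internal node.

```python
from fractions import Fraction
from itertools import combinations

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def trees(leaves):
    # yield every rooted binary tree on the leaf tuple exactly once
    if len(leaves) == 1:
        yield leaves[0]
        return
    first, rest = leaves[0], leaves[1:]
    for k in range(len(rest)):
        for extra in combinations(rest, k):
            left = (first,) + extra
            right = tuple(x for x in rest if x not in extra)
            for lt in trees(left):
                for rt in trees(right):
                    yield (lt, rt)

def leafset(t):
    return frozenset([t]) if not isinstance(t, tuple) else leafset(t[0]) | leafset(t[1])

def clades(t):
    # all clades (leaf sets of subtrees) of a tree, including the trivial ones
    if not isinstance(t, tuple):
        return {leafset(t)}
    return clades(t[0]) | clades(t[1]) | {leafset(t)}

n = 6
forest = list(trees(tuple(range(n))))

def pda_prob(*sets):
    # empirical PDA probability that every given set is a clade
    hits = sum(1 for t in forest if all(frozenset(s) in clades(t) for s in sets))
    return Fraction(hits, len(forest))
```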
Theorem~\ref{thm:partition:pda} is a general result concerning a collection of clades. When there are only two clades, the theorem below provides a more detailed analysis.
\begin{theorem}
\label{thm:cor:PDA}
Let $A$ and $B$ be two subsets of $X$ with $a\leqslant b$, where $a=|A|$ and $b=|B|$.
Then we have
\begin{equation*}
\mathbb{P}_{\text{PDA}}(A,B) =
\begin{cases}
\frac{\varphi(a)\varphi(n-b+1)\varphi(b-a+1)}{\varphi(n)}, & \text{if $A\subseteq B$,}\\
\frac{\varphi(a)\varphi(b)\varphi(n-a-b+2)}{\varphi(n)}, & \text{if $A$ and $B$ are disjoint,}\\
0, & \text{otherwise.}
\end{cases}
\end{equation*}
\end{theorem}
\begin{proof}
The first case follows by applying Theorem~\ref{thm:pda-clade} twice. The second case is a special case of Theorem~\ref{thm:partition:pda}. The third case holds because if $A\cap B\not \in \{A,B, \emptyset\}$, then there exists no tree that contains both $A$ and $B$ as its clades.
\hfill $\square$
\end{proof}
To establish the last result of this subsection, we need the following technical lemma.
\begin{lemma}
\label{lem:tree:ineq}
Let $m,n,m',n'$ be positive numbers with $(m-m')(n-n')\geqslant 0$, then
\begin{equation}\label{eq:tree:ineq}
\varphi(m'+n')\varphi(m+n)\geqslant\varphi(m+n')\varphi(m'+n).
\end{equation}
In particular, if $a\leqslant b\leqslant b'\leqslant a'$ are positive numbers with $a+a'=b+b'$, then we have
\begin{equation}
\varphi(a)\varphi(a')\geqslant \varphi(b)\varphi(b').
\end{equation}
\end{lemma}
\begin{proof}
To establish the first claim, we may assume $m\geqslant m'$ and $n\geqslant n'$, as the proof of the other case, $m \leqslant m'$ and $n\leqslant n'$, is similar. Now Eqn.~(\ref{eq:tree:ineq}) holds because we have
\begin{align}
\frac{\varphi(m+n)}{\varphi(m+n')}
&=\frac{(2(m+n)-3)\cdot(2(m+n)-5)\cdots 3 \cdot 1}{(2(m+n')-3)\cdot(2(m+n')-5)\cdots 3 \cdot 1} \notag \\
&=(2m+2n-3)(2m+2n-5) \cdots (2m+2n'+1) (2m+2n'-1) \label{eq:ineq:nbt:nprime}\\
& \geqslant (2m'+2n-3)(2m'+2n-5) \cdots (2m'+2n'+1) (2m'+2n'-1) \label{eq:ineq:nbt}\\
&= \frac{\varphi(m'+n)}{\varphi(m'+n')}. \notag
\end{align}
Here Eq.~(\ref{eq:ineq:nbt:nprime}) follows from $n\geqslant n'$ and Eq.~(\ref{eq:ineq:nbt}) from
$m\geqslant m'$.
The second assertion follows from the first one by setting $m'=n'=a/2$, $m=b-a/2$ and $n=b'-a/2$.
\hfill $\square$
\end{proof}
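A direct grid check of the lemma (our addition, with hypothetical bounds chosen for speed):

```python
def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def lemma_holds(bound=9):
    # check phi(m'+n')phi(m+n) >= phi(m+n')phi(m'+n) whenever (m-m')(n-n') >= 0
    for m in range(1, bound):
        for mp in range(1, bound):
            for nn in range(1, bound):
                for np2 in range(1, bound):
                    if (m - mp) * (nn - np2) >= 0:
                        if phi(mp + np2) * phi(m + nn) < phi(m + np2) * phi(mp + nn):
                            return False
    return True
```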
We end this section with the following result, which says that the random variables $\mathbb{I}_T(A)$ and $\mathbb{I}_T(B)$
are positively correlated when $A$ and $B$ are compatible, that is, $A\cap B\in \{\emptyset, A,B\}$.
\begin{theorem} \label{thm:positive-correlation}
Let $A$ and $B$ be two compatible non-empty subsets of $X$; then
$$
\mathbb{P}_{\text{PDA}}(A,B)\geqslant \mathbb{P}_{\text{PDA}}(A)\mathbb{P}_{\text{PDA}}(B).
$$
\end{theorem}
\begin{proof}
Set $a=|A|$ and $b=|B|$. By symmetry we may assume without loss of generality that $a\leqslant b$ holds. Since $A$ and $B$ are compatible, we have either $A\cap B=\emptyset$ or $A\subseteq B$.
Since $n-a-b+2\leqslant n-b+1\leqslant n-a+1 \leqslant n$, by Lemma~\ref{lem:tree:ineq} we have
$$
\varphi(n)\varphi(n-a-b+2)\geqslant \varphi(n-b+1)\varphi(n-a+1),
$$
and hence
\[
\frac{\varphi(a)\varphi(b)\varphi(n-a-b+2)}{\varphi(n)}\geqslant \frac{\varphi(b)\varphi(n-b+1)}{\varphi(n)}\frac{\varphi(a)\varphi(n-a+1)}{\varphi(n)}.
\]
Together with Theorem~\ref{thm:cor:PDA}, this shows that the theorem holds for the case $A\cap B=\emptyset$.
On the other hand, noting that $b-a+1\leqslant b \leqslant n$ and $b-a+1\leqslant n-a+1 \leqslant n$ hold, by Lemma~\ref{lem:tree:ineq} we have
$$ \varphi(n)\varphi(b-a+1)\geqslant\varphi(b)\varphi(n-a+1),
$$
and hence
\[
\frac{\varphi(a)\varphi(b-a+1)}{\varphi(b)}\frac{\varphi(b)\varphi(n-b+1)}{\varphi(n)} \geqslant \frac{\varphi(b)\varphi(n-b+1)}{\varphi(n)}\frac{\varphi(a)\varphi(n-a+1)}{\varphi(n)}.
\]
Together with Theorem~\ref{thm:cor:PDA}, this shows that the theorem holds for the case $A\subseteq B$, as required.
\hfill $\square$
\end{proof}
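The correlation inequality can also be confirmed directly from the closed forms of Theorem~\ref{thm:cor:PDA}; a sketch (ours, with our own function names):

```python
from fractions import Fraction

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def P1(n, a):                      # single clade of size a
    return Fraction(phi(a) * phi(n - a + 1), phi(n))

def P_nested(n, a, b):             # A subset of B, sizes a <= b
    return Fraction(phi(a) * phi(n - b + 1) * phi(b - a + 1), phi(n))

def P_disjoint(n, a, b):           # disjoint A, B
    return Fraction(phi(a) * phi(b) * phi(n - a - b + 2), phi(n))

def positively_correlated(n):
    for a in range(1, n):
        for b in range(a, n):
            prod = P1(n, a) * P1(n, b)
            if a + b <= n and P_disjoint(n, a, b) < prod:
                return False
            if P_nested(n, a, b) < prod:
                return False
    return True
```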
\section{Clan probabilities}
In this section, we study clan probabilities, the counterpart of clade probabilities for unrooted trees. To this end, given a subset $A\subseteq X$ and an unrooted tree $T^*\in \mathcal{T}^*_X$, let $\mathbb{I}_{T^*}(A)$ be the indicator function defined as
\[
\mathbb{I}_{T^*}(A) = \begin{cases} 1, &\text{if $A$ is a clan of $T^*$,}\\
0, &\text{otherwise.} \end{cases}
\]
Then the probability that clan $A$ is contained in a random unrooted tree sampled according to $\mathbb{P}_u$ is
$$
\mathbb{P}_u(A)=\sum_{T^* \in \mathcal{T}^*_X} \mathbb{P}_u(T^*) \mathbb{I}_{T^*}(A).
$$
Note that the clan probability defined above can be extended to a collection of subsets in a natural way, that is, we have
\begin{equation*}
\mathbb{P}_u(A_1, \dotsc, A_m) = \sum_{T^* \in \mathcal{T}^*_X} \mathbb{P}_u(T^*) \big(\mathbb{I}_{T^*}(A_1) \dotsb \mathbb{I}_{T^*}(A_m)\big).
\end{equation*}
As a generalization of Lemma~6.1 in~\citet{zhu11a}, the following technical result relates clan probabilities to clade probabilities.
\begin{lemma}
\label{lem:clan:clade}
Suppose that $\mathbb{P}$ is a probability measure on $\mathcal{T}_X$ and $\mathbb{P}_u$ is the probability measure on $\mathcal{T}^*_X$ induced by $\mathbb{P}$. Then for a nonempty subset $A\subset X$, we have
\[
\mathbb{P}_u(A)=\mathbb{P}(A)+\mathbb{P}(X \setminus A)-\mathbb{P}(A,X \setminus A).
\]
\end{lemma}
\begin{proof}
It is well-known (see, e.g., Lemma~6.1 in~\citet{zhu11a}) that for a rooted binary tree $T$, a set $A$ is a clan of $\rho^{-1}(T)$ if and only if either $A$ is a clade of $T$ or $X \setminus A$ is a clade of $T$. Now the lemma follows from the definitions and the inclusion-exclusion principle.
\hfill $\square$
\end{proof}
\bigskip
Now we proceed to studying the clan probabilities under the YHK and PDA models. To begin with, recall that the probabilities of an unrooted tree $T^* \in \mathcal{T}^*_X$ under the YHK and PDA models are
\begin{align*}
\puy(T^*) = \sum_{T \in \rho(T^*)} \mathbb{P}_{\text{YHK}}(T)~~\text{and}~~
\puu(T^*) = \sum_{T \in \rho(T^*)} \mathbb{P}_{\text{PDA}}(T),
\end{align*}
where $\rho(T^*)$ denotes the set of rooted trees $T$ in $\mathcal{T}_X$ with $T^*=\rho^{-1}(T)$.
By the definition of clan probabilities, we have
\begin{align*}
\puy(A) &= \sum_{T^* \in \mathcal{T}^*_X} \puy(T^*) \mathbb{I}_{T^*}(A),~~\text{and}\\
\puu(A) &= \sum_{T^* \in \mathcal{T}^*_X} \puu(T^*) \mathbb{I}_{T^*}(A). \notag
\end{align*}
It can be verified, as with the case of clade probabilities, that the exchangeability property of $\puy$ and $\puu$ implies that both $\puy(A)$ and $\puu(A)$ depend only on the size $a= |A|$, not on the particular elements in $A$. Therefore, we will denote them as $p_n^*(a)$ and $q_n^*(a)$, respectively.
By Lemma~\ref{lem:clan:clade}, we can derive the following formulae to calculate clan probabilities under the two models, the first of which is established in~\citet{zhu11a}. Note that the second formula reveals an interesting relationship between clan probability and clade probability under the PDA model. Intuitively, it is related to the observation that
there exists a bijective mapping from $\mathcal{T}_X$ to $\mathcal{T}^*_{Y}$ with $Y=X\cup \{y\}$ for some $y \not \in X$ that maps each rooted tree $T$ in $\mathcal{T}_X$
to the unique tree in $\mathcal{T}^*_Y$ obtained from $T$ by adding the leaf $y$ to the root of $T$.
\begin{theorem}
\label{thm:prob:clan}
For $1\leqslant a <n$, we have
\begin{align}
p^*_n(a) &= 2n\Big[ \frac{1}{a(a+1)}+\frac{1}{(n-a)(n-a+1)}-\frac{1}{(n-1)n} \Big] {n \choose a}^{-1}; \label{eq:clan:yule}\\
q^*_n(a) &= \frac{\varphi(a)\varphi(n-a+1)+\varphi(n-a)\varphi(a+1)-\varphi(a)\varphi(n-a)}{\varphi(n)} \label{eq:clan:pda}\\
&=\frac{\varphi(a)\varphi(n-a)}{\varphi(n-1)} = \rev{q_{n-1}(a)}.\notag
\end{align}
\end{theorem}
\begin{proof}
Since the first equation is established in~\citet{zhu11a}, it remains to show the second one. The first equality follows from Lemma~\ref{lem:clan:clade} and Theorem~\ref{thm:pda-clade}. To establish the second equality, it suffices to see that
\begin{align*}
\varphi(n-1) &[\varphi(a)\varphi(n-a+1)+\varphi(n-a)\varphi(a+1)]\\
&=\varphi(n-1)\varphi(a)\varphi(n-a) [(2n-2a-1)+(2a-1)]\\
&=\varphi(n-1)(2n-2)\varphi(a)\varphi(n-a)\\
&=(\varphi(n)+\varphi(n-1))\varphi(a)\varphi(n-a).
\end{align*}
\hfill $\square$
\end{proof}
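The identity $q^*_n(a)=q_{n-1}(a)$ asserted by the second formula can be checked exactly (our verification):

```python
from fractions import Fraction

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def q_star_raw(n, a):
    # three-term expression from the first equality in the theorem
    num = phi(a) * phi(n - a + 1) + phi(n - a) * phi(a + 1) - phi(a) * phi(n - a)
    return Fraction(num, phi(n))

def q_star(n, a):
    # simplified form: the PDA clade probability on n-1 leaves
    return Fraction(phi(a) * phi(n - a), phi(n - 1))
```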
Recall that in Theorems~\ref{thm:yhk:convex} and~\ref{thm:pda:convex} we showed that the sequences $\{p_n(a)\}_{1\leqslant a < n}$ and
$\{q_n(a)\}_{1\leqslant a < n}$ are log-convex. The theorem below establishes a similar result for clan probabilities.
\begin{theorem}
\label{thm:clan:convex}
For $n\geqslant 3$, the sequences $\{p^*_n(a)\}_{1\leqslant a < n}$ and
$\{q^*_n(a)\}_{1\leqslant a < n}$ are log-convex. Moreover,
we have
\begin{enumerate}
\item[{\rm (i)}] $p^*_n(a)=p^*_n(n-a)$ and $q^*_n(a)=q^*_n(n-a)$ for $1\leqslant a < n$.
\item[{\rm (ii)}] $q^*_n(a+1) \leqslant q^*_n(a)$ when $a \leqslant \lfloor (n-1)/2 \rfloor$, and $q^*_n(a+1) \geqslant q^*_n(a)$ when $a \geqslant \lceil (n-1)/2 \rceil$.
\end{enumerate}
\end{theorem}
\begin{proof}
Part (i) follows from Theorem~\ref{thm:prob:clan}. Since $q_n^*(a) = q_{n-1}(a)$ by Theorem~\ref{thm:prob:clan}, Part (ii) and the log-convexity of $\{q^*_n(a)\}_{1\leqslant a <n}$ follow from Theorem~\ref{thm:pda:convex}.
It remains to show that $\{p^*_n(a)\}_{1\leqslant a <n}$ is log-convex.
To this end, fix a number $n\geqslant 3$, and let $y_a=\frac{1}{a(a+1)}$ for $1\leqslant a <n$. Then clearly $\{y_a\}_{1\leqslant a <n}$ is log-convex. This implies $\{y'_a\}_{1\leqslant a <n}$ with $y'_a=y_{n-a}$ is also log-convex. In addition, since $2y_a\geqslant y_{a+1}+y_{a-1}$ for $2\leqslant a \leqslant n-2$, $\{y^*_a\}_{1\leqslant a <n}$ with $y^*_a=y_a-\frac{1}{n(n-1)}$ is log-convex as well.
By Lemma~\ref{lem:log-convex}, we know $\{y'_a+y^*_a\}_{1\leqslant a <n}$ is log-convex. As $\{{n \choose a}^{-1}\}_{1\leqslant a <n}$ is log-convex, by Lemma~\ref{lem:log-convex} and Theorem~\ref{thm:prob:clan} we conclude that $\{p^*_n(a)\}_{1\leqslant a <n}$ is log-convex, as required.
\hfill $\square$
\end{proof}
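Log-convexity of both clan-probability sequences can be verified exactly for moderate $n$; this sketch (ours) uses the closed forms of Theorem~\ref{thm:prob:clan}.

```python
from fractions import Fraction
from math import comb

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def p_star(n, a):
    # YHK clan probability from Eq. (eq:clan:yule)
    s = (Fraction(1, a * (a + 1)) + Fraction(1, (n - a) * (n - a + 1))
         - Fraction(1, n * (n - 1)))
    return 2 * n * s / comb(n, a)

def q_star(n, a):
    # PDA clan probability, simplified form
    return Fraction(phi(a) * phi(n - a), phi(n - 1))

def log_convex(seq):
    return all(seq[i] * seq[i] <= seq[i - 1] * seq[i + 1] for i in range(1, len(seq) - 1))
```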
\begin{figure}
\centering
\includegraphics[scale=0.7]{Fig4.eps}
\caption{Plot of the ratio $p^*_n(a) / q^*_n(a)$ with $n=30$ and $a=1,\dotsc,29$.}
\label{fig:pq:star}
\end{figure}
Next, we consider the relationships between clan probabilities under the two models. For instance, consider the ratio $p^*_n(a)/q^*_n(a)$ with $n=30$ (see Figure~\ref{fig:pq:star}). The ratios are symmetric about $a=15$, which is consistent with Part (i) of Theorem~\ref{thm:clan:convex}. In addition, the figure shows that, except for $a=1$ for which $p^*_n(a) = q^*_n(a) = 1$, the ratio is strictly decreasing on $[2,\lfloor n/2 \rfloor]$ and is less than $1$ when $a$ is greater than a critical value.
We shall show that this observation holds for general $n$. To this end, we need the following technical lemma.
\begin{lemma}
\label{lem:clan:comp:bound}
For $n> 5$, we have $p^*_n(\lfloor n/2 \rfloor) < q^*_n(\lfloor n/2 \rfloor)$.
\end{lemma}
\begin{proof}
For simplicity, let $k = \lfloor n/2 \rfloor$.
To establish the lemma, we consider the following two cases.
The first case is when $n$ is even, that is, $n=2k$. Then we have
\begin{align*}
p_{2k}^*(k) &=4k\Big(\frac{2}{k(k+1)}-\frac{1}{2k(2k-1)}\Big) {2k \choose k}^{-1}\\
&=\Big( \frac{8}{k+1}-\frac{2}{2k-1} \Big) {2k \choose k}^{-1}
=\frac{2(7k-5)}{(k+1)(2k-1)} {2k \choose k}^{-1},
\end{align*}
and
\begin{align*}
\alpha(k):=\frac{q_{2k}^*(k)}{p_{2k}^*(k)}&=\frac{\varphi(k)\varphi(k)}{\varphi(2k-1)} {2k\choose k} \frac{(k+1)(2k-1)}{2(7k-5)} \\
&= \frac{(2k-2)!(2k-2)!(2k-2)!(2k)!}{(4k-4)!(k-1)!(k-1)!k!k!}\frac{(k+1)(2k-1)}{2(7k-5)}.
\end{align*}
Note that $\alpha(3)=\frac{15}{14}>1$, and $\alpha(k)$ is increasing for $k\geqslant 3$, because
\begin{eqnarray*}
\frac{\alpha(k+1)}{\alpha(k)} &=& \frac{2(2k-1)(2k+1)^2(k+2)(7k-5)}{(4k-1)(4k-3)(k+1)^2(7k+2)} \\
& =& \frac{112k^5+200k^4-116k^3-130k^2+22k+20}
{112k^5+144k^4-59k^3-96k^2+k+6} \\
&>& 1,
\end{eqnarray*}
holds for $k\geqslant 3$. In other words, for $k\geqslant 3$, we have $\alpha(k)>1$ and hence also ${q_{2k}^*(k)}>{p_{2k}^*(k)}$.
The second case is when $n$ is odd, that is, $n=2k+1$. Then we have
\begin{eqnarray*}
p_{2k+1}^*(k)&=&(4k+2)\Big(\frac{1}{k(k+1)}+\frac{1}{(k+1)(k+2)}-\frac{1}{2k(2k+1)}\Big) {2k+1 \choose k}^{-1}\\
& =& \frac{7k+2}{k(k+2)} {2k+1 \choose k}^{-1},
\end{eqnarray*}
and
\begin{eqnarray*}
\beta(k):=\frac{q_{2k+1}^*(k)}{p_{2k+1}^*(k)}&=& \frac{\varphi(k)\varphi(k+1)}{\varphi(2k)} {2k+1\choose k}\frac{k(k+2)}{7k+2} \\
&=& \frac{(2k-2)!(2k-1)!(2k)!(2k+1)!(k+2)}{(4k-2)!(k-1)!(k-1)!k!(k+1)!(7k+2)}.
\end{eqnarray*}
Now we have $\beta(3)=25/23>1$. In addition, $\beta(k)$ is increasing for $k\geqslant 3$ by noting that
\begin{eqnarray*}
\frac{ \beta(k+1)}{\beta(k)} &=&
\frac{(2k-1)(2k+1)(2k+2)(2k+3)(k+3)(7k+2)}{(k+2)^2(4k+1)(4k-1)k(7k+9)} \\
&=& \frac{112k^6+648k^5+1156k^4+630k^3-152k^2-198k-36}
{112k^6+592k^5+1017k^4+539k^3-64k^2-36k}\\
&\geqslant& 1
\end{eqnarray*}
holds for $k\geqslant 3$. In other words, for $k\geqslant 3$ and $n$ being odd, we also have $\beta(k)>1$ and hence also ${q_{2k+1}^*(k)}>{p_{2k+1}^*(k)}$. This completes the proof.
\hfill $\square$
\end{proof}
Parallel to Theorem~\ref{thm:clade:comp} which compares $p_n(a)$ and $q_n(a)$, the following theorem provides a comparison between $p^*_n(a)$ and $q^*_n(a)$.
\begin{theorem}
\label{thm:clan:comp}
For $n> 5$, there exists a number $\kappa^*(n)$ in $(1,\lfloor n/2 \rfloor)$, such that $p^*_n(a)>q^*_n(a)$ for $2\leqslant a \leqslant \kappa^*(n)$, and $p^*_n(a)<q^*_n(a)$ for $\kappa^*(n)<a \leqslant \lfloor n/2 \rfloor$.
\end{theorem}
\begin{proof}
For simplicity, let $b:=n-a$.
Since for $n\geqslant 7$ we have
$$
p_n^*(2)=\frac{4\Big(\frac{1}{6}+\frac{2}{n(n-1)(n-2)}\Big)}{n-1}>\frac{2}{3(n-1)}\geqslant \frac{1}{2n-5}=q_n^*(2)
$$
(while for $n=6$ a direct computation gives $p_6^*(2)=\tfrac{11}{75}>\tfrac{1}{7}=q_6^*(2)$),
and $p^*_n(\lfloor n/2 \rfloor) < q^*_n(\lfloor n/2 \rfloor)$ by Lemma~\ref{lem:clan:comp:bound},
it suffices to prove that
\[
g_n(a) = \frac{p^*_n(a)}{q^*_n(a)}
\]
is strictly decreasing on $[2, \lfloor n/2 \rfloor]$. To this end, let
\[
f_n(a) = \frac{1}{a(a+1)} + \frac{1}{b(b+1)} - \frac{1}{n(n-1)}.
\]
From the definition of $g_n(a)$ and Theorem~\ref{thm:prob:clan}, we have
\[
\frac{g_{n}(a+1)}{g_n(a)} = \frac{f_{n}(a+1)}{f_n(a)} \frac{(a+1)(2b-3)}{b(2a-1)},
\]
which is less than $1$ for $2 \leqslant a \leqslant \lfloor n/2 \rfloor - 1$ if and only if
\begin{equation}
\label{eq:clan:comp:pf}
\beta_n(a):= f_n(a) b(2a-1)-f_n(a+1)(a+1)(2b-3) >0~~~~\text{for $2 \leqslant a \leqslant \lfloor n/2 \rfloor - 1$}.
\end{equation}
\bigskip
In the rest of the proof, we shall establish Eq.~(\ref{eq:clan:comp:pf}). To begin with, note that
\begin{align}
\label{eq:Delta}
\begin{split}
\beta_n(a)=\frac{3}{n-1}-\frac{3(2a+1)}{n(n-1)}+\frac{2a^2+an+5a-2n}{a(a+1)(a+2)}\\
+\frac{2a-3n}{(b-1)(b+1)}+\frac{a+2n+3}{(b-1)b(b+1)}.
\end{split}
\end{align}
This implies
\begin{align*}
\beta_n(2)
&=\frac{3n^4 - 18n^3 - 39n^2 + 342n - 360}{4n(n-1)(n-2)(n-3)}\\
&=\frac{3n^2(n^2-6n-13)+(342n-360)}{4n(n-1)(n-2)(n-3)}
> 0
\end{align*}
for $n\geqslant 6$ because $\beta_6(2)=1/5$, $\beta_7(2)=24/70$ and $n^2-6n-13>0$ for $n\geqslant 8$.
In addition, we have
\begin{eqnarray*}
\beta_{2t+1}(t) &=& \frac{4t^2+2t-2}{t(t+1)(t+2)}+\frac{-4t+2}{t(t+2)}+\frac{5}{t(t+2)}>0
\end{eqnarray*}
for $t\geqslant 3$ and
\begin{align*}
\beta_{2t+2}(t) &=
\frac{3}{2t-1}-\frac{3}{2t+2}+\frac{4t^2+3t-4}{t(t+1)(t+2)}-\frac{4t+6}{(t+1)(t+3)}+\frac{(5t+7)}{(t+1)(t+2)(t+3)} \\
&=\frac{9}{(2t-1)(2t+2)}+\frac{6t^2-12}{t(t+1)(t+2)(t+3)}\\
&> 0
\end{align*}
for $t\geqslant 2$. Therefore, we have $\beta_n(\lfloor n/2 \rfloor - 1)\geqslant 0$ for $n\geqslant 6$.
It remains to show that $\beta_n(a)$ is strictly decreasing, that is, $\beta_n(a)-\beta_n(a+1)> 0$ for $3\leqslant a \leqslant \lfloor n/2 \rfloor-1$. Indeed, by Eqn.~(\ref{eq:Delta}) we have
\begin{eqnarray*}
\beta_n(a)-\beta_n(a+1) &=& \frac{6}{n(n-1)}+\frac{2a^2+2an+8a-6n}{a(a+1)(a+2)(a+3)}
+\frac{2a^2-6an+4n^2-10n-8}{(b-2)(b-1)b(b+1)} \\
&>& \frac{n^2-7n-8+2a^2}{(b-2)(b-1)b(b+1)} \\
&>& 0.
\end{eqnarray*}
Here the first inequality follows from $a\geqslant 3$ and $a \leqslant \lfloor n/2 \rfloor-1 \leqslant (n-1)/2$ implying $3n^2-6an\geqslant 3n$, and the second one from $a\geqslant 3$ and $n\geqslant 6$. This completes the proof.
\hfill $\square$
\end{proof}
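The $\kappa^*(n)$ pattern (a strictly decreasing ratio crossing $1$) also checks out numerically; this is our own verification based on the closed forms of Theorem~\ref{thm:prob:clan}.

```python
from fractions import Fraction
from math import comb

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def p_star(n, a):
    s = (Fraction(1, a * (a + 1)) + Fraction(1, (n - a) * (n - a + 1))
         - Fraction(1, n * (n - 1)))
    return 2 * n * s / comb(n, a)

def q_star(n, a):
    return Fraction(phi(a) * phi(n - a), phi(n - 1))

def has_kappa_star(n):
    # ratio p*/q* should decrease on [2, floor(n/2)], start above 1, end below 1
    vals = [p_star(n, a) / q_star(n, a) for a in range(2, n // 2 + 1)]
    return (all(x > y for x, y in zip(vals, vals[1:]))
            and vals[0] > 1 and vals[-1] < 1)
```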
We end this section with some correlation results about clan probabilities under the PDA model.
\begin{theorem}
\label{thm:partition:unroot:pda}
Let $A_1, \dotsc, A_k$ be $k$ disjoint (nonempty) subsets of $X$, and let $m = |A_1|+\dotsb+|A_k|$. Then we have
\begin{align*}
\puu(A_1, \dotsc, A_k)
&= \frac{\varphi(n-m+k-1)\prod_{i=1}^k \varphi(|A_i|)}{\varphi(n-1)}.
\end{align*}
\end{theorem}
\begin{proof}
\rev{Since $\puu(T^*)=1/\varphi(n-1)$ for each tree $T^*$ in $\mathcal{T}^*_X$, it remains to show that the number of trees that have $A_1, \dotsc, A_k$ as clans is $\varphi(n-m+k-1)\prod_{i=1}^k \varphi(|A_i|)$. To this end, note that such a tree can be constructed in two steps:
\begin{enumerate}
\item Build an unrooted tree on $\left(X \setminus \bigcup_{i=1}^k A_i \right) \cup\{x_1,\dotsc,x_k\}$, where $x_1, \dotsc, x_k$ are leaves not in $X$ serving as ``placeholders'' used in the second step.
\item Replace each $x_i$ with a tree in $\mathcal{T}_{A_i}$.
\end{enumerate}
There are $\varphi(n-m+k-1)$ different choices for a tree in the first step, and there are $\prod_{i=1}^k \varphi(|A_i|)$ different ways to replace $x_1, \dotsc, x_k$ by trees in $\mathcal{T}_{A_1}, \dotsc, \mathcal{T}_{A_k}$. The claim then follows.}
\hfill $\square$
\end{proof}
\begin{theorem}
\label{thm:cor:unroot:PDA}
Let $A$ and $B$ be two subsets of $X$ with $a\leqslant b$, where $a=|A|$ and $b=|B|$.
Then we have
}%\textcolor{red}{
\begin{equation*}
\puu(A,B) =
\begin{cases}
\frac{\varphi(b)\varphi(n-b)\varphi(a)\varphi(b-a)}{\varphi(n-1)\varphi(b-1)}, & \text{if $A\subseteq B$,}\\
\frac{\varphi(a)\varphi(b)\varphi(n-a-b+1)}{\varphi(n-1)}, & \text{if $A$ and $B$ are disjoint,}\\
0, & \text{otherwise.}
\end{cases}
\end{equation*}
}
\end{theorem}
\begin{proof}
The first case follows by applying Theorem~\ref{thm:prob:clan} twice; the second case follows from Theorem~\ref{thm:partition:unroot:pda}.
\hfill $\square$
\end{proof}
\begin{corollary}
Let $A$ and $B$ be two compatible subsets of $X$. Then we have
$$
\puu(A,B)\geqslant \puu(A)\puu(B).
$$
\end{corollary}
\begin{proof}
Set $a=|A|$ and $b=|B|$. By symmetry we may assume without loss of generality that $a\leqslant b$ holds. Since $A$ and $B$ are compatible, we have either $A\cap B=\emptyset$ or $A\subseteq B$.
To establish the corollary for the first case, note first that $n-a-b+1\leqslant n-b \leqslant n-a \leqslant n-1$ holds. Therefore by Lemma~\ref{lem:tree:ineq}, we have
$$
\varphi(n-a-b+1)\varphi(n-1)\geqslant \varphi(n-b)\varphi(n-a),
$$
and hence
$$
\frac{\varphi(a)\varphi(b)\varphi(n-a-b+1)}{\varphi(n-1)}\geqslant
\Big(\frac{\varphi(b)\varphi(n-b)}{\varphi(n-1)}\Big)\Big(\frac{\varphi(a)\varphi(n-a)}{\varphi(n-1)}\Big).
$$
Together with Theorem~\ref{thm:cor:unroot:PDA}, this shows that the corollary holds for the case $A\cap B=\emptyset$.
For the second case, note that $b-a\leqslant n-a \leqslant n-1$ and $b-a \leqslant b-1 \leqslant n-1$ hold. Therefore by Lemma~\ref{lem:tree:ineq}, we have
$$
\varphi(n-1)\varphi(b-a)\geqslant\varphi(b-1)\varphi(n-a),
$$
and hence
$$
\frac{\varphi(b)\varphi(n-b)\varphi(a)\varphi(b-a)}{\varphi(n-1)\varphi(b-1)}
\geqslant
\Big(\frac{\varphi(b)\varphi(n-b)}{\varphi(n-1)}\Big)\Big(\frac{\varphi(a)\varphi(n-a)}{\varphi(n-1)}\Big).
$$
Together with Theorem~\ref{thm:cor:unroot:PDA}, this shows that the corollary holds for the case $A\subseteq B$, as required.
\hfill $\square$
\end{proof}
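The clan analogue of the correlation inequality can be confirmed from the closed forms of Theorem~\ref{thm:cor:unroot:PDA} for compatible pairs; this check is our own.

```python
from fractions import Fraction

def phi(n):
    # (2n-3)!! with phi(1) = phi(2) = 1
    r = 1
    for k in range(3, n + 1):
        r *= 2 * k - 3
    return r

def Pu1(n, a):                    # single clan of size a
    return Fraction(phi(a) * phi(n - a), phi(n - 1))

def Pu_nested(n, a, b):           # A a proper subset of B, 1 <= a < b < n
    return Fraction(phi(b) * phi(n - b) * phi(a) * phi(b - a),
                    phi(n - 1) * phi(b - 1))

def Pu_disjoint(n, a, b):         # disjoint A, B with a + b <= n
    return Fraction(phi(a) * phi(b) * phi(n - a - b + 1), phi(n - 1))

def clan_correlated(n):
    for a in range(1, n):
        for b in range(a, n):
            prod = Pu1(n, a) * Pu1(n, b)
            if a + b <= n and Pu_disjoint(n, a, b) < prod:
                return False
            if b > a and Pu_nested(n, a, b) < prod:
                return False
    return True
```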
\section{Discussion and concluding remarks}
Clade sizes are an important genealogical feature in the study of phylogenetic and population genetics. In this paper we present a comparison study between the clade probabilities under the YHK and PDA models, two null models which are commonly used in evolutionary biology.
Our first main result reveals a common feature, that is, the clade probability sequences are log-convex under both models. This implies that compared with `mid-sized' clades, very `large' clades and very `small' clades are more likely to occur under these two models, and hence provides a theoretical explanation for the empirical result on the PDA model observed by~\citet{PR05}.
One implication of this result is that in Bayesian analysis where the two null models are used as prior distribution, the distribution on clades is not uninformative as bias is given to those whose sizes are extreme. Therefore, further considerations or adjustment, such as introducing a Bayes factor to account for the bias on prior clade probabilities, is important to interpret posterior Bayesian clade supports.
The second result reveals a `phase transition' type feature when comparing the sequences of clade probabilities under the two null models. That is, we prove that there exists a critical value $\kappa(n)$ such that the probability that a given clade with size $k$ is contained in a random tree with $n$ leaves generated under the YHK model is smaller than that under the PDA model for $1<k\leqslant \kappa(n)$, and higher for all $\kappa(n)\leqslant k <n$.
This implies that typically the trees generated under the YHK model contains relatively more `small' clades than those under the PDA model.
The above two results are also extended to unrooted trees by considering the probabilities of `clans', the sets of taxa that are all on one side of an edge in an unrooted phylogenetic tree.
This extension is relevant because in many tree reconstruction approaches, the problem of finding the root is either ignored or left as the last step. Here
we study the sequences formed by clan probabilities for unrooted trees generated by the two null models, and obtain several results similar to those for rooted trees.
Note that the two models studied here are special instances of the $\beta$-splitting model introduced by~\citet{aldous96a}, a critical branching process in which the YHK model corresponds to $\beta=0$ and the PDA model to $\beta=-1.5$. Therefore, it would be of interest to study clade and clan probabilities under this more general model. In particular, it is interesting to see whether the relationships between two models revealed in this paper also hold for general $\beta$.
\begin{acknowledgements}
We thank Prof. Kwok Pui Choi and Prof. Noah A. Rosenberg for stimulating discussions and useful suggestions.
We would also like to thank two anonymous referees for their
helpful and constructive comments on the first version of this paper.
\end{acknowledgements}
\bibliographystyle{spbasic}
\section{Introduction}
\begin{figure*}[t]
\includegraphics[width=1.0\linewidth]{fig1}
\caption{Schematic of the 2D periodic lattices considered in the main text: (a), the honeycomb lattice with alternating values of on-site potential along the $y$-direction; and, (b), the transformed version obtained by straightening the bonds along the $y$-axis. The $y$-layers are indexed by $\ell=1,2,\ldots$ and are highlighted by blue (red) backgrounds indicating the presence of an on-site potential $W_0\,(-W_0)$ along the layer. The dashed black lines represent the bonds connecting the lattice across the boundaries. Each unit cell, delineated by a green rectangle, contains four sites labeled by the numbers $1$ to $4$. The stripes along the $y$-direction are indexed by $i=1,2,\ldots$ in panel (b).}\label{fig:lattice}
\end{figure*}
Since the discovery of topological insulators (TIs), tremendous effort has gone into the understanding of this novel phase of matter, both theoretically and experimentally \cite{hasan2010colloquium}. The hallmark of a TI is the existence of a bulk band gap, similar to an ordinary insulator, along with protected gapless surface states. While conducting surface states can also be observed in normal band insulators, the signature that makes the TIs unique is the topological protection of the surface states by time reversal symmetry. The application prospects of TIs in spintronic devices and quantum information technology have projected them as worthy candidates for frontier research in condensed matter physics.
A three-dimensional (3D) TI is identified by four $\mathbb{Z}_2$ topological indices $(\nu_0,\bm{\nu})$, where $\nu_0$ is the strong topological index and $\bm{\nu}$ represent the weak topological indices \cite{fu2007topological1,fu2007topological2,moore2007topological,roy2009topological}. A system with a non-trivial value of $\nu_0$ is known as a strong TI (STI), where gapless surface states are manifested on each two-dimensional (2D) surface of the system. On the other hand, if we try to form a 3D structure by stacking layers of 2D STIs along some particular direction, we end up with a system that exhibits gapless states on some of its 2D surfaces (depending on the stacking orientation), while the other surfaces remain gapped. This system is referred to as a weak TI (WTI) for which the strong topological index $\nu_0$ is zero, but some of the weak topological indices $\bm{\nu}$ attain non-trivial values.
In general, a $d$-dimensional WTI can be visualized as a system constructed by stacking $(d-1)$-dimensional STIs. For example, in 2D, for the case of the BD\textsf{I} symmetry class, according to the tenfold periodic table of topological phases \cite{chiu2016classification} the strong topological index is zero, whereas a STI phase is only manifested in one dimension (1D) with a $\mathbb{Z}$ topological invariant. If we now think of a system obtained by stacking $L_y$ 1D BD\textsf{I} chains with topological index $\nu$ along the $y$-direction, the resulting 2D system will behave as a WTI as long as the BD\textsf{I} symmetries are preserved. This system will manifest conducting edge states only along the edges localized at the two ends of the lattice in the $x$-direction. The weak topological index $\nu_x$ in such a case can be measured by averaging the strong topological index over the $L_y$ layers.
The main difference between a STI and a WTI lies in the robustness of their edge states. While for STIs symmetry-preserving disorder can never gap the edge states, this is not the situation for WTIs. Instead, the protection of the edge states in WTIs appears to require lattice-translational symmetry, so it is natural to assume that even a small amount of disorder could destroy the topological phase. However, for a 3D WTI, it was demonstrated that a conducting edge state can actually persist in the presence of disorder, as long as time-reversal symmetry and the bulk gap are preserved~\cite{ringel2012strong}. While the experimental verification of STIs has been performed in diverse classes of materials \cite{chen2009experimental,xia2009observation,zhang2009topological} since its theoretical prediction, there are only a few examples of materials exhibiting WTI phases \cite{rasche2013stacked,pauly2015subnanometre,noguchi2019weak}. Further avenues are needed for the study of WTI phases \cite{yan2012prediction}, their stability properties~\cite{mondragon2014topological,claes2020disorder}, and the effects of interactions~\cite{li2015interacting}.
In recent years, the study of topological phases in bosonic systems has attracted considerable attention in condensed matter physics. Due to the condensation property of bosons, the realization of topological phases requires interaction among the particles. This could, in fact, enhance the richness of the various topological phases observed in a bosonic system. Moreover, recent advancements in optical lattice experiments have created a promising platform, where different phases of interacting and non-interacting bosonic systems can be realized in a controlled manner. These developments highlight the need for an extensive theoretical study of topological phases of bosons in the presence of interactions. In particular, a study of interacting bosonic analogues of WTIs can help identify natural, minimal models that nucleate such phases, explore the interplay of the various competing orders that arise in such systems, and determine the effect of interactions on the emerging phase diagram.
In this paper we study the infinite on-site repulsion limit of bosons [hard-core bosons (HCBs)] on the 2D honeycomb lattice in the presence of on-site potential and longer-range interactions. We demonstrate that weak topological phases arise quite naturally when the HCBs are simply subjected to an on-site potential with alternating signs along the different $y$-layers. Using the quantum Monte Carlo (QMC) technique, supported by analytical calculations, we find that the phase diagram of the model exhibits three insulating phases at densities $1/4$, $1/2$ and $3/4$, separated from each other by a superfluid region. Depending on the choice of the on-site potential form, either the insulator at $1/4$ or $3/4$ filling is found to be a WTI, which manifests a nontrivial Berry phase and the existence of edge states along the $x$-edges of the lattice. These WTI phases away from half-filling are a prime example of a mirror-protected WTI. We introduce a formula for the Berry phase that relies on the permanent (rather than a determinant) for the HCBs, and uncover a robust 1D superfluidity along the topological edge states. Finally, we demonstrate a remarkable stability of the topological phase against any amount of nearest-neighbor (NN) repulsion, as well as weak next-nearest-neighbor (NNN) repulsion among the HCBs. Through these developments we introduce a framework that could precipitate the study of additional bosonic TIs, see, e.g.,~\cite{ghosh2020chiral}.
The paper is organized as follows. In Section~\ref{sec:model} we present the model, the numerical techniques and the relevant order parameters. In Section~\ref{sec:phase_diagram} we present the phase diagram of the model and analyze the different phases. The edge states of the insulating phases of the model are analyzed using QMC methods in Section~\ref{sec:edgestates}. In Section~\ref{sec:topologicalinvariant} we calculate the topological invariants for the insulating phases. Next, in Section~\ref{sec:repulsion} the effect of NN and NNN repulsion on the WTI is presented. Lastly, in Section~\ref{sec:conclusion} we conclude. In Appendices~\ref{app:bandstructure} and \ref{app:protection} we analyze the band structure of the model and discuss the protection of the edge states.
\section{Model and Formulation}\label{sec:model}
We consider HCBs in a 2D periodic honeycomb lattice, as depicted in Fig.~\ref{fig:lattice}, governed by the Hamiltonian
\begin{align}
\hat{H}=-t\sum_{\langle i,j\rangle} \left(\hat{d}_i^\dagger \hat{d}_j+ {\rm h.c.} \right)+\sum_i W_i \hat{n}_i-\sum_i \mu \hat{n}_i.\label{eq:hamiltonian}
\end{align}
Here $\hat{d}_i^\dagger$ ($\hat{d}_i$) creates (annihilates) a HCB at site $i$, ${\langle i,j\rangle}$ represent NN pairs of sites, $t$ is the amplitude of NN hopping, $W_i$ is the on-site potential at site $i$ and $\mu$ denotes the chemical potential. We take the NN hopping as the unit of energy and set $t=1$ for our numerical calculations. In our study $W_i$ forms a periodic potential along the $y$-direction with a period of two lattice sites, i.e., we take $W_i= W_0\,(-W_0)$ for layers (along the $y$-direction) labeled by odd (even) values of $\ell$. We shall assume that the lattice constant is $a=1$ throughout.
To study the various phases of the Hamiltonian in Eq.~(\ref{eq:hamiltonian}), we use the Stochastic-Series-Expansion (SSE) technique \cite{sandvik1997finite,sandvik2010lecture}, a quantum Monte Carlo method, employing directed loop updates \cite{syljuaasen2002quantum,syljuaasen2003directed}. To capture the ground-state properties of an $L\times L$ honeycomb lattice using SSE, all simulations have been done at low enough temperatures such that the inverse temperature $\beta\sim L$ \cite{batrouni1995supersolids}.
To construct the phase diagram using SSE we use four order parameters: average density $\rho$, superfluid density $\rho_s$, structure factor $S(\boldsymbol{Q})$ and dimer structure factor $S_D(\boldsymbol{Q})$.
The average density of a system containing $N_s$ sites is $\hat{\rho}=\sum_i \hat{n}_i/N_s$, where $\hat{n}_i=\hat{d}_i^\dag \hat{d}_i$ gives the number of HCBs (either $0$ or $1$) at site $i$. To calculate the superfluid density using SSE, we employ the following expression in terms of the winding numbers $\Omega_x$ and $\Omega_y$ along $x$ and $y$-directions \cite{sandvik2010lecture},
\begin{align}
\rho_s=\frac{1}{2\beta}\left\langle \Omega_x^2+\Omega_y^2\right\rangle\equiv\rho_s^x+\rho_s^y,
\end{align}
where $\langle\cdots\rangle$ represents the ensemble average. For example, the winding number $\Omega_x$ can be calculated by counting the total number $N_x^+$ ($N_x^-$) of operators transporting particles in the positive (negative) $x$-direction, according to the formula $\Omega_x=\frac{1}{L_x}(N_x^+-N_x^-)$, where $L_x$ is the length of the lattice along the $x$-direction.
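For illustration, the winding-number estimator can be sketched in a few lines (a minimal sketch, not part of the actual SSE implementation; the operator counts and sampled winding numbers are hypothetical inputs):

```python
import numpy as np

def winding_number(n_plus, n_minus, length):
    """Omega = (N^+ - N^-)/L from the counts of operators transporting
    particles in the positive/negative direction."""
    return (n_plus - n_minus) / length

def superfluid_density(omega_x, omega_y, beta):
    """rho_s = <Omega_x^2 + Omega_y^2>/(2*beta) over Monte Carlo samples;
    returns (rho_s, rho_s_x, rho_s_y) so the anisotropy can be inspected."""
    rho_x = np.mean(np.asarray(omega_x, dtype=float) ** 2) / (2.0 * beta)
    rho_y = np.mean(np.asarray(omega_y, dtype=float) ** 2) / (2.0 * beta)
    return rho_x + rho_y, rho_x, rho_y
```

Keeping the two directional contributions separate is what allows the anisotropy $\rho_s^y>\rho_s^x$ discussed below to be resolved.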
Next, the structure factor per site is expressed as,
\begin{align}
S(\boldsymbol{Q})=\frac{1}{N_s^2} \sum_{i,j} e^{i\boldsymbol{Q}\cdot(\boldsymbol{r}_i-\boldsymbol{r}_j)}\langle \hat{n}_i \hat{n}_j\rangle,\label{eq:struct_fac}
\end{align}
where $\boldsymbol{r}_i=(x_i,y_i)$ is the position of site $i$. To calculate the structure factor for particles in an $L\times L$ honeycomb lattice, we can always use a transformation on the lattice to straighten the bonds along the $y$-direction, such that the resulting lattice looks like the one depicted in Fig.~\ref{fig:lattice}\,b. With the use of the position vectors $\boldsymbol{r}$ of this new transformed lattice, the allowed values of the wavevector $\boldsymbol{Q}$ coincide with those of an $L\times L$ square lattice, i.e., $\boldsymbol{Q} =(2\pi p/L,2\pi q/L)$, where $p=0,1,\cdots,L-1$ and $q=0,1,\cdots,L-1$. To detect the presence of diagonal long-range orders in the system, we have calculated $S(\boldsymbol{Q})$ for all possible values of $\boldsymbol{Q}$ and identified the ones at which the structure factor displays peaks.
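For a classical (product-state) configuration, where $\langle \hat{n}_i \hat{n}_j\rangle$ factorizes into $n_i n_j$, Eq.~\eqref{eq:struct_fac} can be evaluated by brute force; the following is a minimal sketch (illustrative only, not the QMC estimator):

```python
import numpy as np

def structure_factor_classical(n, Q):
    """Brute-force evaluation of S(Q) for a classical occupation array
    n[x, y] on the transformed lattice, where <n_i n_j> = n_i * n_j."""
    Lx, Ly = n.shape
    S = 0.0 + 0.0j
    for xi in range(Lx):
        for yi in range(Ly):
            for xj in range(Lx):
                for yj in range(Ly):
                    phase = np.exp(1j * (Q[0] * (xi - xj) + Q[1] * (yi - yj)))
                    S += phase * n[xi, yi] * n[xj, yj]
    return (S / n.size**2).real
```

Because the classical average factorizes, the double sum equals $|\sum_i n_i e^{i\boldsymbol{Q}\cdot\boldsymbol{r}_i}|^2/N_s^2$, which is the form used in practice.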
Lastly, we define the dimer structure factor as
\begin{align}
S_D(\boldsymbol{Q})=\frac{1}{N_b^2}\sum_{\alpha,\beta}e^{i\boldsymbol{Q}\cdot(\boldsymbol{R}_\alpha-\boldsymbol{R}_\beta)}\langle \hat{D}_\alpha \hat{D}_\beta\rangle, \label{eq:dimer_struct_fac}
\end{align}
where $\hat{D}_\alpha= \hat{d}^\dagger_{\alpha_L} \hat{d}_{\alpha_R}+\hat{d}^\dagger_{\alpha_R} \hat{d}_{\alpha_L}$ is the dimer operator defined on the $\alpha$-th NN bond aligned along the $x$-axis, with $\alpha_L$, $\alpha_R$ being the two lattice sites attached to this bond. In Eq.~\eqref{eq:dimer_struct_fac}, the summation runs over the $N_b$ NN bonds oriented along the $x$-axis and the vectors $\boldsymbol{R}$ represent the position coordinates of the midpoints of these bonds in the transformed lattice in Fig.~\ref{fig:lattice}\,b. The dimer operator is chosen such that it gives a nonzero expectation value only when a dimer is formed, i.e., when the constituent particle hops back and forth along the NN bond.
\begin{figure}[t]
\includegraphics[width=\linewidth,angle=0]{fig2a}
\includegraphics[width=\linewidth,angle=0]{fig2b}
\caption{Plots of the four order parameters for $W_0=6.0$ as a function of the chemical potential $\mu$: (a), density $\rho$ and superfluid density $\rho_s$; and, (b), structure factor $S(0,\pi)$ and dimer structure factor $S_D(0,\pi)$. Here $t=1.0$ and the calculations are performed on a $20\times20$ periodic honeycomb lattice.}\label{fig:order_parameters}
\end{figure}
\section{Phase diagram}\label{sec:phase_diagram}
\begin{figure*}[t]
\includegraphics[width=\linewidth,angle=0]{fig3}
\caption{Spatial structures of the insulating phases: (a) The dimer insulator at density $\rho=1/4$; (b) Charge-density-wave insulator at half-filling; and, (c) The dimer insulator at filling fraction $3/4$. The red dashed lines depict the formation of dimers. The white, grey and black circles signify lattice sites with density $0.0$, $0.5$ and $1.0$ respectively. The blue dashed rectangles represent the underlying 1D SSH-like chains, at their, (a), topological and, (c), non-topological phases.}\label{fig:structure_insulators}
\end{figure*}
\begin{figure}[b]
\includegraphics[width=\linewidth,height=5.7cm,angle=0]{fig4}
\caption{Pictorial description of hopping processes in the superfluid region between the dimer insulator at $\rho=1/4$ and the CDW structure at $\rho=1/2$.}\label{fig:superfluid}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.45\linewidth,angle=0]{fig5a}
\includegraphics[width=0.45\linewidth,angle=0]{fig5b}
\caption{Phase diagram of HCBs on the honeycomb lattice as a function of the chemical potential ($\mu$) and the alternating on-site potential strength along the $y$-direction ($W_0$): (a) in the atomic limit; and, (b) in the presence of a finite hopping $t=1.0$. The $\rho=0$ (light grey), $\rho=1/2$ (pink) and $\rho=1$ (dark grey) regions denote the empty phase, the charge-density-wave insulator at half-filling and the Mott insulator at filling-fraction $1$, respectively. The blue (green) regions indicate the dimer insulator at $1/4$ ($3/4$)-filling. The yellow region depicts the superfluid phase. The solid lines with points indicate phase boundaries obtained using SSE (for a $20\times20$ lattice with $\beta=120$), whereas the dashed lines indicate the calculated band edges.}\label{fig:phase_diagram}
\end{figure*}
To construct the phase diagram we study the Hamiltonian in Eq.~\eqref{eq:hamiltonian} at various values of $W_0$ by varying the chemical potential $\mu$. Fig.~\ref{fig:order_parameters} depicts the variations of HCB density $\rho$, superfluid density $\rho_s$, structure factor $S(0,\pi)$ and dimer structure factor $S_D(0,\pi)$ as a function of $\mu$, where the value of $W_0$ is fixed at $6.0$. The three plateaus in the $\rho-\mu$ curve (apart from the trivial ones at $\rho=0$ and $\rho=1$) clearly indicate the presence of three incompressible insulators at densities $1/4$, $1/2$ and $3/4$. At these plateaus the superfluid density becomes zero, whereas in the intermediate regions it attains some non-zero value, thus separating the three insulating regions by a superfluid phase. To understand the nature of the insulators we have calculated the structure factor $S(\boldsymbol{Q})$ and dimer structure factor $S_D(\boldsymbol{Q})$ for all possible values of $\boldsymbol{Q}$. We find that $S(\boldsymbol{Q})$ peaks only at wavevector $\boldsymbol{Q}=(0,\pi)$, whereas $S_D(\boldsymbol{Q})$ displays peaks for both $\boldsymbol{Q}=(0,\pi)$ and $\boldsymbol{Q}=(\pi,0)$ with the same peak value (in Fig.~\ref{fig:order_parameters}\,b only $S_D(0,\pi)$ is displayed).
We note that the results in Fig.~\ref{fig:order_parameters} are independent of the sign of $W_0$, i.e., whether we choose $W_i$ to be positive (negative) for odd (even) values of $\ell$ in Fig.~\ref{fig:lattice} or the reverse scenario, Fig.~\ref{fig:order_parameters} remains unaltered. In the following, we assume $W_0>0$ to analyze our results, but a similar analysis can be extended to the reverse scenario as well. In Section~\ref{sec:edgestates} we will see that the sign of $W_0$ nevertheless plays a role in the characterization of the different phases.
Since the on-site potential for the layers labeled by even values of $\ell$ is $-W_0$, up to half-filling the particles will prefer to occupy these layers only, keeping the odd $\ell$ layers completely empty. Due to the presence of NN hopping, at $1/4$-filling, it is energetically favorable for the system to fill the upper two sites of each unit cell (i.e., sites $1$ and $4$) with one particle only, so that this particle can hop back and forth between sites $1$ and $4$ of two adjacent unit cells to further lower the energy of the system. As a result of this hopping process dimers are formed between two sites belonging to the upper half of two different unit cells. Due to the formation of these dimers there is no net flow of HCBs in the $x$ or $y$-directions, which makes the phase insulating in nature. The structure of this dimer insulator at $1/4$-filling is depicted in Fig.~\ref{fig:structure_insulators}\,a. We note that at each even $\ell$ level we have one dimer that involves two boundary sites when open boundary conditions are applied along the $x$-direction with zigzag edges.
Now, let us analyze the structure factor, Eq.~\eqref{eq:struct_fac}, for $\boldsymbol{Q}=(0,\pi)$,
\begin{align}
S(0,\pi)=\frac{1}{N_s^2} \sum_{i,j} e^{i\pi(y_i-y_j)}\langle \hat{n}_i \hat{n}_j\rangle.\label{eq:S(0,pi)}
\end{align}
Although the summation in Eq.~\eqref{eq:S(0,pi)} is over all possible pairs of sites in the lattice, only those pairs for which both sites are occupied will have a non-zero contribution. Since for the dimer insulator at $\rho=1/4$ (see Fig.~\ref{fig:structure_insulators}\,a), all particles reside on the even layers only, for all contributing pairs $y_i-y_j$ is even, so Eq.~\eqref{eq:S(0,pi)} reduces to
\begin{align}
S(0,\pi)=\frac{1}{N_s^2} \sum_{i,j}\langle \hat{n}_i \hat{n}_j\rangle.\label{eq:S(0,pi)2}
\end{align}
At $1/4$-filling there are $N_s/4$ particles in the system and each of them participates in $N_s/4$ pairs in the summation (including the case where $i = j$) with a $+1$ contribution to the structure factor.
Therefore, for the dimer insulator at $\rho=1/4$, $S(0,\pi)$ attains the value
\begin{align}
S(0,\pi)=\frac{1}{N_s^2} \left(\frac{N_s}{4}\right)^2=0.0625.\label{eq:0.625-1/4}
\end{align}
This result matches well with the result in Fig.~\ref{fig:order_parameters}\,b. It is clear from the discussion above that at $1/4$-filling, as long as the particles are constrained to reside on alternate layers, the value of $S(0,\pi)$ will be $0.0625$. This value is independent, e.g., of whether the particles form a dimer insulator or arrange themselves in a charge-density-wave (CDW) pattern.
To confirm the formation of the dimer insulator at density $\rho=1/4$, we next calculate the dimer structure factor $S_D(\boldsymbol{Q})$ as prescribed in Eq.~\eqref{eq:dimer_struct_fac}. Since in our system the dimers are formed along the NN bonds oriented along the $x$-direction of the lattice, we have defined the dimer structure factor such that it detects dimers along these bonds only. Now, if we think about the dimer-insulator structure corresponding to $\rho=1/4$ (as depicted in Fig.~\ref{fig:structure_insulators}\,a) for the transformed lattice in Fig.~\ref{fig:lattice}\,b, it is easy to see that for any two dimers with midpoints $\boldsymbol{R}_\alpha=(X_\alpha,Y_\alpha)$ and $\boldsymbol{R}_\beta=(X_\beta,Y_\beta)$, both $(X_\alpha-X_\beta)$ and $(Y_\alpha-Y_\beta)$ are even. As a result, the dimer structure factors for $\boldsymbol{Q}=(0,\pi)$ and $\boldsymbol{Q}=(\pi,0)$ reduce to the same expression,
\begin{align}
S_D(0,\pi)=S_D(\pi,0)=\frac{1}{N_b^2}\sum_{\alpha,\beta} \langle \hat{D}_\alpha \hat{D}_\beta\rangle.
\end{align}
In the dimer insulator phase, with $N_b$ being the total number of NN bonds along the $x$-direction, there are $N_b/2$ dimers in the system. Each of these dimers will participate in $N_b/2$ pairs (of dimers) in the summation, with a $+1$ contribution towards the dimer structure factor. Therefore, the dimer structure factor reduces to,
\begin{align}
S_D(0,\pi)=S_D(\pi,0)=\frac{1}{N_b^2} \left(\frac{N_b}{2}\right)^2=0.25,
\end{align}
which is indeed attained in Fig.~\ref{fig:order_parameters}\,b at filling $1/4$.
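This counting can be mimicked with a classical dimer covering, replacing $\langle \hat{D}_\alpha \hat{D}_\beta\rangle$ by products of dimer occupations $D_\alpha \in \{0,1\}$ (a sketch; the bond midpoints below are illustrative coordinates on the transformed lattice of Fig.~\ref{fig:lattice}\,b, not taken from the simulation):

```python
import numpy as np

def dimer_structure_factor(D, R, Q):
    """S_D(Q) = |sum_a D_a exp(i Q.R_a)|^2 / N_b^2 for classical dimer
    occupations D_a at bond midpoints R_a."""
    phases = np.exp(1j * (Q[0] * R[:, 0] + Q[1] * R[:, 1]))
    return np.abs(np.sum(D * phases)) ** 2 / len(D) ** 2

# Midpoints of the x-bonds of an L x L brick-wall lattice: L/2 bonds per
# row, staggered by one lattice constant between consecutive rows.
L = 8
R = np.array([(2 * X + (Y % 2), Y) for Y in range(L) for X in range(L // 2)],
             dtype=float)
# rho = 1/4 pattern: every x-bond on the even (low-potential) rows hosts a
# dimer, giving N_b/2 dimers whose midpoints have mutually even separations.
D = np.array([1.0 if Y % 2 == 0 else 0.0
              for Y in range(L) for X in range(L // 2)])
```

Since all separations between occupied midpoints are even, both $S_D(0,\pi)$ and $S_D(\pi,0)$ evaluate to $(N_b/2)^2/N_b^2=0.25$, independently of the lattice size.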
Next, at $1/2$-filling, the layers labeled by even $\ell$ values are completely filled, such that the upper two sites of each unit cell are occupied by two HCBs. Therefore, at this density the dimers of $1/4$-filling disappear completely and we have a CDW, similar to the one depicted in Fig.~\ref{fig:structure_insulators}\,b, which is insulating in nature. This can further be verified from Fig.~\ref{fig:order_parameters}\,b, where we can see that at $\rho=1/2$ the dimer structure factor [$S_D(0,\pi)$] vanishes and the structure factor [$S(0,\pi)$] shows a peak with value $0.25$. The half-filled system contains $N_s/2$ particles in total and each of them participates in $N_s/2$ pairs of sites that contribute to the structure factor. So, Eq.~\eqref{eq:S(0,pi)2} in this case simply becomes
\begin{align}
S(0,\pi)=\frac{1}{N_s^2} \left(\frac{N_s}{2}\right)^2=0.25,
\end{align}
which is the maximum value attained by $S(0,\pi)$.
Finally, the structure of the insulator at $3/4$-filling is depicted in Fig.~\ref{fig:structure_insulators}\,c, where the even $\ell$ levels are completely filled and the odd ones are half-filled. In terms of unit cells this means that the upper two sites (sites $1$ and $4$) of each cell are fully occupied and the lower two sites (sites $2$ and $3$) share a single HCB. Again by virtue of NN hopping, the particles at odd $\ell$-levels can further lower their energy by hopping back and forth between sites $2$ and $3$ of each unit cell. As a result, dimers are formed in the lower half of each unit cell. The main difference between the dimers formed at $\rho=1/4$ and $\rho=3/4$ is that the dimers at $1/4$-filling are formed between two sites belonging to two different unit cells, whereas the sites involved in $3/4$-filling are residents of the same unit cell. Since the number of dimers formed in this case coincides with the one for $\rho=1/4$, the dimer structure factor attains the same peak value $0.25$ in this situation as well. The value of the corresponding structure factor can be extracted by realizing that out of the $3N_s/4$ particles in the system, $N_s/4$ reside on the odd $\ell$ layers, whereas $N_s/2$ particles are located at even $\ell$ layers. So, in total there are $2(N_s/4)(N_s/2)$ pairs for which the separation between the particles along the $y$-axis (i.e., $y_i-y_j$ in Eq.~\eqref{eq:S(0,pi)}) is odd. Clearly each of these pairs will contribute $-1$ to the structure factor (as $e^{i\pi(y_i-y_j)}=-1$ for these cases). On the other hand, for $(N_s/4)^2+(N_s/2)^2$ pairs, $y_i-y_j$ is an even multiple of the lattice constant, which gives rise to a positive contribution to the structure factor. Therefore, the structure factor for this insulator attains the value,
\begin{align}
S(0,\pi)&=\frac{1}{N_s^2}\left[\left(\frac{N_s}{4}\right)^2+\left(\frac{N_s}{2}\right)^2-2\left(\frac{N_s}{4}\right)\left(\frac{N_s}{2}\right)\right]\nonumber\\
&=\frac{1}{N_s^2}\left(\frac{N_s}{2}-\frac{N_s}{4}\right)^2=0.0625,
\end{align}
which coincides with the one for $1/4$-filling, Eq.~\eqref{eq:0.625-1/4}.
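The three values $0.0625$, $0.25$ and $0.0625$ can be reproduced from classical reference configurations of the three insulators (a sketch using product-state averages on the transformed lattice; the 0/1 occupation arrays below are idealized patterns, not QMC data):

```python
import numpy as np

def S_classical(n, Q):
    """Classical structure factor |sum_i n_i e^{i Q.r_i}|^2 / Ns^2."""
    Lx, Ly = n.shape
    x, y = np.meshgrid(np.arange(Lx), np.arange(Ly), indexing="ij")
    amp = np.sum(n * np.exp(1j * (Q[0] * x + Q[1] * y)))
    return np.abs(amp) ** 2 / n.size**2

L, Q = 8, (0.0, np.pi)
quarter = np.zeros((L, L))
quarter[::2, ::2] = 1.0          # rho = 1/4: even y-layers half-filled
half = np.zeros((L, L))
half[:, ::2] = 1.0               # rho = 1/2: even y-layers full
threequarter = np.ones((L, L))
threequarter[1::2, 1::2] = 0.0   # rho = 3/4: odd y-layers half-filled
```

The quarter- and half-filled patterns place all particles on even $y$-layers, while the three-quarter pattern fills the even layers and half-fills the odd ones, reproducing the cancellation $(N_s/2-N_s/4)^2/N_s^2$.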
\begin{figure*}[t]
\includegraphics[width=0.328\linewidth,angle=0]{fig6a}
\includegraphics[width=0.328\linewidth,angle=0]{fig6b}
\includegraphics[width=0.328\linewidth,angle=0]{fig6c}
\caption{The four branches of energy spectrum corresponding to (a) $W_0<t$, (b) $W_0=t$, and (c) $W_0>t$.}\label{fig:spectrum}
\end{figure*}
Next we turn to discuss the superfluid phase. In the intermediate regions between the insulating phases, where the superfluid density is finite, both the structure factor $S(0,\pi)$ and dimer structure factor $S_D(0,\pi)$ admit nonzero values, see Fig.~\ref{fig:order_parameters}. Interestingly, in these intermediate regions, we observed anisotropy in the superfluid density where $\rho_s^y$, the superfluid density along $y$-direction of the lattice, is much larger than the one along the $x$-direction, $\rho_s^x$. So while the superfluid retains some additional structure from the two neighboring insulators in the transition region, it can still superflow in both directions, as we now argue. The mechanism is described in Fig.~\ref{fig:superfluid}, where we take for example the transition between the insulators at fillings $1/4$ and $1/2$. Consider a situation where we add one HCB to the dimer insulator at $\rho=1/4$. At this range of fillings the particles will naturally prefer to occupy the even layers, having the lower on-site potential, so the extra particle chooses to reside on one of the sites attached to bond $b_1$ in Fig.~\ref{fig:superfluid}\,a, effectively generating a doubly occupied bond. Now, this extra particle can hop through the lattice giving rise to superfluidity in two different ways. Firstly, the extra particle can follow a two-step hopping process similar to the one depicted by blue arrows in Fig.~\ref{fig:superfluid}\,a. By virtue of this process, effectively the doubly occupied bond $b_1$ has hopped to bond $b_2$ (Fig. \ref{fig:superfluid}\,b) giving rise to superfluidity along $y$-direction. Since the difference of energies of the particle at the initial (or final) and intermediate step of this process is $2W_0$, the energy gained by the particle during this process is $\sim t^2/(2W_0)$. Secondly, the particle can also follow a three-step hopping process depicted by the green arrows in Fig.~\ref{fig:superfluid}\,a. 
This process results in a configuration as shown in Fig.~\ref{fig:superfluid}\,c, where the doubly occupied bond $b_1$ has effectively hopped along the $x$-direction of the lattice to the bond $b_3$, contributing to a non-zero superfluid density $\rho_s^x$. Since both intermediate sites involved in this hopping process have energies higher than the initial or final sites by $2W_0$, the energy gain in this process is $\sim t^3/(4W_0^2)$. The doubly occupied bond can therefore always gain more energy by hopping in the $y$-direction of the lattice than in the $x$-direction. Consequently, an anisotropy develops in the superfluid density, with $\rho_s^y > \rho_s^x$. Similar arguments hold for the superfluid regions between any two insulators.
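The two perturbative energy gains can be compared numerically (a back-of-the-envelope sketch, keeping only the leading powers quoted above and ignoring order-one prefactors):

```python
def hop_gain_y(t, W0):
    """Leading gain of the two-step hop along y: ~ t^2 / (2 W0)."""
    return t**2 / (2 * W0)

def hop_gain_x(t, W0):
    """Leading gain of the three-step hop along x: ~ t^3 / (4 W0^2)."""
    return t**3 / (4 * W0**2)

t, W0 = 1.0, 6.0
ratio = hop_gain_y(t, W0) / hop_gain_x(t, W0)  # = 2*W0/t, favoring y-flow
```

For $W_0=6$ and $t=1$ the $y$-process is favored by a factor $2W_0/t=12$, consistent with the observed $\rho_s^y > \rho_s^x$.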
The complete phase diagram of the model in the $(\mu,W_0)$ plane is depicted in Fig.~\ref{fig:phase_diagram} in the atomic limit, i.e., when the hopping $t$ is turned off (Fig.~\ref{fig:phase_diagram}\,a) and for $t=1.0$ (Fig.~\ref{fig:phase_diagram}\,b). The phase boundaries are obtained from QMC (solid lines with points), performed for a $20 \times 20$ periodic honeycomb lattice with inverse temperature $\beta=120$.
In the presence of finite NN hopping $t=1.0$ (Fig.~\ref{fig:phase_diagram}\,b), for $W_0=0$, the superfluid phase fills the range between the Mott lobes at densities $\rho=0$ and $\rho=1$. Beyond some critical value of $W_0$, additional insulating lobes start to appear at densities $1/4$, $1/2$ and $3/4$ separated by superfluid regions. The insulator at half-filling is a CDW, whereas the other two are dimer insulators. One can see that the phase boundaries obtained from QMC are largely consistent with the calculated band edges from Appendix~\ref{app:bandstructure} (dashed lines in Fig.~\ref{fig:phase_diagram}\,b), except in the neighborhood of $W_0=1$, where they slightly deviate. Since the analytical results give the phase boundaries in the thermodynamic limit, we expect the deviations to be smaller for larger system sizes. Within the error bars of our QMC calculations the critical value of $W_0$ beyond which the insulating phases appear is $1.4$ for the CDW at half-filling, whereas for the dimer insulators it appears to be $1.5$. However, the calculated band edges predict that the tips of all three insulating lobes lie at $W_0=1$.
Next, in the atomic limit (Fig.~\ref{fig:phase_diagram}\,a), we see that only the CDW insulator at half-filling survives and both the dimer insulators vanish completely. Indeed, the dimers in the dimer insulators are formed by virtue of the NN hopping in the presence of a finite $W_0$. On the other hand, in the atomic limit the CDW insulator appears as soon as we have a non-zero $W_0$. In fact, the presence of NN hopping can destroy this structure by transforming it into a superfluid, as is indeed observed in Fig.~\ref{fig:phase_diagram}\,b. Beyond some critical value of $W_0$ the CDW phase sets in because at this stage $W_0$ dominates over the hopping $t$ and it becomes energetically favorable for the particles to be frozen in this structure instead of moving around in a superfluid phase. The boundaries of the CDW phase in Fig.~\ref{fig:phase_diagram}\,a can be determined by considering the change in the total energy of the system when we introduce an additional HCB in the half-filled system manifesting the CDW phase. At half-filling all the particles occupy the sites with on-site potential $-W_0$. As a result, in this situation the total energy of the system is simply $E[N_s/2]=-\mu N_s/2 -W_0 N_s/2$. Now, if we try to add another HCB to the system, this additional particle will have to occupy a site with on-site potential $W_0$. So, in this scenario the total energy of the system will be $E[N_s/2+1]=-\mu (N_s/2+1)-W_0 N_s/2 +W_0$. Thus, the change in the total energy of the system to add an additional HCB is $\Delta E=-\mu+W_0$. As long as $\Delta E>0$ the phase remains stable against the addition of an extra particle; therefore, the upper phase boundary of the CDW is given by the line $\mu=W_0$. Similarly the lower phase boundary can be determined by following the same procedure for the case when we remove one particle from the half-filled system, for which the boundary will be given by the line $\mu=-W_0$.
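The energy bookkeeping behind these boundaries can be written down directly (a sketch of the atomic-limit counting, not a simulation):

```python
def total_energy_half_filling(mu, W0, Ns):
    """E[Ns/2]: all Ns/2 particles sit on the -W0 sites."""
    return -mu * Ns / 2 - W0 * Ns / 2

def delta_E_add(mu, W0):
    """Cost of adding one HCB, which must occupy a +W0 site."""
    return -mu + W0

def delta_E_remove(mu, W0):
    """Cost of removing one HCB from a -W0 site."""
    return mu + W0

# The CDW lobe is stable while both costs are positive: -W0 < mu < W0.
```

Requiring both costs to stay positive recovers the two straight phase boundaries $\mu=\pm W_0$ of Fig.~\ref{fig:phase_diagram}\,a.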
The calculated band edges (see Appendix~\ref{app:bandstructure}) appear as dashed lines in Fig.~\ref{fig:phase_diagram}\,b. The nature of the different phases is further elucidated by the calculated spectrum, presented in Fig.~\ref{fig:spectrum}. For $W_0<t$ a Dirac semi-metal is observed at $\rho=1/2$, while for $W_0=t$ it is replaced by a nodal line semi-metal. Finally, for $W_0>t$, the phases at $\rho=1/4$ and $\rho=3/4$ develop a full gap, and the dimer insulators are formed. Since the dimer insulators have no atomic-limit counterparts, they are the more interesting phases to investigate. In the next section, we shed some light on the nature of these insulators by exploring their edge structure.
\section{Edge States}\label{sec:edgestates}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth,angle=0]{fig7ab}
\hbox{\hspace{-0.65cm}
\includegraphics[width=0.882\linewidth,angle=0]{fig7c}}
\caption{Splitting of the plateau at: (a), $\rho=1/4$ with $W_0=6.0$ ; and, (b), $\rho=3/4$ at $W_0=-6.0$, when open boundary conditions are applied along the $x$-direction on a $20\times 20$ honeycomb lattice. (c) The superfluid density $\rho_{s,i}^y$ for different stripes ($i$) of the same $20\times20$ open lattice measured for $\mu=-6.16$ ($\rho=1/4$) and $\mu=6.16$ ($\rho=3/4$) with $W_0=6.0$. Here we take $t=1.0$ and $\beta=120$.
}\label{fig:split}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.328\linewidth,angle=0]{fig8a}
\includegraphics[width=0.328\linewidth,angle=0]{fig8b}
\includegraphics[width=0.328\linewidth,angle=0]{fig8c}
\caption{Local density profile of a $20\times 20$ honeycomb lattice under open boundary condition along $x$-direction with $W_0=6$, $t=1$ and $\beta=120$, corresponding to: (a) $\mu=-6.5$ in the lower plateau, (b) $\mu=-5.8$ in the upper plateau, and (c) the difference between densities of (b) and (a).}\label{fig:density_profile}
\end{figure*}
To further explore the nature of the dimer insulators, we measure the shift in the average density $\rho$ when we switch to open boundary conditions along the $x$-direction, i.e., by turning off the horizontal dashed bonds in Fig.~\ref{fig:lattice}\,b \cite{li2015complete,wang2018topological}. We observe that as a function of the chemical potential $\mu$ the average density $\rho$ remains unaltered except for $\rho=1/4$, which splits into two plateaus corresponding to densities $\rho_1=0.225$ and $\rho_2=0.275$, see Fig.~\ref{fig:split}\,a. Further investigation reveals that the values of $\rho_1$ and $\rho_2$ depend on the size of the system: for an $8\times8$ system $\rho_1=0.1875$ and $\rho_2=0.3125$, whereas for a $10\times 10$ system $\rho_1=0.2$ and $\rho_2=0.3$. Generally speaking, we find that for a honeycomb lattice with $N_e$ edge sites (depicted by red circles in Fig.~\ref{fig:density_profile}\,c) and $N_s$ total number of sites, the plateau at $\rho=1/4$ splits into $\rho_1=\rho-N_e/(2N_s)$ and $\rho_2=\rho+N_e/(2N_s)$.
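The quoted plateau values follow directly from this counting; assuming $N_s=L^2$ and $N_e=L$ for an $L\times L$ lattice (an assumption consistent with the numbers quoted above), a quick check reads:

```python
def split_plateaus(L, rho=0.25):
    """rho_{1,2} = rho -/+ N_e/(2 N_s); here we assume N_e = L edge sites
    and N_s = L^2 sites for an L x L lattice open along the x-direction."""
    Ns, Ne = L * L, L
    return rho - Ne / (2 * Ns), rho + Ne / (2 * Ns)
```

This reproduces $(0.225, 0.275)$, $(0.1875, 0.3125)$ and $(0.2, 0.3)$ for the $20\times20$, $8\times8$ and $10\times10$ lattices, respectively.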
We now argue that the splitting of the plateau under open boundary conditions corresponds to the number of in-gap edge states. At $1/4$-filling the dimer structure can be thought of as a series of 1D Su-Schrieffer-Heeger (SSH)-like chains (depicted by the blue dashed rectangle in Fig.~\ref{fig:structure_insulators}\,a), where dimers of strength $\sim t$ are formed. The dimers in each chain are weakly coupled to each other via third-order hopping through the intermediate sites with higher on-site potential, $t_x\sim t^3/(2W_0^2)$. Now, each of these SSH-like chains will give rise to a pair of degenerate in-gap edge states under open boundary conditions, localized at the two ends of the chain. As is clear from Fig.~\ref{fig:structure_insulators}\,a, for an $L_x\times L_y$ lattice, there are $L_y/2$ different SSH chains weakly connected to each other via second-order hopping, $t_y\sim t^2/(2W_0)$. Due to this inter-chain coupling, the degeneracy of the edge states will be lifted and the resulting $L_y$ in-gap states will now form bands on the two edges, each with a bandwidth $\sim 4t_y$. The two plateaus therefore correspond to the situation when: (1), none of the edge sites are occupied; and, (2), all of the edge sites are completely occupied, beyond some critical chemical potential determined by the edge bandwidth. In our notation this reduces to $\rho_1=\rho-1/(2L_x)$ and $\rho_2=\rho+1/(2L_x)$.
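The pair of in-gap states per chain can be illustrated by diagonalizing a single open SSH-like chain with strong intra-dimer hopping $t$ and weak inter-dimer hopping $t_x\sim t^3/(2W_0^2)$, terminated on weak bonds as happens when the boundary cuts through a dimer (a non-interacting single-particle sketch; the actual system is interacting):

```python
import numpy as np

def ssh_spectrum(n_sites, t_strong, t_weak):
    """Open SSH chain terminating on weak bonds: hoppings alternate
    weak, strong, weak, ..., so a strong (dimer) bond is cut at each end."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        amp = t_weak if i % 2 == 0 else t_strong
        H[i, i + 1] = H[i + 1, i] = -amp
    return np.linalg.eigvalsh(H)

# Effective couplings of the rho = 1/4 chains for t = 1, W0 = 6:
t, W0 = 1.0, 6.0
E = ssh_spectrum(40, t_strong=t, t_weak=t**3 / (2 * W0**2))
n_in_gap = int(np.sum(np.abs(E) < 0.5 * t))  # two near-zero edge modes
```

The bulk states cluster near $|E|\approx t$, while exactly two states sit deep inside the gap, one exponentially localized at each end; these are the in-gap states whose occupation produces the plateau splitting.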
To help visualize this, in Fig.~\ref{fig:density_profile}\,a, we plot the local density profile of a $20\times 20$ honeycomb lattice under open boundary conditions along the $x$-direction, by choosing a value for the chemical potential, $\mu=-6.5$, in the lower plateau $\rho_1$. We can clearly see that the lower plateau corresponds to a situation where bulk sites of the even $\ell$-layers have density $0.5$ each, while the edge sites have density close to zero. By contrast, Fig.~\ref{fig:density_profile}\,b depicts the density profile corresponding to $\mu=-5.8$, a point in the upper plateau $\rho_2$. The density of the edge sites now becomes close to $1$, while the density of the bulk sites remains $0.5$ as before. So, at the $\rho_1$ ($\rho_2$) plateau, each of the edge sites has $1/2$ HCB less (more) compared to the sites in the bulk. The difference of these two local density profiles is shown in Fig.~\ref{fig:density_profile}\,c, which clearly demonstrates that the transition between the two plateaus, in Fig.~\ref{fig:split}\,a, corresponds to the occupation of the in-gap edge states. All this points towards the possible topological nature of the dimer insulator at $1/4$ filling-fraction.
Next, we study the effect of the reversal of the sign of $W_0$. In this case, at $1/4$-filling, the dimers are formed at odd $\ell$-layers and hence no dimers are split by the opening of the boundaries [see Fig.~\ref{fig:structure_insulators}\,c with black (filled) sites replaced by white (empty) sites]. On the other hand, at $\rho=3/4$, all the sites residing on the odd $\ell$-layers are completely occupied and the dimers are formed in the even $\ell$-layers [see Fig.~\ref{fig:structure_insulators}\,a with white (empty) sites replaced by black (filled) sites]. Therefore, in this case, it is the dimer insulator at $3/4$ filling which involves the formation of edge states. As a result, under open boundary conditions along the $x$-direction, the plateau corresponding to $\rho=3/4$ splits into two plateaus (as shown in Fig.~\ref{fig:split}\,b) corresponding to densities $\rho_1=0.725$ and $\rho_2=0.775$, while the other parts of the $\rho-\mu$ curve remain unchanged.
In Fig.~\ref{fig:split}\,c we study the superfluid density, $\rho_{s,i}^y$, for the different stripes ($i$) along the $y$-direction (see Fig.~\ref{fig:lattice}\,b) of a $20\times20$ lattice with open boundary conditions along the $x$-direction with $W_0=6.0$. The superfluid density is measured for two values of the chemical potential; $\mu=-6.16$ corresponding to the density $\rho=1/4$ in Fig.~\ref{fig:split}\,a, which ensures a partial occupation of the edge states, and $\mu=6.16$ corresponding to the middle of the unsplit density plateau at $\rho=3/4$. For $\mu=-6.16$ the superfluid density acquires a non-zero value at the two ends of the lattice, while it vanishes within the bulk of the lattice. On the other hand, for $\mu=6.16$ the superfluid density remains vanishingly small for all the stripes. In the thermodynamic limit, the superfluid density tends to zero for the bulk stripes at $\rho=1/4$, while it remains finite at the edges. This captures the conducting nature of the edge states.
Finally, we note that by opening the boundaries of the lattice along the $y$-direction, we end up with a honeycomb lattice with armchair edges and no dimers are split by this change. Therefore, in this case the $\rho-\mu$ curve remains unaffected by the open boundary condition for both positive and negative $W_0$. Thus, edge states are manifested only along the zigzag edges in the $x$-direction.
\section{The topological invariant}\label{sec:topologicalinvariant}
\begin{figure}[b]
\includegraphics[width=\linewidth,angle=0]{fig9}
\caption{The calculated Berry phase $\gamma$ along the $x$-direction of the Brillouin zone as a function of $W_0/t$ for the three gapped phases at densities (a) $1/4$, (b) $1/2$ and (c) $3/4$.}\label{fig:berry_phase}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.328\linewidth,angle=0]{fig10a}
\includegraphics[width=0.328\linewidth,angle=0]{fig10b}
\includegraphics[width=0.328\linewidth,angle=0]{fig10c}
\caption{Variations of density versus chemical potential for three different values of NN repulsion $V_1$, with the on-site potential strength $W_0=6.0$. The blue curves show the $\rho-\mu$ variations for a $20\times20$ honeycomb lattice with periodic boundary conditions. The brown curves in the insets display the splitting of the $\rho=1/4$ insulator under open boundary conditions along the $x$-direction.}\label{fig:NN_repulsion}
\end{figure*}
In order to confirm the topological nature of the dimer insulators from a bulk perspective, one must calculate the topological invariant for the system under periodic boundary conditions. We now turn to this calculation. For an $L_x\times L_y$ honeycomb lattice, the lattice points on the discrete Brillouin zone $(k_x,k_y)$ can be expressed as,
\begin{align}
k_x &= -\frac{\pi}{3}+\frac{2\pi}{3L_x}p, \quad\quad p=1,2,\cdots,L_x\\
k_y &= \frac{2\pi}{\sqrt{3}L_y}q, \quad\quad\quad\quad q=1,2,\cdots,L_y
\end{align}
Let $\psi_{\pm,\pm}$ denote the normalized column eigenvectors corresponding to the energy bands $E_{\pm,\pm}$ of Eq.~\eqref{eq:ham-k} (see Appendix~\ref{app:bandstructure}). Then the ground state multiplets $\Psi^{(\rho)}$ corresponding to filling-fractions $1/4$, $1/2$ and $3/4$ are given by the matrices $\Psi^{(1/4)}=\{\psi_{-,+}\}$, $\Psi^{(1/2)}=\{\psi_{-,+},\psi_{-,-}\}$ and $\Psi^{(3/4)}=\{\psi_{-,+},\psi_{-,-},\psi_{+,-}\}$. One can then define the U(1) link variables along the two directions as,
\begin{align}
U_x(p,q)=\frac{\mathrm{det}_\pm\left(\Psi^\dagger_{p,q}\Psi_{p+1,q}\right)}{\left\vert\mathrm{det}_\pm\left(\Psi^\dagger_{p,q}\Psi_{p+1,q}\right)\right\vert},
\end{align}
and
\begin{align}
U_y(p,q)=\frac{\mathrm{det}_\pm\left(\Psi^\dagger_{p,q}\Psi_{p,q+1}\right)}{\left\vert\mathrm{det}_\pm\left(\Psi^\dagger_{p,q}\Psi_{p,q+1}\right)\right\vert},
\end{align}
where $\mathrm{det}_+$ ($\mathrm{det}_-$) corresponds to the permanent (determinant) of the matrix. It is important to note that, in the definition of the link variable, the permanent applies when the particles under consideration are HCBs, whereas for fermions $U$ involves the determinant. Finally, the Chern number can be calculated as,
\begin{align}
C=\frac{1}{2\pi i}\sum_{p,q}F_{xy}(p,q),
\end{align}
where $F_{xy}$ is the lattice field strength defined as,
\begin{align}
F_{xy}(p,q)=\ln U_x(p,q) U_y(p+1,q) U_x^{-1}(p,q+1) U_y^{-1}(p,q),
\end{align}
with $-\pi < \frac{1}{i}F_{xy}(p,q) \leq \pi$.
For the three gapped phases at densities $1/4$, $1/2$ and $3/4$, we calculate the Chern number \cite{fukui2005chern} for $W_0>t$ to determine the topological nature of these phases. Despite the presence of the edge states for the $1/4$ (or $3/4$) dimer insulator, the Chern number turns out to be zero for all three insulators.
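The link-variable construction above is the standard lattice algorithm of Fukui, Hatsugai and Suzuki. As an illustration of how the principal-branch field strength sums to an integer, the following sketch applies the single-band (determinant) version to a generic two-band model (the Qi-Wu-Zhang model, not the Hamiltonian of Eq.~\eqref{eq:ham-k}); the parameter values are illustrative:

```python
import numpy as np

# Fukui-Hatsugai-Suzuki Chern number of the lower band of a generic two-band
# (Qi-Wu-Zhang) model, used only to illustrate the link-variable algorithm.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def lower_band(kx, ky, u):
    h = np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz
    _, v = np.linalg.eigh(h)
    return v[:, 0]                                  # lower-band eigenvector

def chern(u, L=24):
    ks = 2 * np.pi * np.arange(L) / L
    psi = np.array([[lower_band(kx, ky, u) for ky in ks] for kx in ks])
    def link(a, b):                                 # U = <a|b> / |<a|b>|
        z = np.vdot(a, b)
        return z / abs(z)
    c = 0.0
    for p in range(L):
        for q in range(L):
            ux  = link(psi[p, q], psi[(p + 1) % L, q])
            uy  = link(psi[(p + 1) % L, q], psi[(p + 1) % L, (q + 1) % L])
            ux2 = link(psi[p, (q + 1) % L], psi[(p + 1) % L, (q + 1) % L])
            uy2 = link(psi[p, q], psi[p, (q + 1) % L])
            # F_xy on the plaquette, with the principal branch (-pi, pi]
            c += np.angle(ux * uy / (ux2 * uy2))
    return c / (2 * np.pi)

assert abs(round(chern(u=-1.0))) == 1   # gapped, topological regime
assert round(chern(u=-3.0)) == 0        # gapped, trivial regime
```

The plaquette sum is gauge invariant, so the arbitrary phases returned by the diagonalization do not affect the integer result.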
\begin{figure*}[t]
\centering
\includegraphics[width=0.328\linewidth,angle=0]{fig11a}~~
\includegraphics[width=0.328\linewidth,angle=0]{fig11b}~~
\includegraphics[width=0.328\linewidth,angle=0]{fig11c}
\caption{Variation of the density versus chemical potential in the presence of both NN repulsion $V_1$ and NNN repulsion $V_2$, with the on-site potential strength $W_0=6.0$, for (a) $V_1=2.0$, $V_2=0.2$, and (c) $V_1=2.0$, $V_2=2.0$. (b) Change of dimer structure factor and structure factor with the ratio $V_2/V_1$.}\label{fig:NNN_repulsion}
\end{figure*}
As mentioned at the end of Section~\ref{sec:edgestates}, the edge states are observed only along the zigzag edges of the lattice under open boundary conditions along the $x$-direction. This is related to the fact that the dimer insulator structure can be effectively described as 1D SSH-like chains stacked in a 2D lattice, connected via weak hopping. Therefore, in order to probe the topological nature of the dimer insulators, we calculate the Berry phase for each $k_y$ value separately according to the formula,
\begin{align}
\gamma_q=\mathrm{Im}\sum_{p=1}^{L_x} \ln U_x(p,q).
\end{align}
It turns out that the Berry phase $(\gamma)$ is independent of $k_y$ (or $q$). Fig.~\ref{fig:berry_phase} depicts the variation of the Berry phase as a function of $W_0/t$ calculated in the gapped region for three different densities $\rho=1/4$, $1/2$ and $3/4$. We can see that for positive values of $W_0$ ($W_0>t$), the Berry phase is quantized at $\pi$ for the dimer insulator at $\rho=1/4$ and remains zero for the one at density $3/4$. The situation is reversed when we reverse the sign of $W_0$ ($W_0<-t$). On the other hand, in both of these cases $\gamma$ remains zero for the insulator at $\rho=1/2$. This identifies the dimer insulators at $\rho=1/4$ and $3/4$ as weak topological insulators (WTIs) for $W_0>t$ and $W_0<-t$ respectively.
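The quantization of $\gamma$ at $\pi$ is the same mechanism as the Zak phase of a single SSH chain, and can be reproduced with the same normalized link variables. In the sketch below the hoppings $v$, $w$ are illustrative stand-ins for the effective $t$, $t_x$ of the dimer chains, not the paper's microscopic values:

```python
import numpy as np

# Zak (Berry) phase of the lower SSH band from the discrete Wilson loop
# W = prod_p U_x(p); gamma = arg(W) is gauge invariant on the closed loop.
def zak_phase(v, w, L=401):
    ks = 2 * np.pi * np.arange(L) / L
    vecs = []
    for k in ks:
        h = np.array([[0, v + w * np.exp(-1j * k)],
                      [v + w * np.exp(1j * k), 0]])
        _, u = np.linalg.eigh(h)
        vecs.append(u[:, 0])                 # lower-band eigenvector
    W = 1.0 + 0j
    for p in range(L):
        z = np.vdot(vecs[p], vecs[(p + 1) % L])
        W *= z / abs(z)
    return np.angle(W)                       # in (-pi, pi]

assert abs(abs(zak_phase(v=0.2, w=1.0)) - np.pi) < 1e-8   # dimerized: gamma = pi
assert abs(zak_phase(v=1.0, w=0.2)) < 1e-8                # trivial:   gamma = 0
```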
The existence of the WTI phase becomes clearer by realizing that the governing Hamiltonian obeys the following symmetries:
\begin{enumerate}[leftmargin=*]
\item[1.] Time reversal symmetry: $T H(k_x,k_y)T^{-1}=H(-k_x,-k_y)$, where $T=\mathcal{K}$ is the antiunitary time reversal (complex conjugation) operator, satisfying $T^2=1$.
\item[2.] Mirror symmetry: $M H(k_x,k_y)M^{-1}=H(-k_x,k_y)$, with $M=\sigma_x\otimes\sigma_x$.
\end{enumerate}
The two symmetry operators $T$ and $M$ commute, $[T,M]=0$, which places the model in the mirror symmetry class AI \cite{ryu2013classification}. This symmetry class admits a $\mathbb{Z}$ topological number in 1D, but a zero topological number in 2D. Our model realizes a stacked WTI of such 1D chains.
One way to interpret the presence of edge states for positive $W_0$ ($W_0>t$) is illustrated in Fig.~\ref{fig:structure_insulators} for an effective model, which is described by underlying SSH chains for densities $\rho=1/4$ (Fig.~\ref{fig:structure_insulators}\,a) and $\rho=3/4$ (Fig.~\ref{fig:structure_insulators}\,c). Each chain admits alternating tunnelings $t$, $t_x$ along the chain, and neighboring chains are weakly coupled via $t_y$. Depending on the sign of $W_0$, the chains for either $\rho=1/4$ or $\rho=3/4$ manifest their topological phase, which leads to edge states at the corresponding density, as long as $t_y$ is weak enough.
\section{Effect of interactions}\label{sec:repulsion}
In this section, we discuss the effect of interactions between HCBs on the phase diagram and the WTI phase. First we consider NN repulsion between HCBs by adding
\begin{align}
H_1=V_1\sum_{\langle i,j\rangle} \hat{n}_i \hat{n}_j,
\end{align}
to the Hamiltonian, Eq.~\eqref{eq:hamiltonian}. Fig.~\ref{fig:NN_repulsion} compares the variations of the average density $\rho$ as a function of the chemical potential $\mu$ for three different values of $V_1$ with periodic and open boundary conditions.
The blue curves in Fig.~\ref{fig:NN_repulsion} depict the average density of a $20\times20$ periodic honeycomb lattice, whereas the brown curves in the insets show the alteration of the plateau at $\rho=1/4$ under open boundary conditions. One can see that as we increase the NN repulsion the width of the plateaus corresponding to $\rho=1/4, 1/2$ and $3/4$ increases. Since the band gap in any insulating phase is determined by the width of the plateau in the $\rho-\mu$ curve, the gaps corresponding to the three above-mentioned insulating phases simply get larger for larger NN repulsion. In other words, the insulating phases become more stable in the presence of NN repulsion. This can be understood by realizing that the introduction of NN repulsion effectively increases the energy cost of adding another particle to the insulating structures. Therefore, energy minimization forces the system to be in the insulating phases for wider ranges of the chemical potential, thus increasing the band gap of these phases. Under open boundary conditions along the $x$-direction, for each value of $V_1$, the plateau at $\rho=1/4$ further splits into two plateaus corresponding to densities $0.225$ and $0.275$, similar to the case with zero NN repulsion. This means that the topological nature of the dimer insulator at $\rho=1/4$ remains unaffected by the presence of NN repulsion.
We now argue that the WTI phase is in fact robust against any amount of NN repulsion, as elucidated by considering the spatial structure of the insulating phase. As discussed in Section~\ref{sec:phase_diagram}, the WTI is a dimer insulator, where each dimer is formed by a particle hopping back and forth between the two sites belonging to a NN bond aligned along the $x$-direction. Since the dimers are situated at alternate $y$-levels, the particles in two neighboring dimers do not feel any repulsion, as they never reside on two NN sites. Consequently, NN repulsion cannot restrict the hopping process involved in the formation of a dimer and thus the WTI remains uninfluenced.
This argument is also valid for the WTI phase at $3/4$-filling, which is the particle-hole conjugate of Fig.~\ref{fig:structure_insulators}\,a. While NN repulsion is felt between the particle in each dimer and the particles frozen in the in-between layers (depicted by white circles in Fig.~\ref{fig:structure_insulators}\,a and reflecting filled sites in its particle-hole conjugate), this does not disrupt the formation of dimers. In fact this configuration is the minimum-energy configuration at density $\rho=3/4$, even in the presence of NN repulsion. While at this filling the particle in each dimer encounters $2V_1$ repulsion from the two occupied NN sites, this does not depend on which of the two sites of the dimer it occupies. The particle will therefore prefer to hop back and forth between these sites, rather than choosing a particular site to reside in, thereby lowering the energy of the system.
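This energetic argument reduces to a two-site problem: the particle pays the same static repulsion $2V_1$ on either dimer site, so delocalization still gains the full hopping energy. A minimal sketch (illustrative values of $t$ and $V_1$):

```python
import numpy as np

# Two-site dimer at 3/4 filling: each site feels the same static NN repulsion
# 2*V1 from the frozen occupied neighbors, so hopping -t between the two sites
# still lowers the energy by t relative to localizing on either site.
t, V1 = 1.0, 2.0
H = np.array([[2 * V1, -t],
              [-t, 2 * V1]])
E0 = np.linalg.eigvalsh(H)[0]
assert np.isclose(E0, 2 * V1 - t)   # bonding state: the dimer survives
assert E0 < 2 * V1                  # delocalized beats localized
```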
\begin{figure*}[t]
\centering
\includegraphics[width=0.328\linewidth,angle=0]{fig12a}
\includegraphics[width=0.328\linewidth,angle=0]{fig12b}
\includegraphics[width=0.328\linewidth,angle=0]{fig12c}
\caption{Pictorial description of the insulating structures corresponding to the density-plateaus in Fig.~\ref{fig:NNN_repulsion}\,c at (a) $\rho=1/8$, (b) $\rho=1/4$, and (c) $\rho=3/8$.}\label{fig:structure}
\end{figure*}
The above discussion motivates the consideration of the effect of NNN interactions, described by an additional term
\begin{align}
H_2=V_2\sum_{\langle\langle i,j\rangle\rangle} \hat{n}_i \hat{n}_j.
\end{align}
As depicted in Fig.~\ref{fig:NNN_repulsion}\,a, in the presence of weak NNN repulsion the variations of the order parameters with the chemical potential remain almost unchanged for a periodic honeycomb lattice. With open boundary conditions the plateau corresponding to the dimer insulator at $\rho=1/4$ splits into two parts (inset of Fig.~\ref{fig:NNN_repulsion}\,a) similarly to the non-interacting scenario. This demonstrates that the WTI phase is robust against small NNN repulsion values. Now, with the increase of the ratio $V_2/V_1$ the hopping process of the constituent particle of each dimer gets disrupted. Consequently, beyond some critical value of this ratio, it is energetically favorable for the particle to be localized at one of the two sites of the dimer instead of hopping back and forth. This way the particles can avoid the NNN repulsion felt between two neighboring dimers along the $y$-direction. Such a configuration is depicted in Fig.~\ref{fig:structure}\,b. In Fig.~\ref{fig:NNN_repulsion}\,b the dimer structure factor $S_D(0,\pi)$ and the structure factor $S(0,\pi)$ are plotted as a function of $V_2/V_1$. We can see that with increasing value of $V_2/V_1$, the dimer structure factor decreases from $0.25$ to a value close to zero, whereas the structure factor remains constant at $0.0625$. Since the dimers are destroyed for larger values of $V_2$, the dimer structure factor obviously decreases. Nevertheless, as the particle number in each $y$-layer is fixed, the structure factor remains unaltered. Hence, it can be concluded that the WTI transforms into a normal insulator (similar to the one in Fig.~\ref{fig:structure}\,b) for larger values of NNN repulsion.
As can be seen from Fig.~\ref{fig:NNN_repulsion}\,c, the NNN interactions are observed to stabilize additional insulating plateaus at fillings $1/8$, $3/8$, $5/8$ and $7/8$ for a periodic honeycomb lattice. At $\rho=1/8$ only half of the dimers in Fig.~\ref{fig:structure_insulators}\,a are formed, so that no NN or NNN repulsion is felt between two HCBs. The structure corresponding to this insulator is not unique. One of its possible structures is demonstrated in Fig.~\ref{fig:structure}\,a for an $8\times 8$ lattice. On the other hand, at filling-fraction $3/8$, half of the dimers of Fig.~\ref{fig:structure_insulators}\,a are occupied by an extra particle each, thus transforming a dimer into a pair of particles. Fig.~\ref{fig:structure}\,c depicts one of the many possible structures corresponding to this insulator. The dimers (pairs of HCBs) in this insulating phase are distributed in a way such that no two dimers (pairs of HCBs) are NNN of each other. Despite the repulsion felt by the HCBs, this is in fact the minimum energy configuration of the system at $\rho=3/8$. One should note that the insulator at density $5/8$ $(7/8)$ is exactly the same as the one at $\rho=3/8$ ($1/8$) once the even and odd layers, as well as particles and holes, are interchanged. The detailed characterization of these additional insulating plateaus would be interesting to pursue in the future.
\section{Conclusions}\label{sec:conclusion}
To summarize, we have studied HCBs in a periodic honeycomb lattice with NN hopping ($t$) and alternating positive and negative on-site potential ($W_0$) along different $y$-layers, using SSE QMC supported by analytical calculations. The system reveals the existence of three insulating phases for $W_0>t$: a CDW at $1/2$-filling and two dimer insulators at densities $1/4$ and $3/4$. Depending on the on-site potential pattern, one of the dimer insulators turns out to be a WTI, with a zero Chern number but a non-trivial Berry phase, which is protected by mirror symmetry and belongs to the mirror-symmetry class AI. The model can be effectively thought of as weakly coupled SSH chains, with the intermediate layers being either completely empty (for $\rho=1/4$) or fully occupied (for $\rho=3/4$). Although our study involves HCBs, it is important to note that the WTI phase persists in the case of fermions as well. Since the energy bands are well-separated in the regime $W_0>t$, the topological phase is insensitive to the exchange statistics of the constituent particles.
With recent advancements, optical lattices with ultra-cold atoms would be a perfect platform to realize our model experimentally. Experiments on hexagonal lattices in this framework have already been performed \cite{soltan2011multi,tarruell2012creating,uehlinger2013artificial,polini2013artificial}.
The on-site potential of the lattice sites can also be tuned in these experiments, making it possible to achieve the pattern required by our model. Additionally, measurements of the Berry phase \cite{atala2013direct} as well as the Chern number \cite{aidelsburger2015measuring} in an optical lattice setup have also been performed successfully. Thus, we believe that our model is a promising and interesting candidate to realize a weak topological insulating phase in optical lattice experiments. In addition, certain features of our model, including the band spectrum and the presence of edge states, could also be probed using driven-dissipative exciton-polariton microcavity lattices \cite{amo2016exciton}.
Besides the WTI phase, our model exhibits a rich phase diagram, which includes intriguing phases such as a bosonic Dirac semi-metal and a nodal-line semi-metal, among others.
Since the main focus of our current work is the WTI phase, a detailed study of the other novel phases is outside the scope of this paper. It would be interesting to investigate these phases in more detail and examine how these phases are affected by the presence of off-site interaction in the system. Furthermore, it would be worthwhile to analyze the dependence of the phase diagram on the on-site repulsion $U$, when the HCBs are replaced by soft-core-bosons, as well as the persistence of the WTI in the $U\to0$ limit.
\begin{acknowledgments}
This research was funded by the Israel Innovation Authority under the Kamin program as part of the QuantERA project InterPol, and by the Israel Science Foundation under grant 1626/16. AG thanks the Kreitman School of Advanced Graduate Studies for support. AG also thanks M. Sarkar and S. Nag for useful discussions.
\end{acknowledgments}
\section{Introduction}
A spectacular dynamical layer decoupling of the transport properties accompanied by different types of order parameters (charge, spin and superconducting) developing
together has been observed experimentally in the stripe-ordered (or nearly ordered) cuprate superconductor {La$_{2-x}$Ba$_x$CuO$_4$}
at zero external magnetic field\cite{li-2007,tranquada-2008}, in underdoped {La$_{2-x}$Sr$_x$CuO$_4$} at moderate magnetic fields, and in optimally-doped {La$_{2-x}$Ba$_x$CuO$_4$}
at low magnetic fields.\cite{wen-2011}
A sequence of phase transitions is seen in these materials, with the ``normal'' to charge-stripe ordered transition occurring first, followed by a spin-stripe order
transition with a lower critical temperature.
For instance, in {La$_{2-x}$Ba$_x$CuO$_4$} near doping $x=1/8$ a spectacular decoupling of the layers develops in transport measurements: the ratio of the c-axis
to the ab-plane resistivities, $\rho_c/\rho_{ab}$, becomes larger than $10^5$, growing quite rapidly at temperatures right below the
spin-ordering transition. At a critical temperature of the order of $T_c^{2D}\sim 20 K$ (depending on the precise doping) the copper oxide planes appear
to become superconducting while the c-axis transport remains resistive. The full three-dimensional resistive transition is seen only below $10 K$.
A superconducting state with a Meissner effect and (presumably) $d$-wave superconductivity is seen below $T_c^{3D}\sim 4K$. However, even though the critical temperature of the uniform $d$-wave superconducting state is much lower near $x=1/8$ than for other doping levels
(where it is typically $\sim 40 K$), the experiments show that the anti-nodal superconducting gap is essentially unsuppressed.\cite{valla-2006,he-2008}
A strikingly similar transport anisotropy has been observed very recently in the temperature-pressure phase diagram of the heavy fermion superconductor
CeRhIn$_5$.\cite{park-2011} In this strongly correlated material the orders that develop are conventionally identified as a spin-density wave metallic state
and a uniform $d$-wave superconductor. In the phase in which both orders coexist the ratio $\rho_c/\rho_{ab}$ becomes large ($ \sim 10^3$) with $\rho_{ab}$
eventually becoming unmeasurably small, as if the superconductivity became two-dimensional (as in the case of {La$_{2-x}$Ba$_x$CuO$_4$} near $1/8$ doping\cite{li-2007}).
The most unusual aspect of these experiments is not just the existence of multiple coexisting orders, but the dynamical layer decoupling seen in transport.
That is, the existence of a significant temperature range over which there is a form of two-dimensional superconductivity in the planes but which are otherwise
decoupled as if there was no Josephson effect between them.
Berg {\it et al}\cite{berg-2007,berg-2009a,berg-2009b,berg-2009c} showed that these seemingly contradictory results can be explained if one assumes that in
this state charge, spin and superconducting orders are not competing with each other but rather that they are intertwined, with the superconducting state in
the planes also being striped, {\it i.e.} it is a unidirectional {\em pair density wave} (PDW) with the property that the phase of the superconducting order
parameter alternates in sign, as if the axes of the $d$-wave order parameter were to rotate by $90^\circ$ (with vanishing average value for the superconducting
order parameter).
The order parameter for the PDW state is
\begin{equation}
\Delta_\text{PDW}(\vec x) = \Delta_{\vec{Q}_\text{PDW}}e^{\text{i} \vec Q_\text{PDW} \cdot\vec x} +\Delta_{-\vec{Q}_\text{PDW}}e^{-\text{i} \vec Q_\text{PDW}\cdot\vec x}
\label{eq:PDW}
\end{equation}
with the PDW ordering wave vector $\vec Q_\text{PDW}$ pointing in the direction normal to the stripes, and (in the LTT lattice structure of {La$_{2-x}$Ba$_x$CuO$_4$}) rotates by
$90^\circ$ from plane to plane. Translation invariance of the underlying system further dictates that the ordering wave vectors for spin
($\vec Q_\text{SDW}$), charge ($\vec Q_\text{CDW}$) and superconducting ($\vec Q_\text{PDW}$) order parameters obey the relation
$2\vec Q_\text{PDW}=2\vec Q_\text{SDW}=\vec Q_\text{CDW}$.
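For the commensurate case $Q_\text{PDW}=\pi$ (in lattice units), which is the one realized in the ladder results below, Eq.~\eqref{eq:PDW} with equal real amplitudes gives an order parameter that simply alternates in sign from site to site and averages to zero. A quick numerical illustration (the amplitude is chosen arbitrarily):

```python
import numpy as np

# Commensurate PDW with Q = pi and Delta_Q = Delta_{-Q} = Delta0:
# Delta(x) = Delta0 e^{i pi x} + Delta0 e^{-i pi x} = 2 Delta0 cos(pi x),
# which alternates sign on integer sites and has zero average.
Q, Delta0 = np.pi, 1.0
x = np.arange(8)
delta = Delta0 * np.exp(1j * Q * x) + Delta0 * np.exp(-1j * Q * x)
assert np.allclose(delta.imag, 0)
assert np.allclose(delta.real, 2 * Delta0 * (-1.0) ** x)  # sign-alternating
assert abs(delta.real.mean()) < 1e-12                     # vanishing average
```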
In this paper we show that PDW-type states do occur in the phase diagram of strongly correlated systems on two-leg ladders.
We focus on these systems since in this context the physics of strong correlations can be controlled and there are powerful bosonization and numerical
methods to investigate their phase diagrams and to compute their correlators. For reasons that will be explained below, states of this type generally
involve strong correlation physics which is much harder to control in two dimensions. Results obtained in one-dimensional systems can, with many caveats,
be used to develop at least qualitative theories of two-dimensionally ordered states. In this work we will use bosonization methods, which are accurate
at relatively weak coupling, to show how intertwined orders of this type develop in the two-leg ladder. This work is in many ways an extension of the results of
Ref.[\onlinecite{berg-2010}] that showed that the PDW state represents the spin-gapped phase (``Kondo-singlet'') of the Kondo-Heisenberg chain.
Non-uniform superconducting states were proposed by Fulde and Ferrell\cite{fulde-1964} and by Larkin and Ovchinnikov\cite{larkin-1964}, and are conventionally
called FFLO states. The superconducting component of the PDW state discussed above has the form of a Larkin-Ovchinnikov state.\cite{larkin-1964} The
Fulde-Ferrell state\cite{fulde-1964} has spiral order.
FFLO states have been proposed over the years for different types of superconducting materials, most recently in the phase diagram of the heavy fermion
superconductor CeCoIn$_5$ at finite magnetic fields,\cite{kenzelmann-2008} although the experimental evidence for them is weak (at best).
There are also recent theoretical proposals for FFLO-type states in cold atomic fermionic systems with unbalanced populations,\cite{radzihovsky-2009}
and in quantum wires of multi-valley semiconductors.\cite{datta-2009}
In the conventional theory of FFLO states one assumes a BCS-type system (a Fermi liquid) in which the spin up and down Fermi surfaces are split by the
Zeeman interaction with an external magnetic field. As a result the ordering wave vector of the FFLO states is tuned by the Zeeman interaction and,
hence by the magnitude of the magnetic field. A problem with this mechanism is that the usual nesting of the Fermi surface that leads to a BCS state
zero-momentum pairing is generally lost if the Fermi surfaces are split, and the superconducting states with finite-momentum pairing can only happen
for large enough attractive interactions, instead of being an infinitesimal instability as in the conventional BCS case.\cite{schrieffer-1964} FFLO phases driven by the Zeeman interaction (and hence with a spin imbalance and a broken $SU(2)$ spin symmetry) in quasi-one-dimensional systems, including ladders, were discussed in Ref. [\onlinecite{Roux-2007b}].
On the other hand the PDW state is the result of strong correlation physics, does not involve a Zeeman interaction, and does not occur at weak coupling.
In the perspective of Ref.[\onlinecite{berg-2009a}], the PDW state is another manifestation of the concept of frustrated phase separation which is behind the
development of inhomogeneous electronic states in strongly correlated materials,\cite{emery-1993} whose result are electronic liquid crystal
phases\cite{kivelson-1998} of which the PDW is a particularly interesting example. An understanding of the physics of this state should shed
light on the connection between electronic inhomogeneity and superconductivity.
A BCS mean-field theory of an extended Hubbard model in two dimensions has been developed by Loder and coworkers\cite{loder-2009,loder-2011}
who found that, as part of a rich phase diagram, the PDW is the ground state for large enough interactions. Earlier mean field theory results in the $t-J$ model
by Yang {\it et al}\cite{yang-2009} found that the PDW is energetically very close to being the ground state. These authors found that (within their mean field theory)
the ground state is a modulated $d$-wave superconductor, {\it i.e.} a state in which the uniform $d$-wave order parameter coexists with the PDW,
so that superconductivity and stripe order coexist.\cite{Jaefari-2010}
Several variational Monte Carlo calculations in the 2D $t-J$ model have found that this is a very competitive state near doping $x=1/8$ although
not quite the best variational ground state.\cite{himeda-2002,raczkowski-2007,capello-2008}
While these results are encouraging they suffer from the problem that either they use approximations that are not controlled in these strongly coupled regimes
(as in the mean field studies), or that the search yields the best variational state within a restricted class which would be adequate at weak coupling
but not in these regimes. Nevertheless these results indicate that from the point of view of local energetics the PDW state is very competitive and
would most likely be the ground state for some extension of the Hamiltonians that were studied.
A very recent and numerically intensive calculation using infinite projected entangled pair states (an extension of the density matrix renormalization group)
by Corboz {\it et al}\cite{corboz-2011} has found stripe states coexisting with superconductivity in the 2D $t-J$
model but has not yet investigated the presence (or absence) of a PDW state.
In this paper we revisit the problem of the phase diagram of extended Hubbard-Heisenberg type models on two-leg ladders using bosonization methods. Here we show
that the two-leg ladder has a phase that can be identified with a version of the PDW state, as well as another phase in which uniform superconducting order is present, and that there is a
continuous phase transition between these two phases. The two-leg ladder is an ideal model-system to study since it is well known from DMRG and bosonization results
to have broad regimes of coupling constants and doping in which the ground state has a spin-gap and $d$-wave superconducting correlations, as well as a strong
tendency to stripe order.\cite{noack-1994,noack-1997,white-1998a,white-2000} Using a weak coupling band terminology, the PDW state is stabilized when one of the
bands of the ladder, say the bonding band, is at special commensurate fillings.
While there are extensive bosonization studies of the phase diagram of two-leg ladders,\cite{wu-2003,giamarchi-2003} the existence of phases with
PDW order has not been previously investigated.
Here we show that there is a connection between the two-leg ladder and the Kondo-Heisenberg (KH) chain, a model of a 1D electron system interacting
with a spin-$1/2$ Heisenberg antiferromagnetic chain through a Kondo interaction.
In a series of papers, Zachar {\it et al.}\cite{zachar-1996, zachar-2001, zachar-2001b} showed the existence of a phase in which the correlations of the uniform
superconducting order parameter decay exponentially fast while the correlations of a staggered superconducting state (composite SC) have quasi-long-range order.
Berg {\it et al.}\cite{berg-2010} studied the KH problem using density matrix renormalization group (DMRG) calculations and showed that the spin-gapped
phase of the KH chain \cite{sikkema-1997} is a commensurate PDW state with $\Delta_{\vec{Q}} = \Delta^*_{-\vec Q}$.
Similarly, the PDW phases that we find in the Hubbard-Heisenberg model are also commensurate and have wave vector $Q_{PDW}=\pi$. In addition, and similarly to
what happens in the KH chain, the PDW order parameter is a composite operator of a triplet uniform SC order parameter of the anti-bonding band and the
antiferromagnetic (N\'eel) order parameter of the bonding band. Separately, these two order parameters have short range order in the PDW state.
In the PDW phase of the ladder system translation symmetry is broken spontaneously, whereas in the KH chain it is broken explicitly by the spacing
between the static spins. We discuss in detail the
quantum phase transition between the PDW state and the phase with uniform SC order, which is found to be in the universality class of the quantum Ising chain.
We also consider an extended Hubbard-Heisenberg model on a two-leg ladder with flux $\Phi$ per plaquette. An important difference of this ladder system is that for
general flux time reversal symmetry is broken explicitly, except for the special case of flux $\Phi=\pi$ per plaquette which is time reversal invariant. The presence of a
nonzero flux changes the band structure by doubling the number of Fermi points at which the bonding band crosses the Fermi energy. We have not explored in full the
complex phase diagram of this system except for the case of flux $\Phi=\pi$. Here we found that for generic fillings of the bonding band the system obeys an Umklapp
condition which leads to a spin-gapped state. We explored the phase diagram in this case and found that it generally supports two types of superconducting order,
uniform and PDW. In this system the PDW order parameters are bilinears in fermion operators, not composite operators as in the previous case. Here, as in
the conventional ladder and in the KH chain, there is no coexistence between PDW and uniform orders: when one order develops (which in 1D means power law
correlations) the correlators of the other order parameters decay exponentially at long distances. In addition to the SC phase we also found four incommensurate CDW
phases. The quantum critical behavior of this system is richer, reflecting the larger diversity of phases that we encountered. In particular, while the generic
quantum phase transitions are also in the universality class of the quantum Ising chain, for some special choices of parameters the symmetry associated with the quantum
critical behavior is enlarged to $U(1)$ and the transition is then in the universality class of a spinless Luttinger model.
Two-leg ladders with flux $\Phi$ per plaquette were studied (both analytically and using numerical DMRG methods) by Roux and coworkers\cite{Roux-2007} in their work
on diamagnetic effects in two-leg ladders, as well as by Carr and coworkers\cite{Narozhny-2005,Carr-2006} who used bosonization methods to study many aspects of the
phase diagram. However, these authors did not consider the case of flux $\Phi=\pi$ per plaquette, in which time reversal invariance plays a key role, nor the
problem of PDW phases, which (as we show here) occur generically at flux $\Phi=\pi$ and which are the focus of the present work.
This paper is organized as follows. In section \ref{sec:model} we introduce the model of the two-leg ladder and its effective field theory using bosonization methods.
We can draw the phase diagram using the microscopic parameters of the ladder in the weak coupling regime, where their relation with the coupling constants of the
bosonized theory is known explicitly. Although the form of the effective field theory does not change with the strength of the coupling constants, in more strongly coupled
regimes numerical methods must be used to establish this relation.
In section \ref{sec:half-filled} we present the bosonized theory of a ladder whose bonding band is half-filled. Here we show that this effective low energy theory has a
hidden self-duality.
In section \ref{sec:phase-diagram-half} we use renormalization group methods to determine the phase diagram for the case of a half-filled bonding band, and show that,
in addition to a Luttinger Liquid type phase, it also has two SC states, one with uniform SC order and the other with PDW order with wave vector $Q_\text{PDW}=\pi$. We
show that in this case there is a direct quantum phase transition between the phase with uniform SC order and the PDW phase which is in the universality class of the
quantum Ising chain.
In section \ref{sec:other-commensurabilities} we extend this analysis to regimes with a bonding band at other commensurate fillings. The resulting PDW phase with
wave vector $Q_\text{PDW}$ coexists (or is intertwined) with an also commensurate charge-density-wave (CDW) state in the bonding band with wave vector
$Q_\text{CDW}=Q_\text{PDW}/2$. Unlike the half-filled case, this state does not occur at weak coupling. In section \ref{sec:flux} we consider an extended
Hubbard-Heisenberg model on a two-leg ladder with flux $\Phi$ per plaquette. Here we show that the commensurate PDW phase arises naturally in this
frustrated band structure although through a different mechanism. The conclusions are presented in section \ref{sec:conclusions}.
The RG equations for the general case are presented in Appendix \ref{sec:RG-pi-flux}.
The solution of the effective field theory of the flux $\Phi=\pi$ model for special combinations of coupling constants and refermionization is given in
Appendix \ref{sec:refermionization-pi-flux}.
\section{Model of the Two-Leg Ladder and Effective Field Theory}
\label{sec:model}
Consider a model of the two-leg ladder whose Hamiltonian is $H=H_0+H_\text{int}$. The kinetic energy term is
\begin{align}
H_0=& -t\sum_{i,j,\sigma}\left\{ c^\dagger_{i,j, \sigma}c_{i,j+1,\sigma} + \text{h.c.} \right\} \nonumber\\
&-t_\perp \sum_{j,\sigma}\left\{ c^\dagger_{1,j, \sigma}c_{2,j,\sigma} + \text{h.c.} \right\}
\end{align}
with $t$ and $t_\perp$ being, respectively, the intra-leg and inter-leg hopping amplitudes, $i=1,2$ being the chain index, and $j$ being the lattice site index.
The interaction terms of the ladder Hamiltonian have the form of an extended
Hubbard-Heisenberg model,
\begin{align}
\begin{split}
H_\text{int}= &~ U\sum_{i,j} n_{i,j, \uparrow }n_{i,j, \downarrow } + V_\parallel \sum_{i,j} n_{i,j}n_{i,j+1}\\
+ V_\perp & \sum_{j} n_{1,j}n_{2,j} +V_d \sum_{j} \left\{ n_{1,j}n_{2,j+1}+n_{1,j+1}n_{2,j} \right\}\\
& + J_\parallel \sum_{i,j} \vec S_{i,j} \cdot \vec S_{i,j+1} + J_\perp \sum_{j} \vec S_{1,j} \cdot \vec S_{2,j} \\
&+ J_d \sum_{j} \left\{ \vec S_{1,j} \cdot \vec S_{2,j+1}+\vec S_{1,j+1} \cdot \vec S_{2,j} \right\}
\label{lattice-model}
\end{split}
\end{align}
where $U$ is the on-site Hubbard repulsion, $V_\parallel$, $V_\perp$ and $V_d$ are the nearest neighbor and next-nearest neighbor ``Coulomb'' repulsions,
and $J_\parallel$, $J_\perp$ and $J_d$ are the nearest and next-nearest neighbor exchange interactions.
In the weak coupling regime, $U,V,J \ll t,t_\perp$, we proceed by first diagonalizing the kinetic energy term $H_0$ and finding its low-energy spectrum.
This can be done by switching to the bonding and anti-bonding basis defined as (at each rung $j$ of the ladder and for each spin polarization $\sigma$)
\begin{equation}
c_{b,a}=\frac{1}{\sqrt{2}}(c_{2}\pm c_{1})
\label{eq:bonding-antibonding}
\end{equation}
In the new basis the kinetic term reads as
\begin{align}
H_0 = -\sum_{\eta=a,b} \sum_{j,\sigma}t_\eta \left\{ c^\dagger_{\eta,j,\sigma} c^{}_{\eta,j+1,\sigma} + \text{h.c.}\right\}
\end{align}
in which $b$ and $a$ stand for bonding and anti-bonding, and where $t_\eta=t \pm t_\perp$ for $\eta=b,a$ respectively.
\begin{figure}[hbt]
\includegraphics[width=0.4\textwidth]{half-filled-bonding.pdf}
\caption{Schematic picture of the bonding and anti-bonding bands. The bonding band $b$ is kept at half-filling.
The filling of the anti-bonding band $a$ is general. Here $k_{F,a}$ and $k_{F,b}=\frac{\pi}{2}$ are the Fermi points for each band.}
\label{fig: bonding-antibonding}
\end{figure}
In order to find the continuum limit representing
the low-energy and long-wavelength behavior of the model, we linearize the energy dispersion of each band of the ladder around the respective Fermi wave vector
\begin{equation}
\varepsilon_\eta(k) \approx E_F+v_\eta (k-k_{F,\eta})
\label{eq:dispersion-modes}
\end{equation}
where $v_\eta=2t_\eta\sin(k_{F,\eta})$ are the Fermi velocities, $k_{F,\eta}$ the Fermi wave vectors for each band, and $E_F$ is the Fermi energy.
We now consider the regime of small fluctuations close to the Fermi points of each band:
\begin{equation}
\frac{1}{\sqrt{a}}c_{\eta,j,\sigma} \rightarrow R_{\eta,\sigma}(x)e^{\text{i} k_{F\eta}x} + L_{\eta,\sigma}(x)e^{-\text{i} k_{F\eta}x},
\end{equation}
where $R$ and $L$ are right- and left-moving components of the electron field, $x=ja$ is the position, and $a$ is the lattice constant (the rung spacing). In this limit, the
kinetic term of the Hamiltonian takes the standard continuum form
\begin{equation}
H_0 = \sum_{\eta,\sigma} \int dx (-\text{i} v_\eta)\left\{ R^\dagger_{\eta,\sigma} \partial_x R_{\eta,\sigma} - L^{\dagger}_{\eta,\sigma} \partial_x L^{}_{\eta,\sigma} \right\}.
\end{equation}
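As a purely illustrative numerical aside (not part of the derivation), the band quantities entering the linearized theory can be tabulated in a few lines, following the conventions of this section, $t_\eta = t\pm t_\perp$ and $v_\eta = 2t_\eta\sin(k_{F,\eta})$, with $k_{F,b}=\pi/2$ for a half-filled bonding band; the numerical values of $t$, $t_\perp$, and $k_{F,a}$ below are arbitrary choices:

```python
import math

def band_params(t, t_perp, k_Fa):
    """Band parameters of the two-leg ladder in the bonding/anti-bonding basis.

    Follows the conventions of this section: effective hoppings
    t_b = t + t_perp (bonding), t_a = t - t_perp (anti-bonding),
    Fermi velocities v_eta = 2 t_eta sin(k_{F,eta}), with the bonding
    band held at half filling, k_{F,b} = pi/2."""
    t_b, t_a = t + t_perp, t - t_perp
    k_Fb = math.pi / 2.0                    # half-filled bonding band
    v_b = 2.0 * t_b * math.sin(k_Fb)        # bonding Fermi velocity
    v_a = 2.0 * t_a * math.sin(k_Fa)        # anti-bonding Fermi velocity
    return k_Fb, v_b, v_a

# e.g. t = 1.0, t_perp = 0.3, arbitrary anti-bonding Fermi wave vector k_Fa = 0.8
k_Fb, v_b, v_a = band_params(1.0, 0.3, 0.8)
```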
The most general continuum interacting Hamiltonian density (up to possible Umklapp processes that will be discussed below) compatible with charge conservation
and the global $SU(2)$ spin symmetry, built from scaling dimension two (marginal) operators, has the following form
\begin{align}
\begin{split}
{\cal H}_\text{int} =& \sum_{\eta=a,b} \left\{ f_{c1\eta} \left(J^2_{R\eta}+J^2_{L\eta}\right) + g_{c1\eta} \, J_{R\eta}J_{L\eta} \right\}\\
&+ \sum_{\eta=a,b} \left\{ f_{s1\eta} (\vec J^{~2}_{R\eta}+ {\vec J}^{~2}_{L\eta}) + g_{s\eta} \, \vec J_{R\eta} \cdot \vec J_{L\eta} \right\}\\
&+ f_{c2} \left(J_{Ra}J_{Rb}+J_{La}J_{Lb}\right)\\
&+ g_{c2} (J_{Ra}J_{Lb}+J_{La}J_{Rb}) \\
&+ f_{s2}(\vec J_{Ra}\cdot \vec J_{Rb} + \vec J_{La}\cdot \vec J_{Lb})\\
&+ g_{s2} (\vec J_{Ra}\cdot \vec J_{Lb}+\vec J_{La}\cdot \vec J_{Rb} )\\
&+ \lambda_t ( \vec \Delta^\dagger_b \cdot \vec \Delta_a + \text{h.c.} ) + \lambda_s( \Delta^\dagger_b\Delta_a + \text{h.c.})\label{smooth-continuum-H}
\end{split}
\end{align}
in which $J_{R/L,\eta}$ are the right and left moving components of the charge density $J_\eta=J_{R,\eta}+J_{L,\eta}$ for each band $\eta$
\begin{equation}
J_{R,\eta}= \sum_{\sigma} R^\dagger_{\sigma,\eta} R^{}_{\sigma,\eta}, \quad J_{L,\eta} = \sum_{\sigma} L^\dagger_{\sigma,\eta} L^{}_{\sigma,\eta}
\end{equation}
$\vec J_{R/L,\eta}$ are right and left moving components of spin density $\vec J_\eta = \vec J_{R,\eta} + \vec J_{L,\eta}$ for each band $\eta$,
\begin{equation}
\vec J_{R,\eta}= \frac{1}{2} \sum_{\alpha\beta} R^\dagger_{\alpha,\eta} \vec \sigma_{\alpha\beta}R^{}_{\beta,\eta},
\quad \vec J_{L,\eta} = \frac{1}{2}\sum_{\alpha\beta} L^\dagger_{\alpha,\eta} \vec\sigma_{\alpha\beta}L^{}_{\beta,\eta}.
\end{equation}
where the components of the vector $\vec \sigma$ are the three Pauli spin matrices, and
\begin{equation}
\Delta_\eta = L_{ \uparrow ,\eta} R_{ \downarrow ,\eta} - L_{ \downarrow ,\eta} R_{ \uparrow ,\eta}, \qquad \vec \Delta_\eta =
\sum_{\alpha,\beta} L_{\alpha,\eta}(\text{i} \vec\sigma \sigma_y)_{\alpha\beta} R_{\beta,\eta}
\end{equation}
are singlet and triplet pairing operators respectively for each band $\eta$.
In the weak coupling limit, the relation between the coupling constants of the continuum theory of Eq.\eqref{smooth-continuum-H} and the parameters of the Hamiltonian
of the microscopic lattice model of Eq.\eqref{lattice-model} can be found through the naive continuum limit procedure described above.
This has been done before and can be found, for example, in Ref.~[\onlinecite{wu-2003}].
We note that here by keeping only (naively) marginal operators we have neglected a host of irrelevant operators that are present in the lattice model
that do not change the form of the low energy theory (although change the definition of the coupling constants by small amounts).
In the intermediate to strong coupling limit, either non-perturbative methods such as the Bethe ansatz (applicable only if the system is integrable) or numerical
density-matrix renormalization group (DMRG) calculations are required to make a quantitative connection between the lattice model and the effective
continuum field theory that we will use below. Nevertheless, the form of ${\cal H}_\text{int}$ given above is general.
Here we will be interested in the case of a half-filled bonding band. This situation arises naturally as the total filling of the ladder is varied, without
breaking any symmetries of the ladder. We should note that if one wanted to specify the fillings of the bonding and anti-bonding bands separately,
it would be necessary to set a chemical potential difference between the two legs, which would make the ladder asymmetric.
At any rate, for a half-filled bonding band, in addition to the interactions presented in ${\cal H}_\text{int}$, we need to include the following (bonding band)
Umklapp process
\begin{equation}
{\cal H}_{u,b} = g_{u,b} ( L^\dagger_{b \uparrow }L^\dagger_{b \downarrow }R^{}_{b \downarrow }R^{}_{b \uparrow }e^{\text{i} 4k_{Fb}x} + \text{h.c.})
\end{equation}
The value of $g_{u,b}$ in the weak coupling regime is found to be given by
\begin{equation}
\frac{1}{a} g_{u,b} = \frac{1}{2}(U+V_\perp) - (V_\parallel+V_d) -\frac{3}{8} J_\perp+ \frac{3}{4} (J_\parallel+J_d)
\end{equation}
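As a quick numerical check, the weak-coupling expression above can be transcribed directly; the parameter values used in the usage example are arbitrary and only illustrate the relative signs of the contributions:

```python
def g_umklapp(U, V_par, V_perp, V_d, J_par, J_perp, J_d, a=1.0):
    """Weak-coupling Umklapp coupling g_{u,b} of the half-filled bonding band,
    transcribed term by term from the expression in the text."""
    return a * (0.5 * (U + V_perp)          # on-site and rung repulsion
                - (V_par + V_d)             # leg and diagonal repulsion
                - 0.375 * J_perp            # rung exchange, -3/8 J_perp
                + 0.75 * (J_par + J_d))     # leg/diagonal exchange, +3/4

# e.g. a pure-Hubbard point: only U nonzero gives g_{u,b} = U/2
g_hubbard = g_umklapp(4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
```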
We will furthermore assume that, for a substantial range of parameters of interest, ${\cal H}_{u,b}$ is marginally relevant for a half-filled bonding band.
We will return to this point in the next section.
\section{Analysis of a two-leg ladder system with a half-filled bonding band}
\label{sec:half-filled}
We will now consider in detail the case when the bonding band is half filled and its Fermi wave vector is $k_{Fb}=\pi/2$.
In this case there is a marginally relevant interaction representing the Umklapp process mentioned above. The main effect of this process is to open a
charge gap $\Delta_c$ in the bonding band.
Therefore, for energies much smaller than the charge gap, one can assume that the charge degrees of freedom of the bonding band $b$ are frozen out and
hence play no role in the low energy limit of the remaining degrees of freedom. Moreover, due to the charge gap in the bonding band,
all the interactions with net charge transfer between the bands, namely the singlet and triplet SC processes in Eq.~\eqref{smooth-continuum-H}, become irrelevant.
Therefore the only remaining charge degree of freedom, which is that of the anti-bonding band $a$, is decoupled from the rest of the dynamics,
and this being a one-dimensional system, the effective field theory of the charge sector of the anti-bonding band is described by the Luttinger Liquid (LL) theory.
In its bosonized form the effective Hamiltonian density for the charge sector involves the Bose field $\phi_c$ and its dual field $\theta_c$ of the anti-bonding
band $a$ only (where to simplify the notation we have dropped the band label) and reads as
\begin{equation}
{\cal H}_c = \frac{v_c}{2} \left\{ K_c(\partial_x\theta_c)^2+\frac{1}{K_c}(\partial_x\phi_c)^2 \right\}
\label{eq:Hc}
\end{equation}
where $K_c$ is the charge Luttinger parameter of the system and $v_c$ is the velocity. The fields $\phi_c$ and $\theta_c$ are dual to each other and satisfy the standard
equal-time canonical commutation relations
\begin{equation}
[\phi_c(x),\partial_x\theta_c(x')]=\text{i}\delta(x-x')
\end{equation}
which identifies $\Pi_c=\partial_x \theta_c$ with the canonical momentum.
All the possible remaining interactions between the degrees of freedom of both bands are in the spin sector. The most general interacting Hamiltonian for the spin sector,
which is symmetric under the exchange of the band index $\eta=a,b$, is
\begin{align}
{\cal H}_s &= - g_{s1}(\vec J_{Rb} \cdot \vec J_{Lb}+\vec J_{Ra} \cdot \vec J_{La})\\
&\qquad - g_{s2} (\vec J_{Rb}\cdot \vec J_{La}+\vec J_{Lb}\cdot \vec J_{Ra})
\end{align}
Following the standard bosonization procedure in one dimension, in terms of spin boson fields
\begin{equation}
\phi_{s\pm} = \frac{1}{\sqrt{2}}(\phi_{s,b} \pm \phi_{s,a})
\end{equation}
we arrive at the following form for ${\cal H}_s$
\begin{align}
&{\cal H}_{s} = \frac{v_{s\pm}}{2} \left[ K_{s\pm}(\partial_x \theta_{s\pm})^2 + K^{-1}_{s\pm}(\partial_x \phi_{s\pm})^2 \right] \label{eq:shamiltonian}\\
&+\frac{\cos(\sqrt{4\pi}\phi_{s+})}{2(\pi a)^2} \left[ g_{s1} \cos(\sqrt{4\pi}\phi_{s-}) + g_{s2} \cos(\sqrt{4\pi}\theta_{s-})\right] \nonumber
\end{align}
where the Luttinger parameters $K_{s\pm}$ and velocities $v_{s\pm}$ are related to $g_{s\pm} = (g_{s,1}\pm g_{s,2})/2$ as
\begin{equation}
K_{s\pm} = \sqrt{\frac{2\pi v_f+ g_{s\pm}}{2\pi v_f- g_{s\pm}}}, \qquad v_{s\pm} = \sqrt{v^2_f-\left(\frac{g_{s\pm}}{2\pi}\right)^2}.
\label{eq:Ks-}
\end{equation}
in which $v_f$ is the Fermi velocity of the noninteracting problem. The dual fields $(\theta_{s\pm},\phi_{s\pm})$ obey the same commutation relations as the dual fields in the
charge sector. In general $g_{s1}$ is different for each band,
$g_{s1,b}\neq g_{s1,a}$. This introduces terms involving operators of the form $ \partial_x \phi_{s+}\partial_x \phi_{s-}$ and $ \partial_x \theta_{s+} \partial_x \theta_{s-}$.
Although these
are marginal operators we will neglect them for now since we will later argue that the results are not essentially affected by these terms in phases with a spin gap.
In the absence of the spin gap, {\it i.e.} in the Luttinger Liquid phase, these operators change (among other things) the scaling dimensions of the
observables.\cite{emery-2000}
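The map of Eq.~\eqref{eq:Ks-} from the bare couplings to the spin-sector Luttinger parameters and velocities is easily evaluated numerically. The following minimal sketch assumes, as in the text, a common bare Fermi velocity $v_f$ for both sectors; the sample couplings are arbitrary:

```python
import math

def luttinger_spin(v_f, g_s1, g_s2):
    """Spin-sector Luttinger parameters K_{s±} and velocities v_{s±}
    from g_{s±} = (g_{s1} ± g_{s2})/2, following Eq. (K_{s±}) of the text."""
    params = {}
    for sign, g in (("+", 0.5 * (g_s1 + g_s2)), ("-", 0.5 * (g_s1 - g_s2))):
        K = math.sqrt((2 * math.pi * v_f + g) / (2 * math.pi * v_f - g))
        v = math.sqrt(v_f**2 - (g / (2 * math.pi))**2)
        params[sign] = (K, v)
    return params

# arbitrary sample couplings; g_{s-} = (g_s1 - g_s2)/2 = 0.15 > 0 here
params = luttinger_spin(1.0, 0.1, -0.2)
```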
Upon inspecting the Hamiltonian of the spin sector Eq. \eqref{eq:shamiltonian}, we see that it is invariant under the duality transformation
\begin{equation}
(\phi_{s-},\theta_{s-}, g_{s1},g_{s2},K_{s-})\rightarrow(\theta_{s-},-\phi_{s-},g_{s2},g_{s1},K^{-1}_{s-})
\label{eq:duality}
\end{equation}
Thus, this duality symmetry guarantees the existence of a dual phase associated with
the vanishing of the coupling constant $g_{s2}$. We will see that, in contrast to the PDW phase, which is controlled by the KH fixed point, in the dual phase
uniform SC is
the dominant instability. We will discuss the implications of this symmetry for the phase diagram later on.
In the $g_{s1}=0$ limit the Hamiltonian of Eq.\eqref{eq:shamiltonian} turns out to be the same as the effective field theory description of the continuum
limit of the one-dimensional
Kondo-Heisenberg (KH) chain\cite{zachar-2001,zachar-2001b,berg-2010} with nearest-neighbor Kondo spins. The KH chain is a system of a 1DEG
(which usually is taken to be non-interacting but this is not necessary) and a one-dimensional array of spin-$1/2$ degrees of freedom,
a 1D quantum antiferromagnet with exchange interaction $J_H$. The spacing between the spin degrees of freedom defines the unit cell of the chain and in
general it is not equal to the lattice spacing of the 1DEG. The coupling between the spin chain and the 1DEG is the Kondo exchange coupling $J_K$.
That these two problems have almost the same low energy effective field theory is not a coincidence, since there is
a formal analogy between the two problems. In both
cases we have a gapless 1DEG (the free fermion band of the KH problem playing the role of the 1DEG of the electrons in the anti-bonding $a$-band of the two-leg ladder system), which
is coupled to a Heisenberg spin-$1/2$ chain. It is known that the KH chain (regardless of the spacing between the Kondo spins)
has a broad regime of its phase diagram in which there is a spin gap.\cite{sikkema-1997} This phase has been identified with a commensurate
PDW phase.\cite{berg-2010} One difference between these two systems is that in the two-leg ladder the coupling is the tunneling matrix element $t_\perp$ whereas
in the KH case it is the local Kondo exchange $J_K$. However since the bonding band of the ladder has a charge gap the only active coupling allowed at low energies is
also the effective exchange coupling. Thus in the $g_{s1}=0$ limit both problems are the same. We will see below that in the regime with $g_{s1}=0$ the
parameters of the Hamiltonian flow under the renormalization group to a stable fixed point characterized by pair-density-wave correlations.
Also, in the $g_{s1}=0$ limit the system has an exponentially small gap (which can be determined from a mapping to the $SU(2)$ Thirring model~\cite{zachar-2001})
that is stable against small perturbations of the form we discussed above~\cite{emery-1979,gogolin-1998}.
However, there is an important qualitative difference between these two systems.
The Kondo-Heisenberg chain is translationally invariant only if the lattice spacing of the quantum antiferromagnet (the distance between the Kondo spins)
is the same as the lattice spacing of the 1DEG, whereas the ladder is a translationally invariant system in all cases. This will play an important role in our analysis.
\section{Phase Diagram of the system with a half-filled bonding band}
\label{sec:phase-diagram-half}
We will now discuss in detail the phase diagram and phase transitions of an extended Hubbard-Heisenberg model on a ladder in which one band,
the bonding $b$ band, is half filled; the resulting phase diagram is shown schematically in Fig.\ref{fig:SU(2)-RG}.
\begin{figure}[hbt]
\includegraphics[width=0.45\textwidth]{SU2-RG.pdf}
\caption{Schematic phase diagram when the bonding band of the ladder is half-filled,
shown as a projection of the $SU(2)$-invariant RG flows onto the $(g_{s1},g_{s2})$ plane.
The solid black lines represent the separatrix between different phases.
Due to the duality symmetry in the effective Hamiltonian for the spin sector, the phase diagram is symmetric around $g_{s-}=0$.
The quadrant $g_{s1},g_{s2} >0 $ flows into the Gaussian fixed point $g_{s1}=g_{s2}=0$ and is a Luttinger Liquid (LL) phase.
Region II is controlled by the PDW strong coupling fixed point at $(g_{s1}=0,g_{s2}=-\infty)$.
Region I is controlled by the decoupled spin-gapped fixed point at $(g_{s1}=-\infty,g_{s2}=0)$.
There is a KT transition from the LL behavior across the half-line $g_{s2} =0$, $g_{s1}>0$ to region II, and across the half-line
$g_{s1} =0$, $g_{s2}>0$ to region I.
The half-line $g_{s1}=g_{s2}<0$ represents a quantum phase transition between the two strong-coupling phases and is in the Ising universality class.}
\label{fig:SU(2)-RG}
\end{figure}
\subsection{Weak Coupling RG analysis}
\label{sec:RG}
The total effective Hamiltonian is ${\cal H}={\cal H}_c+{\cal H}_s$, where the Hamiltonian density for the charge sector ${\cal H}_c$ is given by Eq.\eqref{eq:Hc} and the
Hamiltonian density of the spin sector ${\cal H}_s$ is given by Eq.\eqref{eq:shamiltonian}. The total Hamiltonian has five parameters: $K_c$, $K_{s\pm}$, and
$g_{s\pm}$ (or equivalently $g_{s1}$ and $g_{s2}$). The Luttinger parameter of the charge sector does not renormalize since the charge sector decouples, while all the
parameters in the spin sector are subject to renormalization. The one-loop RG equations for the couplings in the spin sector are
\begin{subequations}
\begin{align}
&\frac{dK_{s+}}{dl} = -\frac{K^2_{s+}}{8\pi^2} \left( g^2_{s1}+g^2_{s2}\right),\\
&\frac{dK_{s-}}{dl} = \frac{1}{8\pi^2} g^2_{s2}-\frac{K^2_{s-}}{8\pi^2} g^2_{s1},\\
&\frac{dg_{s1}}{dl} = (2-K_{s+}-K_{s-}) g_{s1}, \\
&\frac{dg_{s2}}{dl} = (2-K_{s+} -\frac{1}{K_{s-}}) g_{s2}.
\end{align}
\label{RG-equations}
\end{subequations}
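As an illustrative aside, a simple Euler integration of the one-loop equations \eqref{RG-equations} already exhibits the two behaviors analyzed in the following subsections: for $SU(2)$-invariant initial data with $g_{s1}=0$, a positive $g_{s2}(0)$ flows back toward the Gaussian fixed point, while a negative $g_{s2}(0)$ runs off toward strong coupling. The step size, number of steps, and initial coupling are arbitrary illustrative choices:

```python
import math

def rg_flow(K_sp, K_sm, g1, g2, dl=1e-3, steps=20000, g_max=50.0):
    """Euler integration of the one-loop spin-sector RG equations of the text.

    (K_sp, K_sm) = (K_{s+}, K_{s-}); (g1, g2) = (g_{s1}, g_{s2}).
    Stops early if a coupling runs away to strong coupling."""
    for _ in range(steps):
        dK_sp = -(K_sp**2) * (g1**2 + g2**2) / (8.0 * math.pi**2)
        dK_sm = (g2**2 - (K_sm**2) * g1**2) / (8.0 * math.pi**2)
        dg1 = (2.0 - K_sp - K_sm) * g1
        dg2 = (2.0 - K_sp - 1.0 / K_sm) * g2
        K_sp += dl * dK_sp
        K_sm += dl * dK_sm
        g1 += dl * dg1
        g2 += dl * dg2
        if abs(g1) > g_max or abs(g2) > g_max:   # runaway flow
            break
    return K_sp, K_sm, g1, g2

g0 = 0.2
# SU(2)-invariant initial data with g_{s1}=0: K_{s-}(0) = 1/K_{s+}(0) = 1 - g_{s2}(0)/(4 pi).
# g_{s2}(0) > 0: g_{s2} shrinks toward the Gaussian (Luttinger liquid) fixed point.
_, _, _, g2_ll = rg_flow(1.0 / (1.0 - g0 / (4 * math.pi)), 1.0 - g0 / (4 * math.pi), 0.0, g0)
# g_{s2}(0) < 0: |g_{s2}| grows; the flow runs toward strong coupling (PDW regime).
_, _, _, g2_sc = rg_flow(1.0 / (1.0 + g0 / (4 * math.pi)), 1.0 + g0 / (4 * math.pi), 0.0, -g0)
```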
\subsection{Luttinger Liquid Phase}
\label{sec:LL}
We start with the case where $g_{s1}=0$ and $|g_{s2}(0)|$ is small. The analysis for this regime is similar to that of the Kondo-Heisenberg chain (with nearest
neighbor Kondo spins).\cite{zachar-2001,zachar-2001b,berg-2010}
In this regime the RG equations of Eq. \eqref{RG-equations} become
\begin{align}
\begin{split}
&\frac{dK^{-1}_{s+}}{dl} =\frac{dK_{s-}}{dl} = \frac{1}{8\pi^2} g^2_{s2},\\
&\frac{dg_{s2}}{dl} = (2-K_{s+} -\frac{1}{K_{s-}}) g_{s2}.
\end{split} \label{g1-reduced-eqns}
\end{align}
To guarantee $SU(2)$ invariance, these equations are subject to the initial conditions $K_{s-}(0) = K^{-1}_{s+}(0)\simeq1-g_{s2}(0)/4\pi$.
The relation $K_{s-} = K^{-1}_{s+}$ is an invariant of this RG flow. Upon implementing this constraint, the set of RG equations \eqref{g1-reduced-eqns} can be further
simplified to the Kosterlitz RG equations
\begin{align}
\frac{dx}{dl} = \frac{1}{8\pi^2}g_{s2}^2, \qquad \frac{dg_{s2}}{dl} = 2x g_{s2},
\end{align}
where $x=K_{s-}-1 \ll 1$.
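The combination $x^2-(g_{s2}/4\pi)^2$ is conserved by these Kosterlitz equations, which is what makes the separatrices $g_{s2}=\pm4\pi x$ invariant lines of the flow. A short numerical check (with arbitrary initial data placed on the $SU(2)$-invariant line, as an illustration only):

```python
import math

def kosterlitz_step(x, g, dl):
    """One Euler step of dx/dl = g^2/(8 pi^2), dg/dl = 2 x g."""
    return x + dl * g**2 / (8.0 * math.pi**2), g + dl * 2.0 * x * g

# start on the SU(2)-invariant separatrix g_{s2} = -4 pi x
x, g = 0.01, -4.0 * math.pi * 0.01
for _ in range(10000):
    x, g = kosterlitz_step(x, g, 1e-3)
# x^2 - (g/4pi)^2 should remain (numerically) zero along the flow
invariant = x**2 - (g / (4.0 * math.pi))**2
```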
The solution of these flow equations is well known. Here we will only be interested in the $SU(2)$ invariant trajectories of the RG flow, the separatrices
$g_{s2}=\pm 4\pi x$; the initial conditions above place the system on the branch $g_{s2}=-4\pi x$. Thus,
a point on the $SU(2)$ invariant trajectory with $x(0)=K_{s-}(0)-1 \simeq - g_{s2}(0)/4\pi < 0$ flows into the Gaussian (free field) fixed point
$g_{s1}=g_{s2}=0$, $K_{s+}=K_{s-}=1$, where the system is a (spin-$1/2$) Luttinger Liquid, which is gapless and hence scale invariant.
In other words, a small positive coupling $g_{s2}$ is marginally irrelevant: the fixed point at $(g_{s1}=0,g_{s2}=\infty)$ is perturbatively unstable and the
system flows to the Luttinger liquid fixed point along the $SU(2)$ invariant RG trajectory.
At this Luttinger liquid fixed point the correlations of
all three components of the triplet SC order parameter $\vec\Delta_a$ on the anti-bonding $a$-band and of the Spin Density Wave (SDW)
order on both bands decay as power laws.
This is the Luttinger Liquid phase shown in Fig.\ref{fig:SU(2)-RG}, and its correlators are standard. Since it has one gapless charge mode and two gapless spin modes it
is a C1S2 Luttinger state (in the terminology of Balents and Fisher\cite{balents-1996}).
\subsection{The PDW phase}
\label{sec:PDW-phase}
In contrast, points along the other branch of the $SU(2)$ invariant trajectory, with $x(0)=K_{s-}(0)-1 \simeq- g_{s2}(0)/4\pi > 0$, flow to the strong coupling fixed point
$(g_{s1}=0, g_{s2}=-\infty)$.
At this fixed point both $\cos(\sqrt{4\pi}\theta_{s-})$ and $\cos(\sqrt{4\pi}\phi_{s+})$ acquire non-vanishing expectation values, the fields are pinned at the values
$(\theta_{s-},\phi_{s+})=\frac{\sqrt{\pi}}{2}( N_{s-},N_{s+})$
(where $N_{s-}$ and $N_{s+}$ are both odd or both even integers), and their quantum fluctuations are gapped.
It is easy to see that the identity $\ev{\cos{\sqrt{4\pi}\theta_{s-}}} = \ev{\cos{\sqrt{4\pi}\phi_{s+}}}$ holds along the $SU(2)$ invariant trajectories. This observation will be
useful later.
This means that an extended Hubbard-Heisenberg model on a ladder, when one of its bands is half-filled, has a Kosterlitz-Thouless (KT) phase transition from the
gapless Luttinger Liquid phase to a spin-gap phase described by the same strong coupling fixed point as the spin-gap (``Kondo singlet'') phase of the
Kondo-Heisenberg chain (with nearest-neighbor Kondo ``impurities'').
However, at this strong coupling fixed point, observables that involve either of the dual fields $\phi_{s-}$ and $\theta_{s+}$ have vanishing expectation values and their
correlations decay
exponentially fast. This results in short range correlations for the singlet uniform superconducting order parameter on the anti-bonding $a$ band, $\Delta_a $, and of the
Charge Density Wave (CDW) order parameter, $O_\text{CDW}$ for both bands $a$ and $b$, which have the charge-spin factorized form
\begin{align}
\Delta_a &=\frac{1}{\pi a} e^{-\text{i}\sqrt{2\pi}\theta_{c,a}}\cos(\sqrt{2\pi}\phi_{s,a}),\nonumber\\
O_{\textrm{CDW},a/b} &=\frac{1}{\pi a} e^{-\text{i}\sqrt{2\pi}\phi_{c,a/b}}\cos(\sqrt{2\pi}\phi_{s,a/b})
\end{align}
where here $a$ is the lattice spacing.
For instance, in the case of the singlet uniform SC order parameter $\Delta_a$ for the anti-bonding $a$ band, in the $s\pm$ basis, its spin part is decomposed as
\begin{align}
\cos(\sqrt{2\pi}\phi_{s,a})= &\cos(\sqrt{\pi}\phi_{s-})\cos(\sqrt{\pi}\phi_{s+})\nonumber\\
+&\sin(\sqrt{\pi}\phi_{s-})\sin(\sqrt{\pi}\phi_{s+})
\end{align}
The presence of $\cos(\sqrt{\pi}\phi_{s-})$ and $\sin(\sqrt{\pi}\phi_{s-})$ guarantees exponentially decaying correlation functions. A similar form is also found for the CDW
operators of the bonding and anti-bonding bands.
However this does not imply that there are no long range correlations at this fixed point since there are composite order parameters built from products of these operators
which do have quasi long range order~\cite{zachar-2001}. To this end we define the order parameters for the PDW phase as the staggered part of the following product of
operators in the two bands\cite{berg-2010}
\begin{equation}
O = \vec\Delta_a\cdot\vec S_b =\vec\Delta_a\cdot\vec J_b+(-1)^{x/a}O_\text{PDW}
\end{equation}
where $\vec S_b = \vec J_b + (-1)^{x/a}\vec N_b$, $\vec J_b$ is the total spin density vector of the bonding $b$ band, and $\vec N_b$ is the N\'eel order parameter also
for the bonding $b$ band. The explicit bosonized expression for the pair-density-wave order parameter $O_\text{PDW}$ is
\begin{align}
&O_\text{PDW} = \vec\Delta_a\cdot\vec N_b=\frac{1}{2(\pi a)^2} \cos(\sqrt{2\pi}\phi_{c,b})e^{-\text{i}\sqrt{2\pi}\theta_{c,a}}\nonumber\\
&\qquad\times \left[2 \cos(\sqrt{4\pi}\theta_{s-}) +\cos(\sqrt{4\pi}\phi_{s-}) - \cos(\sqrt{4\pi}\phi_{s+})\right].
\label{O-PDW}
\end{align}
It is easy to see that the spin part of $O_\text{PDW}$ has a nonzero expectation value in this phase. Therefore, in spite of the fact that the correlation functions of the
individual N\'eel (or SDW) order parameter of the bonding $b$ band and of the uniform triplet SC of the anti-bonding $a$ band decay exponentially, the correlation
function of their product, the PDW order parameter $O_\text{PDW}$, exhibits power law correlations:
\begin{equation}
\ev{O^{}_\text{PDW}(x)O^{\dagger}_\text{PDW}(0)} \sim {\cal C}^2_c {\cal C}^2_s |x|^{-2/K_{c,a}}
\end{equation}
in which ${\cal C}_c = \ev{\cos(\sqrt{2\pi}\phi_{c,b})}$ and ${\cal C}_s = \ev{\cos(\sqrt{4\pi}\theta_{s-})}$. Therefore $O_\text{PDW}$ has quasi long range order with
exponent $2K^{-1}_{c,a}$. The very same argument holds for the composite operator obtained from the product of the CDW order parameter on the bonding $b$ band
and the uniform singlet SC on the anti-bonding $a$ band. As in the $O_\text{PDW}$ case, in the $g_{s1}=0$ limit the individual CDW on the bonding $b$ band and the
uniform singlet SC on the anti-bonding $a$ band both decay exponentially fast, but their product has quasi long range order with the same exponent as the PDW order
parameter. This is the Region II phase shown in Fig.\ref{fig:SU(2)-RG}.
\subsection{Uniform Superconducting Phase}
\label{sec:uniform-sc-phase}
Let us now consider the opposite limit, $g_{s2}=0$. Here one can repeat an RG analysis similar to that of the $g_{s1}=0$ limit (or make use of the duality symmetry hidden
in the problem, Eq.\eqref{eq:duality}) to show that a small coupling $0 < g_{s1}(0) \ll 1$ will be renormalized back to $g_{s1}=0$ along the $SU(2)$ invariant
trajectory, and we are again in the Luttinger Liquid phase found before. Therefore, just like the fixed point at $(g_{s1}=0,g_{s2}=\infty)$ in the former regime, the fixed
point $(g_{s1}=\infty, g_{s2}=0)$ is also not accessible in this case. This means that the components of the uniform triplet SC and the N\'eel (SDW) order parameters have
power law correlations.
However, assuming again $SU(2)$ invariance, in a system with $g_{s2}=0$ a small negative $g_{s1}<0$ will flow to the strong coupling fixed point
$(g_{s1}=-\infty,g_{s2}=0)$. This is the fixed point dual to the $(g_{s1}=0,g_{s2}=-\infty)$ fixed point under the duality transformation of Eq.\eqref{eq:duality}.
In this phase the expectation values $\ev{\cos(\sqrt{4\pi}\phi_{s\pm})}$ are now nonzero and observables that are
functions of $\phi_{s\pm}$ have long-ranged correlations. Instead, $\ev{\cos(\sqrt{4\pi}\theta_{s-})}=0$ and its fluctuations
have short-ranged correlations. At this strong coupling fixed point the semi-classical expectation values of $\phi_{s,a}$ and $\phi_{s,b}$ are such that
$\ev{\cos(\sqrt{4\pi}\phi_{s+})}\times \ev{\cos(\sqrt{4\pi}\phi_{s-})} >0$ and hence $\sqrt{4\pi}\phi_{s+}=\sqrt{4\pi} \phi_{s-}=0,\pi \mod{2\pi}$.
In a phase controlled by this fixed point the two sectors, $\pm$, have separate spin gaps.
Furthermore, in this regime the expectation value $\langle \cos(\sqrt{2\pi}\phi_{s,a})\rangle \neq 0$ is nonzero, and therefore in the phase controlled
by this fixed point the uniform singlet SC on the anti-bonding $a$ band has quasi long range order,
\begin{equation}
\ev{\Delta^{}_a(x)\Delta_a^\dagger(0)} \sim {\cal C}^2_{s,a} |x|^{-2/K_{c,a}},
\end{equation}
where ${\cal C}_{s,a} = \ev{\cos(\sqrt{2\pi}\phi_{s,a})}$ in this phase. By the same argument, the CDW order parameter of the anti-bonding $a$ band has
quasi long range order as well,
\begin{equation}
\ev{ O^{a}_\text{CDW}(x){O^{a}}_\text{CDW}^\dagger(0)} \sim {\cal C}^2_{s,a} |x|^{-2K_{c,a}}.
\end{equation}
Given the repulsive nature of the interactions the Luttinger parameter obeys $K_{c,a}<1$, and therefore SC fluctuations are the dominant fluctuations of this phase.
Thus, unlike the PDW phase, this phase has dominant uniform SC correlations, although we note that there are also subdominant
CDW correlations.
On the other hand we will now see that the correlation function of $O_\text{PDW}$ decays exponentially fast in this phase. Similar to the previous discussion, it is easy to
see that $\ev{\cos(\sqrt{4\pi}\phi_{s+})} = \ev{\cos(\sqrt{4\pi}\phi_{s-})}$ for a SU(2) invariant RG flow.
Therefore, looking back at the structure of the PDW order parameter given in Eq.\eqref{O-PDW}, we see that the expectation values of $\cos(\sqrt{4\pi}\phi_{s\pm})$ cancel
each other out at this fixed point. Hence, the expectation value of the spin part of $O_\text{PDW}$ vanishes exactly in this phase, since $\ev{\cos(\sqrt{4\pi}\theta_{s-})} =0$.
Moreover the two-point correlation function of $O_\text{PDW}$ has to be proportional to that of the vertex operator $ \cos(\sqrt{4\pi}\theta_{s-})$,
whose correlations decay exponentially fast. Consequently the correlations of $O_\text{PDW}$ are short-ranged.
On the other hand, since there are independent spin gaps on both bands, the product of the SC order parameter on the $a$ band and the CDW order parameter on the $b$ band still has quasi long range order. This product therefore has similar correlations in both phases, and it cannot be used as an order parameter to distinguish them. Hence $O_\text{PDW}$ is the unique order parameter that distinguishes the state with PDW order from the state with uniform SC order in the spin-gapped regime of the two-leg ladder with one band kept at half-filling. This is the Region I phase shown in Fig.\ref{fig:SU(2)-RG} and it is identified with a phase with uniform superconductivity.
\subsection{The PDW-uniform SC Quantum Phase Transition}
\label{sec:PDW-phase-transition}
We will now discuss the quantum phase transition between the state with uniform SC order (with power law correlations) and the PDW state. To this end we first note that
both PDW and uniform SC in their associate phases have quasi long range order with the same exponent of $2/K_{c,a}$. This happens since the exponents are
controlled in both cases by the decoupled charge degree of freedom left in the system. However the two phases are distinguished by the fact that one SC state is
staggered (the PDW state) while the other is uniform. Therefore by symmetry the phase transition between these two states is similar to a transition from a
translationally invariant state to a state in which translation symmetry is spontaneously broken down to a discrete $\mathbb{Z}_2$ subgroup. So we expect this transition to be in the Ising universality
class. However, as we will see, the way this happens is actually rather subtle.
To discuss the nature of the phase transition between Region I and Region II of Fig.\ref{fig:SU(2)-RG} we need to look at the $g_{s1}=g_{s2}$ line in the parameter
space. The duality symmetry of the Hamiltonian implies that the phase diagram must be symmetric under it. We will now see that there is a direct quantum phase
transition at the self-dual line, the half-line that separates Region I from Region II in Fig.\ref{fig:SU(2)-RG}.
From the first RG equation in Eq.\eqref{RG-equations}, it is clear that the Luttinger parameter flows to zero, $K_{s+} \rightarrow 0$, whenever either $g_{s1}$ or $g_{s2}$
is relevant. Furthermore, along the $g_{s1}=g_{s2}$ line the system flows to a new strong coupling fixed point with $\ev{\cos(\sqrt{4\pi}\phi_{s+})}\neq0$ and the field
$\phi_{s+}$ is pinned. At the strong coupling fixed point the effective bosonized Hamiltonian can be further simplified to the following
\begin{align}
{\cal H}_\text{eff} =& \frac{v_{s-}}{2} \left\{ K^{}_{s-}(\partial_x\theta_{s-})^2+K_{s-}^{-1}(\partial_x\phi_{s-})^2 \right\}\nonumber\\
&+\frac{\mu_\phi}{\pi} \cos(\sqrt{4\pi}\phi_{s-}) + \frac{\mu_\theta}{\pi} \cos(\sqrt{4\pi}\theta_{s-}), \label{eq:effective-hamiltonian}
\end{align}
where the new couplings $\mu_{\phi/\theta}$ are related to the $g_{s}$'s as
\begin{equation}
\mu_{(\phi,\theta)} = \frac{g_{s(1,2)}}{2\pi } \; \ev{\cos(\sqrt{4\pi}\phi_{s+})}.
\end{equation}
This system was discussed extensively by Lecheminant {\it et al.}\cite{lecheminant-2002}, who showed that at $K_{s-}=1$ the resulting ${\cal H}_\text{eff}$ of
Eq.\eqref{eq:effective-hamiltonian} can be re-fermionized in terms of two sets of chiral Majorana fermions. For $\mu_\theta=\mu_\phi$, {\it i.e.} along the self-dual line of
our problem, one of the chiral Majorana fermion pairs becomes massless and its mass changes sign across this phase transition. Therefore, the phase boundary
between the phases corresponding to $g_{s1}$ and $g_{s2}$ is in the same universality class as the quantum Ising chain, a theory of a (non-chiral) massless
Majorana fermion. We should point out that the expression of the Ising order (and disorder) operators in terms of the bosonized fields of the ladder is highly non-local.
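Schematically, and up to signs and normalization factors which depend on conventions, the refermionization at $K_{s-}=1$ identifies the Bose field pair $(\phi_{s-},\theta_{s-})$ with two Majorana fermions $\xi^1,\xi^2$ such that
\begin{equation}
\cos(\sqrt{4\pi}\phi_{s-}) \sim \text{i}\left(\xi^1_R\xi^1_L+\xi^2_R\xi^2_L\right), \qquad
\cos(\sqrt{4\pi}\theta_{s-}) \sim \text{i}\left(\xi^1_R\xi^1_L-\xi^2_R\xi^2_L\right),
\end{equation}
so that the two Majorana masses are $m_{1,2}\propto \mu_\phi \pm \mu_\theta$. At the self-dual point $\mu_\phi=\mu_\theta$ one of the two masses vanishes and a single non-chiral Majorana fermion becomes massless, which is precisely the content of the Ising critical point.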
This question can also be addressed directly through the RG equations. Indeed along the self-dual line they reduce to
\begin{align}
\begin{split}
&\frac{dK_{s-}}{dl} = \mu_\theta^2-K_{s-}^2\mu_\phi^2 ,\\
&\frac{d\mu_\phi}{dl} = (2-K_{s-}) \mu_\phi, \\
&\frac{d\mu_\theta}{dl} = (2-\frac{1}{K_{s-}}) \mu_\theta.
\end{split}
\end{align}
Starting with $K_{s-}(0)=1$ and $|\mu_\theta(0)|>|\mu_\phi(0)|$, we see that the RG flows to the fixed point of $(|\mu_\theta| =\infty, \mu_\phi=0, K_{s-}=\infty)$ which is the
same as the PDW fixed point. In contrast, if we start with $K_{s-}(0)=1$ and $|\mu_\phi(0)|>|\mu_\theta(0)|$, the RG flow will take us to
$(\mu_\theta=0,|\mu_\phi|=\infty, K_{s-}=0)$, i.e. the uniform SC fixed point. The trajectory with the initial condition $K_{s-}(0)=1$ and $g_{s1}(0)=g_{s2}(0)<0$ ends
at the Ising critical point with $g_{s1}=g_{s2}=-\infty$ and $K_{s-}=1$, while $g_{s1}=g_{s2}>0$ flows to the Gaussian fixed point.
Fig.\ref{fig:SU(2)-RG} shows the results of a numerical calculation of all the $SU(2)$-invariant RG flows projected onto the $(g_{s1},g_{s2})$ plane. Solid black lines
represent the separatrix between different phases. The low energy behavior of all the models with $SU(2)$-invariance which satisfy $g_{s1},g_{s2}>0$ is controlled by the
Gaussian fixed point. The rest of the flows, except for the half-line $g_{s1}=g_{s2}<0$, will end either at $(g_{s1}, g_{s2})=(-\infty,0)$ or at $(g_{s1},g_{s2})=(0,-\infty)$
depending on the initial values of the couplings. The RG flow pattern is symmetric around the $g_{s1}=g_{s2}$ line as dictated by the duality symmetry.
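The flow pattern summarized in Fig.\ref{fig:SU(2)-RG} can be reproduced with a very simple numerical integration. The following is a minimal sketch (it is not the calculation used to produce the figure; the initial couplings, step size, and stopping criterion are illustrative choices) that integrates the reduced RG equations written above with a forward-Euler step:

```python
# Minimal Euler integration of the reduced RG equations along the self-dual sector:
#   dK/dl     = mu_th^2 - K^2 mu_ph^2
#   dmu_ph/dl = (2 - K)   mu_ph
#   dmu_th/dl = (2 - 1/K) mu_th
# starting from K(0) = 1, to see which strong-coupling fixed point wins.

def rg_flow(mu_ph, mu_th, K=1.0, dl=1e-3, cutoff=1.0, lmax=50.0):
    l = 0.0
    while abs(mu_ph) < cutoff and abs(mu_th) < cutoff and l < lmax:
        dK = mu_th**2 - (K * mu_ph)**2
        dph = (2.0 - K) * mu_ph
        dth = (2.0 - 1.0 / K) * mu_th
        K += dl * dK
        mu_ph += dl * dph
        mu_th += dl * dth
        l += dl
    return K, mu_ph, mu_th

# |mu_th| > |mu_ph|: runaway toward (|mu_th| -> inf, K -> inf), the PDW fixed point
K1, p1, t1 = rg_flow(mu_ph=-0.1, mu_th=-0.2)
# |mu_ph| > |mu_th|: runaway toward (|mu_ph| -> inf, K -> 0), the uniform SC fixed point
K2, p2, t2 = rg_flow(mu_ph=-0.2, mu_th=-0.1)
print(K1 > 1.0, abs(t1) > abs(p1))   # PDW side
print(K2 < 1.0, abs(p2) > abs(t2))   # uniform SC side
```

With $|\mu_\theta(0)|>|\mu_\phi(0)|$ the flow runs away toward the PDW fixed point ($K_{s-}$ grows), while with $|\mu_\phi(0)|>|\mu_\theta(0)|$ it runs toward the uniform SC fixed point ($K_{s-}$ decreases), in agreement with the discussion above.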
\section{Bonding band at general commensurate fillings}
\label{sec:other-commensurabilities}
We will now discuss the case of a ladder with a bonding band at other commensurate fillings.
To understand this case it is useful to first recall the physics of the simpler problem of the extended repulsive one-band Hubbard model at quarter filling, $k_F = \pi/4$.
This system has a quantum phase transition at a critical value of the nearest neighbor ``Coulomb'' interaction between a Luttinger liquid and a commensurate
insulating CDW state. An intuitive classical picture of the ground state of such a system is as if the electrons occupy every other site on the lattice
with their spins arranged in a ``stretched'' N\'eel antiferromagnetic state, as in an antiferromagnetic Heisenberg chain with twice the lattice constant.
In this regime, the charge sector of the Hubbard chain is gapped (and hence insulating).
This phase transition is driven by a higher order Umklapp interaction, which appears at third order in perturbation theory and stabilizes this period $2$ CDW state.
As is well known, and easy to see, in the bosonized form of this system the Umklapp term for the $1/4$ filled band has the form
(see, {\it e.g.} Ref.[\onlinecite{giamarchi-2003}] and references therein)
\begin{equation}
{\cal H}_{u,1/4}=g_{1/4} \cos(4\sqrt{2\pi}\phi_c )
\end{equation}
The scaling dimension of this Umklapp operator is $8K_c$, where $K_c$ is the charge Luttinger parameter of the extended Hubbard chain.
Therefore, this Umklapp process is relevant for $K_c < 1/4$, which always lies in the intermediate to strong coupling repulsive regime.
Although in this regime the bosonization formulas that relate the {\em parameters} of the microscopic model (in its naive continuum limit)
to those of the bosonized theory are no longer accurate, the effective low energy bosonized theory retains its {\em form}, as it is dictated by symmetry.
The main problem is that the connection between the Luttinger parameter(s) and the microscopic parameters is more complex due to finite renormalizations
induced by the irrelevant operators. In practice this relation must be (and is) determined from numerical calculations on the microscopic model.
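As a consistency check, in the convention in which the half-filling Umklapp operator $\cos(\sqrt{8\pi}\phi_c)$ has scaling dimension $2K_c$, the dimension of the general commensurate vertex operator is
\begin{equation}
\Delta\left[\cos\left(2n\sqrt{2\pi}\phi_c\right)\right] = \frac{K_c}{4\pi}\left(2n\sqrt{2\pi}\right)^2 = 2n^2 K_c,
\end{equation}
which is relevant when $2n^2K_c<2$, {\it i.e.} for $K_c<1/n^2$; the case $n=2$ reproduces the quarter-filling condition $K_c<1/4$, with marginality at $K_c=1/4$.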
For a system on a two-leg ladder one can pursue a similar line of reasoning and use the fact that, for certain fillings, there is a similar Umklapp process for the bonding
band (for instance). However, just as in the case of the extended Hubbard chain, here too
the couplings corresponding to these Umklapp terms in the strong coupling effective theory cannot be easily related to the microscopic parameters of
the original lattice model, and determining them requires serious numerical work. Nevertheless, such Umklapp processes still exist and should eventually become relevant.
At this value of the parameters, where $K_c=1/4$, the Umklapp process for the bonding band becomes marginally relevant, and the system has a
Kosterlitz-Thouless phase transition to a period $2$ commensurate CDW state that coexists with antiferromagnetic order, with power law correlations in the spin sector.
We can now follow the same approach we used for the case of a half-filled bonding band of the preceding sections to determine the phase diagram. We will not present
the details here but give the main results. As before, the phase diagram in general has three phases: a) a Luttinger Liquid phase (similar to the one discussed by
Wu {\it et al}\cite{wu-2003}), b) a phase with uniform superconducting order (and hence a spin gap), and c) a PDW phase (also with a spin gap).
However, the ordering wave vector of the PDW state is now $Q_\text{PDW}=\pi/2$ (instead of $Q_\text{PDW}=\pi$ for the case discussed in the preceding sections).
In this PDW state there is a ``composite'' CDW quasi ordered state (with power-law correlations), with degrees of freedom partially on the anti-bonding band,
in this case with wave vector $Q_\text{CDW}^a=2k_{F,a}+Q_\text{PDW}$.
Thus the resulting PDW phase is similar to the one discussed in Refs. [\onlinecite{zachar-2001b,berg-2010}].
In contrast, the phase with uniform SC order has a CDW state that develops on the anti-bonding band alone and has a conventional $2k_{F,a}$
ordering wave vector. In spite of these differences with the case of the half-filled bonding band, the quantum phase transition between the PDW phase
and the phase with uniform SC order for the quarter filled bonding band is also in the Ising universality class.
A state with very similar properties was found in the Kondo-Heisenberg chain for a Kondo lattice with period $2$ (see Ref.[\onlinecite{berg-2010}]). However, while in the
KH chain translation invariance is broken explicitly by the assumed fixed spacing of the Kondo spins, in the case of the ladder translation invariance is broken
spontaneously at the Kosterlitz-Thouless quantum phase transition we just discussed. Note that the spin gap (and the PDW state as well) can only develop once this
CDW state is formed. In this sense this is an example of {\em intertwined} (as opposed to {\em competing}) orders in the sense discussed by
Berg {\it et al}.\cite{berg-2007,berg-2009b}
This line of argument can, naturally, also be extended to states in which the bonding band has other fillings, such as $1/(2n)$ for $n=1,2,\ldots$, by considering Umklapp
terms of the form $g_{\frac{1}{2n}}\cos(2n\sqrt{2\pi}\phi_c)$. Although such terms will generally be present, their effects become relevant only for $K_c<1/n^2$, which lies
deep in the extreme repulsive regime for $n\geq2$. Thus, unless the system has substantial interactions beyond nearest neighbors, the resulting higher order
commensurate CDW and PDW states will be quite weak and difficult to stabilize.
\section{PDW state in an extended Hubbard-Heisenberg model on a two-leg ladder with $\Phi$ flux per plaquette}
\label{sec:flux}
We will now introduce and investigate another ladder model which we can show has a PDW phase. More specifically we will consider an extended
Hubbard-Heisenberg model on a two-leg ladder with flux $\Phi$ per plaquette (in units of the flux quantum $\phi_0=hc/e$).
As usual the flux is introduced by a Peierls substitution which here reduces to assigning a phase $\pm \Phi/2$ to the electron hopping matrix elements along the two legs
which now become complex and are complex conjugate of each other,
$t^{}_1=t^*_2$, where $t_{1,2}$ are the hopping amplitudes on top and bottom leg (see Fig.\ref{fig:double-minima} a).
In addition to the hopping along the legs, we assume a real hopping amplitude along the rungs.
The free part of the Hamiltonian of this system is
\begin{align}
H_0=&-t \sum_j \left( e^{\text{i}\Phi/2}c^\dagger_{1,j+1} c^{}_{1,j} + e^{-\text{i}\Phi/2}c^\dagger_{2,j+1} c^{}_{2,j} + \text{h.c.}\right)\nonumber \\
& -t_\perp\sum_j \left( c^\dagger_{1,j} c^{}_{2,j} +c_{2,j}^\dagger c_{1,j}\right)
\end{align}
in which $i=1,2$ is the chain index referring to the top and bottom chains respectively and $j\in {\mathbb Z}$ denotes the lattice sites. To the best of our knowledge this
electronically frustrated system has not been discussed previously. The interaction terms, $H_{\rm int}$ that we will consider are the same as in the conventional ladder
system and are given in Eq.\eqref{lattice-model}.
We will see here that this model has a very rich phase diagram, which we will not explore completely. However we will show that PDW phases occur naturally, although
through a rather different mechanism than the one we found in the conventional ladder discussed in the previous sections.
We will discuss this problem using the same bosonization methods we used above.
We begin by constructing an effective field theory. In momentum space the free fermion part of the Hamiltonian becomes
\begin{widetext}
\begin{equation}
H_0 = \int_{-\pi}^\pi \frac{dk}{2\pi} \Big[-2t \cos(k+\Phi/2)c^\dagger_1(k)c^{}_1(k)
-2t \cos(k-\Phi/2)c^\dagger_2(k)c^{}_2(k)
-t_\perp( c^\dagger_1(k)c^{}_2(k) +c_2^\dagger(k)c_1(k))\Big]
\end{equation}
\end{widetext}
\begin{figure}[hbt]
\begin{center}
\subfigure[]{\includegraphics[width=0.35\textwidth]{flux-ladder.pdf}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{double-minima.pdf}}
\end{center}
\caption{a) A two-leg ladder with flux $\Phi$ (in units of the flux quantum) per plaquette. Here $1$ and $2$ label the two legs, $j$ and $j+1$ label two consecutive rungs. The hopping amplitudes are $t_1= t\; e^{\text{i}\Phi/2}$, $t_2=t \; e^{-\text{i} \Phi/2}$ and $t_\perp$.
b) Schematic plot of the dispersion relation of the bonding band $E_b(k)$ for a non-vanishing flux per plaquette, $\Phi\neq 0$.}
\label{fig:double-minima}
\end{figure}
For $t_\perp=0$ the band structure consists of right and left branches centered at $k=\Phi/2$ and $k=-\Phi/2$ respectively. For $t_\perp\neq 0$ a full gap opens up at the
crossing point of the two bands and bonding and anti-bonding bands form.
The Hamiltonian is diagonalized if we switch to the new basis defined by the orthogonal transformations $c_\eta (k)= \sum_{i} M_{\eta,i}(k) c_i(k)$ with the
transformation matrix $M(k)$:
\begin{equation}
M_{\eta,i}(k)=
\begin{pmatrix}
\cos(\xi(k)/2)& -\sin(\xi(k)/2)\\
\sin(\xi(k)/2)& \cos(\xi(k)/2)
\end{pmatrix}
\end{equation}
where $\eta=a,b$ stands for the bonding and anti-bonding bands, $i=1,2$ labels the legs of the ladder, and $\xi$ is defined as
\begin{align}
\sin(\xi(k)/2) =& \frac{u(k)}{\sqrt{1+u^2(k)}}\nonumber\\
\cos(\xi(k)/2) = &\frac{1}{\sqrt{1+u^2(k)}}
\end{align}
in which
\begin{equation}
t_\perp u(k) =2t\sin(\Phi/2)\sin(k) + \sqrt{(2t\sin(\Phi/2)\sin(k))^2+t^2_\perp}.
\end{equation}
The inverse transformations are defined by the inverse matrix $M^{-1}_{i,\eta}$ as follows
\begin{align}
c_{1,j,\sigma} &=\int^\pi_{-\pi} \frac{dk}{2\pi} e^{\text{i} j k}\left[ \cos(\frac{\xi(k)}{2})c^a_{\sigma}(k) + \sin(\frac{\xi(k)}{2})c^b_{\sigma}(k)\right]\nonumber\\
c_{2,j,\sigma}&=\int^\pi_{-\pi} \frac{dk}{2\pi} e^{\text{i} j k}\left[- \sin(\frac{\xi(k)}{2})c^a_{\sigma}(k) +\cos(\frac{\xi(k)}{2})c^b_{\sigma}(k)\right]
\label{eq:bonding-antibonding-lattice-transformation}
\end{align}
where $b$ and $a$ label the bonding and the anti-bonding bands respectively.
The dispersion relations for the bonding and anti-bonding bands are
\begin{equation}
E_\eta(k) = -2t\cos(\Phi/2)\cos(k) \pm \sqrt{(2t\sin(\Phi/2)\sin(k))^2+ t^2_\perp}.
\label{eq:band-structure-flux}
\end{equation}
Band structures of this type appear in quantum wires with two pockets~\cite{datta-2009} and in 1D electronic systems with spin-orbit interactions (with the leg label playing the role of the electron spin) (see, {\it e.g.} Ref.[\onlinecite{lutchyn-2011}] and references therein).
The dispersion relations $E_\eta(k)$ satisfy the symmetries
\begin{equation}
E_\eta(-k)=E_\eta(k), \qquad E_a(\Phi+2\pi,k) = - E_b(\Phi,k)
\end{equation}
For a wide range of parameters, $t_\perp/t$ and flux $\Phi$, the bonding band has the form sketched in Fig.\ref{fig:double-minima} with two minima in momentum space. For the rest of this section we will focus on the regime of the parameters in which the Fermi energy lies below the hybridization gap of the bonding band, $E_F<E_b(0)=-2t \cos(\Phi/2)-t_\perp$. In this regime, the Fermi energy crosses the bonding band at four distinct points, $\pm k_1, \pm k_2$, while the anti-bonding band is empty, as shown in Fig. \ref{fig:double-minima} b.
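These band-structure statements are easy to verify numerically. The sketch below (the parameter values $t=1$, $t_\perp=0.5$ and $E_F=-1.5$ are illustrative choices) diagonalizes the $2\times 2$ Bloch Hamiltonian, checks it against the closed form of Eq.\eqref{eq:band-structure-flux}, and verifies that for $\Phi=\pi$ the two positive Fermi momenta of the bonding band satisfy $k_2=\pi-k_1$:

```python
import numpy as np

def bloch_bands(k, t=1.0, tp=0.5, phi=np.pi):
    """Exact eigenvalues of the 2x2 Bloch Hamiltonian of the flux ladder."""
    H = np.array([[-2*t*np.cos(k + phi/2), -tp],
                  [-tp, -2*t*np.cos(k - phi/2)]])
    return np.linalg.eigvalsh(H)          # ascending: (bonding, anti-bonding)

def E_closed(k, t=1.0, tp=0.5, phi=np.pi, sign=-1):
    """Closed-form dispersion; sign = -1 selects the bonding band."""
    return -2*t*np.cos(phi/2)*np.cos(k) + sign*np.sqrt(
        (2*t*np.sin(phi/2)*np.sin(k))**2 + tp**2)

ks = np.linspace(-np.pi, np.pi, 2001)
Eb = np.array([bloch_bands(k)[0] for k in ks])

# closed form agrees with exact diagonalization
assert np.allclose(Eb, E_closed(ks))

# phi = pi: E_b(pi - k) = E_b(k), so Fermi points come in pairs k2 = pi - k1
assert np.allclose(E_closed(ks), E_closed(np.pi - ks))

# invert E_b(k) = -sqrt(4 t^2 sin^2 k + tp^2) (phi = pi, t = 1, tp = 0.5)
EF = -1.5                                  # Fermi energy inside the bonding band
k1 = np.arcsin(np.sqrt((EF**2 - 0.5**2) / 4.0))
k2 = np.pi - k1                            # second Fermi point from the symmetry
assert abs(E_closed(k1) - EF) < 1e-9 and abs(E_closed(k2) - EF) < 1e-9
```

For these parameters $k_1=\pi/4$ and $k_2=3\pi/4$, so the four Fermi points $\pm k_1, \pm k_2$ indeed satisfy $k_1+k_2=\pi$ independently of the value of $E_F$ within the band.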
We will see now that this system has an interesting phase structure, which we will analyze using the same bosonization methods as in the conventional ladder. For reasons that we will explain below we will focus on the special case of flux $\Phi=\pi$ per plaquette.
\subsection{Low energy continuum limit}
\label{sec:low-energy-pi-flux}
In this work we are interested in the regime in which the Fermi energy crosses only the bonding band and hence the anti-bonding band is empty. Furthermore we assume
that the interactions are weak enough that we can focus only on the low energy excitations near the four Fermi points $\pm k_1, \pm k_2$ where the Fermi energy crosses
the bonding band. For the more interesting case of flux $\Phi=\pi$, the Fermi points obey the commensurability condition $k_1+k_2=\pi$. In this case we also
have $u(k_1)=u(k_2)$. Hence the parameter $\xi(k)$ takes the same value at both Fermi points, and it will henceforth be
denoted by $\xi$.
By looking only at the low energy fluctuations around the Fermi points in the bonding band, the expansion of Eq.\eqref{eq:bonding-antibonding-lattice-transformation}
reduces to the operator identifications
\begin{subequations}
\begin{align}
\frac{1}{\sqrt{a}} c_{1,\sigma}(j) &\to
\sin(\frac{\xi}{2})L_{1\sigma}(x)e^{\text{i} k_1x} + \cos(\frac{\xi}{2})R_{1\sigma}(x)e^{-\text{i} k_1x} \nonumber\\
&+ \sin(\frac{\xi}{2}) R_{2\sigma}(x)e^{\text{i} k_2x} + \cos(\frac{\xi}{2})L_{2\sigma}(x)e^{-\text{i} k_2x}\\
\frac{1}{\sqrt{a} } c_{2,\sigma}(j)& \to
\cos(\frac{\xi}{2})L_{1\sigma}(x) e^{\text{i} k_1x} + \sin(\frac{\xi}{2})R_{1\sigma}(x)e^{-\text{i} k_1x} \nonumber\\
&+ \cos(\frac{\xi}{2}) R_{2\sigma}(x)e^{\text{i} k_2x}+ \sin(\frac{\xi}{2}) L_{2\sigma}(x)e^{-\text{i} k_2x}
\end{align}
\label{eq:lattice-to-bonding-transformations}
\end{subequations}
where we have used $\cos(\xi(-k)/2) = \sin(\xi(k)/2)$ and $\xi(k_1)=\xi(k_2)\equiv \xi$, which are both true for $\Phi=\pi$, and where we have also projected out the anti-bonding band.
We will treat the Fermi point labels as a flavor index $f=1,2$.
By inspection of the free fermion lattice Hamiltonian one can see that the Fermi momenta $k_1$ and $k_2$ are essentially determined by the flux $\Phi$ and by the filling
fraction of the bonding band. In what follows we will ignore the contribution of the anti-bonding band since it is empty and its excitations have energies larger than the
cutoff of the effective field theory.
Following a similar discussion as in section \ref{sec:model}, the non-interacting continuum Hamiltonian becomes
\begin{equation}
{\cal H}_0= \sum_{f=1,2} (-\text{i} v_f)\left\{R^\dagger_{f,\sigma}\partial_x R^{}_{f,\sigma} - L^\dagger_{f,\sigma} \partial_x L^{}_{f,\sigma}\right\}
\end{equation}
where $v_1=-\frac{dE_b}{dk}|_{k_1}$ and $v_2=\frac{dE_b}{dk}|_{k_2}$ are the Fermi velocities associated with the two Fermi points. For general flux $\Phi$ there is no
symmetry relating the Fermi points and the two Fermi velocities are different, $v_1\neq v_2$.
However, in the case of flux $\Phi=\pi$ per plaquette the energy bands have the additional symmetry $k \to \pi -k$. This symmetry reflects the fact that an exchange of
the two legs, $1 \leftrightarrow 2$, is in general equivalent to the reversal of the flux $\Phi \leftrightarrow -\Phi$, which is the time-reversed state. However due to the flux
quantization, the states with $\Phi=\pi$ and $\Phi=-\pi$ are equivalent since the Hamiltonian is a periodic function of the flux with period $2\pi$ (corresponding to
unobservable changes by an integer multiple of the flux quantum). On the other hand, from Eq.\eqref{eq:band-structure-flux}, we see that for flux $\Phi=\pi$ the
dispersion relations are also invariant under $k \to \pi - k$, which amounts to an exchange of the two Fermi points.
Thus the case of flux $\Phi=\pi$ ensures that the Fermi velocities are equal, $v_1=v_2$, for all fillings of the bonding band (and of the anti-bonding band as
well). Therefore, for flux $\Phi=\pi$, the symmetry of exchanging the two legs implies that the effective low energy theory must have a symmetry under the exchange of the
flavor labels $1$ and $2$ (together with a chiral transformation which exchanges right and left movers).
In order to introduce all possible four-fermion intra-band and inter-band interactions, one considers an extended Hubbard-Heisenberg type lattice problem, just as
we did for the two-leg ladder system of Section~\ref{sec:model}, and constructs the continuum theory for the present case. All four-fermion interactions for this system can
be represented by simple diagrams of the type shown in Fig.\ref{fig:interaction-diagram}. All the interactions can again be classified into charge and spin
current-current interactions, singlet and triplet SC couplings, or Umklapp processes with different commensurabilities.
This means that the effective field theory for the present system has the same field theoretical form as the Hamiltonian of the two-leg ladder system given by
Eq.~\eqref{smooth-continuum-H}. The only difference is that in the present case, the two sets of right- and left-moving fermions labeled by the flavor index $f=1,2$ are low energy
fluctuations of the bonding band. Moreover the connection between the couplings in the effective theory and the microscopic parameters of the original lattice problem is
different for the two problems. The two top diagrams in Fig.~\ref{fig:interaction-diagram} represent singlet and triplet SC interactions between the flavors $1$ and $2$ while the
lower diagrams correspond to the most relevant $Q=2(k_1+k_2)$ Umklapp processes. We will further assume that the Fermi points $\pm k_1, \pm k_2$ are such that no
other Umklapp processes are allowed.
The discussion of the phase diagram of this system in the incommensurate regime is analogous to what has been discussed in the conventional ladder by many authors.
Wu {\it et al.}\cite{wu-2003} find that the only SC state in the phase diagram of the system away from half-filling is the uniform $s$- or $d$-wave SC. Similar
conclusions hold for the incommensurate regime of this model, which is in a Luttinger liquid phase with two gapless charge modes and two gapless spin modes,
$C2S2$. The only difference is that in this case these modes originate entirely from the bonding band.
For general flux $\Phi$ and for certain filling fractions of the bonding band, Umklapp processes involving separately the pairs of Fermi points $\pm k_1$ and $\pm k_2$
become allowed. The physics that follows in these cases is similar to what we discussed for the conventional ladder in Section \ref{sec:model} and will not be repeated
here.
\begin{figure}[t!]
\includegraphics[width=0.35 \textwidth]{interaction-diagram.pdf}
\caption{Schematic representation of all the processes leading to uniform SC couplings and $Q=2(k_1+k_2)$ Umklapp processes. The sum of the top two
diagrams represent the uniform singlet and triplet SC interactions while the two lower diagrams correspond to the Umklapp process.}
\label{fig:interaction-diagram}
\end{figure}
However, for flux $\Phi\neq 0$ a new type of Umklapp process, shown in Fig.\ref{fig:interaction-diagram}, becomes allowed. This process carries
momentum $Q=2(k_1+k_2)$ and leads to the following interactions:
\begin{align}
{\cal H}_\text{Um} = \left( \lambda_{u3} \, n^\dagger_1n^{}_2 + \lambda_{u4} \, \vec n^\dagger_1\cdot \vec n^{}_2 \right) e^{\text{i} Qx} + \text{h.c.}
\end{align}
where $n_f$ (with $f=1,2$) are the $2k_F$ CDW order parameters associated with the Fermi points at $\pm k_1$ and $\pm k_2$, with ordering wave vectors $2k_1$ and
$2k_2$ respectively. Similarly, $\vec n_f$ are the associated SDW order parameters with the same ordering wave vectors. When the commensurability condition is
satisfied this process is marginal and needs to be included in the effective low energy theory.
However the commensurability condition $k_1+k_2=\pi$ can only be met if the flux is $\Phi=\pi$. Furthermore, in this case the system is commensurate for all fillings of
the bonding band.
For $\Phi=\pi$ the one-particle spectrum is given by $E_b(k)=-\sqrt{4t^2\sin^2(k)+t^2_\perp}$ which satisfies $E_b(\pi-k)=E_b(k)$. Therefore if $k_1$ is a Fermi
momentum so is $k_2=\pi-k_1$. Hence for flux $\Phi=\pi$ the system remains commensurate {\em for all} electron fillings.
We will see below that for $\Phi=\pi$ the pair-density-wave state exists for all values of the filling (with the Fermi energy in the bonding band).
From now on we will restrict ourselves to the case of flux $\Phi=\pi$.
The bosonized Hamiltonian for flux $\Phi=\pi$ is (including the Umklapp process)
\begin{align}
{\cal H} =& \frac{v_{c+}}{2} \left\{ K_{c+} (\partial_x\theta_{c+})^2 + K^{-1}_{c+} (\partial_x\phi_{c+})^2 \right\}\nonumber\\
+& \frac{v_{c-}}{2} \left\{ K_{c-} (\partial_x\theta_{c-})^2 + K^{-1}_{c-} (\partial_x\phi_{c-})^2 \right\}\nonumber\\
+&\frac{v_{s+}}{2} \left\{ K_{s+} (\partial_x\theta_{s+})^2 + K^{-1}_{s+} (\partial_x\phi_{s+})^2 \right\}\nonumber\\
+& \frac{v_{s-}}{2} \left\{ K_{s-} (\partial_x\theta_{s-})^2 + K^{-1}_{s-} (\partial_x\phi_{s-})^2 \right\}\nonumber\\
+ &\frac{\cos(\sqrt{4\pi}\phi_{s+})}{2(\pi a)^2} \left[ g_{s1} \cos(\sqrt{4\pi}\phi_{s-}) + g_{s2} \cos(\sqrt{4\pi}\theta_{s-}) \right]\nonumber\\
+& \frac{\cos(\sqrt{4\pi}\phi_{s+})}{2(\pi a)^2} \left[g_{5} \cos(\sqrt{4\pi}\theta_{c-})+g_{u5} \cos(\sqrt{4\pi}\phi_{c-}) \right]\nonumber\\
+ &\frac{\cos(\sqrt{4\pi}\theta_{c-})}{2(\pi a)^2} \left[ g_{3} \cos( \sqrt{4\pi} \theta_{s-} ) + g_{4} \cos(\sqrt{4\pi} \phi_{s-})\right] \nonumber\\
+& \frac{\cos(\sqrt{4\pi}\phi_{c-})}{2(\pi a)^2} \left[ g_{u3} \cos( \sqrt{4\pi} \theta_{s-} ) + g_{u4} \cos(\sqrt{4\pi} \phi_{s-}) \right]
\label{eq:Heff}
\end{align}
where $\phi_\pm =(\phi_2\pm \phi_1)/\sqrt{2}$ and similarly for $\theta$ fields.
As before, there are marginal operators (both in the charge and spin sectors) of the form $\partial_x\phi_+\partial_x\phi_-$ and $\partial_x\theta_+\partial_x\theta_-$. However, as in Section~\ref{sec:model},
these operators can be ignored since their main effect is a renormalization of the scaling dimensions\cite{emery-2000}, which here translates into smooth changes of the phase diagrams (without changing their topology); in the spin gap phases they have essentially no effect.
The first four lines of Eq.\eqref{eq:Heff} are the sum of four LL Hamiltonians, one for each of the two charge and two spin sectors.
The fifth line contains the spin backscattering processes, while the terms proportional to $\cos(\sqrt{4\pi}\theta_{c-})$ and to $\cos(\sqrt{4\pi}\phi_{c-})$ in the
remaining lines represent the singlet and triplet SC couplings and the $Q=2(k_1+k_2)=2\pi$ Umklapp processes respectively.
In addition to the relation between the initial values of the Luttinger parameters of the different sectors and the couplings of the various current-current interactions given in
Eq.~\eqref{eq:Ks-}, the spin $SU(2)$ invariance dictates that $g_5=g_4+g_3$ and $g_{u5}=g_{u4}+g_{u3}$.
This will be useful in the discussion of the RG equations and phase diagram.
The Hamiltonian of Eq.\eqref{eq:Heff} has several symmetries. Similarly to the half-filled bonding band case discussed in the preceding sections,
we find a duality symmetry in the $s-$ spin sector, $(\phi_{s-},\theta_{s-})\rightarrow (\theta_{s-},-\phi_{s-})$,
under which the Hamiltonian of Eq.\eqref{eq:Heff} retains its form.
We will denote this symmetry by ${\mathbb Z}^{s-}_2$. However, self-duality holds
only if $g_{s1}=g_{s2}$, $g_{3}=g_4$ and $g_{u3}=g_{u4}$. In addition, the last two lines of the Hamiltonian of Eq.\eqref{eq:Heff}
have identical form which indicates that we can define yet another duality symmetry of the same form but this time in the $c-$ charge sector,
$(\phi_{c-},\theta_{c-})\rightarrow (\theta_{c-},-\phi_{c-})$, and which will be denoted by ${\mathbb Z}^{c-}_2$.
Self-duality in this sector requires, in addition, that $g_5=g_{u5}$, $g_3=g_{u3}$ and $g_4=g_{u4}$.
Finally, the Hamiltonian of Eq.\eqref{eq:Heff} is also even in the fields $\phi_{c,\pm}$, $\theta_{c,\pm}$, $\phi_{s,\pm}$ and $\theta_{s,\pm}$,
which reflects the invariance under the exchange of the labels of the Fermi points (or flavors), $1\leftrightarrow 2$, which is an exact symmetry only for flux $\Phi=\pi$.
In the next section we will look at the different SC and CDW states, each with unique symmetry properties under the action of the total symmetry group, and construct the order parameters for each state in order to identify the associated quantum phase diagram.
\subsection{Order parameters and phases}
\label{sec:plaquette-order-parameters}
To identify all the phases present in the phase diagram we construct the associated order parameters consistent with the symmetries of the current problem. In terms of the two flavors $f=1,2$ of the fermions in the bonding band we define for this system two uniform SC order parameters, $\Delta_\pm$, and two PDW order parameters $\tilde \Delta_\pm$ (both with ordering wave vector $Q_\text{PDW}=\pi$). They are
\begin{align}
\Delta_\pm=& \left(L_{1 \uparrow } R_{1 \downarrow }+R_{1 \uparrow }L_{1 \downarrow } \right) \pm \left(L_{2 \uparrow } R_{2 \downarrow }+R_{2 \uparrow }L_{2 \downarrow } \right) \nonumber\\
\tilde \Delta_\pm=& \left(L_{2 \uparrow } R_{1 \downarrow }+R_{1 \uparrow }L_{2 \downarrow } \right) \pm \left(L_{1 \uparrow } R_{2 \downarrow }+R_{2 \uparrow }L_{1 \downarrow } \right)
\label{eq:Deltas}
\end{align}
Similarly, we also define four CDW order parameters, $n_\pm$ and $\tilde n_\pm$,
\begin{align}
n_\pm=& \sum_{\sigma} \Big(L^\dagger_{1\sigma} R_{1\sigma} \pm L^\dagger_{2\sigma} R_{2\sigma}\Big) \nonumber\\
\tilde n_\pm=& \sum_\sigma \Big(L^\dagger_{2\sigma} R_{1\sigma} \pm L^\dagger_{1\sigma} R_{2\sigma}\Big)
\label{eq:ns}
\end{align}
and their adjoint operators.
The relation between these order parameters and the microscopic pair fields and CDW fields is as follows.
The pair fields on site $j$ of each leg $i=1,2$ of the ladder, $\Delta^i_j$, on the rung $j$, $\Delta^{12}_j$, and on the bonds of each leg $i=1,2$, $\Delta^i_{j,j+1}$, are defined by
\begin{align}
\Delta^i_{j}=&c_{i,j, \uparrow } c_{i,j, \downarrow }\nonumber \\
\Delta^{12}_{j}=& c_{1,j, \uparrow } c_{2,j, \downarrow }+c_{2,j, \uparrow } c_{1,j, \downarrow }\nonumber \\
\Delta^i_{j,j+1}=&c_{i,j, \uparrow } c_{i,j+1, \downarrow }+c_{i,j+1, \uparrow } c_{i,j, \downarrow }
\label{eq:pair-fields-pi-flux}
\end{align}
These observables can be written in terms of the slowly varying chiral Dirac fermions $R_{f,\sigma}$ and $L_{f,\sigma}$ (for the two flavors $f=1,2$) in the symmetrized and anti-symmetrized forms (with respect to the exchange of the labels $1$ and $2$ of the legs of the ladder)
\begin{subequations}
\begin{align}
\Delta^1_j+\Delta^2_j &\to \sin \xi \; \Delta_+ + (-1)^{x/a} \tilde\Delta_+ \\
\Delta^1_j-\Delta^2_j &\to - \cos \xi (-1)^{x/a} \tilde\Delta_- \\
\Delta^{12}_j & \to \Delta_+ + \sin \xi \; (-1)^{x/a}\; \tilde \Delta_+ \\
\Delta^1_{j,j+1}+\Delta^2_{j,j+1} & \to 2 \sin \xi \; \sin (qa/4)\; \Delta_- \nonumber\\
& - (-1)^{x/a} 2 \text{i} \cos(qa/4)\; \tilde \Delta_-\\
\Delta^1_{j,j+1}-\Delta^2_{j,j+1} &\to - (-1)^{x/a} 2 \text{i} \cos \xi \; \cos(qa/4)\; \tilde \Delta_+
\end{align}
\label{eq:pair-fields-pi-flux-low-energy}
\end{subequations}
where $q=2(k_2-k_1)$ and where we have used the definitions of Eq.\eqref{eq:Deltas}.
We see that the SC order parameters $\Delta_\pm$ and $\tilde \Delta_\pm$ represent two different types of uniform SC states and PDW SC states (both with wave vector
$Q_{PDW}=\pi$) respectively. These pairs of SC states differ by their symmetry transformations under flavor exchange. It is worth noting that in the flux $\Phi=\pi$ model
the PDW order parameters are actually bilinears of fermion operators, {\it c.f.} Eq.\eqref{eq:pair-fields-pi-flux-low-energy}. This is in contrast to what we found in the conventional two-leg ladder in
section \ref{sec:half-filled}, and to the recent results by Berg {\it et al.}\cite{berg-2010} in the Kondo-Heisenberg chain, where the PDW order parameter is microscopically
quartic in fermion operators. In this sense the PDW states of the flux $\Phi=\pi$ two-leg ladder are closer in spirit to the conventional construction of
FFLO states,\cite{larkin-1964,fulde-1964} even though the spin $SU(2)$ symmetry is preserved here and explicitly broken in the standard FFLO construction.
Similarly we can relate the site $n_{i,j}$ (with $i=1,2$ the leg index) and rung, $n^{12}_j$ electron charge density operators
\begin{equation}
n_{i,j}=\sum_\sigma c_{i,j,\sigma}^\dagger c_{i,j,\sigma}, \qquad
n_j^{12}= \sum_\sigma c_{1,j,\sigma}^\dagger c_{2,j,\sigma}={n_j^{21}}^\dagger
\label{eq:microscopic-densities}
\end{equation}
which, after symmetrizing and anti-symmetrizing with respect to the exchange of the two legs, lead to the set of four CDW order parameters $n_\pm$ and $\tilde n_\pm$ defined above.
The relation between the microscopic charge density operators of Eq.\eqref{eq:microscopic-densities} and the slowly varying chiral Dirac fermions
$R_{f,\sigma}$ and $L_{f,\sigma}$ (with $f=1,2$) is
\begin{subequations}
\begin{align}
n_{1,j}+n_{2,j} \to & j^0_1+j^0_2+ \nonumber\\
&\sin \xi \; (-1)^{x/a}\; e^{\text{i} qx/2} n_+ + e^{\text{i} qx/2} \tilde n_+ + \textrm{h.c.}\\
n_{1,j}-n_{2,j} \to &-\cos \xi \, (j_1^1-j_2^1)- \big(\cos \xi \; e^{\text{i} qx/2} \tilde n_- + \textrm{h.c.}\big)\\
n^{12}_j+n^{21}_j \to & \sin \xi (j_2^0+j_1^0)\nonumber\\
&\!\!\!\! + (-1)^{x/a}\; e^{\text{i} qx/2} n_+ + \sin \xi \; e^{\text{i} qx/2} \tilde n_+ + \textrm{h.c.}\\
n^{12}_j-n^{21}_j \to &- \cos \xi \; (-1)^{x/a} \; e^{\text{i} qx/2} n_- - \textrm{h.c.}
\end{align}
\end{subequations}
where we used the definitions of Eq.\eqref{eq:ns}, and the usual definitions of the (normal ordered) currents and densities of the Dirac fermions (again with $f=1,2$)
\begin{align}
j^R_f&=\sum_\sigma R^\dagger_{f,\sigma} R_{f,\sigma}, && j^L_f=\sum_\sigma L^\dagger_{f,\sigma} L_{f,\sigma}\nonumber\\
j^0_f&=j^R_f+j^L_f, && j^1_f=j^R_f-j^L_f
\end{align}
We can also define CDW order parameters on the legs of the ladder. However we will not discuss them since it turns out that they can also be expressed in terms of the same four slowly varying observables $n_\pm$ and $\tilde n_\pm$ and hence do not bring new information.
From these results we see that in general we find both uniform SC order parameters and PDW order parameters, which always have a commensurate ordering wave vector $Q_{PDW}=\pi$. The CDW order parameters are generally incommensurate and have ordering wave vectors $Q_{CDW}=q/2, \pi\pm q/2$ (or, equivalently $k_2-k_1$, $2k_1$ and $2k_2$).
We will now proceed to write down the bosonized expressions of the SC and CDW order parameters.
The bosonized expressions of the SC order parameters are
\begin{subequations}
\begin{align}
\Delta_+ \propto e^{-\text{i}\sqrt{\pi}\theta_{c+}} &\big\{ \cos(\sqrt{\pi}\theta_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\phi_{s-}) \nonumber \\
&+ \text{i} \sin(\sqrt{\pi}\theta_{c-}) \sin(\sqrt{\pi} \phi_{s+}) \sin(\sqrt{\pi}\phi_{s-})\big\}
\label{eq:Delta+-bosonized-pi-flux}\\
\Delta_- \propto e^{-\text{i}\sqrt{\pi}\theta_{c+}} &\big\{ \cos(\sqrt{\pi}\theta_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\phi_{s-}) \nonumber \\
&+ \text{i} \sin(\sqrt{\pi}\theta_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\phi_{s-}) \big\}
\label{eq:Delta--bosonized-pi-flux}\\
\tilde\Delta_+ \propto e^{-\text{i}\sqrt{\pi}\theta_{c+}} & \big\{- \cos(\sqrt{\pi}\phi_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \nonumber \\
&+ \text{i} \sin(\sqrt{\pi}\phi_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-}) \big\}
\label{eq:tildeDelta+-bosonized-pi-flux}\\
\tilde\Delta_- \propto e^{-\text{i}\sqrt{\pi}\theta_{c+}} &\big\{ \cos(\sqrt{\pi}\phi_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-}) \nonumber \\
&- \text{i} \sin(\sqrt{\pi}\phi_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \big\}
\label{eq:tildeDelta--bosonized-pi-flux}
\end{align}
\end{subequations}
Here, and below, in order to simplify the notation we have dropped the prefactors of these expressions, including the Klein factors, whose effects are taken into account in our results. (A discussion of the role of Klein factors in the identification of phases in ladders is found in Ref.~[\onlinecite{Marston-2002}].)
The bosonized form of the CDW order parameters $n_\pm$ and $\tilde n_\pm$ are
\begin{subequations}
\begin{align}
n_+ \propto e^{-\text{i}\sqrt{\pi}\phi_{c+}} & \big\{- \cos(\sqrt{\pi}\phi_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\phi_{s-}) \nonumber \\
&+ \text{i} \sin(\sqrt{\pi}\phi_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\phi_{s-}) \big\}
\label{eq:n+-bosonized-pi-flux}\\
n_- \propto e^{-\text{i}\sqrt{\pi}\phi_{c+}} & \big\{ \cos(\sqrt{\pi}\phi_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\phi_{s-}) \nonumber \\
&- \text{i} \sin(\sqrt{\pi}\phi_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\phi_{s-}) \big\}
\label{eq:n--bosonized-pi-flux}\\
\tilde n_+ \propto e^{-\text{i}\sqrt{\pi}\phi_{c+}}& \big\{ -\cos(\sqrt{\pi}\theta_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \nonumber \\
&+ \text{i} \sin(\sqrt{\pi}\theta_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-}) \big\}
\label{eq:tilden++-bosonized-pi-flux}\\
\tilde n_{-} \propto e^{-\text{i}\sqrt{\pi}\phi_{c+}} &\big\{ \cos(\sqrt{\pi}\theta_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-}) \nonumber \\
&- \text{i} \sin(\sqrt{\pi}\theta_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \big\}
\label{eq:tilden--bosonized-pi-flux}
\end{align}
\label{eq:ns-bosonized-pi-flux}
\end{subequations}
where we have also dropped the prefactors and their dependence on the Klein factors.
The effective field theory of Eq.\eqref{eq:Heff} shows that the spin sector $s+$ couples to the two remaining sectors, the charge sector $c-$ and the spin sector $s-$, only
through terms that involve the operator $\cos(\sqrt{4\pi}\phi_{s+})$ but not the dual field $\theta_{s+}$. A consequence of this feature of the effective Hamiltonian is that
the Luttinger parameter $K_{s+}$ always decreases under the RG flow, as can be seen by an examination of Eq.\eqref{eq:RG-flow-Ks+}, and flows to a regime in which
$K_{s+} \to 0$. In this regime the field $\phi_{s+}$ is locked and its fluctuations become massive. Hence there is a gap in the spin sector, the field
$\phi_{s+}$ is pinned, and $\langle \cos(\sqrt{4\pi}\phi_{s+})\rangle \neq 0$ has a non-vanishing expectation value.
The RG equations given in Appendix \ref{sec:RG-pi-flux} reveal that for the range of parameters of physical interest all the coupling constants
(including those in Eq.\eqref{eq:Heff-cminus-sminus}) generically flow to strong coupling.
Hence, we expect that the operators $\cos(\sqrt{4\pi}\phi_{s-})$, $\cos(\sqrt{4\pi}\theta_{s-})$, $\cos(\sqrt{4\pi}\phi_{c-})$, and $\cos(\sqrt{4\pi}\theta_{c-})$ will acquire an
expectation value and that the fields become locked to the values
$\phi_{c-}=n_{\phi_{c-}} \sqrt{\pi}/2$, $\theta_{c-}=n_{\theta_{c-}}\sqrt{\pi}/2$, $\phi_{s-}=n_{\phi_{s-}}\sqrt{\pi}/2$, and $\theta_{s-}=n_{\theta_{s-}}\sqrt{\pi}/2$, where $n_{\phi_{c-}}$, $n_{\theta_{c-}}$, $n_{\phi_{s-}}$,
and $n_{\theta_{s-}}$ are integers that can each be even or odd. Depending on this choice the locked states represent different phases.
In addition, we recall that operators involving dual fields cannot have an expectation value simultaneously as this is forbidden by the commutation relations. This leads us
to the conclusion that in general we will have different phases depending on which fields are locked and to which values.
We will label the phases by the locked fields: $(\phi_{c-},\phi_{s-},\phi_{s+})$,
$(\phi_{c-},\theta_{s-},\phi_{s+})$, $(\theta_{c-},\phi_{s-},\phi_{s+})$, and $(\theta_{c-},\theta_{s-},\phi_{s+})$.
Thus, in general we will have a total of eight phases characterized by different order parameters. In all these phases only the charge sector $c+$ remains gapless.
Additional gapless excitations appear at the continuous quantum phase transitions between these different phases.
From the structure of the effective field theory we see that the $c+$ charge sector decouples and remains critical for all values of the parameters. It is an effective Luttinger
liquid with Luttinger parameter $K_{c+}$ and velocity $v_{c+}$. This sector has the trivial self-duality of the Luttinger model, which guarantees the existence in the
phase diagram of a dual CDW state for any SC state, and vice versa. We will denote this duality symmetry by ${\mathbb Z}^{c+}_2$.
\paragraph{Uniform SC phases:} The bosonized expressions of Eq.\eqref{eq:Delta+-bosonized-pi-flux} and Eq.\eqref{eq:Delta--bosonized-pi-flux}
for the two uniform SC order parameters, $\Delta_\pm$,
imply that these operators may exhibit quasi long range order provided that the $c-$ sector is gapped such that the dual field $\theta_{c-}$
is pinned and its vertex operator $\cos(\sqrt{\pi}\theta_{c-})$ has a nonzero expectation value. Thus, the uniform SC $\Delta_+$ phase
(even under the exchange of the two legs) occurs whenever
the fields lock to the classical values $(\theta_{c-},\phi_{s-},\phi_{s+})=(0,0,0)$ or $(\theta_{c-},\phi_{s-},\phi_{s+})=(\pi/2,\pi/2,\pi/2)$.
Similarly, the uniform SC $\Delta_-$ phase (odd under the exchange of the two legs)
occurs whenever the fields lock to the classical values
$(\theta_{c-},\phi_{s-},\phi_{s+})=(0,\pi/2,\pi/2)$ or $(\theta_{c-},\phi_{s-},\phi_{s+})=(\pi/2,0,0)$.
\paragraph{PDW phases:} The PDW phase $\tilde \Delta_+$ occurs for $(\phi_{c-},\theta_{s-},\phi_{s+})=(0,0,0)$ and
$(\phi_{c-},\theta_{s-},\phi_{s+})=(\pi/2,\pi/2,\pi/2)$, while the PDW phase $\tilde \Delta_-$ occurs for
$(\phi_{c-},\theta_{s-},\phi_{s+})=(0,\pi/2,\pi/2)$ and $(\phi_{c-},\theta_{s-},\phi_{s+})=(\pi/2,0,0)$.
As expected, the order parameters $\Delta_\pm$ and $\tilde \Delta_\pm$, which describe uniform SC and PDW phases that are even ($+$) and odd ($-$) under the exchange
of the two legs,
exhibit power law correlations due to the contributions from the charge $c+$ sector. Comparing
the bosonized expressions for $\Delta_\pm$ and $\tilde\Delta_\pm$ it is clear that uniform SC phases and PDW phases are related by the combined dual
transformation of the two sectors,
$\mathbb{Z}^{c-}_2 \times \mathbb{Z}^{s-}_{2}$.
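This duality can be checked directly from the bosonized expressions. (The following one-line computation is a consistency check of our own; overall signs are only defined up to the dropped prefactors and Klein factors.) Applying $(\phi_{c-},\theta_{c-})\to (\theta_{c-},-\phi_{c-})$ and $(\phi_{s-},\theta_{s-})\to (\theta_{s-},-\phi_{s-})$ to Eq.\eqref{eq:Delta+-bosonized-pi-flux} gives
\begin{align*}
\Delta_+ \mapsto e^{-\text{i}\sqrt{\pi}\theta_{c+}} &\big\{ \cos(\sqrt{\pi}\phi_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \\
&- \text{i} \sin(\sqrt{\pi}\phi_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-})\big\} = -\tilde\Delta_+ ,
\end{align*}
in agreement with Eq.\eqref{eq:tildeDelta+-bosonized-pi-flux}.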
In this system PDW phases cannot occur in the absence of the Umklapp processes available at flux $\Phi=\pi$, and for this reason they
are absent for other values of the flux.
\paragraph{CDW phases:} Similarly, the CDW phase $n_+$ has quasi long range order if the fields that now lock are
$(\phi_{c-},\phi_{s-},\phi_{s+})=(0,0,0)$ or $(\phi_{c-},\phi_{s-},\phi_{s+})=(\pi/2,\pi/2,\pi/2)$, the phase
$n_-$ for $(\phi_{c-},\phi_{s-},\phi_{s+})=(0,\pi/2,\pi/2)$ or $(\phi_{c-},\phi_{s-},\phi_{s+})=(\pi/2,0,0)$, the phase
$\tilde n_+$ for $(\theta_{c-},\theta_{s-},\phi_{s+})=(0,0,0)$ or $(\theta_{c-},\theta_{s-},\phi_{s+})=(\pi/2,\pi/2,\pi/2)$, and
$\tilde n_-$ for $(\theta_{c-},\theta_{s-},\phi_{s+})=(0,\pi/2,\pi/2)$ or $(\theta_{c-},\theta_{s-},\phi_{s+})=(\pi/2,0,0)$.
The diagram of Fig.\ref{fig:plaquette-phase-diagram} illustrates the symmetry relations between various order parameters.
\begin{figure}[hbt]
\includegraphics[width=0.3 \textwidth]{plaquette-phase-diagram.pdf}
\caption{The relations between the various uniform and staggered SC order parameters and their CDW counterparts present in the phase diagram of the model with flux
$\Phi=\pi$ per plaquette.}
\label{fig:plaquette-phase-diagram}
\end{figure}
\subsection{Quantum Phase Transitions}
The effective field theory of the ladder with flux $\Phi=\pi$ given in Eq.\eqref{eq:Heff} has many effective parameters and coupling constants. We will not attempt to give a
detailed description of this theory here. Some important details are given in the Appendices. In particular the RG equations for the effective field theory are given in
Appendix \ref{sec:RG-pi-flux} and their solution for general couplings is a complex problem.
From the simpler case of the standard ladder we know that there is always a regime in which the couplings flow to strong values which in this case also corresponds to a
system with a spin gap. The situation is very similar here. Thus while there are regimes in which some sectors can remain gapless, there is also a generic regime in
which only one sector, the charge $c+$ sector, remains gapless while all the other ones are massive.
Let us now look at the effective field theory under the assumption that the $s+$ sector is massive (and hence $\phi_{s+}$ is pinned). We will now examine in detail the
dynamics of the two remaining sectors, the charge sector $c-$ and the spin sector $s-$. In this regime Eq.\eqref{eq:Heff} reduces to the simpler system (ignoring for now
the decoupled and critical charge sector $c+$)
\begin{align}
\begin{split}
{\cal H} &=\frac{v_{c-}}{2} \big\{ K_{c-} (\partial_x\theta_{c-})^2 + K^{-1}_{c-} (\partial_x\phi_{c-})^2 \big\}\\
&+ \frac{v_{s-}}{2} \left\{ K_{s-} (\partial_x\theta_{s-})^2 + K^{-1}_{s-} (\partial_x\phi_{s-})^2 \right\}\\
&+g^*_{s1} \cos(\sqrt{4\pi}\phi_{s-}) + g^*_{s2} \cos(\sqrt{4\pi}\theta_{s-}) \\
&+ g^*_5 \cos(\sqrt{4\pi} \phi_{c-}) +g^*_{u5} \cos(\sqrt{4\pi} \theta_{c-}) \\
&+ \frac{\cos(\sqrt{4\pi}\theta_{c-})}{2(\pi a)^2} \left[ g_{3} \cos( \sqrt{4\pi} \theta_{s-} ) + g_{4} \cos(\sqrt{4\pi} \phi_{s-}) \right]\\
&+ \frac{\cos(\sqrt{4\pi}\phi_{c-})}{2(\pi a)^2} \left[ g_{u3} \cos( \sqrt{4\pi} \theta_{s-} ) + g_{u4} \cos(\sqrt{4\pi} \phi_{s-}) \right]\end{split}
\label{eq:Heff-cminus-sminus}
\end{align}
where we absorbed the expectation values of the $s+$ sector in the effective coupling constants $g^*_\alpha = 2g_\alpha\ev{\cos(\sqrt{4\pi}\phi_{s+})}/(2\pi a)^{2}$, where
$g_\alpha=g_{s1},g_{s2},g_{5},g_{u5}$ respectively.
Let us consider the subspace of parameter space defined by $g_{s2}=g_3=g_{u3}=0$. From the RG equations it can be inferred that once we start with this initial
condition, the RG flow will remain on the hypersurface defined by $g_{s2}=g_3=g_{u3}=0$.
In this regime the Luttinger parameters $K_{c\pm}$ and $K_{s\pm}$ obey the following RG equations
\begin{subequations}
\begin{align}
&\frac{dK_{c+}}{dl} =0\\
&\frac{dK_{s+}}{dl} = -\frac{K^2_{s+}}{8\pi^2}(g^2_{s1}+g^2_{5}) \\
&\frac{dK_{s-}}{dl} = -\frac{K^2_{s-}}{8\pi^2}(g^2_{s1}+g^2_4)\\
&\frac{dK_{c-}}{dl} = \frac{1}{8\pi^2} (g^2_4+ g^2_5) - \frac{K^2_{c-}}{8\pi^2}(g^2_{u4}+g^2_{u5})
\end{align}
\end{subequations}
The first equation states that the Luttinger parameter of the decoupled $c+$ sector does not renormalize.
The second and the third equations state that $K_{s\pm}$ renormalize to small values, $K_{s\pm} \to 0$.
This means that both $s\pm$ sectors open up a gap, such that the vertex operators of $\phi_{s\pm}$ acquire expectation values.
The effective Hamiltonian for the regime where $\phi_{s+}$ is pinned is
\begin{align}
\begin{split}
{\cal H}^{c-}_\text{eff} =& \frac{v_{c-}}{2} \left\{ K^{}_{c-}(\partial_x\theta_{c-})^2+K_{c-}^{-1}(\partial_x\phi_{c-})^2 \right\}\\
&+\frac{g^{c-}_\theta}{\pi} \cos(\sqrt{4\pi}\theta_{c-})+ \frac{g^{c-}_{\phi}}{\pi}\cos(\sqrt{4\pi}\phi_{c-})
\label{eq:effective-hamiltonian-c-}
\end{split}
\end{align}
in which $g^{c-}_{\theta}=\frac{{\cal C}_{s+}}{2\pi a}g_{4}$ and $g^{c-}_{\phi}=\frac{{\cal C}_{s+}}{2\pi a}g_{u4}$ where ${\cal C}_{s+} = \ev{\cos(\sqrt{4\pi}\phi_{s+})}$. This
is the same effective theory as Eq.\eqref{eq:effective-hamiltonian} in section \ref{sec:PDW-phase-transition}, except that it is written for the dual fields of the charge
$c-$ sector (instead of the spin sector). It predicts the existence of a pair of dual phases separated by a phase transition in the Ising universality class.
The duality symmetry of Eq.\eqref{eq:effective-hamiltonian-c-} will be denoted by ${\mathbb Z}^{c-}_{2}$.
It relates the states represented by the SC order parameters $\Delta_{\pm}$ to the states with the same parity represented by $n_\pm$. Similarly, the $\tilde\Delta_\pm$
phases and the $\tilde n_\pm$ phases are dual under ${\mathbb Z}^{c+}_{2} \times {\mathbb Z}^{c-}_{2}$. A similar analysis holds in the $s-$ sector.
The states with the same parity in $(\Delta_\pm,\tilde n_\pm)$ and $(\tilde
\Delta_\pm,n_\pm)$ are dual under ${\mathbb Z}^{c+}_2\times{\mathbb Z}^{s-}_2$.
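As an illustration (again our own algebra, with signs meaningful only up to the dropped Klein factors), applying $(\phi_{c+},\theta_{c+})\to(\theta_{c+},-\phi_{c+})$ together with $(\phi_{s-},\theta_{s-})\to(\theta_{s-},-\phi_{s-})$ to Eq.\eqref{eq:Delta+-bosonized-pi-flux} yields
\begin{align*}
\Delta_+ \mapsto e^{\text{i}\sqrt{\pi}\phi_{c+}} \big\{ &\cos(\sqrt{\pi}\theta_{c-})\cos(\sqrt{\pi}\phi_{s+})\cos(\sqrt{\pi}\theta_{s-}) \\
&+ \text{i} \sin(\sqrt{\pi}\theta_{c-})\sin(\sqrt{\pi}\phi_{s+})\sin(\sqrt{\pi}\theta_{s-})\big\} = -\tilde n_+^\dagger ,
\end{align*}
so the uniform SC order parameter $\Delta_+$ is indeed mapped onto the CDW order parameter $\tilde n_+$ (more precisely, onto its adjoint) under ${\mathbb Z}^{c+}_2\times{\mathbb Z}^{s-}_2$.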
On the other hand, if we assume that there is a relation between some of the couplings
(up to the restrictions imposed by the $SU(2)$ spin invariance), we arrive at a system that can be solved by refermionization.
This is discussed in detail in Appendix \ref{sec:refermionization-pi-flux}. Depending on the relations between the coupling constants the system may be in one of the
phases we discussed above or be quantum critical. We find two types of quantum criticality.
One possibility is an Ising quantum critical point at which one of the Majorana fermions becomes massless.
Clearly we have four choices for this. On the other hand, we also find a case in which two Majorana fermions become massless. In this case
the system has a quantum critical regime which can be described as an effective Luttinger model coupled to a massive Thirring model.
Away from the quantum critical regime this system becomes a theory of four coupled massive Majorana fermions.
\section{Conclusions}
\label{sec:conclusions}
In this paper we investigated the mechanisms of formation of pair-density-wave superconducting order in quasi-one-dimensional systems.
Although at present time the existence and relevance of the PDW state to the interpretation of experiments in the cuprate superconductors can
be argued on purely phenomenological grounds, we know that this is not a state that is naturally favored in a weak coupling BCS theory.
The main motivation of this work is to investigate the mechanisms of formation of PDW order. For this reason it is natural to examine how (and if)
it appears in one- and quasi-one-dimensional systems.
Here we investigate the occurrence of PDW phases in two models of two-leg ladders. In the first model we reexamined the properties of the spin-gap phase of
a model of a two-leg ladder in the regime where the microscopic interactions are repulsive and showed that it includes a phase with PDW order.
Here we showed that within the repulsive regime, a PDW state exists provided that one of the bands, the bonding band for example, is kept at half filling.
We showed that in this regime the phase diagram of the ladder has, in addition to a conventional Luttinger liquid phase, two superconducting phases:
a phase with uniform superconducting order (with power law correlations) and a PDW phase, a superconducting state (again with power law correlations)
but with wave vector $Q_\text{PDW}=\pi$. We also investigated the nature of the quantum phase transition between these two superconducting states and
showed that it is in the Ising universality class. We discussed in detail the connections that exist between this system and the Kondo-Heisenberg chain. In particular,
much as in the case of the Kondo-Heisenberg chain, the PDW order parameter in the two-leg ladder is a composite operator of two order parameters of the bonding and
anti-bonding bands which separately have only short range order. Thus this is a highly non-BCS realization of PDW order.
By extending the analysis to the case other commensurate fillings of the bonding band, we showed that the state with PDW order arises in conjunction with the
development of a commensurate CDW state. In this sense this result embodies the notion of intertwined orders proposed in Ref.[\onlinecite{berg-2009a}].
We also investigated the existence of PDW phases in an extended Hubbard-Heisenberg model on a two-leg ladder with flux $\Phi$ per plaquette. We showed that
commensurate PDW phases appear in this system when the flux is $\Phi=\pi$ per plaquette. In contrast to the case of the conventional ladder, this realization of PDW
order in the flux $\Phi=\pi$ ladder can be expressed as a bilinear of fermion operators. In this sense this realization of the PDW state is closer in spirit to the construction of
FFLO states, although in the problem at hand the spin rotational symmetry is kept unbroken at all levels. PDW order also appears at other values of the flux, but only when
certain commensurability conditions are met, just as is the case in the conventional two-leg ladder.
There are still several interesting open questions. While the results of this work, and the earlier results of Ref.[\onlinecite{berg-2010}], show how the pair-density-wave
state arises together with a spin gap in a system with repulsive interactions, the ordering wave vector we find is always commensurate. However, there is no reason in
principle for the PDW ordering wave vector to be commensurate. The root of this phenomenon is the magnetic mechanism of the PDW order, which is present in both the
two-leg ladder and in the Kondo-Heisenberg chain. Indeed in both cases the ordering wave vectors of the PDW and of the spin order (even though it becomes short
ranged by the development of the spin gap) are the same. On the other hand, it is not possible to have incommensurate magnetic order (even with power law
correlations) in one dimension with full $SU(2)$ spin rotational invariance. Indeed it is known from work in frustrated one-dimensional systems that the incommensurate
magnetic state is preempted in one dimension by a dimerized state with a spin gap. Naturally, one way around this problem is to consider systems with a weak magnetic
anisotropy. At any rate the construction of a system with incommensurate PDW order is an interesting open problem.
\begin{acknowledgments}
We thank Erez Berg and Steven Kivelson for very stimulating discussions and a previous collaboration which motivated this work, and G. Roux for pointing out Refs.~\onlinecite{Roux-2007b} and \onlinecite{Roux-2007} to us.
This work was supported in part by the National Science Foundation, under grants DMR 0758462 and DMR-1064319 (EF) at the University of Illinois,
and by the U.S. Department of Energy, Division of Materials Sciences under Award No.
DE-FG02-07ER46453 through the Frederick
Seitz Materials Research Laboratory of the University of Illinois.
\end{acknowledgments}
\section{Proof of Theorem \ref{main}}
Using \cite{L1} and \cite{L2}, we determined in \cite{BO2} the reflective index $i_r(M)$ of all irreducible Riemannian symmetric spaces $M$ of noncompact type and the reflective submanifolds $\Sigma$ in $M$ for which $i_r(M) = \mbox{codim}(\Sigma)$. Using duality between Riemannian symmetric spaces of noncompact type and of compact type, we obtain Table \ref{reflindex} for the reflective index $i_r(G)$ of all simply connected, compact simple Lie groups and the reflective submanifolds $\Sigma$ in $G$ for which $i_r(G) = \mbox{codim}(\Sigma)$.
\begin{table}[h]
\caption{The reflective index $i_r(G)$ of simply connected, compact simple Lie groups}
\label{reflindex}
{\footnotesize\begin{tabular}{ | p{2cm} p{3.5cm} p{2cm} p{2cm} p{2cm} |}
\hline \rule{0pt}{4mm}
\hspace{-1mm}$G$ & $\Sigma$ & $\dim(G)$ & $i_r(G)$ & Comments \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$SU_2$ & $SU_2/S(U_1U_1)$ & $3$ & $1$ & \\
$SU_3$ & $SU_3/SO_3$ & $8$ & $3$ & \\
$SU_{r+1}$ & $S(U_rU_1)$ & $r(r+2)$ & $2r$ & $r \geq 4$ \\
$Spin_5$ & $Spin_4$, $SO_5/SO_2SO_3$ & $10$ & $4$ & \\
$Spin_{2r+1}$ & $Spin_{2r}$ & $r(2r+1)$ & $2r$ & $r \geq 3$ \\
$Sp_r$ & $Sp_{r-1}Sp_1$ & $r(2r+1)$ & $4r-4$ & $r \geq 3$\\
$Spin_{2r}$ & $Spin_{2r-1}$ & $r(2r-1)$ & $2r-1$ & $r \geq 3$ \\
$E_6$ & $F_4$ & $78$ & $26$ &\\
$E_7$ & $E_6U_1$ & $133$ & $54$ & \\
$E_8$ & $E_7Sp_1$ & $248$ & $112$ & \\
$F_4$ & $Spin_9$ & $52$ & $16$& \\
$G_2$ & $G_2/SO_4$ & $14$ & $6$& \\[1mm]
\hline
\end{tabular}}
\end{table}
Note that Table \ref{reflindex} leads to Table \ref{Liegroup} when replacing $i_r(G)$ with $i(G)$ and adding $\Sigma = SU_3$ in the row for $G_2$. The two problems we thus need to solve for each $G$ are:
\begin{itemize}
\item[(1)] prove that there exists no non-reflective totally geodesic submanifold $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) < i_r(G)$;
\item[(2)] determine all non-reflective totally geodesic submanifolds $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) = i_r(G)$.
\end{itemize}
The following result is a crucial step towards the solution of the two problems:
\begin{thm}[Ikawa, Tasaki \cite{IT}] \label{IkawaTasaki}
A necessary and sufficient condition that a totally geodesic submanifold $\Sigma$ in a compact connected simple Lie group is maximal is that $\Sigma$ is a Cartan embedding or a maximal Lie subgroup.
\end{thm}
The Cartan embeddings are defined as follows. Let $G/K$ be a Riemannian symmetric space of compact type and $\sigma \in \mbox{Aut}(G)$ be an involutive automorphism of $G$ such that $\mbox{Fix}(\sigma)^o \subset K \subset \mbox{Fix}(\sigma)$, where
\[
\mbox{Fix}(\sigma) = \{g \in G : \sigma(g) = g\}
\]
and $\mbox{Fix}(\sigma)^o$ is the identity component of $\mbox{Fix}(\sigma)$. By definition, the automorphism $\sigma$ fixes all points in $K$ and the identity component $K^o$ of $K$ coincides with $\mbox{Fix}(\sigma)^o$.
The Cartan map of $G/K$ into $G$ is the smooth map
\[
f : G/K \to G\ ,\ gK \mapsto \sigma(g)g^{-1}.
\]
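Note that $f$ is well defined, since the value $\sigma(g)g^{-1}$ does not depend on the choice of representative of the coset $gK$: for every $k \in K$ we have $\sigma(k) = k$, and hence
\[
\sigma(gk)(gk)^{-1} = \sigma(g)\sigma(k)k^{-1}g^{-1} = \sigma(g)g^{-1}.
\]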
The Cartan map $f$ is a covering map onto its image $\Sigma = f(G/K)$.
Let $\theta \in \mbox{Aut}(G)$ be the involutive automorphism on $G$ defined by inversion, that is,
\[
\theta : G \to G\ ,\ g \mapsto g^{-1}.
\]
We now define a third involutive automorphism $\rho \in \mbox{Aut}(G)$ by $\rho = \theta \circ \sigma$. By definition, we have
\[
\rho(g) = \theta(\sigma(g)) = \sigma(g)^{-1} = \sigma(g^{-1})
\]
for all $g \in G$. Moreover, for all $g \in G$ we have
\begin{align*}
\rho(f(gK)) & = \rho(\sigma(g)g^{-1}) = \sigma((\sigma(g)g^{-1})^{-1}) = \sigma(g\sigma(g)^{-1}) = \sigma(g\sigma(g^{-1})) \\
& = \sigma(g)\sigma^2(g^{-1}) = \sigma(g)g^{-1} = f(gK).
\end{align*}
Thus the automorphism $\rho$ fixes all points in $\Sigma$.
The automorphisms $\sigma,\theta,\rho \in \mbox{Aut}(G)$ are involutive isometries of $G$, where $G$ is considered as a Riemannian symmetric space with a bi-invariant Riemannian metric. Geometrically, $\theta$ is the geodesic symmetry of $G$ at the identity $e \in G$ and its differential at $e$ is
\[
d_e\theta : T_eG \to T_eG\ ,\ X \mapsto -X.
\]
The differential of $\sigma$ at $e$ is
\[
d_e\sigma : T_eG \to T_eG\ ,\ X \mapsto
\begin{cases}
X & \mbox{if } X \in T_eK, \\
-X & \mbox{if } X \in \nu_eK,
\end{cases}
\]
where $\nu_eK$ denotes the normal space of $K$ at $e$. This shows that $\sigma$ is the geodesic reflection of $G$ in the identity component $K^o$ of $K$. In particular, $K^o$ (and hence also $K$) is a totally geodesic submanifold of $G$. Since $\rho = \theta \circ \sigma$, the differential of $\rho$ at $e$ is
\[
d_e\rho : T_eG \to T_eG\ ,\ X \mapsto
\begin{cases}
X & \mbox{if } X \in \nu_eK, \\
-X & \mbox{if } X \in T_eK.
\end{cases}
\]
It follows that there exists a connected, complete, totally geodesic submanifold $N$ of $G$ with $e \in N$ and $T_eN = \nu_eK$. We saw above that $\rho$ fixes all points in $\Sigma$, which implies $\Sigma \subset N$ since $\Sigma$ is connected. Moreover, since $\dim(\Sigma) = \dim(G) - \dim(K) = \mbox{codim}(K) = \dim(N)$ and $\Sigma$ is complete we get $\Sigma = N$. It follows that $\Sigma$ is a totally geodesic submanifold of $G$. In fact, we have proved that both $K^o$ and $\Sigma$ are reflective submanifolds of $G$ which are perpendicular to each other at $e$.
In view of Theorem \ref{IkawaTasaki} it therefore remains to investigate the maximal Lie subgroups of $G$. The connected maximal Lie subgroups of compact simple Lie groups are well known from classical theory. Due to connectedness we can equivalently consider maximal subalgebras of compact simple Lie algebras. In Table \ref{maxsubalgebra} we list the maximal subalgebras of minimal codimension in compact simple Lie algebras (see e.g.\ \cite{Ma}).
\begin{table}[h]
\caption{Maximal subalgebras ${\mathfrak{h}}$ of minimal codimension $d({\mathfrak{g}})$ in compact simple Lie algebras ${\mathfrak{g}}$}
\label{maxsubalgebra}
{\footnotesize\begin{tabular}{ | p{2cm} p{2cm} p{2cm} |}
\hline \rule{0pt}{4mm}
\hspace{-1mm}${\mathfrak{g}}$ & ${\mathfrak{h}}$ & $d({\mathfrak{g}})$ \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
${\mathfrak{s}}{\mathfrak{u}}_{r+1}$ & ${\mathfrak{s}}{\mathfrak{u}}_r \oplus \mathbb{R}$ & $2r$ \\
${\mathfrak{s}}{\mathfrak{o}}_{2r+1}$ & ${\mathfrak{s}}{\mathfrak{o}}_{2r}$ & $2r$ \\
${\mathfrak{s}}{\mathfrak{p}}_r$ & ${\mathfrak{s}}{\mathfrak{p}}_{r-1} \oplus {\mathfrak{s}}{\mathfrak{p}}_1$ & $4r-4$ \\
${\mathfrak{s}}{\mathfrak{o}}_{2r}$ & ${\mathfrak{s}}{\mathfrak{o}}_{2r-1}$ & $2r-1$ \\
${\mathfrak{e}}_6$ & ${\mathfrak{f}}_4$ & $26$ \\
${\mathfrak{e}}_7$ & ${\mathfrak{e}}_6 \oplus \mathbb{R}$ & $54$ \\
${\mathfrak{e}}_8$ & ${\mathfrak{e}}_7 \oplus {\mathfrak{s}}{\mathfrak{p}}_1$ & $112$ \\
${\mathfrak{f}}_4$ & ${\mathfrak{s}}{\mathfrak{o}}_9$ & $16$ \\
${\mathfrak{g}}_2$ & ${\mathfrak{s}}{\mathfrak{u}}_3$ & $6$ \\[1mm]
\hline
\end{tabular}}
\end{table}
We can now finish the proof of Theorem \ref{main}. From Tables \ref{reflindex} and \ref{maxsubalgebra} we get $i_r(G) \leq d({\mathfrak{g}})$. Theorem \ref{IkawaTasaki} then implies $i(G) = i_r(G)$. Using Table \ref{reflindex} we obtain the column for $i(G)$ in Table \ref{Liegroup}.
To find all $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) = i(G)$ we first note that $i(G) < d({\mathfrak{g}})$ if and only if $G \in \{SU_2,SU_3\}$. In this case $\Sigma$ must be a Cartan embedding and hence a reflective submanifold. From Table \ref{reflindex} we obtain that $\Sigma = SU_2/S(U_1U_1)$ if $G = SU_2$ and $\Sigma = SU_3/SO_3$ if $G = SU_3$. Now assume that $i(G) = d({\mathfrak{g}})$. Then $\Sigma$ is either a Cartan embedding (and then $\Sigma$ is as in Table \ref{reflindex}) or a maximal connected subgroup $H$ of $G$ for which ${\mathfrak{h}}$ has minimal codimension $d({\mathfrak{g}})$ (and then ${\mathfrak{h}}$ is as in Table \ref{maxsubalgebra}). By inspection we see that such $H$ is reflective unless $G = G_2$, in which case we get the non-reflective totally geodesic submanifold $SU_3$ of $G_2$ satisfying $\mbox{codim}(SU_3) = 6 = i(G_2)$. This finishes the proof of Theorem \ref{main}.
Regarding our conjecture $i(M) = i_r(M)$ if and only if $M \neq G_2^2/SO_4$, we list in Table \ref{summary} the irreducible Riemannian symmetric spaces of noncompact type for which the conjecture remains open.
\begin{table}[h]
\caption{The reflective index $i_r(M)$ for irreducible Riemannian symmetric spaces $M$ of noncompact type for which the conjecture $i(M) = i_r(M)$ is still open and reflective submanifolds $\Sigma$ of $M$ with $\mbox{codim}(\Sigma) = i_r(M)$}
\label{summary}
{\footnotesize\begin{tabular}{ | p{2.9cm} p{3.7cm} p{1.5cm} p{0.8cm} p{2cm} |}
\hline \rule{0pt}{4mm}
\hspace{-1mm}$M$ & $\Sigma$ & $\dim M$ & $i_r(M)$ & Comments \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$SU^*_{2r+2}/Sp_{r+1}$ & $\mathbb{R} \times SU^*_{2r}/Sp_r$ & $r(2r+3)$ & $4r$ & $r \geq 3$ \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$Sp_r(\mathbb{R})/U_r$ & $\mathbb{R} H^2 \times Sp_{r-1}(\mathbb{R})/U_{r-1}$ & $r(r+1)$ & $2r-2$ & $r \geq 6$\\
$SO^*_{4r}/U_{2r}$ & $SO^*_{4r-2}/U_{2r-1}$ & $2r(2r-1)$ & $4r-2$ & $r \geq 3$ \\
$Sp_{r,r}/Sp_rSp_r$ & $Sp_{r-1,r}/Sp_{r-1}Sp_r$ & $4r^2$ & $4r$ & $r \geq 3$ \\
$E_7^{-25}/E_6U_1$ & $E_6^{-14}/Spin_{10}U_1$ & $54$ & $22$ & \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$Sp_{r,r+k}/Sp_rSp_{r+k}$ & $Sp_{r,r+k-1}/Sp_rSp_{r+k-1}$ & $4r(r+k)$ & $4r$ & $r \geq 3, k \geq 1$, $r > k+1$ \\
$SO^*_{4r+2}/U_{2r+1}$ &$SO^*_{4r}/U_{2r}$ & $2r(2r+1)$ & $4r$ & $r \geq 3$ \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$E_6^6/Sp_4$ & $F_4^4/Sp_3Sp_1$ & $42$ & $14$ & \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$E_7^7/SU_8$ & $\mathbb{R} \times E^6_6/Sp_4$ & $70$ & $27$ & \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$E_8^8/SO_{16}$ & $\mathbb{R} H^2 \times E_7^7/SU_8$ & $128$ & $56$ & \\[1mm]
\hline \rule{0pt}{4mm}
\hspace{-2mm}
$E_6^2/SU_6Sp_1$ & $F_4^4/Sp_3Sp_1$ & $40$ & $12$ & \\
$E_7^{-5}/SO_{12}Sp_1$ & $E_6^2/SU_6Sp_1$ & $64$ & $24$ & \\
$E_8^{-24}/E_7Sp_1$ & $E_7^{-5}/SO_{12}Sp_1$ & $112$ & $48$ & \\[1mm]
\hline
\end{tabular}}
\end{table}
\section{INTRODUCTION}
The phase-ordering dynamics of systems quenched from a high-temperature
disordered state into an ordered state is a problem of great relevance in
the description of out-of-equilibrium pattern formation \cite{REV}.
One well established property is the onset of dynamic scaling,
where the late-time behaviour of the order-parameter correlation
functions is described by scaling forms with a single time-dependent
length scale $L(t)$. Thus the real-space correlation function is found to
have the scaling form
\begin{equation}
C(r,t)= f(r/L(t))\ ,
\label{scaling:real}
\end{equation}
while its Fourier transform, the structure factor, has the
corresponding scaling form
\begin{equation}
S(k,t)= [L(t)]^d g(kL(t))\ .
\label{scaling:Fourier}
\end{equation}
Conventional experimental systems such as binary alloys and binary liquids
are described by a scalar order parameter. Recently, however, there has
been much interest in systems with more complicated order parameters such
as $n$-component vectors (the $O(n)$ model) \cite{REV}--\cite{TU}
and traceless symmetric tensors (nematic liquid crystals) \cite{Nematics}.
In this paper we restrict discussion to the $O(n)$ model.
Much numerical and theoretical effort has been devoted to
understanding the basic properties of systems that can support singular
topological defects, i.e.\ systems with $n \le d$. The presence
of such defects leads to a generalization of the usual Porod law for the
large-$q$ tail of the structure-factor scaling function $g(q)$, namely
$g(q) \sim q^{-(d+n)}$ for $q \gg 1$ \cite{Porod}.
This result is geometrical in origin, and is independent of whether or
not the order parameter is conserved by the dynamics \cite{BH}.
Very recently, the cases $n=d+1$, for which nonsingular topological
textures occur, have been studied numerically (for $d=2$) and
analytically (for $d=1$). Rutenberg and Bray \cite{OXY} found that the
$d=1$ XY model ($n=2$) exhibits a scaling violation due to the existence
of {\em two} relevant length scales: the phase coherence length and phase
winding length. On the other hand, the two-dimensional Heisenberg model
($n=3$) with non-conserved dynamics also violates dynamic scaling due,
it appears, to the existence of as many as three separate length scales,
related to individual texture size, the typical separation between textures
and the typical distance between textures of opposite charge \cite{TOT}.
These texture systems are, perhaps, the most complex of the phase
ordering systems.
By contrast, systems without topological defects ($n>d+1$) seem relatively
straightforward. There is good evidence for the simple scaling behavior
described by (\ref{scaling:real}) and (\ref{scaling:Fourier}), with
characteristic scale $L(t) \sim t^{1/2}$ for nonconserved dynamics.
The energy scaling approach of Bray and Rutenberg \cite{AR} shows that,
provided scaling holds, $L(t) \sim t^{1/2}$ is indeed correct for $n>d+1$
systems with nonconserved dynamics, and gives $L(t) \sim t^{1/4}$ for
conserved dynamics, again nicely consistent with simulation results \cite{RC}
and the Renormalization Group result of Bray \cite{BREN}.
Much less is known, however, about the form of the
structure-factor tail for $n>d+1$. The recent simulations of Rao and
Chakrabarty (RC) \cite{RC}, with conserved dynamics, for the cases
$d=1$, $n=3$ and $d=2$, $n=4$ show `squeezed exponential' behavior
[i.e.\ $g(q) \sim \exp(-bq^\delta)$ with $\delta>1$]. In this paper
we concentrate on systems with $n>d+1$ and nonconserved dynamics. We
consider the cases $d=2$, $n=4,5$ and $d=1$, $n=3,4,5$.
In each case we confirm the expected $t^{1/2}$ growth, and
find `stretched exponential' behavior [i.e.\ $g(q) \sim \exp(-bq^\delta)$
with $\delta \le 1$] for the tail of the structure factor, with an exponent
$\delta$ that appears to depend on $n$ and $d$.
In an attempt to understand the origin of this tail behaviour, we present
an analytical approach based on an approximate equation due to Bray and
Humayun (BH) \cite{BH92}, which is itself based on the `gaussian auxiliary
field' (GAF) method \cite{GAF} that describes rather well the form of the
structure factor for nonconserved systems with singular defects ($n\le d$).
For those systems, this method reproduces, in particular, the generalized
Porod tail. For $n>d+1$, the physical basis of the method is less clear.
However, the simple truncation of the equation at leading order in $1/n$,
proposed by BH in another context \cite{BH92}, leads to an exponential decay
of $g(q)$, modified by a power-law prefactor for $d>1$. It is noteworthy
that the asymptotic behaviour is nonanalytic in $1/n$: for $n$ strictly
infinite, the gaussian form $g(q) \sim \exp(-2q^2)$ is obtained. The
exponential asymptotics of the BH equation were noted in recent numerical
studies by Castellano and Zannetti \cite{Cast}.
Using a `hard-spin' model Newman et al.\ \cite{NEW} studied numerically
the dynamics of one-dimensional systems without defects for $n=3,4$ and $5$.
Measuring only the real-space correlations, they found that dynamic scaling
is obeyed with characteristic length $L(t)=t^{1/2}$. Moreover, they found
the real-space correlation function was very well fitted by a gaussian form,
which is the exact result in the limit $n \to \infty$.
The Fourier space analysis presented here, revealing stretched exponential
tails, shows that the good gaussian fits achieved in real space are
misleading.
Our main results can be summarized as follows: (i) For all our models the
characteristic length scale required to collapse the data for the
real-space correlation function and structure factor is $L(t)= t^{1/2}$,
in agreement with theoretical predictions \cite{AR};
(ii) The asymptotic behaviour of the structure factor is well described
by a stretched exponential of the form
$g(q) \sim \exp(-bq^\delta)$, where the exponent $\delta$ apparently
depends on both $n$ and $d$ and seems to be different from the value obtained
for the corresponding system with conserved dynamics \cite{RC};
(iii) An analytical treatment of the approximate BH equation, expected to
be valid at large (but finite) $n$, gives a simple exponential modified by
a power, $g(q) \sim q^{-(d-1)/2}\exp(-bq)$, with the {\em same} asymptotic
form for conserved dynamics.
The rest of the article is organized as follows.
In the next section, we introduce the CDS model based on the time-dependent
Ginzburg-Landau (TDGL) equation for a zero temperature quench, and we
describe the corresponding numerical procedure employed in the simulation.
Section 3 presents simulation results for a vector order
parameter with $n=4$ and $n=5$ components in one and two dimensions
and the one-dimensional $O(3)$ model. For the $d=2$ systems, we also
present data for the real-space correlation function to demonstrate the
dynamic scaling behaviour. We then discuss the procedure used to obtain
the asymptotic functional form of the structure factor tail.
Next we compare the data to the results of the approximate analytic
theory. Finally we make some concluding comments on our results and
give a brief summary.
\section{MODEL AND SIMULATIONS}
The dynamic evolution of a non-conserved vector order parameter
(model A) with $n$ components $\vec{\phi} = (\phi_1,\phi_2,\cdots,\phi_n)$,
for a zero-temperature quench, is described by a purely dissipative process
defined in terms of the following TDGL equation:
\begin{equation}
\frac{\partial \vec{\phi}({\bf x},t)}{\partial t} = - \Gamma \frac{\delta
F(\vec{\phi}({\bf x},t))}{\delta \vec{\phi}({\bf x},t)}\ , \label{eq:smai}
\label{TDGL}
\end{equation}
with $\Gamma$ a kinetic coefficient that we will set equal to unity,
and $F$ the free energy functional which generates the thermodynamic force,
\begin{equation}
F[\vec{\phi}({\bf x},t)] = \int d^d{\bf x} \left[ \frac{1}{2}
(\nabla \vec{\phi}({\bf x},t))^2 + V(\vec{\phi}({\bf x},t)) \right]\ ,
\label{F}
\end{equation}
with the potential defined as
\begin{equation}
V(\vec{\phi}({\bf x},t)) = \frac{1}{4}(1-\vec{\phi}^2({\bf x},t))^2
\end{equation}
where $\vec{\phi}^2 = \sum_{i=1}^{n} \phi^2_i ({\bf x},t)$.
The ground states, or fixed points of the dynamics, are determined by the
condition $\vec{\phi}^2 = 1$, which defines a degenerate manifold of states
connected by rotations. In the internal space of the order parameter,
this manifold is the surface of an $n$-dimensional sphere. At late times
the order parameter is saturated in length (i.e.\ lies on the ground
state manifold everywhere). Then the dynamics is driven by the
decrease of the free-energy associated with the term $(\nabla
\vec{\phi})^2$ in (\ref{F}), through a reduction in the magnitude of the
spatial gradients.
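For the quartic potential above, the functional derivative in (\ref{eq:smai}) can be evaluated explicitly; with $\Gamma = 1$ the equation of motion takes the familiar form
\begin{equation}
\frac{\partial \vec{\phi}({\bf x},t)}{\partial t}
= \nabla^2 \vec{\phi}({\bf x},t)
+ \left[1-\vec{\phi}^2({\bf x},t)\right]\vec{\phi}({\bf x},t)\ ,
\end{equation}
which is the form discretized by the cell-dynamics scheme introduced below.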
We can construct an explicit numerical scheme for the simulation based on
a computationally efficient algorithm, namely the cell-dynamic system (CDS)
\cite{OP1}, which updates the order parameter according to the rule
\begin{equation}
\vec{\phi}({\bf x},t+1) = H(\vec{\phi}({\bf x},t)) + \tau D \left[ \frac{1}{z}
\sum_{\bf x'}\vec{\phi}({\bf x'},t) - \vec{\phi}({\bf x},t) \right]\ ,
\end{equation}
with
\begin{equation}
H(\vec{\phi}({\bf x},t))= \vec{\phi}({\bf x},t) +\tau
\vec{\phi}({\bf x},t)\left(1-\vec{\phi}^2({\bf x},t)\right)\ ,
\end{equation}
where $z$ is the number of nearest neighbors, and $\tau$ and $D$ are
parameters that we choose to be $\tau =0.2$ and $D=0.5$
in our simulations.
The above numerical procedure is the same as that used by
Toyoki \cite{TU}, apart from the values of the
parameters $\tau$ and $D$. The CDS is an Euler-like algorithm and
for convenience in our analysis of the results we use a unit of time
equal to the update time step $\tau$. It should be noted (see Figures
1 and 3) that the scaling regime is reached very quickly in these
systems without defects, and very long runs are not necessary.
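A minimal sketch of this update rule, for a one-dimensional lattice with periodic boundary conditions ($z=2$), might read as follows. This is illustrative Python, not the production code; the parameter values are those quoted above and the function name is ours:

```python
import math

TAU, D = 0.2, 0.5  # CDS parameters used in the simulations

def cds_step(phi):
    """One cell-dynamics update of an O(n) field phi, given as a list of
    n-component lists, on a 1d periodic lattice (z = 2 neighbours)."""
    L, n = len(phi), len(phi[0])
    new = []
    for i in range(L):
        s = phi[i]
        mag2 = sum(c * c for c in s)
        left, right = phi[(i - 1) % L], phi[(i + 1) % L]
        new.append([
            s[a] + TAU * s[a] * (1.0 - mag2)                  # local map H
            + TAU * D * (0.5 * (left[a] + right[a]) - s[a])   # neighbour coupling
            for a in range(n)
        ])
    return new

# a uniform, saturated configuration (|phi| = 1) is a fixed point of the map
u = [1.0 / math.sqrt(3.0)] * 3
phi0 = [u[:] for _ in range(64)]
phi1 = cds_step(phi0)
assert all(abs(phi1[i][a] - phi0[i][a]) < 1e-12
           for i in range(64) for a in range(3))
```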
The two-dimensional systems consist of a square lattice of
size $256\, \times \,256$ with periodic boundary conditions.
The physical quantities are calculated as averages over 20 independent
distributions of initial conditions. The one-dimensional systems
have $L= 16384$ sites (with periodic boundary conditions) and we average
over $100$ independent runs. The initial conditions for the order parameter
components $\phi_i$ were randomly chosen from a uniform distribution
with support on the interval $(-0.1,0.1)$.
A quantity of interest that is computed during the course
of the numerical simulation in the two-dimensional models is
the two-point real-space correlation function
\begin{equation}
C({\bf r},t)= <\vec{\phi}({\bf x},t)\cdot\vec{\phi}({\bf x}+{\bf r},t)>
\label{eq:cor}
\end{equation}
where $<\cdots>$ stands for the average over the set of independent
initial conditions (or `runs'). A spherical average over all possible
distances $r=|{\bf r}|$ is performed to find the isotropic real-space
correlation $C(r,t)$. The other function of interest, calculated for
all the models, is the structure factor,
\begin{equation}
S({\bf k},t)= <\vec{\phi}({\bf k},t)\cdot\vec{\phi}({\bf -k},t)>\ .
\label{eq:struc}
\end{equation}
We also make a spherical average over all possible values of
${\bf k}$ with given $k= |{\bf k}|$.
In the calculation of these quantities at each time, the data are
`hardened' by replacing the order parameter at each point by a unit vector
in the same direction (the fixed point of the CDS iteration being a vector
of unit length). This procedure accelerates the entry into the dynamic
scaling regime, and helps us to elucidate the proper nature of the
asymptotic tail in the structure factor.
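A sketch of this measurement for a single one-dimensional configuration, including the hardening step, is given below. This is illustrative Python using a plain discrete Fourier transform; the actual simulations use much larger lattices, average over runs, and would use an FFT:

```python
import cmath, math

def harden(phi):
    """Replace each order parameter by a unit vector in the same direction."""
    out = []
    for s in phi:
        norm = math.sqrt(sum(c * c for c in s)) or 1.0
        out.append([c / norm for c in s])
    return out

def structure_factor(phi):
    """S(k) = phi(k).phi(-k) for one 1d configuration (plain O(L^2) DFT)."""
    phi = harden(phi)
    L, n = len(phi), len(phi[0])
    S = []
    for k in range(L):
        s = 0.0
        for a in range(n):
            amp = sum(phi[x][a] * cmath.exp(2j * math.pi * k * x / L)
                      for x in range(L))
            s += abs(amp) ** 2
        S.append(s / L)
    return S

# a fully ordered state puts all the weight in the k = 0 mode
phi = [[2.0, 0.0] for _ in range(32)]   # hardening rescales this to (1, 0)
S = structure_factor(phi)
assert abs(S[0] - 32.0) < 1e-9 and all(abs(v) < 1e-9 for v in S[1:])
```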
\section{RESULTS}
Dynamic scaling is observed for all the models studied.
The scaling regime is reached at quite early times, in agreement
with previous studies. We show that dynamic scaling holds in the
two-dimensional systems ($n=4,5$), using the characteristic length
$L(t) = t^{1/2}$ deduced from theoretical considerations \cite{AR}.
This agrees with earlier simulations of Bray and Humayun using `hard-spin'
dynamics \cite{BH90}.
Figure 1(a) presents a plot for $d=2$, $n=4$ of the correlation function
(\ref{scaling:real}) as a function of distance $r$ for several
times, while in Figure 1(b) we show the collapsed dynamic scaling
function when the analysis is made using the
scaling variable $x = r/L(t)$. As can be seen from the Figure,
the scaling function $f(x)$ is a monotonically decreasing function
with the generic featureless shape that is characteristic of
non-conserved $O(n)$ models.
It is of some interest to investigate the small-$x$ behavior of the
real-space scaling function $f(x)$. In systems with $n \le d$, the existence
of singular topological defects leads to a non-analytic term of the form
$|x|^n$ (with an additional $\ln x$ factor for even $n$), which leads to
the $k^{-(d+n)}$ Porod tail in Fourier space \cite{Porod,BH}.
In the present case, where
$n>d$, we expect no such short-distance singularities.
Therefore, we consider an expansion of the form
$f(x) = 1- \alpha x^2 + \beta x^4 - \cdots $.
In table 1 we present the parameters $\alpha$ and $\beta$ determined from
the simulations in the range $x < 0.5$. The ratio
$r=\beta/\alpha^2$ should be a universal number for given $n$ and $d$.
It will be seen from table 1 that this ratio has the value
$r \simeq 0.59$ for $n=4$, different from the value
$1/2$ obtained for a gaussian function, which is the exact result
for the limit $n \to \infty$. For $n=5$, table 1 gives $r \simeq 0.49$,
already consistent with the large-$n$ result. However, in the absence of
any short-distance singularity, the small-$x$ behavior provides no useful
information on the nature of the tail in the structure factor.
Consequently, it is more convenient to investigate
directly the simulation results of the structure factor and
extract from them the asymptotic behaviour. We shall see that the
behavior in Fourier space is clearly non-gaussian, even for $n=5$.
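As a consistency check on the universal ratio, the gaussian value $r=\beta/\alpha^2=1/2$ quoted above can be recovered numerically from finite-difference estimates of the Taylor coefficients. The following is an illustrative Python sketch (the function name and step size are ours):

```python
import math

# Small-x expansion f(x) = 1 - a*x^2 + b*x^4 - ... for the gaussian
# f(x) = exp(-alpha*x^2), with a, b estimated by central differences at x = 0.
def gaussian_ratio(alpha, h=1e-2):
    f = lambda x: math.exp(-alpha * x * x)
    # f''(0) = -2a and f''''(0) = 24b for the expansion above
    d2 = (f(h) - 2 * f(0) + f(-h)) / h ** 2
    d4 = (f(2 * h) - 4 * f(h) + 6 * f(0) - 4 * f(-h) + f(-2 * h)) / h ** 4
    a = -d2 / 2.0
    b = d4 / 24.0
    return b / a ** 2

# for a gaussian the universal ratio beta/alpha^2 equals 1/2 exactly
assert abs(gaussian_ratio(1.0) - 0.5) < 1e-3
assert abs(gaussian_ratio(0.7) - 0.5) < 1e-3
```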
As expected, given the absence of topological defects, the results
indicate (Fig.\ 2) that the decay of the structure factor is clearly
faster than a power law, in contrast to Toyoki's interpretation of his
own similar results \cite{TU}. In order to demonstrate that the
tail is well described by the stretched exponential form
\begin{equation}
g(q) \sim A\exp(-bq^{\delta})\ ,
\label{se}
\end{equation}
where $q=kL(t)$ is the scaling variable in momentum space, we attempt
to find the corresponding power $\delta$ in the exponential by plotting
$\ln g$ versus $q^\delta$ and adjusting the value of $\delta$ until
the best linear behavior is obtained in the regime $q>1$.
During the fitting procedure the other two parameters of the fit,
$A$ and $b$, are readily determined.
The criterion used for the optimum fit is based on the
Pearson correlation coefficient (PCC), which measures the strength of
the linear relationship between two variables and varies between $-1$ (perfect
negative linear relationship) and $+1$ (perfect positive linear relationship).
We proceed as follows: first, we choose an exponent $\delta$ and then
perform a linear regression; next we change $\delta$ until the magnitude
of the PCC reaches its maximum. The regression coefficients are calculated
using the values of the scaling structure function at the last two
times in the simulation. The optimum values for the system with
$n=4$ components are $\delta= 0.435$ with a Pearson coefficient
of $-0.999998$. The other two parameters are $\ln A= 13.21$ and $b=8.19$.
This result is presented in Fig.\ 2(b).
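The fitting loop just described can be sketched as follows, on synthetic data. This is illustrative Python, not the analysis code used for the paper; the function names are ours:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def fit_delta(qs, gs, deltas):
    """Scan trial exponents and return the delta maximising |PCC| of
    ln g versus q**delta (the PCC itself is negative for a decaying tail)."""
    lng = [math.log(g) for g in gs]
    return max(deltas, key=lambda d: abs(pearson([q ** d for q in qs], lng)))

# synthetic stretched-exponential data g(q) = A exp(-b q^0.6)
qs = [1.0 + 0.1 * i for i in range(100)]
gs = [50.0 * math.exp(-4.0 * q ** 0.6) for q in qs]
trial = [0.3 + 0.01 * i for i in range(100)]
best = fit_delta(qs, gs, trial)
assert abs(best - 0.6) < 0.02
```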
We turn now to the description of the case $n=5$, following a similar
analysis to the $n=4$ model. Fig. 3(a) shows the correlation function
as a function of distance for different times. In this model the collapse
is also achieved using the characteristic length $L(t)= t^{1/2}$, as can be
observed in Fig.\ 3(b). Therefore, both models are consistent
with dynamical exponent $z=2$. The corresponding
scaling plot for the structure factor is shown in Fig. 4(a).
A more significant difference appears in the
structure factor tail, which in this case
also has a stretched exponential form but with an apparently larger exponent.
Following an analysis similar to that used for $n=4$, we find that
the best-fit value of the exponent is $\delta= 0.613$,
and the corresponding PCC in the regression is $-0.999998$.
The other two parameters are $\ln A= 7.57$ and $b= 4.39$.
In Fig. 4(b) we plot $\ln g$ against
$q^\delta$ and the linear behaviour is clearly seen.
Comparison between the real-space correlation functions
of the $n=4$ and $n=5$ models shows that the scaling functions are very
similar; the main difference is that the scaling function decreases
slightly more slowly for $n=5$ than $n=4$.
This is reflected in the parameters of the fitting
function for the small-$x$ range: the amplitudes $\alpha$ and $\beta$
tend to decrease as $n$ increases (table 1).
We turn to the discussion of the simulation results for one-dimensional
systems. We shall describe the relevant behavior in Fourier space.
Real-space data has been presented in \cite{NEW}.
Our results show that in one-dimensional systems the asymptotic behavior
of the structure factor also has a stretched exponential form, but the
fitted exponents $\delta$ are larger than those of the
corresponding two-dimensional models, and close to unity for $n=4$ and 5.
In Fig.\ 5(a), we present the simulation results for the
scaling function $g(q)$ of the structure factor for the one-dimensional
Heisenberg Model ($n=3$). The continuous curve is the result of the
analytical approach described in section IV.
The analysis of the asymptotic behaviour gives an exponent $\delta= 0.79$
for the stretched exponential. Fig.\ 5(b) shows the
curve of $\ln[g(q)]$ versus $q^\delta$, where the linear behaviour is
clearly observed. Similarly, we present the corresponding plots for the
$n=4$ model in Fig.\ 6, where the measured exponent is now $\delta = 0.98$,
while for $n=5$ we obtain $\delta = 1.02$ as is shown in Fig.\ 7.
The values of $\delta$ for the last two models are so close that in
practice it is difficult to distinguish between them. They are also
close to the value unity obtained from the approximate large-$n$ equation
discussed in the following section.
It is clear from the results for the one-dimensional
models that the scaling function in real space is not a gaussian
function, despite the good real-space fits to this form obtained
in \cite{NEW}. Moreover, the (effective) exponents $\delta$ for the $n=4,5$
models are bigger than for the corresponding models in two dimensions.
Therefore, we have evidence that the exponent $\delta$
increases with $n$, while it seems to decrease with $d$.
Note that the gaussian result obtained for $n=\infty$ corresponds
to $\delta=2$, so the results presented here for the structure-factor tail
are actually quite far from that limit. The analytical treatment presented
in the next section gives some indication of why this might be expected.
In particular, it suggests that the structure factor is dominated by a
simple exponential for $q \to \infty$ at fixed large $n$, while the familiar
gaussian form is recovered as $n \to \infty$ at fixed $q$.
We conclude this section by discussing briefly some alternative fitting
forms for the structure factor tail. First, however, we note that the
stretched exponential form (\ref{se}) describes the tail well over at
least 10 decades of $S(k)$ in all cases. Of course, this represents
a much smaller dynamic range (1 to $1\frac{1}{2}$ decades) in the
scaling variable $q=kL(t)$. Motivated by the analytical result [equation
(\ref{g}) below] of the approximate large-$n$ theory, other fitting forms
were tried for $d=2$ (the agreement with the large-$n$ theory already
being good for $n =4$ and 5 in $d=1$). A direct fit of (\ref{g}) does not
work well for $d=2$. Allowing for a general power-law prefactor,
$g(q) = Aq^{-x}\exp(-bq)$, gives a reasonable fit, but with very large
values for $x$ --- 5.6 for $n=4$, and 6.7 for $n=5$. Fixing $x=1/2$, but
allowing for a general stretched exponent $\delta$ again gives a reasonable
fit (with $\delta \simeq 0.68$ for $n=4$ and 0.70 for $n=5$), but over a
significantly reduced range of $q$. For these reasons we prefer the
unmodified stretched exponential (\ref{se}) as giving the simplest and most
convincing description of the large-$q$ data, at least for these small values
of $n$ in $d=2$. Of course, it is quite possible that the form (\ref{g})
will fit the data well at larger values of $n$.
\section{ANALYTICAL TREATMENT}
In an attempt to gain some analytical insight into the structure factor
asymptotics, we start from an approximate equation of motion for the
pair correlation function derived using the gaussian auxiliary field
approach pioneered by Mazenko \cite{Mazenko}.
We then make, for reasons that will become clear, the
further simplification of retaining only the leading nonlinearity as
$n \to \infty$. The resulting equation is then finally used to extract
the asymptotics of $g(q)$.
The GAF method for vector fields has been discussed in some detail
elsewhere. We refer the reader to the original papers \cite{GAF} and a
recent review \cite{REV} for a full exposition. The essence of the method
is a mapping from the original field variable $\vec{\phi}$ to an `auxiliary
field' $\vec{m}$. The function $\vec{\phi}(\vec{m})$ satisfies the equation
$(1/2)\sum_{i=1}^n \partial^2 \vec{\phi}/\partial m_i^2
= \partial V/\partial\vec{\phi}$, where $V(\vec{\phi})$ is the potential in the
Ginzburg-Landau free energy (\ref{F}). With the boundary conditions $\vec{\phi}(0)=0$,
and $\vec{\phi}(\vec{m}) \to \vec{m}/|\vec{m}|$ for $|\vec{m}| \to \infty$,
this equation for $\vec{\phi}(\vec{m})$ represents the equilibrium profile
function for a spherically symmetric topological defect, with
$|\vec{m}|$ representing distance from the defect.
The (uncontrolled) approximation that $\vec{m}$ is a gaussian field,
and the imposition of the scaling form (\ref{scaling:real}), leads eventually to the
self-consistent equation \cite{REV,GAF}
\begin{equation}
f'' + \left(\frac{d-1}{x} + \frac{x}{4}\right)f'
+ \frac{\lambda}{2}\, \gamma \frac{dC}{d\gamma} = 0
\label{SC1}
\end{equation}
for the scaling function $f(x)$, where primes indicate derivatives.
In (\ref{SC1}) $\gamma$ is the normalized correlator of the auxiliary field,
$\gamma = \langle \vec{m}(1)\cdot\vec{m}(2) \rangle
/[\langle \vec{m}^2(1)\rangle \langle \vec{m}^2(2) \rangle]^{1/2}$,
where `1' and `2' represent the space-time points ${\bf x}_1,t$ and
${\bf x}_2,t$, and the function $C(\gamma)$ is given by
\begin{equation}
C(\gamma) = \frac{n\gamma}{2\pi}\,
\left[B\left(\frac{n+1}{2},\frac{1}{2}\right)\right]^2\,
F\left(\frac{1}{2},\frac{1}{2};\frac{n+2}{2};\gamma^2\right)\ ,
\label{BPT}
\end{equation}
where $B(x,y)$ is the beta function, and $F(a,b;c;z)$ the hypergeometric
function. The constant $\lambda$ in (\ref{SC1}) has to be adjusted so
that $f(x)$ vanishes sufficiently fast at infinity \cite{GAF}.
As should be clear from the above discussion, (\ref{SC1}) only really makes
sense for $n \le d$, based as it is on the presence of singular topological
objects whose positions are defined by the zeros of the field $\vec{\phi}$ or,
equivalently, by the zeros of $\vec{m}$. Indeed, the function $C(\gamma)$
has inbuilt structure that generates the Porod tail associated with
such defects. Specifically, in the short-distance limit, where $\gamma \to 1$,
the hypergeometric function in (\ref{BPT}) has a singular contribution of
order $(1-\gamma^2)^{n/2}$ (with a logarithmic correction for even $n$).
Since $1-\gamma^2 \sim x^2$ for small scaling variable $x$, this singular
term is of order $x^n$ (again, with a logarithm for even $n$), leading to the
power-law tail $g(q) \sim q^{-(d+n)}$ in Fourier space. Within the GAF
approach, this tail is obtained for {\em all} $n$ and $d$. For $n>d+1$,
however, neither singular topological objects nor nonsingular topological
textures exist, so the GAF result is qualitatively incorrect. Indeed,
this is to be expected since the GAF approach is specifically designed
to build in the defect structure.
So what should one do when there are no defects? We have seen that the
usual GAF approach always gives a Porod tail, for any $n$ and $d$: this is
unphysical for $n>d+1$, since the tail is a consequence of the presence of
topological defects. One way around this impasse is to artificially
approximate the full GAF equation (\ref{SC1}) by the form valid for
$n \to \infty$. In this limit $\gamma dC/d\gamma = f +f^3/n +O(1/n^2)$,
and (\ref{SC1}) becomes, correct to $O(1/n)$,
\begin{equation}
f'' + \left(\frac{d-1}{x} + \frac{x}{4}\right)f'
+ \frac{\lambda}{2}\,\left(f + \frac{1}{n}f^3\right) = 0\ .
\label{SC2}
\end{equation}
This step, admittedly ad hoc, has the desired effect of eliminating
the unwanted (for $n>d+1$) short-distance singularity in $f(x)$.
Eq.\ (\ref{SC2}) is the nonconserved version of the equation introduced
by BH to study the crossover from multiscaling to simple scaling
in the asymptotic dynamics of a {\em conserved} vector field at large but
fixed $n$ \cite{BH92}. Both conserved and nonconserved versions have
recently been studied numerically \cite{Cast}.
To extract analytically the large-$q$ behavior, we perform a
($d$-dimensional) Fourier transform of (\ref{SC2}). The resulting
equation for $g(q) \equiv \int d^dx\,f(x)\exp(i{\bf q}\cdot{\bf x})$ is
\begin{equation}
\left(\frac{d}{4}+q^2\right)g(q) + \frac{q}{4} g'(q) =
\frac{\lambda}{2}\,\left(g(q) + B(q)\right)\ ,
\label{SC3}
\end{equation}
where
\begin{equation}
B(q) = \frac{1}{n} \int d^dx\,f^3(x)\exp(i{\bf q}\cdot{\bf x})\ .
\label{B}
\end{equation}
If we assume an asymptotic form $g(q) \sim q^\nu\exp(-bq^\delta)$, with
$\delta<2$, then (\ref{SC3}) gives
\begin{equation}
B(q) \to \frac{2q^2}{\lambda} g(q)\ ,\ \ \ \ \ q \to \infty\ .
\label{largeq}
\end{equation}
In the Appendix, we show that consistency with (\ref{largeq}) requires
$\delta=1$ and $\nu = (1-d)/2$, i.e.
\begin{equation}
g(q) \to Aq^{(1-d)/2}\exp(-bq)\ ,\ \ \ \ \ q \to \infty\ .
\label{g}
\end{equation}
In real-space this implies that the function $f(z)$ has simple poles
in the complex $z$ plane at $z=\pm ib$. The value of $b$ is
not determined by this argument; instead one can derive (see Appendix)
the relationship
\begin{equation}
A^2 = (16\pi^2 n/\lambda)\,(2\pi b)^{d-1}
\label{A}
\end{equation}
between $b$ and the prefactor $A$ in the asymptotic form (\ref{g}).
The existence of these simple poles in real space also follows directly from
the real-space equation (\ref{SC2}). If one assumes a singularity of the
form $(z-z_0)^{-\gamma}$, with $\gamma>0$, then balancing the dominant terms
$f''$ and $\lambda f^3/2n$ gives immediately $\gamma=1$, i.e.\ a simple pole.
The position $z_0$ is not determined, but the residue $C$ of the pole is
given by $C=\mp i(4n/\lambda)^{1/2}$, where the two values correspond to the
poles $z_0 = \pm ib$. Using this result for $C$, one can readily recover
(\ref{A}) by contour methods, e.g.\ for $d=1$ one has
\begin{eqnarray}
g(q) & = & \int_{-\infty}^{+\infty} dx\,f(x)\exp(iqx) \nonumber \\
& \to & 2\pi(4n/\lambda)^{1/2}\exp(-bq)\ ,\ \ \ \ q \to \infty\ ,
\end{eqnarray}
where the second line, equivalent to (\ref{A}) for $d=1$, was obtained by
closing the contour in the upper half-plane.
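The contour computation can be checked numerically: a function with simple poles at $z=\pm ib$, such as the Lorentzian $b/(x^2+b^2)$, indeed has the Fourier transform $\pi e^{-bq}$, with a pure exponential tail. The following Python sketch (trapezoidal quadrature, parameters ours) is illustrative only:

```python
import math

def ft_lorentzian(q, b=1.0, X=200.0, N=400000):
    """Numerically evaluate int dx e^{iqx} b/(x^2+b^2) over [-X, X]
    by the trapezoidal rule (the integrand is even, so only the cosine
    part survives).  The contour result predicts pi*exp(-b*q)."""
    h = 2.0 * X / N
    total = 0.0
    for i in range(N + 1):
        x = -X + i * h
        w = 0.5 if i in (0, N) else 1.0
        total += w * math.cos(q * x) * b / (x * x + b * b)
    return total * h

for q in (0.5, 1.0, 2.0):
    assert abs(ft_lorentzian(q) - math.pi * math.exp(-q)) < 1e-3
```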
The approach outlined above gives the relation (\ref{A}) between $A$ and
$b$, but does not determine $b$ explicitly. We now give a heuristic
argument that $b \sim (\ln n)^{1/2}$ for large $n$. First we make an
observation concerning the value of $\lambda$. Equation (\ref{SC3}) with
$q=0$ gives
\begin{eqnarray}
\lambda & = & \frac{d}{2}\,\frac{g(0)}{g(0)+B(0)} \nonumber \\
& = & \frac{d}{2}\,\left[1 + \frac{1}{n}\int d^dx\,
f^3(x)/ \int d^dx\,f(x)\right]^{-1}\ .
\end{eqnarray}
In particular, $\lambda = d/2$ for $n=\infty$. For $n=\infty$, therefore,
(\ref{SC3}) becomes $q^2g(q)+(q/4)g'(q)=0$, with solution
$g(q) = (8\pi)^{d/2}\exp(-2q^2)$ [the prefactor being fixed by the
condition $f(0)=1$]. For $n$ large but finite, on the other hand, we have
seen that the asymptotic form is $g(q) \sim \exp(-bq)$. The crossover
between these two forms presumably occurs at some $q=q^*(n)$, with
$g(q) \sim \exp(-2q^2)$ for $1 \ll q \ll q^*$, and $g(q) \sim \exp(-bq)$
for $q \gg q^*$. Matching these two forms at $q=q^*$ gives $q^* \sim b$.
Next we evaluate $B(q)$ in the region $q \ll q^*$.
Here $g(q) \simeq (8\pi)^{d/2}\exp(-2q^2)$, so $f(x) \simeq \exp(-x^2/8)$,
giving $B(q) \sim (1/n)\exp(-2q^2/3)$. However, this decays more slowly
with $q$ than the other terms in (\ref{SC3}), which fall off as
$\exp(-2q^2)$. So the term involving $B(q)$ (evaluated for $q \ll q^*$)
becomes comparable with the other terms when $(1/n)\exp(-2q^2/3)
\sim \exp(-2q^2)$, i.e.\ when $q \sim (\ln n)^{1/2}$. This suggests
$q^* \sim (\ln n)^{1/2}$, and therefore $b \sim (\ln n)^{1/2}$.
The numerical data of Castellano and Zannetti \cite{Cast} certainly show
that $b$ increases extremely slowly with $n$.
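The gaussian transform pair used in this matching argument, $f(x)=\exp(-x^2/8)$ and $g(q)=(8\pi)^{1/2}\exp(-2q^2)$ in $d=1$, can be verified by direct quadrature. The Python sketch below is illustrative only (grid parameters are ours):

```python
import math

def ft_gaussian(q, X=30.0, N=60000):
    """Trapezoidal evaluation of int dx e^{iqx} exp(-x^2/8); the exact
    d = 1 result is sqrt(8*pi)*exp(-2*q^2), the n = infinity form."""
    h = 2.0 * X / N
    total = 0.0
    for i in range(N + 1):
        x = -X + i * h
        w = 0.5 if i in (0, N) else 1.0
        total += w * math.cos(q * x) * math.exp(-x * x / 8.0)
    return total * h

for q in (0.0, 0.5, 1.0):
    exact = math.sqrt(8.0 * math.pi) * math.exp(-2.0 * q * q)
    assert abs(ft_gaussian(q) - exact) < 1e-8
```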
To compare this approximate theory with our simulation data, we have solved
(\ref{SC1}) numerically for $d=1$, $n=3,4,5$, using the procedure described
in \cite{GAF}. The Fourier transform was then taken numerically, and the
`best fit by eye' to the structure-factor data was obtained by adjusting
the timescale in the theoretical curves, giving the results shown by the
continuous curves in Figures 5(a), 6(a) and 7(a). The corresponding
log-linear plots, which reveal the large-$q$ behaviour more clearly,
are shown in Figures 8(a)-(c). As might be expected, equation
(\ref{SC1}) [or its Fourier transform (\ref{SC3})] does not describe the
data quantitatively over the whole range of $q=kL(t)$, but it does give a
qualitatively correct description. There is an early parabolic region,
corresponding to a gaussian form for $g(q)$, which then gives way to a slower
decay that, at least for $n=4$ and 5, is consistent with the simple
exponential form predicted by (\ref{SC3}) but with a different coefficient
$b$ in the exponent. Given that the theory is, at best, a large-$n$ theory,
we regard these results as encouraging.
The $n=3$ data, however, and the $d=2$ data, do not
seem to fit a simple exponential, at least for the range of $q$ that we
have been able to explore. (This is of course implicit in the values
$\delta<1$ obtained for these systems from Figures 2, 4 and 5.) \ It may
well be that considerably larger values of $n$ are needed in $d=2$ than
in $d=1$ for the large-$n$ asymptotics to become apparent.
The above derivation of an exponential tail was specific to nonconserved
fields. What can we say for conserved fields? The fundamental equation
of motion for this case is obtained from the TDGL equation (\ref{TDGL})
by the replacement $\Gamma \to -\Gamma \nabla^2$. Applying the GAF method
to this equation, imposing the scaling form (\ref{scaling:real}) [but
with $L(t) = t^{1/4}$ for conserved fields], and taking the Fourier
transform, leads to \cite{BH92}
\begin{equation}
\left(\frac{d}{8}+q^4\right)g(q) + \frac{q}{8} g'(q) =
\frac{\lambda}{2}q^2\,\left(g(q) + B(q)\right)\ ,
\label{CONS}
\end{equation}
instead of (\ref{SC3}). [The definition (\ref{B}) of $B(q)$ differs by a
constant from that used in \cite{BH92}, where $\lambda$ was written as
$2q_m^2$, $q_m$ being the position of the maximum of $g(q)$ for large $n$.] \
Assuming the asymptotic form $g(q) \sim q^\nu \exp(-bq^\delta)$
for $q \to \infty$, (\ref{CONS}) gives (\ref{largeq}) once more, provided
$\delta <4$. Then our previous arguments apply, and the asymptotic form
(\ref{g}), with $A$ and $b$ related by (\ref{A}), is recovered. This
approach therefore predicts that the structure factors for conserved and
nonconserved systems will have the {\em same} asymptotic forms, at least
within the context of the BH truncation. The same conclusion was drawn
from recent numerical solutions of the BH equation \cite{Cast}.
\section{CONCLUSION}
In summary, we have studied the dynamics of phase ordering for models
without topological defects in one and two dimensions. We find that
scaling is achieved with the growth law $L(t)= t^{1/2}$.
The tail in the structure factor is well fitted by a stretched
exponential form. For the two-dimensional systems, table 1 summarizes
the relevant parameters describing the fits in real and Fourier
space. In contrast to systems with singular defects ($n\le d$), where the
generalized Porod form $g(q) \sim q^{-(d+n)}$ for the structure factor tail
is a consequence of the defect structure, and is independent of the presence
or absence of conservation laws, in systems without defects the functional
form does, apparently, differ for conserved and nonconserved systems.
We have shown, for example, that for the particular case of the $n=4$ model
in two dimensions the tail is well described, over the range of $q$
accessible to us, by a stretched exponential with exponent $\delta=0.435$,
differing from the result for the same model with conservation studied
by RC \cite{RC}, who found $\delta \simeq 1.7$.
Within the `toy' equation of Bray and Humayun \cite{BH92}, however, we have
shown that the true asymptotics are the {\em same} for conserved and
nonconserved dynamics. Of course, the BH equation is at best a large-$n$
theory, and the numerical results for nonconserved and conserved
dynamics may converge as $n$ is increased.
A related question is whether the exponents $\delta$ measured here and in
\cite{RC} are genuine asymptotic exponents, or effective exponents whose
values will change as the range of $q$ over which the fit is made is moved
to larger $q$. More extensive simulations may cast some light on this issue.
The `universal' (independent of conservation laws) Porod tail behavior
obtained for $n \le d$ is geometrical in origin, being a consequence of
the field structure induced by singular topological defects \cite{BH}.
As yet, however, we have no corresponding physical picture in the absence
of topological defects.
It is interesting that, within the simple model of equation (\ref{SC1}),
the exponent $\delta$ jumps discontinuously from $\delta=2$ at $n=\infty$
to $\delta=1$ for $n$ large but finite. More precisely, one can say that
$\delta=2$ corresponds to the limit $n \to \infty$ at fixed, large $q$,
while $\delta = 1$ corresponds to $q \to \infty$ at fixed, large $n$.
We have argued that the crossover between these limiting forms for
fixed, large $n$ occurs at $q \sim (\ln n)^{1/2}$.
This change of behavior depending on the order of the limits is
reminiscent of the result obtained from the conserved version of
(\ref{SC1}), where a novel `multiscaling' behavior is obtained for
$n \to \infty$ at fixed, large $t$ \cite{CZ}, while simple scaling is
recovered for $t \to \infty$ at fixed, large $n$ \cite{BH92}. For the
nonconserved case, one always has simple scaling. For both conserved
and nonconserved fields, however, the asymptotics of $g(q)$ are
sensitive to whether $n$ is large or truly infinite. This rules out,
for example, exploring the asymptotics by expanding around the large-$n$
solution in powers of $1/n$.
\acknowledgments
We thank Sanjay Puri for stimulating discussions during the early
stages of this work. F. Rojas thanks CONACYT (Mexico) for financial
support. This work was supported by EPSRC (UK) grant GR/J24782.
\section{APPENDIX}
In this Appendix we use (\ref{largeq}) to derive the asymptotic form
(\ref{g}) for $g(q)$. From the definition (\ref{B}) we have
\begin{equation}
B(q) = \frac{1}{n} \int_{\bf p}\int_{\bf k} g({\bf p})g({\bf k})
g({\bf q}-{\bf p}-{\bf k})\ ,
\end{equation}
where $\int_{\bf p} \equiv \int d^dp/(2\pi)^d$. Inserting the asymptotic
form
\begin{equation}
g(q) \to Aq^\nu\exp(-bq^\delta)\ ,
\end{equation}
gives
\begin{equation}
B(q) \to \frac{A^3}{n}\int_{\bf p}\int_{\bf k}F({\bf p},{\bf k},{\bf q})\,
\exp[-bE({\bf p},{\bf k},{\bf q})]\ ,
\end{equation}
where
\begin{eqnarray}
F({\bf p},{\bf k},{\bf q}) & = &
|{\bf p}|^\nu|{\bf k}|^\nu|{\bf q}-{\bf p}-{\bf k}|^\nu\ \nonumber \\
E({\bf p},{\bf k},{\bf q}) & = &
|{\bf p}|^\delta + |{\bf k}|^\delta +
|{\bf q}-{\bf p}-{\bf k}|^\delta\ .
\end{eqnarray}
We now scale out the $q$-dependence through the changes of variable
${\bf p}=q{\bf u}$, ${\bf k}=q{\bf v}$, ${\bf q}=q{\bf e}$, where
${\bf e}$ is a unit vector. Then
\begin{equation}
B(q) = \frac{A^3}{n}\,q^{2d+3\nu}\int_{\bf u}\int_{\bf v}
F({\bf u},{\bf v},{\bf e})\,
\exp[-bqE({\bf u},{\bf v},{\bf e})]\ .
\end{equation}
For $q \to \infty$, we can attempt to evaluate the ${\bf u}$ and ${\bf v}$
integrals using the method of steepest descents. This requires minimizing
the function $E({\bf u},{\bf v},{\bf e})$. The points requiring
consideration are the symmetry point, ${\bf u}={\bf v}={\bf e}/3$, and
the points ${\bf u}={\bf 0}={\bf v}$ and two similar points obtained by
permuting ${\bf u}$, ${\bf v}$ and ${\bf e}-{\bf u}-{\bf v}$. The
corresponding values of $E$ are $E({\bf e}/3,{\bf e}/3,{\bf e}/3)
= 3^{1-\delta}$, and $E({\bf 0},{\bf 0},{\bf e}) =
E({\bf 0},{\bf e},{\bf 0}) = E({\bf e},{\bf 0},{\bf 0})=1$.
Thus for $\delta >1$, the symmetry point minimizes $E$, giving
$B(q) \sim \exp(-3^{1-\delta}bq)$. But this form violates the asymptotic
relation (\ref{largeq}), according to which $B(q)$ and $g(q)$ must decay
with the {\em same} exponential factor, so $\delta>1$ is ruled out.
For $\delta<1$, the smallest $E$ is unity, obtained when two of ${\bf u}$,
${\bf v}$, and ${\bf e}-{\bf u}-{\bf v}$ vanish. So this case is apparently
consistent with (\ref{largeq}). However, the integral is now dominated by
points where two of the momenta ${\bf p}$, ${\bf k}$, and ${\bf q}-{\bf p}
-{\bf k}$ vanish. This invalidates the use of the asymptotic form for $g(q)$
in the evaluation of $B(q)$, so the derivation of a stretched exponential
form is not internally consistent for $\delta<1$.
This leaves $\delta=1$. For this case all points of the form
${\bf u}=\alpha {\bf e}$, ${\bf v}=\beta {\bf e}$, with $0\leq\alpha\leq 1$
and $0\leq\beta\leq 1-\alpha$, give $E=1$, so one has to integrate over all
such points. Writing ${\bf u}=\alpha{\bf e} + {\bf u}_\perp$,
${\bf v} = \beta{\bf e} + {\bf v}_\perp$, expanding $E$ to quadratic
order in ${\bf u}_\perp$, ${\bf v}_\perp$, and carrying out the integrals
over ${\bf u}_\perp$, ${\bf v}_\perp$, gives after some algebra
\begin{eqnarray}
B(q) & = & (A^3/4\pi^2n)\,q^{d+1+3\nu}\exp(-bq)\,I(d,\nu)/(2\pi b)^{d-1}
\nonumber \\
I(d,\nu) & = & \int_0^1 d\alpha\int_0^{1-\alpha}d\beta\,
[\alpha\beta(1-\alpha-\beta)]^{\nu+(d-1)/2}\ .
\label{B1}
\end{eqnarray}
But (\ref{largeq}) implies, asymptotically,
\begin{equation}
B(q) = (2A/\lambda)\,q^{2+\nu}\,\exp(-bq)\ .
\label{B2}
\end{equation}
Comparing (\ref{B1}) and (\ref{B2}) gives $\nu = (1-d)/2$ and Eq.\ (\ref{A})
for the amplitude $A$.
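As a simple check on this result, note that with $\nu=(1-d)/2$ the exponent
$\nu+(d-1)/2$ in the integrand of $I(d,\nu)$ vanishes, so the integral
reduces to the area of the simplex
$\{\alpha\geq 0,\ \beta\geq 0,\ \alpha+\beta\leq 1\}$:
\begin{equation}
I\left(d,\frac{1-d}{2}\right) = \int_0^1 d\alpha\int_0^{1-\alpha}d\beta
= \frac{1}{2}\ ,
\end{equation}
and matching the amplitudes of (\ref{B1}) and (\ref{B2}) then fixes $A$ in
terms of $b$, $n$ and $\lambda$.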
\newpage
\section{INTRODUCTION}
Clustering aims at partitioning the space into sets or clusters of elements that are as coherent as possible within each cluster and as different as possible from the elements in the other clusters. It is an unsupervised learning technique for which the cluster assignment or the number of clusters is unknown. The criterion that determines the classification, given a specific element distribution, is a distance measure on the space where the cluster features are defined.
Mean shift clustering is widely used in segmentation and detection \cite{lee_kernel_2013}, tracking and optical flow estimation \cite{zhao_robust_2017}, or feature matching for 3D reconstruction \cite{wei_region_2004}. Visual tracking aims at locating one or more targets (or clusters) as accurately as possible over time, in changing scenarios. Real-time visual tracking is a crucial task in Computer Vision, and still a challenge, especially with clutter and multiple targets.
Conventional cameras present problems in robotics, especially if real-time processing is required. Cameras capture and transmit a fixed number of frames regardless of the camera motion. The data may contain motion blur, or large displacements and occlusions between consecutive frames, degrading clustering and tracking performance. Even in the absence of motion, when all information is redundant, conventional sensors keep transmitting the very same information, resulting in a computational bottleneck.
\begin{figure}[tp]
\begin{center}
\begin{minipage}[b]{0.31\columnwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{baxter_body.jpg}
\end{minipage}
\hspace{0.001cm}
\begin{minipage}[b]{0.31\columnwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{dvs.png}
\end{minipage}
\hspace{0.001cm}
\begin{minipage}[b]{0.32\columnwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{tracking.png}
\end{minipage}
\end{center}
\vspace{-3mm}
\caption{Left to right: A Baxter robot exploring the scenario, event output (positive - red, negative - blue polarity), and clustering and tracking output.}
\label{fig:gen_overview}
\end{figure}
We propose a method for real-time event-based clustering and tracking using a Dynamic Vision Sensor (DVS) \cite{lichtsteiner_latency_2008}. These biologically-inspired sensors mimic how eyes perceive motion: doing away with synchronous frames, they asynchronously transmit events only when pixel intensity changes. These changes are sensed with a very high temporal resolution and latencies of a few microseconds, orders of magnitude less than in conventional cameras (about 33~ms). Perceiving only intensity changes results in the transmission of information only from object edges, reducing data and computational complexity compared to frame-based methods. A few methods for event-based clustering have been proposed earlier: in \cite{linares_usb3_2015} an FPGA solution is proposed, where clusters are assigned based on the event position and are subsequently tracked; in \cite{mishra_saccade_2017} the authors propose the partition of events into \textit{spike groups} to characterize objects for motion segmentation. Additionally, some methods for event-based tracking have been proposed, as in \cite{lagorce_asynchronous_2015}, where the authors propose examples of kernels for tracking using oriented Gabor filters and handmade shapes; in \cite{ni_visual_2015}, where the authors propose a method that handles 2D affine transformations; and in \cite{kim_simultaneous_2014}, where the authors reconstruct the intensity from event information and track on it.
We propose a method based on mean-shift clustering that labels each event. Contrary to previous works, we do not reconstruct intensity or accumulate events to create pseudo-frames. Our real-time solution does not require providing the number or the shape of the clusters to be detected and tracked.
Although some naive approaches for event-driven computation have been proposed \cite{delbruck_frame_2008}, only the work in \cite{barranco_contour_2015} achieved some event grouping, using a supervised method based on Structured Random Forests (not in real-time) and event-based optical flow \cite{barranco_contour_2014}. Clustering events in real-time to label longer contours greatly benefits other applications besides tracking, such as optical flow or segmentation. Also, a number of works have developed event-based feature tracking methods \cite{zhu_event_2017}.
Firstly, our clustering is evaluated using a freely available dataset \cite{mueggler_event_2017} that contains sequences with patterns of different shapes. Then, we also apply it in experiments with a humanoid Baxter robot \cite{guizzo_rethink_2012}, and implement a tracking application using Kalman filters to validate it. We mounted our DVS sensor on the Baxter in an eye-in-hand configuration (see Fig.~\ref{fig:gen_overview}). Then, we executed different trajectories at several speeds in an exploratory task of a typical manipulation scenario using real-world objects. All code and data are available\footnote{\url{https://github.com/fbarranco/dvs_meanshift}}.
\section{PROPOSED CLUSTERING METHOD}
Clustering techniques are widely used for analyzing the feature space in problems such as filtering and segmentation. However, most techniques require a-priori knowledge of the number or the shape of clusters and, consequently, cannot deal with real-world features. The mean shift technique computes, for each point, the mean of the data distribution (in some multi-dimensional parameter space) in a neighborhood; the center of this region is then shifted to the computed mean until the process converges. This algorithm is a nonparametric method that iteratively represents the feature space as a probability density function (PDF), whose modes correspond to the denser regions and thus to the local maxima of the PDF. Therefore, the solution can be found as an iterative nonparametric density gradient estimation using a specific kernel for estimating the density function. No prior assumption on the number of clusters or their shape is made. Although the original method of Fukunaga \cite{fukunaga_estimation_1975} works iteratively and leads to better clustering results, the method of Comaniciu \cite{comaniciu_mean_2002} is much faster, reducing the number of comparisons in the feature space. We selected a hybrid approach that combines these two, solving the problem as a gradient descent for an optimization \cite{bitsakos_experimental_2010}.
\subsection{Event-based clustering using mean shift}
Event-based sensors only transmit information when the intensity at a specific location changes by a substantial amount; the sensor triggers an event $e(\textbf{x},t,p)$ where $\textbf{x}$ is the location, $t$ the time of the change, and $p$ the polarity (positive or negative intensity change). To exploit the potential of the high temporal resolution, we avoid accumulating events; each event is processed asynchronously when it happens.
The mean-shift solution can be formulated as the gradient descent of the minimization in Eq.~(\ref{eq:meanshift})
\begin{eqnarray}
arg\,min_{~\textbf{x}_{i},~p_{i},~f_{\delta}(t_{i})} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \nonumber \\
\sum_{i,j} K_{G}\left(\frac{[\textbf{x}_i,p_i,f_{\delta}(t_{i})] - [\textbf{x}^{0}_{j},p_{j},f_{\delta}(t_{j})]}{h} \right)
\label{eq:meanshift}
\end{eqnarray}
Instead of raw timestamps, we have chosen an exponential decay function defined on the estimated lifetime of events, which provides a good dynamic representation of local spatial structures \cite{clady_motion_2017}. In the above equation $f_{\delta}(t) = e^{-\frac{\Delta t}{\tau}}$, where $\tau$ is a parameter that allows tuning the weight of the temporal history. Thus, we consider 4D data (space, polarity and time), so that the initial values are represented as $[\textbf{x}^{0}_{i}, p^{0}_{i}, f_{\delta}(t^{0}_{i})]$ for pixel $i$. We use a multivariate Gaussian kernel of the form $K_{G}(\textbf{x})= \frac{1}{\sigma \sqrt{2\pi}} e^{-1/2(\textbf{x}^{T}\textbf{x})}$. Although such a kernel usually requires more steps to converge, it results in better clustering. The minimization is defined over a summation of all pairs of pixels $i,j$. The tunable bandwidth parameter $h$ defines the kernel radius.
In our case, at each iteration the current position is compared to the position of the original set $\textbf{x}^{0}$ as in the classic method \cite{fukunaga_estimation_1975}, but the polarity and time function use the updated values of the previous iteration, as done in \cite{comaniciu_mean_2002}. Using this hybrid approach, better clustering is achieved while reducing the computational complexity. The classic mean-shift method is computationally expensive, $O(Kn^{2})$, where $n$ is the number of pixels and $K$ the number of iterations before convergence. Instead, we deal only with events, which are processed in parallel in small packets of a few hundred, helping to reduce the computational resource requirements.
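As an illustration, a minimal Python sketch of this scheme is given below. The actual implementation is in C++; the parameter values ($h$, $\tau$, the mode-merging distance) are illustrative, and for simplicity all four features climb the density of the original point set, rather than following the hybrid polarity/time update exactly:

```python
import numpy as np

def event_features(events, tau=0.01):
    """Map events (x, y, t, p) to the 4D feature space used for
    clustering: position, polarity, and the exponential time decay
    f_delta(t) = exp(-dt / tau) that replaces the raw timestamp.
    Coordinates are assumed pre-normalized to [0, 1]."""
    ev = np.asarray(events, dtype=float)
    f = np.exp(-(ev[:, 2].max() - ev[:, 2]) / tau)
    return np.column_stack([ev[:, 0], ev[:, 1], ev[:, 3], f])

def mean_shift(points, h=0.1, n_iter=50, tol=1e-6):
    """Gaussian-kernel mean shift: each point repeatedly moves to the
    kernel-weighted mean of the ORIGINAL point set."""
    shifted = points.copy()
    for _ in range(n_iter):
        d2 = ((shifted[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / h ** 2)
        new = (w @ points) / w.sum(axis=1, keepdims=True)
        converged = np.abs(new - shifted).max() < tol
        shifted = new
        if converged:
            break
    return shifted

def label_modes(modes, merge_dist=0.05):
    """Label each event by merging converged modes that lie closer
    than merge_dist; no number of clusters is given in advance."""
    centers, labels = [], []
    for m in modes:
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < merge_dist:
                labels.append(k)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return np.array(labels), np.array(centers)
```

Each event packet would be passed through `event_features`, `mean_shift` and `label_modes` in turn; the number of clusters emerges from the mode merging.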
\subsection{Multi-target cluster tracking}
Mean-shift clustering has gained popularity in recent years, being used as a preprocessing step in applications such as filtering, segmentation, and tracking. We validate our event-based clustering method on visual tracking of multiple targets.
Although due to the high temporal resolution the position of every cluster can be tracked pixel-wise, the trajectory of the cluster centers of mass is neither accurate nor smooth. For example, after changing the motion direction, different edges may be triggered, because events are not triggered for edges that are parallel to the camera motion. Similarly, slow motion may trigger far fewer events, and thus the calculation of the centers of mass may be affected.
Bearing in mind that the additional computation for the tracking must be light in order to keep the clustering processing event-wise, we propose the use of Kalman filters for multi-target tracking. Kalman filters efficiently predict and correct linear processes and are widely used in tracking applications, although an accurate system model is usually required. We assume the reader is comfortable with the Kalman filter formulation, but more information can be found in \cite{li_multiple_2010}.
Our Kalman filter uses a state vector with position $x,y$ and velocity $v_{x},v_{y}$ in pix/s for each center of mass: $\textbf{x}_{k} = [x_{0,k}, y_{0,k}, v_{x}, v_{y}]^T$, where $k$ is the time step. After each step, the goal is to estimate the state vector $\textbf{x}_{k}$ from the measurement vector $\textbf{z}_{k}= [x_{0,k}, y_{0,k}]^T$, which is simply the position of the center of mass provided by the clustering. The event-based sensing allows for very accurate small time steps ($\Delta t$) in the update of cluster centers. In our Kalman filter implementation, $A$ is the transition matrix, $H$ is the measurement matrix, and we assume Gaussian noise for both the process and the measurement.
\[
A=
\begin{bmatrix}
1 & 0 & \Delta t & 0\\
0 & 1 & 0 & \Delta t\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
,\quad \quad \quad
H=
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{bmatrix}
\]
One of the advantages of using event sensors is that the timing for measuring the velocity is very precise thanks to the high temporal resolution; this allows for accurate estimates in the Kalman filter updates.
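The predict-correct loop with these matrices can be sketched as follows (Python sketch; the noise covariances $Q$ and $R$ and the initial uncertainty are illustrative values, not those used in our implementation):

```python
import numpy as np

class ClusterKalman:
    """Constant-velocity Kalman filter for one cluster center of mass.
    State x = [x, y, vx, vy]; measurement z = [x, y]."""
    def __init__(self, x0, y0, q=1e-3, r=1e-2):
        self.x = np.array([x0, y0, 0.0, 0.0])   # initial velocity unknown
        self.P = 100.0 * np.eye(4)              # large initial uncertainty
        self.Q = q * np.eye(4)                  # process noise (assumed)
        self.R = r * np.eye(2)                  # measurement noise (assumed)
        self.H = np.array([[1.0, 0.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0, 0.0]])

    def step(self, z, dt):
        """One predict/update cycle; dt comes from the event timestamps."""
        A = np.array([[1.0, 0.0,  dt, 0.0],
                      [0.0, 1.0, 0.0,  dt],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
        # predict
        self.x = A @ self.x
        self.P = A @ self.P @ A.T + self.Q
        # update with the measured center of mass
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

One filter instance is kept per detected cluster, and each new center-of-mass measurement triggers a `step` with the elapsed $\Delta t$.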
\section{EXPERIMENTS}
We configured three sets of experiments to validate our approach for clustering events using mean-shift. Additionally, we demonstrated the value of our method in a visual multi-target tracking application. For the first experiment, a public dataset \cite{mueggler_event_2017} provides several sequences with different object shapes undergoing various motions in front of an event sensor. In two other experiments we mounted our DVS sensor on a Baxter robot in an eye-in-hand configuration. The first of these experiments was performed by moving the Baxter arm in front of a pattern similar to the one of \cite{mueggler_event_2017}. The arm was moved in different directions and at different speeds to analyze our method's sensitivity to orientation and motion direction. In the second set of experiments, we configured a real-world scenario for a manipulation task with the Baxter robot. Different objects were placed on a tabletop, with some occlusions and at different depths from the Baxter. Again, the robot moved its arm in different directions and at different speeds.
The evaluation of the clustering and tracking accuracy required manual labeling of thousands of chunks of events, randomly chosen and containing 4 to 5 clusters on average.
We use three external error metrics for the evaluation of clustering accuracy: 1) the Adjusted Rand Index (ARI), which measures the similarity between our assignment and the ground truth ignoring permutations, and is normalized for chance (random assignments should score close to 0), giving equal weight to false positives and false negatives; 2) the F-measure (F), which allows weighting false negatives more strongly than false positives; 3) the Normalized Mutual Information (NMI), which allows us to measure the quality of our assignment taking into account the number of clusters. For the F-score computation, pairs of events assigned to the same cluster are counted as TP if labeled within the same cluster and FP otherwise; pairs assigned to different clusters are counted as FN if labeled within the same cluster and TN otherwise.
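For concreteness, the pairwise counting can be sketched as follows (Python sketch, using the standard pairwise convention: a pair clustered together in both assignments is a TP, together only in the prediction is an FP, together only in the ground truth is an FN; the balanced $F_1$ form is shown):

```python
from itertools import combinations

def pairwise_f_measure(pred, truth):
    """Pairwise precision/recall/F over all pairs of events, given the
    predicted and ground-truth cluster label of each event."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(pred)), 2):
        same_pred = pred[i] == pred[j]
        same_true = truth[i] == truth[j]
        if same_pred and same_true:
            tp += 1                       # together in both assignments
        elif same_pred:
            fp += 1                       # together only in the prediction
        elif same_true:
            fn += 1                       # together only in the ground truth
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

ARI and NMI can be computed from the same contingency counts; off-the-shelf implementations (e.g. in scikit-learn) can also be used.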
For the multi-target real-time tracker, the accuracy was evaluated from the distance between the position of the tracked cluster center of mass and the labeled ground-truth cluster center. We also estimate the percentage of valid tracked positions: the proportion of estimates whose error is lower than a threshold.
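The percentage of valid tracked positions can be computed as follows (Python sketch; the pixel threshold is an illustrative value):

```python
import numpy as np

def valid_tracked_ratio(estimated, ground_truth, thresh=3.0):
    """Fraction of tracked center positions that lie within `thresh`
    pixels of the labeled ground-truth centers."""
    err = np.linalg.norm(np.asarray(estimated, dtype=float)
                         - np.asarray(ground_truth, dtype=float), axis=1)
    return float((err < thresh).mean())
```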
We also analyze the time performance of the real-time clustering and tracking, and provide a comparison to conventional frame-based methods.
\subsection{Event-based clustering evaluation}
The event-based mean-shift clustering labels each event with the cluster it belongs to. Previous solutions proposed approaches to join event locations from the same contours for problems such as optical flow estimation or segmentation; these solutions create artificial information without solving the problem. Fig.~\ref{fig:bandwidth}b and Fig.~\ref{fig:bandwidth}c show an example of the clustering for a chunk of data from the \textit{shapes\_rotation} sequence. Note the different cluster shapes and sizes, and how temporal information enables grouping of events that are triggered almost simultaneously, even if they are not very close. The only parameter for the mean-shift clustering is the so-called bandwidth $h$, which defines the kernel radius. Fig.~\ref{fig:bandwidth}a shows the evolution of the F-measure for clustering with different bandwidth options; in our experiments we use a value of 0.1, which obtains the best accuracy ($h$ is normalized to $[0,1]$).
\begin{figure}[tp]
\begin{center}
\begin{minipage}[b]{0.3\columnwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{fmeasurevsbandwidth.eps}
\end{minipage}
\begin{minipage}[b]{0.34\columnwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{cluster3D_1.eps}
\end{minipage}
\begin{minipage}[b]{0.33\columnwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{cluster3D_2.eps}
\end{minipage}
\vspace{0.00cm}
\begin{minipage}[b]{0.3\columnwidth}
\centering
\footnotesize{(a)}
\end{minipage}
\begin{minipage}[b]{0.34\columnwidth}
\centering
\footnotesize{(b)}
\end{minipage}
\begin{minipage}[b]{0.33\columnwidth}
\centering
\footnotesize{(c)}
\end{minipage}
\end{center}
\vspace{-5mm}
\caption{(a) Evolution of the clustering accuracy, given by the F-measure, when tuning the mean-shift bandwidth parameter $h$. (b-c) Detected clusters in the $(x,y,t)$ space for 5 ms of the \textit{shapes\_rotation} sequence. The cluster labels are encoded with different colors, randomly chosen.}
\label{fig:bandwidth}
\end{figure}
\begin{table}[h]
\caption{Clustering accuracy evaluation}
\label{tab:clustering_evaluation}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|l||c|c|c|c|c|}
\hline
& & ARI & NMI & Precision & Recall & Fmeasure\\
\hline
\textit{shapes} & E-MS & \textbf{0.941} & \textbf{0.952} & \textbf{0.981} & \textbf{0.927} & \textbf{0.951}\\
\textit{\_translation} & KM & 0.775 & 0.858 & 0.924 & 0.741 & 0.815\\
\hline
\textit{shapes} & E-MS & \textbf{0.947} & \textbf{0.947} & \textbf{0.984} & \textbf{0.929} & \textbf{0.955}\\
\textit{\_rotation} & KM & 0.793 & 0.863 & 0.919 & 0.764 & 0.828\\
\hline
\textit{shapes} & E-MS & \textbf{0.912} & \textbf{0.945} & \textbf{0.977} & \textbf{0.888} & \textbf{0.926}\\
\textit{\_6dof} & KM & 0.718 & 0.840 & 0.916 & 0.666 & 0.763\\
\hline
\textit{baxter\_001} & E-MS & \textbf{0.912} & \textbf{0.947} & \textbf{0.901} & \textbf{0.987} & \textbf{0.937}\\
& KM & 0.743 & 0.837 & 0.718 & 0.937 & 0.800\\
\hline
\textit{baxter\_002} & E-MS & \textbf{0.949} & \textbf{0.920} & \textbf{0.974} & \textbf{0.958} & \textbf{0.965}\\
& KM & 0.694 & 0.750 & 0.700 & 0.870 & 0.769\\
\hline
\textit{baxter\_003} & E-MS & \textbf{0.854} & \textbf{0.874} & \textbf{0.872} & \textbf{0.927} & \textbf{0.900}\\
& KM & 0.671 & 0.765 & 0.652 & 0.903 & 0.753\\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}[tb]
\begin{center}
\begin{minipage}[b]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{num_clusters.eps}
\end{minipage}
\vspace{0.05cm}
\begin{minipage}[b]{0.8\textwidth}
\centering
{\footnotesize (a)}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{num_of_MSproc.eps}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{num_of_MSKFproc.eps}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{num_of_clustdetected.eps}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth, height=3.4cm]{speed_vs_Fmeasure.eps}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
{\footnotesize (b)}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
{\footnotesize (c)}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
{\footnotesize (d)}
\end{minipage}
\begin{minipage}[b]{0.15\textwidth}
\centering
{\footnotesize (e)}
\end{minipage}
\end{center}
\vspace{-3mm}
\caption{(a) Number of detected clusters of our event-based mean-shift method over time for three sequences; although some objects leave the sensor field of view or get occluded by other objects, the number of detected clusters remains consistent. (b) and (c) Computational cost, represented as the number of mean-shift operations per second without and with tracking, respectively (logarithmic scale), when the sensor moves at different speeds: notice almost no difference between the two results and the asymptotic trend for speeds greater than 2.5-3x. The solid red lines represent the computational cost of the frame-based method. (d) Number of clusters detected per second and its growth with higher speed factors. Here the red dashed line at the bottom represents the detection rate of the frame-based method. (e) F-measure of clustering accuracy with respect to the velocity of the sequence: the first column shows the velocity of the original sequence. No significant differences in the F-measure are observed for higher speeds.}
\label{fig:performance}
\end{figure*}
Before analyzing the results, let us note that we use a preprocessing stage to filter the background activity noise: this filter removes uncorrelated events, that is, events that do not have any support from past events in the recent past.
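The background-activity filter can be sketched as follows (Python sketch; the time window, neighborhood radius, and the choice to exclude the event's own pixel are assumptions, not the exact parameters of our implementation):

```python
def filter_background_activity(events, dt=0.005, radius=1):
    """Keep an event only if a neighboring pixel (within `radius`,
    excluding the event's own pixel) fired during the last `dt`
    seconds. `events` are (x, y, t, p) tuples sorted by timestamp."""
    last = {}                      # (x, y) -> most recent timestamp
    kept = []
    for x, y, t, p in events:
        supported = any(
            last.get((x + dx, y + dy), float("-inf")) >= t - dt
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if dx or dy)
        if supported:
            kept.append((x, y, t, p))
        last[(x, y)] = t
    return kept
```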
Table~\ref{tab:clustering_evaluation} shows the accuracy of the proposed event-based clustering method (E-MS) in comparison to the classic K-means clustering method (KM). Due to the lack of event-based clustering methods in the state of the art to compare with, we used the K-means method as a baseline. The K-means clustering was performed on the same features as our method, namely the position, polarity, and time information. Additionally, the exact number of clusters was also provided to the method, since K-means requires it. We evaluated three sequences from \cite{mueggler_event_2017} and three sequences collected with the Baxter robot in a manipulation task scenario. Our event-based method achieves the best performance for all the metrics and sequences presented, with an F-measure higher by approximately 0.13 in the worst case and 0.16 in the best case, and an improvement of up to 0.2 in the ARI. Let us remark that the accuracy for \textit{shapes\_6dof} is a bit lower due to the complexity of the scene: shapes change rapidly with zoom-in and zoom-out movements, and some shapes are at times partially out of the sensor field of view. These results validate our event-based clustering accuracy.
Regarding clustering consistency, Fig.~\ref{fig:performance}a shows the number of detected clusters for three sequences, varying from 7 to 9 during the 10~s shown in the plot. The number of detected clusters remains consistently steady over time, showing no significant differences for different velocities and camera trajectories. Also, the objects in the scene are not visible during the whole sequence due to occlusions or because they leave (partially or completely) the sensor field of view. Fig.~\ref{fig:performance}e shows the F-measure for these sequences and how it varies with the speed. There are no significant variations in the clustering accuracy at higher speeds, showing the potential of our method in exploiting the high temporal resolution.
We also evaluated the time performance of our event-based computation with respect to the conventional frame-based estimation. The implementation was done under ROS (Robot Operating System): clustering and tracking were implemented in C++, and the Baxter robot uses a Python interface. All code runs on an Intel i7 @ 4~GHz with 32~GB of RAM.
For the conventional frame-based mean-shift method, the computational complexity depends on the number of pixels and the number of features; we assume a conventional camera that captures 30 fps (frames per second) with the same spatial resolution as our event-based sensor ($180~\times~240$ or $128~\times~128$ depending on the version). In the frame-based mean-shift method the computation is done for each pixel, resulting in a steady computational rate of $30~\times~180~\times~240 = 1.296 \cdot 10^{6}$ mean-shift operations per second. Meanwhile, our event-based clustering method only processes the events that are triggered asynchronously. Therefore, the computational rate in our case depends on the number of events at each instant, and thus on the speed of the camera or the objects in the scene, and on the scene structure. Fig.~\ref{fig:performance}b shows how the number of mean-shift operations grows with the speed of the sequence, compared to the complexity of the frame-based estimation. The different speeds for the sequences from the dataset in \cite{mueggler_event_2017} are simulated by changing the timestamps, while in the robot case the arm moves at different speeds while exploring the real scene. Colors show the results for different sequences, and the operation rate is computed over 20~s of each sequence. The potential of the event-based method to reduce the computational cost is clear: in the best case, it is reduced by approximately 88\%, and the average reduction at the original speed is about 83\% (notice the logarithmic scale of the Y axis).
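The rates above follow directly from the sensor parameters; a quick check (the event-based figure uses the ~83\% average reduction reported, so it is approximate):

```python
# Frame-based mean shift touches every pixel of every frame.
fps, height, width = 30, 180, 240
frame_ops_per_s = fps * height * width        # mean-shift ops per second
print(frame_ops_per_s)                        # 1296000 = 1.296e6

# Event-based cost scales with the event rate instead; an average
# reduction of ~83% at the original speed corresponds to roughly
event_ops_per_s = frame_ops_per_s * (1.0 - 0.83)
print(round(event_ops_per_s))                 # ~2.2e5 ops per second
```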
\begin{figure*}[tb]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_translation_posX.eps}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_translation_posY.eps}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_translation.eps}
\end{minipage}
\hspace{2.65cm}
\vspace{0.01cm}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(a)}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(d)}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\footnotesize{(g)}
\end{minipage}
\hspace{2.65cm}
\vspace{0.01cm}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_rotation_posX.eps}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_rotation_posY.eps}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_rotation.eps}
\end{minipage}
\hspace{0.15cm}
\begin{minipage}[b]{0.075\textwidth}
\centering
\includegraphics[width=\textwidth, height=1.75cm]{legend.eps}
\end{minipage}
\hspace{0.95cm}
\vspace{0.01cm}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(b)}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(e)}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\footnotesize{(h)}
\end{minipage}
\hspace{2.65cm}
\vspace{0.01cm}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_6dof_posX.eps}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_6dof_posY.eps}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{tracking_shapes_6dof.eps}
\end{minipage}
\begin{minipage}[b]{0.125\textwidth}
\centering
\includegraphics[width=\textwidth, height=2.5cm]{barerror_tracking_pix.eps}
\end{minipage}
\hspace{0.33cm}
\vspace{0.01cm}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(c)}
\end{minipage}
\begin{minipage}[b]{0.32\textwidth}
\centering
\footnotesize{(f)}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\footnotesize{(i)}
\end{minipage}
\begin{minipage}[b]{0.125\textwidth}
\centering
\footnotesize{(j)}
\end{minipage}
\hspace{0.33cm}
\end{center}
\vspace{-3mm}
\caption{Tracking accuracy results for three sequences for the X (a,b,c) and Y (d,e,f) positions of the cluster centers. Each color encodes a different object, dotted lines represent the tracked position, solid lines the tracked position smoothed via the Kalman filtering, and dashed lines the ground-truth position (GT). (g),(h), and (i) plot the average percentage of correctly tracked positions over time for the same sequences and objects. (j) shows the average error (in pixels) and standard deviation for all sequences, for each tracked object.}
\label{fig:tracking}
\end{figure*}
Fig.~\ref{fig:performance}d shows the number of detected clusters per second with our event-based mean-shift method and its evolution for different speeds. We compute the detected clusters per second as the total count of detections over 20 seconds of each sequence. The dashed line at the bottom shows the number of clusters per second estimated for the frame-based alternative, assuming again a frame rate of 30 fps and the average number of clusters during the same 20 seconds (about 180 detections/s). The detection rate of the event-based processing increases with the speed and stabilizes when the speed factor reaches around 3x, although this number also depends on the original speed of the sequence in each case (speed factor = 1). The detection results are the same at lower speeds, although for a speed factor greater than 2.5 the accuracy diminishes. Let us also remark that the frame-based approach needs about 1/30~s to start detecting clusters, while in our case this latency is a few microseconds.
\subsection{Multi-target tracking evaluation}
The multi-target tracking application using the mean-shift partitioned clusters exploits the high temporal resolution, demonstrating the potential of event-based sensors for robust tracking. The evaluation of the accuracy focuses on the distance between the manually-labeled ground-truth positions and the tracked position of the center of mass of the cluster. We also included a posterior Kalman filtering that provides a corrected, smoother trajectory. Fig.~\ref{fig:tracking}a, b, c and Fig.~\ref{fig:tracking}d, e, f show respectively the $x$ and $y$ positions for two chunks of three different sequences, and four different objects encoded in different colors. Up to 15~s of sequence are shown, and the gap in the middle corresponds to the separation between the first and second chunk within the same sequence. Also, some parts are missing (e.g. Object 1 in Fig.~\ref{fig:tracking}a and Fig.~\ref{fig:tracking}d) when the object leaves the field of view of the sensor. Object centers are successfully tracked along time, although the geometric center of the labeled ground-truth object sometimes does not correspond exactly to the center of mass of the cluster. Let us explain this with an example: in Fig.~\ref{fig:bandwidth}b, the geometry of most objects is well defined for the current motion, except for the rectangle and the hexagon. The rectangle (in dark blue) triggers more events on the bottom-left side, which appears wider because of the sensitivity of the sensor and the velocity of the motion, slightly shifting the center of mass to that side. In the case of the hexagon, some sides are lost because the direction of the motion is perpendicular to their local gradient. Locally, no change in intensity occurs since all pixels on that side have the same intensity and thus no events are triggered. Consequently, the location of the center of mass is affected, an effect that is more harmful for non-symmetric or irregular shapes.
Despite this, the average error of the center position for each object along the sequences does not exceed 2.5 pixels in the worst case (see Fig.~\ref{fig:tracking}j). Additionally, we also applied a final Kalman filtering stage to correct our predictions based on the previous history and the system model. Smoother trajectories are obtained for all sequences except for the X position of the second chunk in Fig.~\ref{fig:tracking}c. In this sequence, the image motion is due to zoom-in and zoom-out sensor motion, which affects not only the estimation but also the available data for the manual labeling. Nevertheless, we achieve a lower error for the Y position in the first part of the sequence, yielding a low average error.
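As an illustration of this smoothing stage, a minimal per-axis constant-velocity Kalman filter of the kind described above can be sketched as follows; the process and measurement noise values are assumed for illustration, not the parameters used in our experiments:

```python
# Minimal constant-velocity Kalman filter for one coordinate (x or y) of
# a tracked cluster center. Illustrative sketch only; q and r are assumed
# process/measurement noise values.

class KalmanCV1D:
    """Constant-velocity Kalman filter for one coordinate."""

    def __init__(self, x0, q=1e-3, r=0.5):
        self.x = [x0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def step(self, z, dt):
        # Predict with the constant-velocity model x' = x + v*dt.
        x_p = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        P_p = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
                P[0][1] + dt * P[1][1]],
               [P[1][0] + dt * P[1][1],
                P[1][1] + self.q]]
        # Update with the measured center position z (H = [1, 0]).
        S = P_p[0][0] + self.r
        K = [P_p[0][0] / S, P_p[1][0] / S]  # Kalman gain
        y = z - x_p[0]                       # innovation
        self.x = [x_p[0] + K[0] * y, x_p[1] + K[1] * y]
        self.P = [[(1 - K[0]) * P_p[0][0], (1 - K[0]) * P_p[0][1]],
                  [P_p[1][0] - K[1] * P_p[0][0], P_p[1][1] - K[1] * P_p[0][1]]]
        return self.x[0]                     # smoothed position
```

In a 2D tracker one such filter would run per axis of each cluster center, fed by the mean-shift position estimates.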
\begin{figure*}[tp]
\begin{center}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame1.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame2.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame3.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame4.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame5.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame6.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame7.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame8.png}
\end{minipage}
\vspace{0.00cm}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame9.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame10.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame11.png}
\end{minipage}
\begin{minipage}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{frame12.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame13.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame14.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame15.png}
\end{minipage}
\begin{minipage}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{frame16.png}
\end{minipage}
\end{center}
\vspace{-3mm}
\caption{Examples including clustering and tracking for the sequences with patterns or the Baxter manipulation scenario. Colors encode the cluster and the solid tails show the trajectories for the last second. Accurately calculating cluster centers is harder when dealing with larger or textured objects. These images are generated by plotting packets of 1500 events.}
\label{fig:examples}
\end{figure*}
We also estimated the percentage of valid tracked positions in Fig.~\ref{fig:tracking}g, h, i, for the three sequences. This metric shows the percentage of estimates that are computed with an error lower than a threshold $\tau$, such that $||\hat{\textbf{p}} - \textbf{p}|| \leq \tau $, where $\hat{\textbf{p}}$ and $\textbf{p}$ are the estimated tracked position and the ground-truth, and $||.||$ is the L2 norm. Again, each color shows the object, and solid lines represent the estimated positions while dashed lines are used for the positions after the Kalman filtering. On average, using a threshold $\tau$ of 2.5 pixels, approximately 80\% of the positions over time for all sequences are computed correctly.
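The metric above can be sketched as a small helper, assuming positions are given as pixel coordinate pairs:

```python
# Fraction of estimates whose L2 distance to the ground truth is at most
# tau pixels, as in the valid-tracked-position metric described above.
import math

def valid_tracked_ratio(estimates, ground_truth, tau=2.5):
    """estimates, ground_truth: lists of (x, y) positions in pixels."""
    hits = sum(1 for (ex, ey), (gx, gy) in zip(estimates, ground_truth)
               if math.hypot(ex - gx, ey - gy) <= tau)
    return 100.0 * hits / len(estimates)
```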
Regarding the performance, Fig.~\ref{fig:performance}c shows very similar values for the mean-shift clustering only and the mean-shift followed by tracking, with a slight difference only for 2x speeds. Thus, the tracking and Kalman filtering does not represent a significant burden for the overall computation.
\section{CONCLUSIONS}
We have presented an event-based mean-shift clustering method that, to the best of our knowledge, is the first real-time event-based clustering method, and shown its application to a tracking task. Although some methods have presented solutions for tracking, as mentioned in the introduction, none of them is publicly available for comparison (we are making all our code and datasets available). While previous attempts adapted synchronous frame-based methods to the asynchronous event-based framework, in our work no integration or accumulation of events is performed. For every event that enters our pipeline, one extended event is produced with extra information containing the label of the cluster this event belongs to.
Our results demonstrate that our method is very robust to cluster shapes and to different speeds: similar accuracy results were obtained for the same trajectories executed by the Baxter robot at different speeds. Regarding tracking, our clustering information is provided with very high temporal resolution, taking advantage of the sensor's asynchronous nature. This also considerably reduces the required computational resources, reaching an average reduction of 83\% compared to the conventional method. The experiments also show very accurate tracking for different sequences, with an error of 2.5 pixels, even when parts of the objects are occluded or out of the scene. In the examples with the Baxter robot, the number of clusters was steady over time for different sequences in a typical environment for robotic manipulation tasks. We consider this work a first step towards other applications in robotic manipulation. For example, after segmenting and tracking, Baxter can recognize objects and grasp them with its arm, generate fast object affordances, or track objects moving in dynamic environments in real time.
\addtolength{\textheight}{-12cm}
{\small
\bibliographystyle{ieee}
\section{Performance Data}
We perform two tasks, written in Python using pymongo: data ingest with insertMany (ordered=False), and a conditional find on two indexed fields. The data we chose to ingest is time-series metric data of Blue Waters compute nodes collected by OVIS. The data spans 5 years, samples each node independently once every minute, and includes about 75 distinct metrics (e.g. memory use, cpu activity, network activity). The totality of this data is about 70 billion rows. Storage for this data in flat CSV files on the Blue Waters Lustre filesystem is about 200 terabytes. We index on timestamp and node id.
We perform scaling analysis by increasing the number of shard-router pairs. For example, for a scheduled job of 32 nodes, 2 nodes are used for the configuration server, 7 for shards, and 7 for routers. This leaves 16 nodes to run the ingest script. Ingest is run with 4 processing elements per node, thus 64 insertMany operations are processed concurrently across 7 MongoDB routers. A job of 64 nodes has 2 nodes for configuration, 15 shard servers, 15 router servers, and so on. The larger the cluster, the more data we upload.
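The node-allocation pattern above can be sketched as a small helper; the closed-form expression is inferred from the 32- and 64-node examples given here and is an assumption, not taken from the actual run scripts:

```python
# Illustrative node allocation for a MongoDB batch job: 2 configuration
# nodes, equal shard/router counts, and the rest for the ingest script.
# The shards = routers = N//4 - 1 rule is inferred from the examples.

def allocate(n_nodes):
    """Return (config, shards, routers, ingest) node counts for a job."""
    config = 2
    shards = routers = n_nodes // 4 - 1
    ingest = n_nodes - config - shards - routers
    return config, shards, routers, ingest

print(allocate(32))  # (2, 7, 7, 16)
print(allocate(64))  # (2, 15, 15, 32)
```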
\begin{center}
\begin{table}[ht]
\caption{Days of data ingested for each cluster size.}
\begin{tabular}{| c | c |}
\hline
Nodes & Days of Data \\
\hline
32 & 3 \\
64 & 7 \\
128 & 14 \\
256 & 14 \\
\hline
\end{tabular}
\label{daysTable}
\end{table}
\end{center}
An insertMany is performed by collecting a list of Python dictionaries from the metric-data CSV file. The keys of the dictionaries are analogous to SQL table column names. A list of dictionaries is allowed to have different keys; however, for this dataset every document has the same keys. Indexes were created for the timestamp key and the node id key. As Figure \ref{ingestScaling} shows, MongoDB scales close to linearly between 32, 64, and 128 nodes. We are still investigating the limitations at 256 nodes and beyond.
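A minimal sketch of this ingest path is shown below; the field names, database/collection names, and connection URI are assumed for illustration:

```python
# CSV rows become dictionaries whose keys play the role of SQL column
# names, then are written with an unordered insertMany via a router.
import csv
import io

def rows_to_documents(csv_text):
    """Parse metric CSV text into a list of dicts ready for insert_many."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def ingest(documents, router_uri="mongodb://router-host:27017"):
    # pymongo is imported lazily so the parsing helper stays dependency-free.
    from pymongo import MongoClient
    coll = MongoClient(router_uri).metrics.samples  # assumed db/collection
    coll.create_index("timestamp")
    coll.create_index("node_id")
    # ordered=False lets the cluster process the batch in parallel and
    # continue past individual document failures.
    coll.insert_many(documents, ordered=False)
```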
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{ingest_scaling}
\caption{Ingest scaling with cluster size.}
\label{ingestScaling}
\end{figure}
The query test was done by performing a conditional find. The query is constructed by reading user job metadata for run time, duration, and assigned nodes. The candidate user jobs were selected from a time period starting January 1, 2018 and lasting the number of days described in Table \ref{daysTable}. The total number of documents returned by a query is the number of user job nodes times the duration of the user job in minutes. Indeed, Figure \ref{queryScaling} indicates that query performance remains similar across MongoDB cluster sizes. It is important to point out that each larger cluster size services more concurrent queries: the 32-node cluster serviced between 16 and 64 concurrent find queries, the 64-node cluster between 32 and 128, and so on.
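The query construction can be sketched as follows; the field names are assumed for illustration, matching the indexed timestamp and node id keys:

```python
# A user-job record becomes a conditional find filter; since documents
# are one per node per minute, the expected result size is
# nodes x duration-in-minutes.

def job_query(node_ids, start_ts, end_ts):
    """Filter matching all samples of the given nodes in [start_ts, end_ts)."""
    return {"node_id": {"$in": node_ids},
            "timestamp": {"$gte": start_ts, "$lt": end_ts}}

def expected_documents(n_nodes, duration_minutes):
    return n_nodes * duration_minutes

# e.g. coll.find(job_query(job_nodes, job_start, job_end))
```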
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{query_time}
\caption{Query time for various MongoDB cluster sizes.}
\label{queryScaling}
\end{figure}
\section{Introduction}
Queryable data stores have been a crucial part of modern data science. They provide a relatively small set of primitive operations by which a large variety of data analyses can be performed. Machine learning and other sophisticated statistical analyses are becoming more popular across all disciplines \cite{WilsonAS16}. These algorithms require more flexibility than the average SQL or NoSQL data store provides and can take advantage of a wider variety of compute resources than the average datastore host architecture. It is therefore common for a data scientist to computationally balance their process between a datastore and a compute resource. Data and algorithms are becoming large and computationally intensive enough to take advantage of a high performance architecture; the 2018 Gordon Bell prize ``Employing Deep Learning Methods to Understand Weather Patterns'' is an example. In this paper we present a performance profile of the NoSQL distributed data store MongoDB running as a transient queued batch job on the Blue Waters supercomputer. This provides guidance on how to balance tasks in data science pipelines on HPC architectures.
\section{Background}
The major primitive operations of a traditional SQL data store are CRUD (Create, Read, Update, Delete), groupby, and join. NoSQL data stores, such as MongoDB, still maintain a core set of primitives that are analogous to traditional SQL: insert, find, update, remove, aggregate, lookup. This work will not examine all of these primitives, but we still wish to speak of them.
Finds and inserts naturally lend themselves to distributed computing. A program can dispatch them to many workers, which report back either that nothing was found or the results of their task. The more sophisticated aggregate and lookup cannot in general be executed without some synchronization or interprocess communication. Take for example a SQL inner join on column $C$ between tables $T_A$ and $T_B$. The naive algorithm goes as follows: take the first item in $C$ of table $T_A$, $C_{A0}$, and perform a find on column $C$ in table $T_B$. In a distributed datastore, this requires each worker to look at each $C_{AN}$ of $T_A$ hosted locally, search through the portion of $T_B$ hosted locally, and pass $C_{AN}$ to every other worker. If the columns are indexed, some optimization is possible, but every worker will have to touch every other worker at some point. Groupby has similar problems. A map-reduce execution model can provide a single stage of inter-worker communication to perform a groupby. The algorithms for performing joins and groupbys on a distributed data store are rebranded to give the user more flexibility. As such, this paper is only concerned with insert and find type operations.
\section{System Setup}
Blue Waters is a mixture of Cray XE and XK blades, with 4 Interlagos AMD processors on XE nodes and two Interlagos AMD processor and Nvidia K20 GPU pairs on XK nodes. These nodes are all connected by Cray's Gemini interconnect. Shared filesystem storage is hosted on Cray's Sonexion racks with a Lustre file system. Blue Waters has 4 login nodes that users can use to access the shared file system and submit run scripts for Moab and Torque to schedule on the requested compute resources. \cite{Bode2013} \cite{Kramer2015}
Data stores are often intended to be persistent and to service tasks that demand zero downtime: for instance, user data on the backend of a web app, hospital patient data, or product order data from a retailer. Compute resources on an HPC system that are available to a typical user are often ephemeral, which is contrary to the execution model of common data stores. Fortunately, the final destination of a data store's data is an underlying filesystem. MongoDB by default uses the WiredTiger storage engine to manage the data's underlying files on the host filesystem. For Blue Waters, that is a massively parallel and redundant file system \cite{Bode2013} \cite{Kramer2015}.
\subsection{MongoDB Basics}
A sharded MongoDB cluster runs with 3 types of worker processes, of which we host one on each processing element of a Blue Waters job: configuration server, shard server, and router. The configuration server manages the metadata of the collections \cite{MongoManShard}. The metadata reflects the state and organization of all data and components within the sharded cluster, and includes the list of chunks on every shard and the ranges that define the chunks \cite{MongoManConfigServ}. A shard contains a subset of the sharded data for a cluster; together, the cluster's shards hold the entire data set \cite{MongoManShardServ}. Each shard is assigned a unique filesystem path to deposit data to. MongoDB routers route queries and write operations to shards in a sharded cluster. Routers provide the only interface to a sharded cluster from the perspective of applications; applications never connect or communicate directly with the shards \cite{MongoManRouter}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sharded-cluster-production-architecture}
\caption{Sharded MongoDB cluster diagram \cite{MongoManShard}}
\end{figure}
\subsection{User Execution Model}
For a typical Blue Waters user to deploy a MongoDB cluster, they must construct a run script that assigns to each processing element which role it will take (config, shard, router). MongoDB is natively deployed on a TCP/IP network, so workers reference each other by hostname and port. In our run script we chose the number of processing elements to be equal to the number of node hosts. Once the assignments have been made, the run script configures each worker to reference the config server. The config server then shares the settings with all members of the cluster. In addition to taking advantage of distributed compute resources, a sharded MongoDB cluster natively leverages the distributed nature of Lustre on a Cray Sonexion. When each shard worker is assigned a directory to place files in, Lustre distributes those files to an object storage server, which should optimize further I/O.
This describes a run script's execution for the datastore portion. In practice, once the datastore is active, a run script will continue making queries and processing. The run script makes available, through environment variables or a shared file, a list of hostnames of the MongoDB cluster's router servers. With this list, a run script may use either the mongo shell command or the Python package pymongo as the API to perform queries.
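A minimal sketch of this client-side convention might look as follows; the environment-variable name and port are assumed for illustration:

```python
# The run script publishes router hostnames (here via an environment
# variable, an assumed convention); query processes pick one to connect to.
import os

def router_uris(env_var="MONGO_ROUTERS", port=27017):
    """Parse a comma-separated router host list into connection URIs."""
    hosts = [h for h in os.environ.get(env_var, "").split(",") if h]
    return ["mongodb://%s:%d" % (h, port) for h in hosts]

def connect(uris, rank):
    # Spread client processes across routers; pymongo imported lazily.
    from pymongo import MongoClient
    return MongoClient(uris[rank % len(uris)])
```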
\section{Introduction}
The prohibitive complexity of powerful deep neural networks (DNNs) calls for hardware-efficient DNN solutions \cite{eyeriss,10.1145/3210240.3210337,8050797}. When it comes to DNNs' hardware efficiency on IoT devices, the model complexity (e.g., bit-widths), dataflows, and hardware architectures are the major performance determinants.
Early works mostly provide \textit{static} solutions, i.e., once developed, the algorithm/dataflow/hardware are fixed, whereas IoT applications often face time/energy constraints that vary over time. Recognizing this gap, recent works \cite{jin2019adabits,guerra2020switchable} have attempted to develop efficient DNNs with instantaneous accuracy-cost trade-off capability. For example, switchable-precision networks (SP-Nets) \cite{jin2019adabits,guerra2020switchable} can maintain a competitive accuracy under different bit-widths without fine-tuning under each bit-width, making it possible to allocate bit-widths on the fly to adapt to IoT devices' instant resources over time.
Despite SP-Nets' great promise \cite{jin2019adabits,guerra2020switchable}, there are still major challenges in enabling their deployment into numerous IoT devices. First, existing SP-Nets are manually designed, largely limiting their extensive adoption as \textit{each application} would require a \textit{different} SP-Net.
Second, while the best dataflow for SP-Nets under \textit{different bit-widths} can differ and is an important determinant of their on-device efficiency \cite{venkatesanmagnet}, there is still a lack of a generic and publicly available framework that can suggest optimal dataflows for SP-Nets under \textit{each of their bit-widths} on \textit{different IoT devices}. Both of the aforementioned challenges hinder the fast development and deployment of SP-Net powered DNN solutions for
\textit{diverse} hardware platforms of IoT devices.
To tackle the aforementioned challenges, we make the following contributions:
\vspace{-0.1cm}
\begin{itemize}
\item We propose InstantNet, an end-to-end framework that automates the development (i.e., the generation of SP-Nets given a dataset and target accuracy) and deployment (i.e., the generation of the optimal dataflows) of SP-Nets. To our best knowledge, InstantNet is \textbf{the first} to simultaneously target both development and deployment of SP-Nets.
\item We develop switchable-precision neural architecture search (SP-NAS) that integrates a novel cascade distillation training to ensure that the generated
SP-Nets under all bit-widths achieve the same or better accuracy
than both \textit{NAS-generated} DNNs optimized for individual bit-widths and SOTA \textit{expert-designed} SP-Nets.
\item We propose AutoMapper, which integrates a generic dataflow space and an evolutionary algorithm to navigate over the discrete and large mapping-method space and automatically search for optimal dataflows given a DNN (e.g., SP-Nets under a selected bit-width) and target device.
\item
Extensive experiments based on real-device measurements and hardware synthesis validate InstantNet's effectiveness in consistently outperforming
SOTA designs, e.g., achieving 84.68\% real-device Energy-Delay-Product improvement while boosting the accuracy by 1.44\%, over the most competitive competitor under the same settings.
\end{itemize}
\section{Related works}
\textbf{Static and switchable-precision DNNs.}
DNN quantization aims to compress DNNs at the most fine-grained bit-level~\cite{fractrain, fu2021cpt}.
To accommodate constrained and time-varying resources on IoT devices,
SP-Nets~\cite{jin2019adabits, guerra2020switchable} aim for instantaneously switchable accuracy-efficiency trade-offs at the bit-level.
However, designing such DNNs and the corresponding mapping methods for every scenario can be engineering-expensive and time-consuming, considering the ever-increasing number of IoT devices with diverse hardware platforms and application requirements. As such, techniques that enable
fast development and deployment of SP-Nets are highly desirable for expediting the deployment of affordable DNNs into numerous IoT devices.
\textbf{Neural Architecture Search for efficient DNNs.}
To release human efforts from laborious manual design, NAS~\cite{zoph2016neural, fu2020autogandistiller}
has been introduced to enable the automatic search for efficient DNNs with both competitive accuracy and hardware efficiency given the dataset.
\cite{wang2019haq, chen2018joint, wu2018mixed} incorporate quantization bit-widths into their search space and search for mixed-precision networks. However, all these NAS methods search for quantized DNNs with only one \textit{fixed} bit-width, lacking the capability to instantly adapt to other bit-widths without fine-tuning.
\textbf{Mapping DNNs to devices/hardware.}
When deploying DNNs into IoT devices with diverse hardware architectures, one major factor that determines hardware efficiency is the dataflow
\cite{venkatesanmagnet}. For devices with application-specific integrated circuit (ASIC) or FPGA hardware,
various innovative dataflows \cite{eyeriss, Optimize_fpga_for_DNN, zhang2018dnnbuilder,10.1109/ISCA45697.2020.00082} have been developed to maximize the reuse opportunities. Recently, MAGNet \cite{venkatesanmagnet} has been proposed to automatically identify optimal dataflows and design parameters of a tiled architecture. However, its highly template-based design space, e.g., a pre-defined set of nested loop orders, can restrict its generality and result in sub-optimal performance.
Furthermore, despite this promising progress,
automatically identifying optimal mapping methods for DNNs with different bit-widths has not yet been considered.
\section{The proposed InstantNet framework}
\vspace{-0.1cm}
Here we present our InstantNet framework, starting from an overview and then its key enablers including cascade distillation training (CDT), SP-NAS, and AutoMapper.
\vspace{-0.1cm}
\subsection{InstantNet overview}
\label{sec:overview}
Fig.~\ref{fig:overview} shows an overview of InstantNet: given the target application and device, it automates the development and deployment of SP-Nets. Specifically, InstantNet integrates two key enablers: (1) SP-NAS and (2) AutoMapper. SP-NAS incorporates an innovative cascade distillation training to search for SP-Nets, providing the instantaneous accuracy-efficiency trade-off capability desired by IoT devices. AutoMapper adopts a generic dataflow design space and an evolution-based algorithm to automatically search for optimal dataflows of SP-Nets under different bit-widths on the target device.
\begin{figure*}[!bt]
\centering
\includegraphics[width=0.8\textwidth]{Figs/final_output_63.png}
\vspace{-0.6em}
\caption{Visualizing the prediction distribution of MobileNetV2 on CIFAR-100 under \textbf{(left)}: 4-bit training with vanilla distillation, \textbf{(middle)} 4-bit training with the proposed CDT, and \textbf{(right)} 32-bit training.}
\label{fig:output}
\vspace{-2em}
\end{figure*}
\vspace{-0.1cm}
\subsection{InstantNet training: Bit-Wise Cascade Distillation}
\label{sec:cdt}
\vspace{-0.1cm}
Unlike generic quantized DNNs optimized to maximize accuracy under one individual bit-width, InstantNet aims to generate SP-Nets whose accuracy \textit{under all bit-widths} is the same as or even higher than that of DNNs customized for individual bit-widths. The key challenge is to ensure high accuracy at lower bit-widths, which is particularly difficult for compact DNN models whose accuracy is more sensitive to quantization. For example, SOTA SP-Nets~\cite{guerra2020switchable} fail to work at lower bit-widths when applied to MobileNetV2~\cite{sandler2018mobilenetv2}.
The above challenge has motivated InstantNet's CDT method, which takes advantage of the fact that the quantization noise of SP-Nets under adjacent or closer bit-widths is smaller. Our hypothesis is that distillation between
adjacent and closer bit-widths helps to more smoothly enforce the accuracy (or activation distribution) of SP-Nets under low bit-widths to approach their full-precision counterparts. In this way, CDT can simultaneously boost the accuracy of SP-Nets under all bit-widths by enforcing SP-Nets under each bit-width to receive distillation from \textbf{all higher bit-widths}:
\vspace{-1em}
\begin{equation}
\vspace{-0.3em}
\begin{split}
L_{total} &= \frac{1}{N} \sum_{i=0}^{N-1} L_{train}^{cas}(Q_i(\omega)) \\
where \; L_{train}^{cas}&(Q_i(\omega)) = L_{ce}(Q_i(\omega), label) \\
+ \beta \sum_{j=i+1}^{N-1}& L_{mse}(Q_i(\omega), SG(Q_j(\omega)))
\end{split} \label{eqn:cdt}
\end{equation}
\noindent where $L_{total}$ is SP-Nets' average loss under all the $N$ candidate bit-widths, $L_{ce}$ and $L_{mse}$ are the cross-entropy and mean square error losses, respectively, $Q_i(\omega)$ is the SP-Net characterized with weights $\omega$ under the $i$-th bit-width,
$\beta$ is a trade-off parameter, and $SG$ is the stopping gradient function, i.e., gradient backpropagation from higher bit-widths is prohibited when calculating the distillation loss~\cite{guerra2020switchable}.
To verify the effectiveness of CDT, we visualize the prediction distribution (classification probability after softmax) of MobileNetV2 on CIFAR-100 under the bit-width set of 4, 8, 12, 16, 32 (quantized by SBM~\cite{banner2018scalable}) trained using different strategies in Fig.~\ref{fig:output}.
We show the prediction distribution of the following three cases using a randomly sampled image from the test dataset: (1) 4-bit trained using vanilla distillation, i.e., considering only the distillation from the 32-bit width, (2) 4-bit trained using our CDT technique, and (3) the 32-bit trained network. We observe that vanilla distillation fails to narrow the gap between 32-bit and the lowest 4-bit due to the large quantization error gap. This is a common phenomenon among efficient models with depthwise layers, which are sensitive to low precision, on all the considered test datasets; e.g., the validation accuracy of the 4-bit network with only the aforementioned vanilla distillation is around 1\%, indicating the failure of vanilla distillation for tackling bit-width sets with a large dynamic range. In contrast, our CDT notably helps the prediction distribution of the 4-bit network smoothly evolve towards that of the 32-bit one, and also boosts its accuracy to 71.21\%, verifying CDT's effectiveness.
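For concreteness, the loss of Eq.~\ref{eqn:cdt} can be sketched in plain Python as follows; since no autograd is involved here, the stop-gradient is implicit (in a real framework the higher-precision outputs would be detached):

```python
# Cascade distillation loss: each bit-width's loss combines cross-entropy
# with the label and an MSE distillation term against every *higher*
# bit-width, averaged over all candidate bit-widths.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    return -math.log(softmax(logits)[label])

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cdt_loss(logits_per_bw, label, beta=1.0):
    """logits_per_bw: network outputs under each bit-width, lowest first."""
    n = len(logits_per_bw)
    total = 0.0
    for i in range(n):
        loss_i = cross_entropy(logits_per_bw[i], label)
        for j in range(i + 1, n):  # distill only from higher bit-widths
            loss_i += beta * mse(logits_per_bw[i], logits_per_bw[j])
        total += loss_i
    return total / n
```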
\vspace{-0.1cm}
\subsection{InstantNet search: Switchable-Precision NAS }
\label{sec:banas}
\vspace{-0.1cm}
Here we introduce another key enabler of InstantNet, SP-NAS. To the best of our knowledge, InstantNet is \textbf{the first} to address \textit{how to automatically generate networks that naturally favor working under various bit-widths}. In addition, to resolve the performance bottleneck of SOTA (manually designed) SP-Nets~\cite{jin2019adabits, guerra2020switchable}, i.e., large accuracy degradation under the lowest bit-width, we develop a heterogeneous scheme for updating the weights and architecture parameters. Specifically, we update the weights based on our CDT method (see Eq.~\ref{eqn:cdt}), which explicitly incorporates the switchable-bit property into the training process; and for updating the architecture parameters of the SP-Net, we adopt \textit{only the weights under the lowest bit-width}, generating networks forced to inherently tackle SP-Nets' bottleneck of high accuracy loss under the lowest bit-width:
\vspace{-1em}
\begin{equation} \label{eqn:banas}
\vspace{-0.3em}
\begin{split}
& \min \limits_{\alpha} L_{val}(Q_0(\omega^*), \alpha)+\lambda L_{eff}(\alpha) \\
& s.t. \quad \omega^* = \underset{\omega}{\arg\min} \,\, \frac{1}{N} \sum_{i=0}^{N-1} L_{train}^{cas}(Q_i(\omega), \alpha)
\end{split}
\end{equation}
\noindent where $\omega$ and $ \alpha$ are the supernet's weights \cite{liu2018darts} and architecture parameters, respectively,
$L_{eff}$ is an efficiency loss (e.g., energy cost), and $Q_0(\omega)$ is the SP-Net under the lowest bit-width.
Without loss of generality, here we adopt SOTA differentiable NAS~\cite{liu2018darts} and search space~\cite{wu2019fbnet}.
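As a toy illustration of the heterogeneous update in Eq.~\ref{eqn:banas}, the sketch below alternates a weight step that averages a straight-through gradient over all bit-widths with an architecture step that only sees the lowest bit-width. The scalar model, quantizer, losses, and hyper-parameters are all hypothetical stand-ins for the actual supernet, quantizers, and $L_{eff}$:

```python
def quantize(w, bits):
    # uniform rounding to 2**bits fractional levels: a toy stand-in
    # for the paper's actual quantizers (e.g., SBM / DoReFa)
    s = 2 ** bits
    return round(w * s) / s

def search_step(w, alpha, bits=(4, 8, 16), lr=0.05, lam=0.01):
    # weight step: straight-through gradient of L_train = (Q_i(w) - 1)^2,
    # averaged over ALL bit-widths (the constraint in Eq. banas)
    g_w = sum(2.0 * (quantize(w, b) - 1.0) for b in bits) / len(bits)
    w = w - lr * g_w
    # architecture step: uses ONLY the lowest bit-width Q_0, with a
    # penalty lam * alpha^2 standing in for the efficiency loss L_eff
    q0 = quantize(w, min(bits))
    g_a = 2.0 * (alpha * q0 - 1.0) * q0 + 2.0 * lam * alpha
    alpha = alpha - lr * g_a
    return w, alpha

w, alpha = 0.0, 0.0
for _ in range(200):
    w, alpha = search_step(w, alpha)
```

The design point is that the architecture gradient never sees the higher bit-widths, so the searched "architecture" ($\alpha$) is pulled toward configurations that remain accurate under the most aggressive quantization.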
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Figs/map_overview.pdf}
\vspace{-0.5em}
\caption{Overview of the goal, generic dataflow space, and InstantNet's AutoMapper, where TBS denotes ``to be searched''.}
\label{fig:map_overview}
\vspace{-1em}
\end{figure*}
\vspace{-0.1cm}
\subsection{InstantNet deploy: Evolution-based AutoMapper}
\vspace{-0.1cm}
This subsection introduces InstantNet's AutoMapper, of which an overview is shown in Fig.~\ref{fig:map_overview}. Motivated by the fact that different mapping methods can have orders-of-magnitude difference in hardware efficiency \cite{venkatesanmagnet}, AutoMapper aims to accept (1) DNNs (e.g., SP-Nets generated by our SP-NAS), (2) the target device, and (3) target hardware efficiency, and then generate mapping methods that maximize both the task accuracy and hardware efficiency of the given SP-Nets under all bit-widths when being executed on the target device.
\textbf{Generic Dataflow Design Space.}
A generic dataflow design space is a prerequisite for effective algorithmic exploration and optimization of on-device dataflows, yet is challenging to develop, as there are numerous choices for how to temporally and spatially schedule all of a DNN's operations on the target accelerator. Specifically,
since a DNN involves many more operations than an IoT device can execute in each clock cycle (e.g., $19.6E+9$ operations \cite{simonyan2014very} vs. 900 MACs \cite{xilinxzc706} assuming a 16-bit precision), numerous possible dataflows exist for running DNNs on a device.
To tackle the aforementioned challenge, we propose a generic design space for on-device dataflows, which (1) covers all design choices for generalization and (2) is easy to understand for ease of adoption. Our proposed space leverages commonly used nested \textit{for-loop} descriptions~\cite{eyeriss,DNNCHIPPREDICTOR}. For better illustration, here we describe the high-level principles.
From a nested \textit{for-loop} description, our dataflow space extracts all possible choices characterized by the following factors:
\textit{loop-order}: the processing order of each dimension within each memory hierarchy,
derived from all possible permutations without overlap.
\textit{loop-size}: the number of operations in one iteration of a specific dimension, which cannot be easily determined; we design a simple analytical algorithm to derive all possible choices.
\textit{Pipeline/multi-cycle:} whether to use a pipeline or a multi-cycle architecture. The former processes a small chunk of each layer in a pipelined manner, while the latter processes all the layers sequentially.
Considering AlexNet \cite{krizhevsky2012imagenet} and six nested loops, there are over \textbf{$10^{27}$ discrete mapping-method choices}, posing a great need for developing efficient and effective search algorithms.
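To see how such counts explode, the back-of-the-envelope tally below multiplies the independent per-level choices. The numbers are illustrative toy values (three memory levels, an assumed 50 tiling splits per dimension), not the exact AlexNet accounting behind the $10^{27}$ figure:

```python
import math

# Illustrative size of the dataflow space: 6 nested loops permuted
# independently at each of 3 memory levels, times loop-size tilings,
# times the pipeline/multi-cycle bit.
orders_per_level = math.factorial(6)      # 720 loop-orders per memory level
order_choices = orders_per_level ** 3     # DRAM x global buffer x RF
sizes_per_dim = 50                        # assumed tiling splits per dimension
size_choices = sizes_per_dim ** 6         # 6 data dimensions
total = order_choices * size_choices * 2  # x2 for pipeline vs. multi-cycle
```

Even these conservative toy numbers already put the space beyond $10^{18}$ discrete points, far outside what exhaustive enumeration can cover.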
\begin{figure}[!b]
\vspace{-1em}
\begin{minipage}{\linewidth}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\DontPrintSemicolon
\KwIn{Efficiency Goal, DNN, Design Space (DS)}
\KwOut{Optimal algorithm-to-device mapping}
Build a $pool$ with $n$ random samples from DS\;
\While{\textit{Efficiency Goal not met}}
{
\uIf{$size(pool) \leq n$}
{
\For{$m$ iterations}
{
Random Pick $p \in pool$\;
Random Perturb $k$ features of $p$\;
Add $p$ to $pool$\;
}
}
\Else
{
Rank the samples in $pool$ with the given DNN\;
Remove the worst $m$ samples from \textit{pool}\;
}
}
\Return{optimal mapping in $pool$}
\caption{Evolutionary AutoMapper}\label{alg:eaalg}
\end{algorithm}
\end{minipage}
\end{figure}
\textbf{Evolutionary Search Algorithm.} To navigate the large and discrete space of mapping methods, we adopt an evolution-based search algorithm, considering that evolutionary algorithms offer more exploitation than random search and are better suited to the highly discrete space \cite{google_ev,genesys}.
Specifically, we keep track of the hardware-efficiency ranking of the currently sampled mapping methods at each iteration. If the pool of current samples is no larger than a specified size, we randomly select sampled mapping methods from the pool and perturb a small number of their features associated with the aforementioned design factors, generating new mapping methods to be evaluated in the next iteration; otherwise, the worst-ranked mapping methods are removed from the pool.
We summarize our Evolutionary AutoMapper in Alg.\ref{alg:eaalg}.
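Alg.~\ref{alg:eaalg} can be prototyped in a few lines. In the sketch below the efficiency-goal early exit is replaced by a fixed iteration budget, and the toy cost function and 3-integer "mapping" are placeholders for the real hardware-cost model and dataflow encoding:

```python
import random

def evolve(cost, sample, perturb, n=20, m=5, k=1, budget=200):
    """Evolutionary AutoMapper sketch (cf. Alg. eaalg): grow the pool by
    perturbing random members, then prune the m worst-ranked samples;
    the best sample is never removed, so the pool only improves."""
    pool = [sample() for _ in range(n)]
    for _ in range(budget):
        if len(pool) <= n:
            for _ in range(m):
                p = list(random.choice(pool))
                for _ in range(k):
                    perturb(p)            # mutate k design-factor features
                pool.append(tuple(p))
        else:
            pool.sort(key=cost)           # rank with the given DNN's cost model
            del pool[-m:]                 # remove the worst m samples
    return min(pool, key=cost)

# Toy instantiation: a "mapping" is 3 integers; cost is distance to a target.
random.seed(0)
TARGET = (3, 1, 4)
def cost(p):
    return sum((a - b) ** 2 for a, b in zip(p, TARGET))
def sample():
    return tuple(random.randrange(10) for _ in range(3))
def perturb(p):
    p[random.randrange(3)] = random.randrange(10)
best = evolve(cost, sample, perturb)
```

Because pruning removes only the worst-ranked members, the best cost in the pool is monotonically non-increasing over iterations, which is what makes the search strictly no worse than its random initialization.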
\section{Experiment results}
We first describe our experiment setup and then evaluate each enabler of InstantNet, i.e., CDT, SP-NAS, and AutoMapper. After that, we benchmark InstantNet over SOTA SP-Nets on SOTA accelerators \cite{zhang2018dnnbuilder, XilinxCH65, eyeriss}.
\vspace{-0.1cm}
\subsection{Experiment setup}
\label{sec:exp_setup}
\vspace{-0.1cm}
\subsubsection{Algorithm experiment setup}
\textbf{Datasets \& Baselines.} We consider three datasets (CIFAR-10/CIFAR-100/ImageNet), and evaluate InstantNet over (1) all currently published SP-Nets (AdaBits~\cite{jin2019adabits} and SP~\cite{guerra2020switchable}) with the DoReFa~\cite{zhou2016dorefa} quantizer and (2) a SOTA quantized DNN method SBM~\cite{banner2018scalable} to train a SOTA compact DNN MobileNetV2~\cite{sandler2018mobilenetv2} under individual bits.
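For reference, the DoReFa~\cite{zhou2016dorefa} weight quantizer used by the SP-Net baselines can be sketched as follows. This is a simplified scalar version that treats the tensor-wide normalizer $\max|\tanh(w)|$ as 1, so it is illustrative rather than the exact implementation:

```python
import math

def dorefa_weight(w, bits):
    """Simplified DoReFa k-bit weight quantization: tanh-squash the weight,
    shift to [0, 1], round onto 2^k - 1 uniform levels, map back to [-1, 1].
    (The tensor-wide normalizer max|tanh(w)| is taken as 1 here.)"""
    levels = 2 ** bits - 1
    u = (math.tanh(w) + 1.0) / 2.0     # squash into [0, 1]
    q = round(u * levels) / levels     # uniform quantization
    return 2.0 * q - 1.0               # back to [-1, 1]
```

The tanh squashing bounds the quantizer input regardless of the weight magnitude, which is what keeps the uniform rounding step well conditioned at low bit-widths.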
\textbf{Search and training on CIFAR-10/100 and ImageNet.} \underline{Search space:} we adopt the same search space as~\cite{wu2019fbnet} except the stride settings for each group to adapt to the resolution of the input images in CIFAR-10/100.
\underline{Search settings:} On CIFAR-10/100, we search for 50 epochs with batch size 64. In particular, we (1) update the supernet weights with our cascade distillation technique as in Eq.~(\ref{eqn:cdt}) on half of the training dataset using an SGD optimizer with a momentum of 0.9 and an initial learning rate (LR) of 0.025 with a cosine decay, and (2) update the network architecture parameters with the lowest bit-width as in Eq.~(\ref{eqn:banas}) on the other half of the training dataset using an Adam optimizer with a momentum of 0.9 and a fixed LR of 3e-4. We apply gumbel softmax on the architecture parameters as the contributing coefficients of each option to the supernet (following~\cite{wu2019fbnet}), where the initial temperature is 3 and is decayed by 0.94 at each epoch. On ImageNet, we follow the same hyper-parameter settings for the network search as~\cite{wu2019fbnet}.
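The gumbel-softmax relaxation mentioned above can be sketched in plain Python as below; this is a stand-in for the framework op, with the temperature schedule following the text (initial temperature 3, decayed by 0.94 per epoch):

```python
import math, random

def gumbel_softmax(logits, tau):
    """Soft contributing coefficients for the supernet options: add
    Gumbel(0, 1) noise to each logit and apply a tempered softmax."""
    # guard against random() == 0.0 before taking log(log(.))
    g = [-math.log(-math.log(random.random() or 1e-12)) for _ in logits]
    z = [(l + n) / tau for l, n in zip(logits, g)]
    zmax = max(z)                                # numerical stabilization
    e = [math.exp(v - zmax) for v in z]
    total = sum(e)
    return [v / total for v in e]

def temperature(epoch, tau0=3.0, decay=0.94):
    """Temperature schedule: 3 * 0.94^epoch, per the search settings."""
    return tau0 * decay ** epoch
```

As the temperature decays, the sampled coefficients concentrate on one option, gradually hardening the relaxed architecture choice.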
\underline{Evaluate the derived networks:} for training the derived networks from scratch using our CDT, on CIFAR-10/100 we adopt an SGD optimizer with a momentum of 0.9 and an initial LR 0.025 at a cosine decay. Each network is trained for 200 epochs with batch size 128. On ImageNet, we follow~\cite{wu2019fbnet}.
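The cosine LR decay used above follows the standard schedule; a minimal sketch is given below, assuming the LR decays to zero over the full 200 epochs (the text does not state the end value explicitly):

```python
import math

def cosine_lr(epoch, total_epochs=200, lr0=0.025):
    """Cosine-decayed learning rate from lr0 down to 0 over training."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

For example, the LR starts at 0.025, reaches half of that at the midpoint, and approaches zero at the final epoch.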
\subsubsection{Hardware experiment setup}
\textbf{Implementation methodology.} We consider two commonly used IoT hardware platforms, i.e., ASIC and FPGA, for evaluating our AutoMapper. Specifically, for FPGA, we adopt the Vivado HLx design tool-flow where we first synthesize the mapping-method design in C++ via Vivado HLS, and then plug the HLS exported IPs into a Vivado IP integrator to generate the corresponding bit streams, which are programmed into the FPGA board for on-board
execution and measurements; for ASIC, we synthesize the Verilog designs based on the generated dataflows using a Synopsys Design Compiler on a commercial CMOS technology, and then place and route using a Synopsys IC Compiler for obtaining the resulting design's actual area.
\textbf{Baselines.}
We evaluate AutoMapper over expert/tool generated SOTA dataflows for both FPGA and ASIC platforms, including DNNBuilder~\cite{zhang2018dnnbuilder} and CHaiDNN~\cite{XilinxCH65} for FPGA, and Eyeriss~\cite{eyeriss} and MAGNet~\cite{venkatesanmagnet} for ASIC. For DNNBuilder~\cite{zhang2018dnnbuilder}, MAGNet~\cite{venkatesanmagnet} and CHaiDNN~\cite{XilinxCH65}, we use their reported results; For Eyeriss~\cite{eyeriss}, we use their own published and verified simulator~\cite{Gao2017Tetris} to obtain their results.
\vspace{-0.1cm}
\subsection{Ablation study of InstantNet: CDT}
\label{sec:exp_cd}
\vspace{-0.1cm}
\textbf{Experiment settings.} For evaluating InstantNet's CDT, we benchmark it over a SOTA quantized DNN training method (independently training DNNs at each bit-width) and two SP-Nets (AdaBits~\cite{jin2019adabits} and SP~\cite{guerra2020switchable}). In light of our IoT application goal, we consider MobileNetV2~\cite{sandler2018mobilenetv2} (a SOTA efficient model balancing task accuracy and hardware efficiency) on CIFAR-100, and adopt two bit-width sets with a large and a narrow bit-width dynamic range, respectively. Without loss of generality, our CDT is implemented with the SOTA quantizer SBM~\cite{banner2018scalable} and switchable batch normalization as in SP~\cite{guerra2020switchable}.
\textbf{Results and analysis.}
From Tab.~\ref{tab:cascade}, we have three observations: (1) our CDT consistently outperforms the two SP-Net baselines under all the bit-widths, verifying CDT's effectiveness and our hypothesis that progressively distilling from all higher bit-widths helps more smoothly approach the accuracy of the full-precision model; (2) CDT is particularly capable of boosting accuracy at low bit-widths, which have been shown to be the bottleneck in existing SP-Nets~\cite{jin2019adabits}, e.g., a 2.71\%$\sim$4.4\% higher accuracy on the lowest 4-bit over the two SP-Net baselines; and (3) CDT always achieves a higher or comparable accuracy over the SOTA quantized DNN training method SBM, which independently trains and optimizes each individual bit-width: for bit-widths ranging from 4-bit to 8-bit, CDT achieves a 0.32\%$\sim$0.72\% improvement in accuracy over SBM, indicating the effectiveness of our CDT in boosting DNNs' accuracies under lower bit-widths.
\begin{table}[!t]
\centering
\caption{CDT over independently trained SBM~\cite{banner2018scalable} on \textbf{ResNet-38}, where the values in the bracket represent CDT's accuracy gain over SBM (the higher, the better).}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{ccc|cc}
\toprule
Dataset & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\
\midrule
\multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} \\
\midrule
4 & 90.91 & \textbf{91.45 (+0.54)} & 63.82 & \textbf{64.18 (+0.36)} \\
8 & 92.78 & \textbf{93.03 (+0.25)} & 66.71 & \textbf{67.45 (+0.74)} \\
12 & 92.75 & \textbf{93.06 (+0.31)} & 67.13 & \textbf{67.42 (+0.29)} \\
16 & 92.90 & \textbf{93.09 (+0.19)} & 67.17 & \textbf{67.50 (+0.33)} \\
32 & 92.50 & \textbf{93.08 (+0.58)} & 67.18 & \textbf{67.47 (+0.29)} \\
\midrule
\midrule
4 & 90.91 & \textbf{91.88 (+0.97)} & 63.82 & \textbf{64.12 (+0.30)} \\
5 & 92.35 & \textbf{92.56 (+0.21)} & 66.20 & \textbf{66.68 (+0.48)} \\
6 & 92.80 & \textbf{92.93 (+0.13)} & 66.48 & \textbf{66.55 (+0.07)} \\
8 & 92.78 & \textbf{93.02 (+0.24)} & 66.71 & \textbf{66.88 (+0.17)} \\
\bottomrule
\end{tabular}%
}
\label{tab:resnet38}%
\vspace{-5pt}
\end{table}%
\begin{table}[!t]
\centering
\caption{CDT over independently trained SBM~\cite{banner2018scalable} on \textbf{ResNet-74}, where the values in the bracket represent CDT's accuracy gain over SBM (the higher, the better). }
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{cccccccc}
\toprule
Dataset &\multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\
\midrule
\multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} \\
\midrule
4 & 91.82 & \textbf{92.34 (+0.52)} & 66.31 & \textbf{67.35 (+1.04)} \\
8 & 93.22 & \textbf{93.56 (+0.34)} & 69.85 & \textbf{69.98 (+0.13)} \\
12 & 93.26 & \textbf{93.53 (+0.27)} & 69.97 & \textbf{69.99 (+0.02)} \\
16 & 93.40 & \textbf{93.51 (+0.11)} & 69.92 & \textbf{70.01 (+0.09)} \\
32 & 93.38 & \textbf{93.49 (+0.11)} & 69.46 & \textbf{69.98 (+0.52)} \\
\midrule
\midrule
4 & 91.82 & \textbf{92.51 (+0.69)} & 66.31 & \textbf{67.34 (+1.03)} \\
5 & 92.98 & \textbf{93.54 (+0.56)} & 68.66 & \textbf{69.49 (+0.83)} \\
6 & 93.19 & \textbf{93.47 (+0.28)} & 69.42 & \textbf{69.65 (+0.23)} \\
8 & 93.22 & \textbf{93.72 (+0.50)} & 69.85 & \textbf{70.02 (+0.17)} \\
\bottomrule
\end{tabular}%
}
\label{tab:resnet74}%
\vspace{-1.5em}
\end{table}%
\begin{table}[btp]
\vspace{-1.5em}
\centering
\caption{CDT over SP~\cite{guerra2020switchable} on ResNet-18 and TinyImageNet in terms of test accuracy, where the values in the bracket represent CDT's accuracy gain over SP. }
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c}{Bit-widths} & \multicolumn{2}{c}{Methods} \\
\midrule
Weight & Activation & SP & \textbf{CDT (Proposed)} \\
\midrule
2 & 2 & 47.8 & \textbf{52.3 (+4.5)} \\
2 & 32 & 50.5 & \textbf{51.3 (+0.8)} \\
32 & 2 & 51.8 & \textbf{53.4 (+1.6)} \\
\bottomrule
\end{tabular}
\label{tab:tinyimagenet}
\end{table}
We also benchmark CDT on ResNet-38/74~\cite{wang2018skipnet} with CIFAR-10/CIFAR-100 over independently trained SBM~\cite{banner2018scalable}. As shown in Tab.~\ref{tab:resnet38} and~Tab.~\ref{tab:resnet74} for ResNet-38 and ResNet-74, respectively, CDT consistently achieves a better/comparable accuracy (0.02\%$\sim$1.04\%) over the independently trained ones under all the models/datasets/bit-widths, and notably boosts the accuracy of the lowest bit-width (4-bit) by 0.30\%$\sim$1.04\%.
To evaluate CDT's performance when involving an extremely low bit-width (2-bit), we further benchmark CDT on ResNet-18~\cite{he2016deep} and TinyImageNet~\cite{le2015tiny} over the SP~\cite{guerra2020switchable} baseline. The results are shown in Tab.~\ref{tab:tinyimagenet}. It can be observed that CDT is particularly effective in boosting the accuracy at lower bit-widths.
Specifically, when the weights and activations both adopt 2-bit, the proposed CDT achieves a 4.5\% higher accuracy than that of the baseline SP method.
\begin{figure}[!tb]
\vspace{-0.3em}
\centering
\includegraphics[width=0.45\textwidth]{Figs/exp_spnas.pdf}
\vspace{-0.2cm}
\caption{InstantNet's SP-NAS over Full-Precision-NAS (FP-NAS) and Low-Precision-NAS (LP-NAS) on CIFAR-100 under large, middle, and small FLOPs constraints trained for two bit-width sets: (a) [4, 8, 12, 16, 32], and (b) [4, 5, 6, 8].}
\label{fig:exp_spnas}
\end{figure}
\vspace{-0.1cm}
\subsection{Ablation study of InstantNet: SP-NAS}
\label{sec:exp_nas}
\vspace{-0.1cm}
From Fig.~\ref{fig:exp_spnas}, we can see that: (1) SP-NAS consistently outperforms the baselines at the lowest bit-width, which is the bottleneck in SOTA SP-Nets~\cite{jin2019adabits}, while offering a higher/comparable accuracy at higher bit-widths. Specifically, SP-NAS achieves a 0.71\%$\sim$1.16\% higher accuracy over the strongest baseline at the lowest bit-width on both bit-width sets under the three FLOPs constraints; and (2) SP-NAS shows a notable superiority on the bit-width set with a larger dynamic range which is more favorable for IoT applications as larger bit-width dynamic ranges provide more flexible instantaneous accuracy-efficiency trade-offs. Specifically, compared with the strongest baseline, SP-NAS achieves a 1.16\% higher accuracy at the lowest bit-width and a 0.25\%$\sim$0.61\% higher accuracy at other bit-widths, while offering a 24.9\% reduction in FLOPs on the bit-width set [4, 8, 12, 16, 32]. This experiment validates that SP-NAS can indeed effectively tackle SP-Nets' bottleneck and improve its scalability over previous search methods which fail to guarantee accuracy at lower bit-widths.
\begin{figure}[!tb]
\vspace{-1em}
\centering
\includegraphics[width=0.4\textwidth]{Figs/exp_automapper.pdf}
\caption{AutoMapper over SOTA expert-crafted and tool generated dataflows on FPGA/ASIC.}
\label{fig:exp_automapper}
\vspace{-1.5em}
\end{figure}
\subsection{Ablation study of InstantNet: AutoMapper}
\label{sec:exp_mapping}
As shown in Fig.~\ref{fig:exp_automapper}, we can see that (1) the dataflows suggested by AutoMapper (taking less than 10 minutes of search time) even outperform SOTA expert-crafted designs: the mappings generated by AutoMapper achieve 65.76\% and 85.74\% EDP reductions on AlexNet~\cite{krizhevsky2012imagenet} and VGG16~\cite{simonyan2014very} compared with Eyeriss~\cite{eyeriss}, respectively; (2) AutoMapper achieves higher cost savings on ASIC than on FPGA, because ASIC designs are more flexible than FPGA in their dataflows and thus benefit more from effective automated search tools; and (3) compared with MAGNet, we achieve roughly a 9.3\% reduction in energy cost, as MAGNet only uses a pre-defined set of loop-orders to cover different dataflow scenarios, which may not generically fit networks' diverse layer structures, resulting in inferior performance.
\begin{figure}[!tb]
\vspace{-0.5em}
\centering
\includegraphics[width=0.5\textwidth]{Figs/exp_final_cifar.pdf}
\vspace{-2em}
\caption{InstantNet generated and SOTA IoT systems on CIFAR-10/100 under two bit-width sets.
}
\label{fig:exp_final_cifar}
\vspace{-1.8em}
\end{figure}
\vspace{-0.2em}
\subsection{InstantNet over SOTA systems}
\label{sec:exp_sota}
\begin{wrapfigure}{r}{0.25\textwidth}
\vspace{-2em}
\begin{center}
\includegraphics[width=0.25\textwidth]{Figs/exp_final_imagenet.pdf}
\end{center}
\vspace{-1.5em}
\caption{InstantNet and SOTA IoT systems on ImageNet with bit-widths of $[4, 5, 6, 8]$.}
\label{fig:exp_final_imagenet}
\vspace{-2em}
\end{wrapfigure}
\textbf{Results and analysis on CIFAR-10/100.}
As shown in Fig.~\ref{fig:exp_final_cifar}, we can see that (1) InstantNet generated systems consistently outperform the SOTA baselines in terms of the trade-off between accuracy and EDP (a commonly-used hardware metric for ASIC), achieving a higher or comparable accuracy and a better EDP under lower bit-widths than the baselines. In particular, InstantNet achieves up to an 84.67\% reduction in EDP with a 1.44\% higher accuracy on CIFAR-100 with the bit-width set of $[4, 8, 12, 16, 32]$;
and (2) InstantNet always surpasses the SOTA baselines under the bottleneck bit-width, i.e., the lowest one, with a 62.5\%$\sim$73.68\% reduction in EDP and a 0.91\%$\sim$5.25\% higher accuracy, which is notably more practical for real-world IoT deployment.
\textbf{Results and analysis on ImageNet.} As shown in Fig.~\ref{fig:exp_final_imagenet}, InstantNet generated system achieves a $1.86\times$ improvement in Frame-Per-Second (FPS) while having a comparable accuracy (-0.05\%) over the SOTA FPGA based IoT system.
\section{Conclusion}
We propose an \textit{automated} framework termed \textbf{InstantNet} to automatically search for SP-Nets (i.e., capable of operating at variable bit-widths) that can achieve the same or even better accuracy than DNNs optimized for individual bit-widths, and to generate optimal dataflows to maximize efficiency when DNNs are executed under various bit-widths on different devices.
Extensive experiments show that
InstantNet is a promising automated framework for expediting the development and deployment of efficient DNNs for numerous IoT applications with diverse specifications.
\section*{Acknowledgement}
The work is supported by the National Science Foundation (NSF) through the Energy, Power, Control, and Networks (EPCN) program (Award number: 1934755, 1934767).
\section{More ablation studies of Cascade Distillation Training (CDT)}
\subsection{Visualization of prediction distribution}
We visualize the prediction distribution (classification probability after softmax) of MobileNetV2 on CIFAR-100 under the bit-width set of 4, 8, 12, 16, 32 (quantized by SBM~\cite{banner2018scalable}) trained by different strategies in Fig.~\ref{fig:output}.
In particular, we show the prediction distributions for the following three cases with a randomly sampled image from the test dataset in Fig.~\ref{fig:output} to verify the effectiveness of CDT: (1) 4-bit trained by vanilla distillation, i.e., only considering distillation from the 32-bit width, (2) 4-bit trained by the proposed CDT technique, and (3) 32-bit trained by vanilla distillation. We can observe that vanilla distillation fails to narrow the gap between 32-bit and the lowest 4-bit due to the large quantization error. This is a common phenomenon across the test dataset: the validation accuracy of the 4-bit network is around 1\%, denoting the failure of vanilla distillation in tackling bit-width sets with a large dynamic range. In contrast, the proposed CDT notably helps the prediction distribution of the 4-bit network smoothly evolve toward that of the 32-bit one, and also boosts its validation accuracy to 71.21\%, verifying the superiority of CDT.
\begin{figure*}[!bt]
\centering
\includegraphics[width=\textwidth]{Figs/final_output_63.png}
\caption{Prediction distribution of MobileNetV2 on CIFAR-100 under \textbf{(left)}: 4-bit trained by vanilla distillation, \textbf{(middle)} 4-bit trained by the proposed Cascade Distillation Training and \textbf{(right)} 32-bit trained by vanilla distillation.}
\label{fig:output}
\end{figure*}
\subsection{Experiments on different networks}
To further verify the general effectiveness of the proposed CDT in addition to Sec. 4.2 in the main content, we benchmark CDT on ResNet-38/74~\cite{wang2018skipnet} with CIFAR-10/CIFAR-100 and bit-width sets of $[4,8,12,16,32]$ / $[4,5,6,8]$ over the independently trained SBM~\cite{banner2018scalable} baseline. For consistency and fair comparison, we apply the same training settings as in Sec. 4.2. As shown in Tab.~\ref{tab:resnet38} and~\ref{tab:resnet74} for ResNet-38 and ResNet-74, respectively, we can observe that (1) CDT consistently achieves a better or comparable accuracy (0.02\%$\sim$1.04\%) compared with the independently trained baselines under all the models, datasets and bit-width settings, and (2) CDT notably boosts the accuracy of the lowest bit-width (4-bit) by 0.30\%$\sim$1.04\%, further proving CDT's capability of tackling the bottleneck bit-width in SP-Nets.
\begin{table}[htbp]
\centering
\caption{Benchmark the proposed CDT over independently trained SBM~\cite{banner2018scalable} on ResNet-38.}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccccc}
\toprule
\multicolumn{4}{c}{CIFAR-10} & \multicolumn{4}{c}{CIFAR-100} \\
\midrule
\multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{\textbf{Improvement}} & \multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{\textbf{Improvement}} \\
\midrule
4 & 90.91 & \textbf{91.45} & \textbf{0.54} & 4 & 63.82 & \textbf{64.18} & \textbf{0.36} \\
8 & 92.78 & \textbf{93.03} & \textbf{0.25} & 8 & 66.71 & \textbf{67.45} & \textbf{0.74} \\
12 & 92.75 & \textbf{93.06} & \textbf{0.31} & 12 & 67.13 & \textbf{67.42} & \textbf{0.29} \\
16 & 92.90 & \textbf{93.09} & \textbf{0.19} & 16 & 67.17 & \textbf{67.50} & \textbf{0.33} \\
32 & 92.50 & \textbf{93.08} & \textbf{0.58} & 32 & 67.18 & \textbf{67.47} & \textbf{0.29} \\
\midrule
\midrule
4 & 90.91 & \textbf{91.88} & \textbf{0.97} & 4 & 63.82 & \textbf{64.12} & \textbf{0.30} \\
5 & 92.35 & \textbf{92.56} & \textbf{0.21} & 5 & 66.20 & \textbf{66.68} & \textbf{0.48} \\
6 & 92.80 & \textbf{92.93} & \textbf{0.13} & 6 & 66.48 & \textbf{66.55} & \textbf{0.07} \\
8 & 92.78 & \textbf{93.02} & \textbf{0.24} & 8 & 66.71 & \textbf{66.88} & \textbf{0.17} \\
\bottomrule
\end{tabular}%
}
\label{tab:resnet38}%
\end{table}%
\begin{table}[htbp]
\centering
\caption{Benchmark the proposed CDT over independently trained SBM~\cite{banner2018scalable} on ResNet-74.}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccccc}
\toprule
\multicolumn{4}{c}{CIFAR-10} & \multicolumn{4}{c}{CIFAR-100} \\
\midrule
\multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{\textbf{Improvement}} & \multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{\textbf{Improvement}} \\
\midrule
4 & 91.82 & \textbf{92.34} & \textbf{0.52} & 4 & 66.31 & \textbf{67.35} & \textbf{1.04} \\
8 & 93.22 & \textbf{93.56} & \textbf{0.34} & 8 & 69.85 & \textbf{69.98} & \textbf{0.13} \\
12 & 93.26 & \textbf{93.53} & \textbf{0.27} & 12 & 69.97 & \textbf{69.99} & \textbf{0.02} \\
16 & 93.40 & \textbf{93.51} & \textbf{0.11} & 16 & 69.92 & \textbf{70.01} & \textbf{0.09} \\
32 & 93.38 & \textbf{93.49} & \textbf{0.11} & 32 & 69.46 & \textbf{69.98} & \textbf{0.52} \\
\midrule
\midrule
4 & 91.82 & \textbf{92.51} & \textbf{0.69} & 4 & 66.31 & \textbf{67.34} & \textbf{1.03} \\
5 & 92.98 & \textbf{93.54} & \textbf{0.56} & 5 & 68.66 & \textbf{69.49} & \textbf{0.83} \\
6 & 93.19 & \textbf{93.47} & \textbf{0.28} & 6 & 69.42 & \textbf{69.65} & \textbf{0.23} \\
8 & 93.22 & \textbf{93.72} & \textbf{0.50} & 8 & 69.85 & \textbf{70.02} & \textbf{0.17} \\
\bottomrule
\end{tabular}%
}
\label{tab:resnet74}%
\end{table}%
\section{Details about AutoMapper}
\subsection{The \textit{for-loop} based representation adopted in the generic design space}
\begin{figure}[!b]
\begin{minipage}{\linewidth}
\begin{algorithm}[H]
\caption{CONV as \textit{for-loop} structure}
\label{alg:2dconv}
\begin{algorithmic}[1]
\vspace{5pt}
\STATE{ \textit{for} $s=0: S-1$ \textbf{kernel\_col}}
\vspace{10pt}
\STATE{\hspace{8pt} \textit{for} $r=0: R-1$ \textbf{kernel\_row}}
\vspace{10pt}
\STATE{\hspace{16pt} \textit{for} $x=0: X-1$ \textbf{output\_col}}
\vspace{10pt}
\STATE{\hspace{24pt} \textit{for} $y=0: Y-1$ \textbf{output\_row}}
\vspace{10pt}
\STATE{\hspace{32pt} \textit{for} $k=0: K-1$ \textbf{channel\_out}}
\vspace{10pt}
\STATE{ \hspace{40pt}\textit{for} $c=0: C-1$ \textbf{channel\_in}}
\vspace{10pt}
\STATE{\hspace{48pt}\color{blue} // MAC operation}
\STATE{\hspace{48pt} ofmap[k][y][x]+=
\\ \vspace{-1em}\hspace{48pt} ifmap[c][y+r][x+s]*kernel[c][k][r][s]}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}{\linewidth}
\begin{algorithm}[H]
\caption{CONV as \textit{for-loop} structure, with \textit{parallel-for}s and \textit{inner-loops} included }
\label{alg:2dconv2}
\begin{algorithmic}[1]
\STATE{\hspace{0pt} {{\color{blue}// DRAM level }} }
\vspace{-2pt}
\STATE{... }
\vspace{-2pt}
\STATE{\hspace{0pt} {{\color{blue}// Global buffer level }} }
\STATE{\hspace{0pt} \textit{for} $s_2=0: S_2-1$ \textbf{kernel\_col}}
\vspace{-2pt}
\STATE{\hspace{8pt}...}
\vspace{-2pt}
\STATE{\hspace{16pt}\textit{for} $c_2=0: C_2-1$ \textbf{channel\_in}}
\vspace{4pt}
\STATE{\hspace{0pt} {{\color{blue}// NoC level}} }
\STATE{ \textit{parallel-for} $y_1=0: Y_1-1$ \textbf{output\_row}}
\STATE{ \textit{parallel-for} $x_1=0: X_1-1$ \textbf{output\_col}}
\vspace{4pt}
\STATE{\hspace{0pt} {{\color{blue}// RF level}} }
\STATE{ \textit{for} $c_0=0: C_0-1$ \textbf{channel\_in}}
\STATE{\hspace{8pt}...}
\STATE{\hspace{16pt} \textit{for} $s_0=0: S_0-1$ \textbf{kernel\_col}}
\STATE{\hspace{24pt} MAC operation}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure}
For describing DNNs' mapping strategies on real accelerators, one of the most common methods is the \textit{for-loop} description in Alg.\ref{alg:2dconv2}, due to its compatibility with the standard DNN \textit{for-loop} description (see Alg.\ref{alg:2dconv}) and its straightforward representation. Specifically, the representation in Alg.\ref{alg:2dconv2} incorporates two additional primitives, 1) \textit{parallel-for} and 2) \textit{memory-hierarchy}, into the base loop structure to cover parallelism and inter-memory-hierarchy data movement in the implemented accelerators. Our mapping space adopts the same convention for its generality. The additional primitives are elaborated below:
\underline{\textit{parallel-for}}: the parallel computation over certain data dimensions. For instance, in Alg.\ref{alg:2dconv2}, computation on different rows and columns of the output is distributed to $X_1*Y_1$ PEs and performed in parallel.
\underline{\textit{memory-hierarchy}}: multi-level memory hierarchies are commonly adopted by SOTA accelerators to maximize data reuse and time-energy efficiency. Motivated by this advantage, the description in Alg.\ref{alg:2dconv2} also uses multiple levels of nested loops, each of which corresponds to one memory level. For instance, as shown in Alg.\ref{alg:2dconv2}, we extend the original loop structure to multiple levels of loops, with each level representing the temporal data computation and transfer of intermediate results within one memory, e.g., splitting $C$ into $C_0$ and $C_2$ for tiling data between the Register File (RF) and the Global Buffer.
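For concreteness, the base loop nest of Alg.\ref{alg:2dconv} executes directly as below (tiny toy sizes; stride 1, no padding):

```python
def conv_loops(ifmap, kernel, K, X, Y, R, S, C):
    """Direct execution of Alg. 2dconv's 6-loop nest (stride 1, no padding):
    ofmap[k][y][x] += ifmap[c][y+r][x+s] * kernel[c][k][r][s]."""
    ofmap = [[[0.0] * X for _ in range(Y)] for _ in range(K)]
    for s in range(S):                   # kernel_col
        for r in range(R):               # kernel_row
            for x in range(X):           # output_col
                for y in range(Y):       # output_row
                    for k in range(K):   # channel_out
                        for c in range(C):  # channel_in
                            ofmap[k][y][x] += (
                                ifmap[c][y + r][x + s] * kernel[c][k][r][s]
                            )
    return ofmap
```

Every dataflow in the design space is a reordering, tiling, and parallelization of exactly this loop nest; the MAC statement itself never changes.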
\begin{table}[t]
\centering
\caption{\textbf{Detailed illustration of design factors across different memory hierarchies, with $n$ denoting the number of loops at a single memory hierarchy, which also equals the number of data dimensions.}}
\resizebox{\linewidth}{!}{
\renewcommand\arraystretch{1.2}
\begin{tabular}{c|c|c}
\cline{1-3}
\multicolumn{1}{l|}{\textbf{}} & Design factors & Search Space \\ \cline{1-3}
\multirow{2}{*}{DRAM} & \textit{loop-order} & $\{ (l_1,..l_n) | \forall i,j, \, i\neq j \rightarrow l_i\neq l_j \}$ \\ \cline{2-3}& \textit{loop-size} & $\{ (ls_1,..ls_n) \}$ \\ \cline{1-3}
\multirow{2}{*}{Global Buffer} & \textit{loop-order} & $\{ (l_1,..l_n) | \forall i,j, \, i\neq j \rightarrow l_i\neq l_j \}$ \\ \cline{2-3}& \textit{loop-size} & $\{ (ls_1,..ls_n) \}$ \\ \cline{1-3}
\multirow{2}{*}{RF} & \textit{loop-order} & $\{ (l_1,..l_n) | \forall i,j, \, i\neq j \rightarrow l_i\neq l_j \}$ \\ \cline{2-3}& \textit{loop-size} & $\{ (ls_1,..ls_n) \}$ \\ \cline{1-3}
PE& \textit{Pipeline / Multi-cycle} & $ \{0, 1\} $ \\ \cline{1-3}
\end{tabular}
}
\label{tab: design_factors}
\end{table}
\subsection{Design factors}
As shown in Tab.\ref{tab: design_factors}, the proposed generic design space contains three major design factors: 1) \textit{loop-order}, 2) \textit{loop-size}, and 3) \textit{pipeline/multi-cycle}. Each memory level and the PE array are paired with an independent set of these design factors, as illustrated in Tab.\ref{tab: design_factors}. More details about each design factor are provided below:
\begin{figure}[tbh]
\vspace{-1.5em}
\begin{minipage}{\linewidth}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwIn{Efficiency Goal, DNN, Design Space (DS)}
\KwOut{Optimal algorithm-to-device mapping}
Build a $pool$ with $n$ random samples from DS\;
\While{\textit{Efficiency Goal not met}}
{
\uIf{$size(pool) \leq n$}
{
\For{$m$ iterations}
{
Random Pick $p \in pool$\;
Random Perturb $k$ features of $p$\;
Add $p$ to $pool$\;
}
}
\Else
{
Rank the samples in $pool$ with the given DNN\;
Remove worst $m$ samples from \textit{pool}\;
}
}
\Return{optimal mapping in $pool$}
\caption{Evolutionary AutoMapper}\label{alg:eaalg}
\end{algorithm}
\end{minipage}
\vspace{-2em}
\end{figure}
\textbf{\textit{loop-order}}: the order of the loops at a specific memory level, which includes $n$ loops, equal to the total number of data dimensions (six in the case of Alg.\ref{alg:2dconv}). Deciding the \textit{loop-order} is then an $n$-item ordering problem, which can be solved by picking a loop, such as the loop over input channels, for each of the $n$ positions in the loop structure at each memory level. The searchable choices can then be represented as a list of unique integers, as in Tab.\ref{tab: design_factors}, with the element at list position $i$ denoting a certain loop at position $i$ of the nested loop structure.
\textbf{\textit{loop-size}}:
the size of each loop in the \textit{for-loop} structure, which follows the principle that the product of all \textit{loop-size}s belonging to one data dimension must equal the size of that dimension in the given DNN structure. For instance, in Alg.\ref{alg:2dconv2}, $C_0*C_2*C_3=C$ (input channel size). Therefore, we generate all the possible choices for \textit{loop-size}s from the factorization of each DNN data dimension, respectively. The final choice of a set of \textit{loop-size}s is subject to (1) the product equaling the corresponding data dimension, and (2) memory size constraints. The final choices take the form of a list of integers of size $n$.
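The enumeration of \textit{loop-size} candidates can be sketched as follows (our illustration; the function name is ours, and memory-size constraints would further prune the returned list):

```python
def loop_size_choices(dim, n):
    """All ordered n-tuples of positive integers whose product equals
    `dim`, i.e., candidate loop-size assignments for one data dimension."""
    if n == 1:
        return [(dim,)]
    choices = []
    for f in range(1, dim + 1):
        if dim % f == 0:  # f is a valid size for the outermost loop
            choices.extend((f,) + rest
                           for rest in loop_size_choices(dim // f, n - 1))
    return choices
```

For example, a 12-channel dimension tiled across three memory levels admits candidates such as $(2,2,3)$ and $(12,1,1)$, each with product 12.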
\textbf{\textit{Pipeline/multi-cycle}}: a binary choice between a layer-wise pipeline hardware architecture and a multi-cycle hardware architecture shared by all layers.
\subsection{Detailed evolutionary search algorithm}
To deal with the prohibitively large mapping space, we adopt an evolutionary search algorithm, since evolutionary algorithms offer stronger exploitation than random search while remaining well suited to the highly discrete space \cite{google_ev,genesys}.
We summarize our Evolutionary AutoMapper in Alg.\ref{alg:eaalg}. By iteratively filtering out the worst-performing samples and perturbing the well-performing ones, the AutoMapper balances the exploration of the generic design space with the exploitation of advantageous mappings found during the search. In particular, the population size $n$ is set to $1000$, and $m$ and $k$, the number of samples and features to be updated, are set to 30\% of the population size and 30\% of the total number of features, respectively.
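A minimal, self-contained sketch of the search loop in Alg.\ref{alg:eaalg} (ours; the fitness function, pool size, and perturbation below are toy placeholders, whereas a real run ranks mappings with the cost model on the given DNN):

```python
import random

def automapper(fitness, sample, perturb, goal, n=20, m=6, steps=200):
    """Evolutionary AutoMapper sketch: keep a pool of candidate mappings,
    refill it by perturbing random survivors, and prune the m worst-ranked
    samples once the pool overflows."""
    pool = [sample() for _ in range(n)]
    best = max(pool, key=fitness)
    for _ in range(steps):
        if fitness(best) >= goal:              # efficiency goal met
            break
        if len(pool) <= n:                     # explore: mutate survivors
            pool += [perturb(random.choice(pool)) for _ in range(m)]
        else:                                  # exploit: drop worst m
            pool.sort(key=fitness, reverse=True)
            del pool[-m:]
        best = max(pool, key=fitness)
    return best

# Toy usage: recover the integer 7 on a 1-D fitness landscape.
random.seed(0)
result = automapper(fitness=lambda x: -abs(x - 7),
                    sample=lambda: random.randint(0, 50),
                    perturb=lambda x: x + random.randint(-3, 3),
                    goal=0)
```

Because pruning always removes the worst-ranked samples, the best fitness in the pool is monotonically non-decreasing over iterations.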
\section{Details about experiment setup}
\subsection{Hardware setup}
\underline{Mapping implementation methodology:}
we consider two commonly used IoT hardware platforms, i.e., ASIC and FPGA, for evaluating our AutoMapper. Specifically, for the FPGA setup, we choose the Vivado HLx design tool-flow, where we first synthesize the accelerator design in C++ via Vivado HLS and then plug the HLS-exported IPs into a Vivado IP integrator to generate the corresponding bit streams, which are programmed into the FPGA board for on-board execution and measurement; for the ASIC setup, we synthesize Verilog designs, generated from the mapping, with Synopsys Design Compiler based on a commercial CMOS technology, and then place and route them in Synopsys IC Compiler to obtain the actual area.
\subsection{Software setup}
\textbf{Search and training on CIFAR-10/100.} \underline{Search space.} We adopt the same search space as~\cite{wu2019fbnet} except for the stride settings of each group (i.e., a set of blocks with the same number of output channels), which we adapt to the input resolution of CIFAR-10/100. We follow the stride settings of MobileNetV2 on CIFAR-10/100 in~\cite{wang2019e2}, i.e., $[1, 1, 2, 2, 1, 2, 1]$ for the seven groups.
\underline{Search settings.} We search for the optimal DNN network in 50 epochs with a batch size of 64. In particular, we update the supernet weights with cascade distillation as formulated in Eq.(2) on half of the training dataset with an SGD optimizer, a momentum of 0.9, and an initial learning rate of 0.025 with cosine decay, and update network architecture parameters with the lowest bit-width as in Eq.(2) on the other half of the training dataset with an Adam optimizer, a momentum of 0.9, and a fixed learning rate of 3e-4. We apply gumbel softmax on the architecture parameters as the contributing coefficients of each option to the supernet (following~\cite{wu2019fbnet}), where the initial temperature is 3 and decays by 0.94 at the end of each epoch.
\underline{Train from scratch.} For training the derived network architectures from scratch with the proposed Cascade Distillation Training (CDT), we adopt an SGD optimizer with a momentum of 0.9, an initial learning rate of 0.025 with cosine decay for 200 epochs and a batch size of 128.
\textbf{Search and training on ImageNet.} \underline{Search space.} We adopt the same search space as~\cite{wu2019fbnet}.
\underline{Search settings.} We follow the same hyper-parameter settings for network search as ~\cite{wu2019fbnet} on ImageNet. In particular, with the proposed SP-NAS algorithm in Eq.(2), we search for the optimal network for 90 epochs with a batch size of 192, update the supernet weights on 80\% of the training dataset with an SGD optimizer, a momentum of 0.9, and an initial learning rate of 0.1 with the cosine decay, and update the network architecture parameters on the remaining 20\% training dataset with an Adam optimizer, a momentum of 0.9, and a fixed learning rate of 1e-2. The initial temperature of gumbel softmax is 5 and decays by 0.956 every epoch.
\underline{Train from scratch.} Following~\cite{wu2019fbnet}, we apply the proposed CDT to train the derived network with an SGD optimizer, a momentum of 0.9, an initial learning rate of 0.1 which decays by 0.1 at the 90-th, 180-th, 270-th epoch, respectively, within the total 360 epochs and a batch size of 256.
\section{Introduction}
In discussing the strong interaction, it is customary to assume
the validity of charge symmetry, which interchanges protons and
neutrons (simultaneously interchanging up and down quarks). For example,
all phenomenological analyses of deep inelastic scattering
data in terms of parton distribution functions assume
charge symmetry from the beginning. Our faith in charge
symmetry is justified from our experience in
nuclear physics, where this symmetry
is respected to a high degree of precision. Most experimental
low-energy tests of charge symmetry find that it is good to at least
$1\%$ in reaction amplitudes \cite{Miller,Henley}.
Until recently such an assumption seemed to be
justified, as there was no compelling experimental evidence against
parton charge symmetry. The quantitative evidence which could be
extracted from high energy experiments, although not particularly precise,
was consistent with charge symmetric parton distributions
\cite{Lon98}.
Experimental verification of charge symmetry is difficult, partly
because the relative charge symmetry violation (CSV) effects
are expected to be small, requiring high precision experiments to
measure CSV effects,
and partly because CSV often mixes with parton flavor symmetry violation
(FSV). Recent experimental measurements by the NMC Collaboration
\cite{NMCfsv}, demonstrating the violation of the Gottfried
sum rule \cite{Gottfried}, have been widely interpreted as
evidence for what is termed SU(2) FSV. The measurement of the
ratio of Drell-Yan cross sections in proton-deuteron and
proton-proton scattering, first by the NA51-Collaboration at CERN
\cite{Na51} and more recently by the E866 experiment
at FNAL \cite{E866}, also indicate substantial FSV.
However, both of these experiments
could in principle be explained by sufficiently large CSV effects
\cite{Ma1,Steffens}, even in the limit of exact flavor symmetry.
In view of these ambiguities in the interpretation of current
experimental data, it would be highly desirable to have experiments
which separate CSV from FSV.
A few experiments have been already proposed
\cite{Tim1,Tim2} and could be carried out in the near future.
Recent experiments now allow us for the first time to make
precision tests which could put tight upper limits on parton
CSV contributions. The NMC measurements of muon DIS on deuterium
\cite{NMC} provide values for the charged lepton structure
function $F_2^\mu(x,Q^2)$. In a similar $Q^2$ regime the
CCFR Collaboration \cite{CCFR} extract the structure functions
$F_2^\nu(x,Q^2)$ from neutrino--induced charge-changing reactions.
As we show in sec.\ II, the ``charge ratio'',
which can be constructed from these two quantities (plus
information about the strange quark distribution) can in
principle place strong constraints on parton CSV distributions.
We will show that, for intermediate values $ 0.1 \le x \le 0.4$,
the agreement between the two structure functions is impressive,
and provides the best upper limit to date on parton CSV terms.
However, the charge ratio shows a substantial
deviation from unity in the region $x < 0.1$, which might suggest
surprisingly
large charge symmetry violation. In a recent Letter \cite{Bor98} we
argued that the data supported this conclusion. However,
several important corrections have to be applied to the data
before any conclusions can be reached. These corrections are
especially important for the neutrino cross sections.
In sec.\ II we discuss the uncertainties involved in the
analysis of the data. Most corrections have already been accounted
for in the present experimental analysis. We particularly focus on
two aspects of the neutrino reactions: heavy target
corrections and effects due to strange and antistrange quark distributions.
In sec.\ III we demonstrate that neither of these effects are
sufficient to account for the apparent discrepancy at small $x$.
The charge symmetry violating distributions can be obtained from a
combination of neutrino charged current structure functions, muon
structure functions and strange quark distributions extracted from
dimuon production in neutrino reactions. We construct such a
combination and extract the CSV terms. Assuming the validity of
the experimental data, we find CSV effects on the order of 25\% of
the sea quark distributions at low $x$. In sec.\ IV
we discuss the consequences of
such large CSV effects on other observables. We
examine the role played by CSV in the extraction of the FSV
ratio $\bar d/\bar u$, in the Gottfried sum rule
and in the experimental determination of the Weinberg
angle sin$^2\theta_W$.
In sec.\ V we suggest an experiment which could measure the substantial
CSV suggested by our analysis.
\section{Comparing Structure Functions From Neutrino and Charged
Lepton Reactions}
Our analysis of parton charge symmetry violation is based on
the ``charge ratio,'' which we review here. This depends on
the ratio of $F_2$ structure functions extracted from charged
lepton reactions with those from neutrino charge--changing
reactions. Because neutrino cross sections are so small, at
present the structure functions can only be measured for
heavy targets such as iron. Furthermore, in order to obtain
useful statistics, the data must be integrated over all
energies for a given $x$ and $Q^2$ bin. As a result, only
certain linear combinations of neutrino and antineutrino
structure functions can be obtained. The process by which
we attempt to extract parton CSV contributions is complicated,
and requires input from several experiments. In this section
we review this process in detail.
\subsection{The ``Charge Ratio'' and Charge Symmetry Violation}
Structure functions measured in neutrino and muon deep inelastic
scattering are interpreted in terms of parton distribution functions.
Since the operation of charge symmetry maps up quarks to down quarks,
and protons to neutrons, at the level of parton distributions,
charge symmetry implies the equivalence between up (down)
quark distributions in the proton and down (up) quark distributions
in the neutron. In order to take CSV in
the parton distributions into account, we define
the charge symmetry violating distributions as
\begin{eqnarray}
\delta u(x)& =& u^p(x) -d^n(x) \nonumber\\
\delta d(x)& =& d^p(x) -u^n(x),
\end{eqnarray}
where the superscripts $p$ and $n$ refer to
quark distributions in the proton and neutron, respectively.
The relations for CSV in antiquark distributions
are analogous. If charge symmetry were exact then the
quantities $\delta u(x)$ and $\delta d(x)$ would vanish.
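For completeness, the analogous antiquark distributions are
\begin{eqnarray}
\delta \bar u(x)& =& \bar u^p(x) -\bar d^n(x) \nonumber\\
\delta \bar d(x)& =& \bar d^p(x) -\bar u^n(x),
\end{eqnarray}
and these, too, vanish in the charge symmetric limit.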
In the quark-parton model the structure functions measured in
neutrino, antineutrino and charged lepton DIS
on an isoscalar target, $N_0$,
are given in terms of the
parton distribution functions and the
charge symmetry violating distributions defined above by
\cite{Lon98}
\begin{eqnarray}
F_2^{\nu N_0} (x,Q^2) &=& x[u(x)+ \bar u(x) +d(x) +\bar d(x)
+ 2 s(x) + 2 \bar c(x) -\delta u(x)-\delta \bar d(x)] \nonumber \\
F_2^{\bar \nu N_0} (x,Q^2) &=& x[u(x)+ \bar u(x) +d(x) +\bar d(x)
+ 2 \bar s(x) + 2 c(x) -\delta d(x)-\delta \bar u(x)] \nonumber \\
xF_3^{\nu N_0}(x,Q^2) &=& x[u(x) + d(x) -\bar u(x) - \bar d(x)
+2 s(x)-2 \bar c(x) -\delta u(x) +\delta\bar d(x)]
\nonumber\\
xF_3^{\bar\nu N_0}(x,Q^2) &=& x[u(x) + d(x) -\bar u(x) - \bar d(x)
-2 \bar s(x)+2 c(x) -\delta d(x) +\delta\bar u(x)] \nonumber\\
F_2^{\ell N_0}(x,Q^2) & =& \frac{5}{18} x
[ u(x) + \bar u(x)
+d(x) +\bar d(x) + \frac{2}{5} (s(x) + \bar s(x))
+ \frac{8}{5}(c(x)+\bar c(x)) \nonumber\\
& - & \frac{4}{5}
(\delta d(x)+\delta \bar d(x)) - \frac{1}{5} ( \delta u(x)+\delta
\bar u(x))]
\label{eq2}
\end{eqnarray}
Here, and in the following, quark distributions without
superscripts denote quark distributions in the proton. From
now on, we will disregard charm quark contributions to the
structure functions.
Since phenomenological parton distribution functions assume
the validity of charge symmetry, possible CSV effects are folded
into the commonly used phenomenological parton
distribution functions in a highly
non-trivial way. Nevertheless, using the above relations, it is
possible to test the validity of charge symmetry by
building appropriate linear combinations or ratios of the measured
structure functions. One such possibility is to
calculate the ``charge ratio'', which relates the neutrino
structure function to the structure function measured in charged
lepton deep-inelastic scattering
\begin{eqnarray}
R_c(x,Q^2) & \equiv & \frac{F_2^{\mu N_0}(x,Q^2)}{\frac{5}{18}
F_2^{\nu N_0}(x,Q^2) -x( s(x) +\bar s(x))/6} \nonumber\\
&\approx & 1 - \frac{s(x) -\bar s(x)}{\overline{Q}(x)} +
\frac{4\delta u(x) - \delta \bar u(x) - 4 \delta d(x)
+\delta \bar d(x)}{5 \overline{Q}(x)}.
\label{rc}
\end{eqnarray}
Here, we defined
$\overline{Q}(x) \equiv \sum_{q=u,d,s} (q(x)+\bar q(x))-
3(s(x)+\bar s(x))/5$,
and we have expanded Eq.\ \ref{rc} to lowest order in small quantities.
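Explicitly, inserting the expressions of Eq.\ \ref{eq2} (neglecting charm) into Eq.\ \ref{rc}, the numerator and denominator become
\begin{eqnarray}
F_2^{\mu N_0} &=& \frac{5}{18}\, x\left[ \overline{Q}(x)
- \frac{4}{5}\left(\delta d(x)+\delta \bar d(x)\right)
- \frac{1}{5}\left(\delta u(x)+\delta \bar u(x)\right)\right] \nonumber\\
\frac{5}{18}\, F_2^{\nu N_0} - x( s(x) +\bar s(x))/6 &=&
\frac{5}{18}\, x\left[ \overline{Q}(x) + s(x) -\bar s(x)
-\delta u(x) -\delta \bar d(x)\right]\, ,
\end{eqnarray}
so that expanding the ratio to first order in the small CSV and $s-\bar s$ terms yields the second line of Eq.\ \ref{rc}.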
From Eq.\ \ref{rc} we see that any deviation of the charge ratio from
unity, at any value of $x$, would be due either to CSV effects or to
different strange and antistrange quark distributions. Analogous
relations could be obtained using structure functions from antineutrinos,
or from a linear combination of neutrino and antineutrino structure
functions. For example, we can derive
\begin{eqnarray}
{\cal R}_c (x,Q^2) & \equiv &
\frac{F_2^{\mu N_0}(x,Q^2)}{\frac{5}{18}
{\cal F}_2^{\nu N_0}(x,Q^2) -x( s(x) +\bar s(x))/6} \nonumber\\
&\approx & 1 + \frac{3\left( \delta u(x) + \delta \bar u(x) - \delta d(x)
-\delta \bar d(x)\right)}{10 \overline{Q}(x)}.
\label{rc2}
\end{eqnarray}
In Eq.\ \ref{rc2} ${\cal F}_2^{\nu N_0}(x,Q^2) = (F_2^{\nu N_0}(x,Q^2) +
F_2^{\bar\nu N_0}(x,Q^2))/2$ is the average of the structure functions
from neutrino and antineutrino reactions; deviations from one in
the ratio ${\cal R}_c (x)$ depend only on parton CSV
contributions, and have no contribution from strange or antistrange
quark distributions.
The recent measurement of the structure function
$F_2^\nu$ by the CCFR-Collaboration \cite{CCFR} makes it possible
to carry out a precise comparison between $F^\nu_2(x,Q^2)$
and $F_2^{\mu}(x,Q^2)$ for the first time.
The CCFR-Collaboration compared the neutrino structure function
$F_2^\nu (x,Q^2)$ extracted from their data on an iron target \cite{CCFR}
with $F_2^\mu (x,Q^2)$ measured for the deuteron by the NMC
Collaboration \cite{NMC}. In the region of intermediate values of
Bjorken $x$ ($0.1 \le x \le 0.4$), they found very good agreement between
the two structure functions. In this $x$ region, this allows us to set
upper limits of a few percent on parton CSV contributions.
On the other hand, in the small
$x$-region ($x < 0.1$), the CCFR group found that the two
structure functions differ by as much as 10-15$\%$.
This can be seen in Fig.\ref{fig1} where the ``charge ratio''
has been obtained by integrating over the region of overlap in
$Q^2$ of the two experiments. The open and solid circles in
Fig.\ \ref{fig1} represent two different ways of calculating
nuclear shadowing corrections, as we will discuss later.
\subsection{Extracting Structure Functions From Neutrino Cross
Sections}
In order to perform tests of parton distributions through, say,
the charge ratio of Eq.\ \ref{rc}, we need the structure functions
from neutrino charge--changing reactions on a free proton
and neutron. These are written in terms of parton distributions
in Eq.\ \ref{eq2}. Because of the extremely small cross sections
for neutrino--induced reactions, we are able to
obtain statistically meaningful cross sections only from heavier
targets such as iron. We then have to make the following
corrections in order to extract the neutrino structure functions on
``free'' nucleons, averaged over proton and neutron,
$F_j^{\nu,\bar\nu\,N_0}(x,Q^2)$, ($j=2,3$): i) The nuclear structure
functions $F_j^{\nu,\bar\nu\,Fe}(x,Q^2)$ must be extracted from the
cross sections; ii) The nuclear structure functions need to be corrected
for the excess of neutrons in iron (isoscalar effects);
iii) Kinematic corrections must be applied to account for
heavy quark thresholds, particularly charm quark threshold effects
(Eq.\ \ref{rc} is valid only well above all heavy quark thresholds);
iv) Heavy target corrections must be applied, to
convert structure functions for nuclei to those for free protons
and neutrons; v) The neutrino and muon cross sections must be
properly normalized. In order to test charge symmetry, all these
corrections have to be taken
into account. The data have already been
corrected for normalization, isoscalar and charm threshold effects by the
CCFR-Collaboration in their analyses \cite{CCFR}. There is a
thorough discussion of these points in the thesis by W. Seligman
\cite{Sel97}. Here we will review how the nuclear structure functions
are extracted from the cross sections, the heavy target corrections
for neutrino reactions, and the role of both strange quarks and
CSV effects in neutrino structure functions.
The cross sections for neutrino and antineutrino scattering on a
nuclear target containing $A$ nucleons can be written as
\begin{equation}
\frac{d\sigma^{\nu ,\bar\nu\,A}}{dxdQ^2} =
\frac{G_F^2}{2\pi x} [ \frac{1}{2}(F_2^{\nu ,\bar\nu\,A}(x,Q^2)
\pm xF_3^{\nu ,\bar\nu\,A} (x,Q^2))
+ \frac{\xi^2}{2}(F_2^{\nu ,\bar\nu\,A} (x,Q^2)\mp
x F_3^{\nu ,\bar\nu\,A} (x,Q^2)) ].
\label{nuxsec}
\end{equation}
In Eq.\ \ref{nuxsec} the upper (lower) sign is associated with neutrino
(antineutrino) cross sections. We have assumed the validity of the
Callan-Gross relation and neglected terms of order $Mxy/2E$, and
we introduced the variable $\xi=(1-y)$. It would be straightforward
to remove these assumptions. With a large enough count rate, the
$x$ and $y$ dependence of the cross sections could be separately
measured. By plotting the measured differential
cross sections for fixed $x$ and $Q^2$ as a function of $\xi^2$, the
structure functions $F_2$ and $F_3$ can be determined from the slopes
and intercepts of the resulting straight lines.
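To make this explicit, Eq.\ \ref{nuxsec} is a straight line in $\xi^2$,
\begin{equation}
\frac{2\pi x}{G_F^2}\, \frac{d\sigma^{\nu ,\bar\nu\,A}}{dxdQ^2} =
\frac{1}{2}(F_2^{\nu ,\bar\nu\,A} \pm xF_3^{\nu ,\bar\nu\,A})
+ \frac{\xi^2}{2}(F_2^{\nu ,\bar\nu\,A} \mp xF_3^{\nu ,\bar\nu\,A})\, ,
\end{equation}
so that the sum and difference of intercept and slope give $F_2^{\nu ,\bar\nu\,A}$ and $\pm xF_3^{\nu ,\bar\nu\,A}$, respectively.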
The crucial question is, of course, whether the statistics
of the experiment are sufficient for the structure functions to
be extracted in this way.
To illustrate this problem we calculated the statistical
errors in each energy bin. For this calculation,
we used the experimentally determined
fluxes, the total and differential neutrino and antineutrino cross sections
to obtain the expected number of events in a given $x$, $Q^2$
and energy bin. We estimated the statistical errors using
$\Delta\sigma =\sigma/\sqrt{N}$. In Fig.\ref{fig2}
$\sigma^{\nu ,\bar\nu} (x,Q^2,\xi^2)/(G_F^2/2\pi x)$
is plotted as a function of $\xi^2$.
The solid lines are the results using the CTEQ parton
distribution functions and assuming the validity of the
Callan-Gross relation. The dotted lines are the results obtained
without using the Callan-Gross relation. Here, we used the
parametrization of Whitlow \cite{Whi90b} for the ratio of the longitudinal
and transverse photo-absorption cross sections. The current
statistics do not allow one to extract the individual structure
functions. The error bars represent the expected statistical errors.
An order of magnitude more events would be necessary to decrease the
statistical errors sufficiently that one could consider extracting
the structure functions directly, and systematic errors would
further complicate this analysis.
Since the number of events is so small that individual structure
functions cannot be extracted from the data, the cross sections
in a given $x$ and $Q^2$ bin are integrated over all energies.
After this integration is performed, Eq.\ \ref{nuxsec} can be
written as two linear equations, one for neutrino and the other
for antineutrino events:
\begin{eqnarray}
N^\nu(x,Q^2) &=& A_2^\nu \, \,F_2^{\nu Fe}(x,Q^2) + A_3^\nu \,
xF_3^{\nu Fe}(x,Q^2) \nonumber \\
N^{\bar\nu}(x,Q^2) &=& A_2^{\bar\nu}\,F_2^{\bar\nu Fe}(x,Q^2) -
A_3^{\bar\nu}\,xF_3^{\bar\nu Fe}(x,Q^2) ~~~.
\label{nneut}
\end{eqnarray}
In Eq.\ \ref{nneut} $N^\nu$ ($N^{\bar\nu}$) is the number of
neutrino (antineutrino) events in a given $x$ and $Q^2$-bin integrated
over the incident neutrino and antineutrino energies.
$A^\nu_i$ and $A_i^{\bar\nu}$ ($i=2,3$)
represent the coefficients, $A_i(y)$, of the structure functions
multiplied by the neutrino and antineutrino fluxes,
$\Phi^\nu (E)$ and $\Phi^{\bar\nu} (E)$, respectively, and integrated
over all energies
\begin{eqnarray}
A^\nu_i &=& \int dE\, A_i(y)\, \Phi^\nu (E) \nonumber \\
A^{\bar\nu}_i & = &\int dE\, A_i(y)\, \Phi^{\bar\nu} (E) .
\end{eqnarray}
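From Eq.\ \ref{nuxsec}, and up to the overall factor $G_F^2/2\pi x$ together with detector acceptance, the coefficient functions are
\begin{equation}
A_2(y) = \frac{1+(1-y)^2}{2}\, ,\qquad A_3(y) = \frac{1-(1-y)^2}{2}\, ,
\end{equation}
with $\xi = 1-y$ as before.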
The individual structure functions for neutrino and antineutrino
reactions are extracted by taking linear combinations of the
relations in Eq.\ \ref{nneut} and making corrections using
phenomenological parton distribution functions. For example,
from Eq.\ \ref{eq2} we see that for an isoscalar target,
$F_2^{\nu\,N_0}(x,Q^2)=F_2^{\bar\nu\,N_0}(x,Q^2)$
if charge
symmetry is valid and $s(x) = \bar{s}(x)$. Thus we can form
linear combinations of the terms in Eq.\ \ref{nneut} such that
these terms cancel and we are left only with the $F_3$ structure
functions. Similarly, assuming charge symmetry we have
\[ F_3^{\nu\,N_0}(x,Q^2) - F_3^{\bar\nu\,N_0}(x,Q^2) = 2[s(x) +
\bar{s}(x)] ~~. \]
We can then take a linear combination of the terms in Eq.\ \ref{nneut}
which gives this function. If the strange quark distribution is
taken from a phenomenological model, we can extract a linear
combination of the $F_2$ structure functions for neutrinos and
antineutrinos on a nuclear target.
We will discuss how the structure functions are extracted, and
particularly the role of CSV and strange quark distributions in
this process. However, at this stage we review how heavy target
corrections are calculated, in order to extract the structure
functions for free nucleons from those measured on a heavy
nuclear target.
\subsection{Heavy Target Corrections in Neutrino Reactions}
As is well known, the structure functions measured on heavy
targets are not equal to those observed for light targets such
as the deuteron. At small $x$ values, nuclear shadowing effects
play a major role; at large $x$, nuclear Fermi motion effects
dominate, and at intermediate $x$ ``EMC'' effects play a
significant role \cite{Arneodo}. Such effects have been
systematically measured in charged lepton reactions.
In analyzing neutrino scattering data, it is generally assumed
that heavy target corrections will be the same as those observed
in charged lepton reactions. {\it A priori}, there is no reason to assume
that neutrino and charged lepton heavy target corrections should be
identical. Heavy target corrections for neutrinos are generally
applied by multiplying the experimental structure functions at
a given $x$ value by
the quantity $R\equiv F^{\ell A}_{2}(x,Q^2)/F^{\ell D}_{2}(x,Q^2)$,
the ratio between the $F_2$ structure function measured on heavy targets
and that of the deuteron for charged lepton deep inelastic scattering,
at the same $x$ value.
However, as is well known, shadowing corrections are very much
$Q^2$ dependent for smaller $Q^2$ values (where a considerable part
of the available data was taken), and the $Q^2$ and $x$-dependence of
the data are strongly correlated because of the
fixed target nature of these experiments.
We re-examined heavy target corrections to deep-inelastic neutrino
scattering, focusing on the differences between neutrino and charged
lepton scattering and on effects due to the $Q^2$-dependence of
shadowing for moderately large $Q^2$. This work will be
published elsewhere \cite{Boros}; here we briefly review the results
of that work. We used a two phase model
which has been successfully applied to the description of shadowing
in charged lepton DIS
\cite{Badelek,Melni}. In this approach,
vector meson dominance is used to describe the low $Q^2$ virtual
photon or $W$ interactions, and Pomeron exchange is used for the
approximate scaling region. In generalizing this approach
to weak currents, the essential differences in shadowing
between neutrino and charged lepton
deep inelastic scattering are:
(i) the axial-vector current is only partially conserved,
in contrast to the vector current; and (ii) the weak current
couples not only to vector but also to axial vector mesons
\cite{Stodolsky,VMD,Bell,Boris}.
Partial conservation of the axial current (PCAC) requires
that the divergence of the axial current does not vanish
but is proportional to the pion field for $Q^2=0$. This
is Adler's theorem \cite{Adler},
which relates the neutrino cross section to the pion
cross section on the same target for $Q^2=0$.
Thus, for low $Q^2$ ($\approx m_\pi^2$) shadowing in neutrino
scattering is determined by the absorption of pions on the target.
For larger $Q^2$-values the contributions of vector and axial vector mesons
become important. The coupling of the weak current
to the vector and axial
vector mesons and that of the electro-magnetic current
to vector mesons are related to each other by the ``Weinberg sum rule''
$f_{\rho^+}^2=f_{a_1}^2=2f_{\rho^0}^2$.
Since the coupling of the vector (axial vector)
mesons to the weak current is twice as large as the coupling
to the electro-magnetic current, but the structure function is
larger by a factor of $\sim18/5$ in the neutrino
case, we expect that shadowing due to VMD in neutrino reactions is
roughly half of that in charged lepton scattering.
For larger $Q^2$-values, shadowing due to Pomeron exchange between the
projectile and two or more constituent nucleons dominates. Since
Pomeron-exchange models the interaction between partons
in different nucleons and the scattering of the $W$ takes
place on only one parton, this process is of leading twist in
contrast to the VMD and pion contributions.
The coupling is given by the coupling
of the photon or $W$ to the quarks in the exchanged Pomeron.
It changes in the same way as the structure function does
in switching from neutrino to charged lepton
scattering. Thus, for large $Q^2$ values ($>10$ GeV$^2$), shadowing in both
cases should have approximately the same magnitude.
In the intermediate $Q^2$-region ($1<Q^2<10$ GeV$^2$), where VMD
is relatively important, we expect to see differences between shadowing
in neutrino and charged lepton scattering.
We recall that this is precisely the region where
the discrepancy between CCFR and NMC is significant.
There are also nuclear effects in the deuteron. However,
because of the low density of the deuteron,
these are (relatively speaking)
very small and have a negligible effect on the charge ratio.
We calculated the shadowing corrections to the CCFR neutrino
data using the two-phase model of Ref.\cite{Badelek,Melni}. With
this corrected CCFR data, we calculated the charge ratio $R_c$ of
Eq.\ \ref{rc} between CCFR and NMC data. The result is shown in
Fig.\ref{fig1}. The open triangles show the charge ratio when no
shadowing corrections are used. The open
circles show the charge ratio when heavy target shadowing corrections
from charged lepton reactions are applied to the neutrino data,
and the solid circles show the result when the neutrino shadowing
corrections from our two-phase model are applied.
At small $x$, using the ``correct'' neutrino shadowing
corrections reduces the deviation of the charge ratio from unity.
Nevertheless, the charge
ratio is still not compatible with one at small $x$.
In summary, properly accounting for shadowing corrections in the
neutrino structure function decreases, but does not resolve,
the low-$x$ discrepancy between the CCFR and the NMC data.
\subsection{Strange Quark and CSV Contributions to Structure Functions}
In Eq.\ \ref{nneut} we showed that, after integrating neutrino
charged--current cross sections over all energies, we obtain two
equations in four unknowns, the structure functions $F_2$ and
$F_3$ for neutrino and antineutrino reactions. If the neutrino and
antineutrino structure functions were equal, $F_2^{\nu Fe}(x,Q^2)=
F_2^{\bar\nu Fe}(x,Q^2)$,
with an analogous relation for $xF_3^{\nu Fe}(x,Q^2)$,
then Eq.\ \ref{nneut} would provide two linear equations in two unknowns.
As we discussed previously, several corrections need
to be applied before we can extract the structure functions on
a ``free'' isoscalar target $N_0$, and compare the structure functions
to the parton distributions given in Eq.\ \ref{eq2}. First,
since iron is not an isoscalar target we need to make corrections for
the excess neutrons. Next, we need to estimate the contributions
from strange quark distributions and charge symmetry violating
parton distributions. Finally, we need to make heavy target corrections
as reviewed in the preceding section.
We begin by splitting the neutrino and antineutrino structure
functions on iron into isoscalar and non-isoscalar
parts. For a target with $Z$ protons and $N=A-Z$ neutrons we define
the quantity $\beta\equiv (N-Z)/A$:
\begin{equation}
F_i^{\nu ,\bar\nu Fe}=
\frac{1}{2} [F_i^{\nu ,\bar\nu p} + F_i^{\nu ,\bar\nu n}]
-\frac{\beta}{2} [F_i^{\nu ,\bar\nu p} - F_i^{\nu ,\bar\nu n}]\, .
\label{s12}
\end{equation}
The first term on the right of Eq. \ref{s12}
corresponds to the neutrino and antineutrino structure functions
on an isoscalar target, $N_0$.
The second term includes corrections arising from the
non-isoscalarity of the target.
In the absence of CSV, these corrections are basically given by
the difference between up and down valence quark distributions
and have been taken into account in the extraction
of the structure functions. However, the non-isoscalarity of the target
also leads to CSV corrections.
We define the sum and difference of the neutrino and
antineutrino structure functions on a target $A$ as
\begin{eqnarray}
{\cal F}_i^A & \equiv& \frac{1}{2} [ F_i^{\nu A} + F_i^{\bar\nu A}]
~~, \nonumber \\
\Delta {\cal F}_i^A & \equiv& \frac{1}{2} [ F_i^{\nu A} -
F_i^{\bar\nu A}] ~~;
\end{eqnarray}
the structure functions $F_i^{\nu ,\bar\nu Fe}$ can then be written as
\begin{equation}
F_i^{\nu ,\bar\nu Fe} =
{\cal F}_i^{N_0} \pm \Delta {\cal F}_i^{N_0}
- \frac{\beta}{2} \{ [{\cal F}_i^p - {\cal F}_i^n]
\pm [\Delta {\cal F}_i^p -
\Delta {\cal F}_i^n ]\}\, .
\label{F2fe}
\end{equation}
Here, ``$+$'' and ``$-$'' refer to the neutrino and antineutrino
structure functions, respectively.
The last three terms of the right hand side of Eq. \ref{F2fe}
contain corrections coming from excess neutrons, strange quarks,
CSV and $s(x)\ne \bar s(x)$.
Correcting the data for excess neutrons and
for strange quark contributions corresponds to
subtracting the number of events due to the corresponding corrections
from the left hand side of Eqs. \ref{nneut}
\begin{eqnarray}
N^\nu- \sum_{i=2}^3 A_i^\nu \, (\delta{\cal F}^{\nu}_i)_{n,s}
&= &A_2^\nu\,[{\cal F}_2^{N_0}+
(\delta {\cal F}_2^\nu)_{CSV}^{s\bar s}]
+A_3^\nu\, x[{\cal F}_3^{N_0}+
(\delta {\cal F}_3^\nu)_{CSV}^{s\bar s}] \nonumber\\
N^{\bar\nu}- \sum_{i=2}^3 (-1)^i
A_i^{\bar\nu} (\delta {\cal F}^{\bar\nu}_i)_{n,s}
&= &A_2^{\bar\nu}\,[{\cal F}_2^{N_0}+ (\delta
{\cal F}_2^{\bar\nu})_{CSV}^{s\bar s}]
-A_3^{\bar\nu}\, x[{\cal F}_3^{N_0}+ (\delta
{\cal F}_3^{\bar\nu})_{CSV}^{s\bar s}]\, .
\label{nneut2}
\end{eqnarray}
In Eq.\ \ref{nneut2}, we have calculated corrections to the structure
functions from excess neutrons and strange quarks, and have used these
to produce the effective number of events on the left hand side of
Eq.\ \ref{nneut2}. $(\delta {\cal F}_i)_{n,s}$ and
$(\delta {\cal F}_i)^{s\bar s}_{CSV}$
denote the corrections arising from excess neutrons and
strange quark distributions, and from charge symmetry violation
together with $s(x)\ne \bar s(x)$, respectively.
The CCFR Collaboration assumed the validity of
charge symmetry, and they also took $s(x)=\bar s(x)$ based on the
results of a next-to-leading order (NLO) analysis of dimuon
production in neutrino--induced reactions \cite{CCFRNLO}. We have
left the correction terms
coming from CSV and $s(x)\ne\bar s(x)$ on the
right hand side of Eq. \ref{nneut} as these have been absorbed into the
extracted structure functions. Under the assumption of charge symmetry
and $s(x)=\bar s(x)$, Eq.\ \ref{nneut2} simplifies to
\begin{eqnarray}
N^\nu- \sum_{i=2}^3 A_i^\nu (\delta {\cal F}^\nu_i)_{n,s}
&= &A_2^\nu\,{\cal F}_2^{CCFR,A} +
A_3^\nu \, x{\cal F}_3^{CCFR,A} \nonumber\\
N^{\bar\nu}- \sum_{i=2}^3 (-1)^i A_i^{\bar\nu}
(\delta {\cal F}^{\bar\nu}_i)_{n,s}
&= &A_2^{\bar\nu}\,{\cal F}_2^{CCFR,A}
-A_3^{\bar\nu}\, x {\cal F}_3^{CCFR,A}.
\label{nneut3}
\end{eqnarray}
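Schematically, Eq.\ \ref{nneut3} is a $2\times 2$ linear system and can be inverted by Cramer's rule. The following sketch uses illustrative placeholder values for the kinematic coefficients $A_i$ and the corrected event counts; none of these numbers are taken from the CCFR data.

```python
# Solve  N_nu  = A2_nu  * F2 + A3_nu  * xF3
#        N_nub = A2_nub * F2 - A3_nub * xF3
# for F2 and xF3 by Cramer's rule. All numbers are illustrative
# placeholders, not values from the CCFR analysis.
A2_nu, A3_nu = 1.0, 0.8      # neutrino coefficients A_i(y)
A2_nub, A3_nub = 1.0, 0.6    # antineutrino coefficients
N_nu, N_nub = 1.40, 0.70     # corrected event counts (left-hand sides)

det = A2_nu * A3_nub + A3_nu * A2_nub
F2 = (N_nu * A3_nub + N_nub * A3_nu) / det
xF3 = (N_nu * A2_nub - N_nub * A2_nu) / det
print(F2, xF3)
```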
These equations provide a system of two linear equations for the two
nuclear structure functions ${\cal F}_2^{CCFR,A}$ and
${\cal F}_3^{CCFR,A}$. From these structure functions we can
calculate the structure functions for a ``free'' nucleon target
using the heavy target correction factors described in the
previous section. The resulting structure functions
${\cal F}_i^{CCFR}$ still contain charge symmetry violating
contributions and terms proportional to $s(x)-\bar s(x)$, as can
be seen from Eq.\ \ref{eq2}. To relate the measured structure
functions, $F_i^{CCFR}$ to the various parton distributions,
we take the sum and difference of
the measured number densities in Eqs. \ref{nneut2} and \ref{nneut3}
(for a fixed energy) and compare the coefficients of $A_i(y)$. In this way,
we see that the measured structure functions, $F_i^{CCFR}$, can effectively
be identified with a flux weighted
average of the neutrino and antineutrino structure
functions, $F_i^{\nu N_0}$ and $F_i^{\bar\nu N_0}$, and
correction terms arising from CSV effects
\begin{eqnarray}
F_i^{CCFR} &= &{\cal F}_i^{N_0} + (2\alpha -1)
\Delta {\cal F}_i^{N_0} - \frac{\beta}{2}
[{\cal F}^p_i-{\cal F}^n_i]_{CSV} \nonumber\\
& - & \frac{(2\alpha -1)\beta}{2} [\Delta {\cal F}_i^p
-\Delta {\cal F}_i^n ]_{CSV}.
\label{f2ccfr}
\end{eqnarray}
Here, we defined the relative neutrino flux, $\alpha$, as
$\alpha\equiv \Phi^\nu/(\Phi^\nu+\Phi^{\bar\nu})$.
The experimental value of $\alpha$ depends on the incident neutrino and
antineutrino energies and is also different for the E744 and E770
experiments. Because of the kinematical constraint $y<1$, relative
fluxes at energies $\ge 150$ GeV are relevant for small $x$.
Here, $\alpha \approx 0.83$ \cite{CCFR}
so that $F_2^{CCFR}(x,Q^2)$ can be approximately regarded
as a neutrino structure function.
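The identification in Eq.\ \ref{f2ccfr} is simply the flux-weighted average rewritten in terms of sums and differences; a two-line check with illustrative structure-function values (not data) makes this explicit:

```python
# F_i^CCFR is the flux-weighted average of neutrino and antineutrino
# structure functions:
#   alpha*F_nu + (1-alpha)*F_nub
#     = (F_nu+F_nub)/2 + (2*alpha-1)*(F_nu-F_nub)/2,
# which is the combination appearing in Eq. (f2ccfr).
alpha = 0.83
F_nu, F_nub = 1.3, 0.9                  # illustrative values only
lhs = alpha * F_nu + (1 - alpha) * F_nub
rhs = 0.5 * (F_nu + F_nub) + (2 * alpha - 1) * 0.5 * (F_nu - F_nub)
print(abs(lhs - rhs) < 1e-12)
```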
The different contributions to $F_2^{CCFR}$ can be expressed
in terms of the quark distribution functions
\begin{eqnarray}
\frac{1}{2} [{\cal F}^p_2-{\cal F}^n_2]_{CSV} &=&
-[{\cal F}^{N_0}_2]_{CSV}
= \frac{x}{2}\, [\delta u(x) + \delta\bar u(x)
+ \delta d(x) +\delta\bar d(x)] \nonumber \\
\frac{1}{2} [\Delta {\cal F}_2^p
-\Delta {\cal F}_2^n]_{CSV} & = &
-\frac{x}{2}\,
[\delta d(x) -\delta\bar d(x) -\delta u(x) + \delta\bar u(x)]
\nonumber \\ \Delta {\cal F}_2^{N_0} &=& - \frac{1}{2}
[\Delta {\cal F}_2^p -\Delta {\cal F}_2^n]_{CSV}
+ x[s(x)-\bar s(x)].
\label{terms}
\end{eqnarray}
The second expression in Eq.\ \ref{terms} is obtained by
taking the difference between the neutrino and antineutrino
$F_2$ structure functions on the proton, and subtracting the
corresponding difference for the neutron. It depends
only on charge symmetry violation in the {\it valence} quark
distributions. The last expression in Eq.\ \ref{terms} is
obtained by taking the difference between neutrino and antineutrino
$F_2$ structure functions on an isoscalar system. It also
depends on valence quark CSV, and has an additional contribution
from the difference between strange and antistrange
parton distributions. The first term in Eq.\ \ref{terms} is obtained by
averaging the $F_2$ structure functions over neutrino and
antineutrino reactions, and taking the difference of the $F_2$
structure functions measured on proton and neutron targets. This
quantity is free from strange quark
effects, and is also sensitive to CSV in the sea-quark
distributions.
\section{Evidence for Large Charge Symmetry Violation in Parton
Sea Quark Distributions}
The most likely explanation for the discrepancy in the small-$x$
region of the charge ratio involves either differences between the
strange and antistrange quark distributions \cite{Signal,Melni97,JT,HSS},
or charge symmetry violation. First, we will examine
the role played by the strange and antistrange quark
distributions. Assuming that charge symmetry is exact,
the strange and antistrange quark distributions are given by a
linear combination of the structure functions measured in neutrino
and in muon DIS, as can be seen from Eqs. \ref{eq2}, \ref{f2ccfr} and
\ref{terms},
\begin{equation}
\frac{5}{6} F_2^{CCFR}(x,Q^2) -3 F_2^{NMC}(x,Q^2)
= \frac{1}{2}\, x \, [s(x) + \bar s(x)]
+\frac{5}{6} (2\alpha -1)\, x \,[s(x)-\bar s(x)].
\label{diff}
\end{equation}
Under the assumption that $s(x)=\bar s(x)$, this relation could be
used to extract the strange quark distribution.
However, as is well known, the strange quark distribution
obtained in this way is inconsistent with the distribution
extracted from independent experiments.
\subsection{Direct Measurement of Sea Quark Distributions}
The strange quark distribution can be determined directly
from opposite sign dimuon production in deep inelastic neutrino and
antineutrino
scattering. To leading order in a charge-changing reaction, the
incoming neutrino (antineutrino) emits a muon and a virtual $W$
boson, which scatters on an $s$ or $d$ ($\bar s$ or $\bar d$)
quark, producing a charm (anticharm) quark which fragments into a charmed
hadron. The semi-leptonic decay of the charmed hadron produces an opposite
sign muon. The CCFR Collaboration performed a LO \cite{CCFRLO}
and NLO analysis \cite{CCFRNLO} of their dimuon data using
the neutrino (antineutrino) events to extract the strange
(antistrange) quark distributions.
Their result differs substantially from the strange quark distribution
extracted from Eq.(\ref{diff}), as mentioned above.
In the dimuon data one extracts the strange and antistrange
quark distributions from the neutrino and antineutrino data separately.
The analysis performed by the CCFR Collaboration suggests that, while
there is a difference between the strange and antistrange
distributions in the LO analysis \cite{CCFRLO},
they are equal within experimental errors
in NLO \cite{CCFRNLO}. However, since the number
of antineutrino events is much smaller than that of the neutrino
events, the errors of this analysis are inevitably large.
Since the dimuon experiments are carried out on an iron target,
shadowing corrections could modify the extracted
strange quark distribution, and might account
for some of the discrepancies between the two different
determinations of the strange quark distributions.
The CCFR Collaboration normalized the dimuon cross section
to the ``single muon'' cross section and argued that
the heavy target correction should cancel in the ratio.
However, the charm producing part of the structure function
$F_2^{cp}(x,Q^2)$ could be shadowed differently
from the non-charm producing part $F_2^{ncp}(x,Q^2)$,
unless charm threshold effects cancel in the shadowing ratio.
This could be the case, because vector mesons with higher masses
are involved in the charm producing part, and because
charm production threshold effects have to be taken into
account in the Pomeron component as well.
We calculated the shadowing ratio, $R\equiv
F_2^{\nu A}(x,Q^2)/F_2^{\nu D}(x,Q^2)$,
between the structure functions on a heavy target
and on a deuteron target for both the
charm and non-charm producing part of the structure
function. We took charm production threshold effects into account
in the Pomeron component through the
slow rescaling mechanism
by replacing $x_{I\!P}$, which is the momentum fraction of the
Pomeron carried by the struck quark, by
$\xi_{I\!P}=x_{I\!P}(1+\frac{m_c^2}{Q^2})$.
Here $m_c$ is the mass of the charm quark. In the VMD
component of $F_2^{cp}(x,Q^2)$ we included the vector mesons
$D^{*+}(2010)$, $D^{*+}_s(2110)$ and the axial vector partner
$D^{*+}_{As}(2535)$ of $D^{*+}_s$ \cite{Data}, which describe the lightest
coherent states of the $c\bar d$ and $c\bar s$ fluctuations of the
$W^+$-boson.
They have the same coupling to
$W^+$ as $\rho^+$ and $a_1^+$ but have much heavier masses.
(The $c\bar d$ fluctuations are suppressed by $\sin^2\Theta_c$.)
Because of the larger mass of the charmed vector mesons
($\sim 2.5$ GeV), we applied a cut at $M^2_X\ge 6.3$ GeV$^2$
in the diffractively
produced invariant mass of the Pomeron component.
This is to be compared with $M^2_X\ge 1.5$ GeV$^2$ in the non-charm producing
part of the structure function (these cuts are necessary to avoid
double counting). Because of the {\it light} quark component
of the D-mesons, we expect that the D-meson-nucleon total
cross sections are comparable to the corresponding cross sections
of lighter mesons with the same light quark content. We
use $\sigma_{D^*N}\approx \sigma_{\rho N}$ and
$\sigma_{D^*_sN}\approx \sigma_{\phi N}$.
The calculated ratios,
$R=F_2^A(x,Q^2)/F_2^D(x,Q^2)$,
are shown in Fig. \ref{fig3} for $Q^2=5$ GeV$^2$.
Here, $F_2^A=F_2^D+\delta F_2^{(V)}+F_2^{(I\!P)}$, where
$\delta F_2^{(V)}$ and $F_2^{(I\!P)}$, the shadowing
corrections to the structure functions
due to vector mesons and Pomeron-exchange,
respectively, are calculated in
the two phase model. Since the pion component is negligible for
$Q^2=5$ GeV$^2$, we did not include it.
There is no substantial difference
in shadowing between the charm producing ($cp$) and non-charm
producing ($ncp$) parts. (The difference is about $2\%$ in the
small $x$-region).
Note that the shadowing correction in
$F_2^{cp}(x,Q^2)$ decreases faster with increasing
$x$, because the larger masses of the charmed
vector mesons, $m_V$, enter in the coherence condition
$\tau =\frac{1}{Mx}(1+\frac{m_V^2}{Q^2})^{-1}$
($\tau$ is the lifetime of the quark antiquark fluctuation,
and $M$ is the nucleon mass),
compared with the smaller masses of the $\rho$ and $a_1$.
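The effect of the coherence condition can be illustrated numerically; the vector-meson masses below are from the text, while the $x$ value is chosen purely for illustration:

```python
# Lifetime of a quark-antiquark fluctuation (coherence condition):
#   tau = 1/(M*x) * (1 + m_V^2/Q^2)^(-1)   [GeV^-1, natural units]
# Shows why shadowing in the charm-producing part falls off faster
# with x: the heavier D* enters where the rho does for light quarks.
M_N = 0.938                      # nucleon mass in GeV
Q2 = 5.0                         # GeV^2, as in Fig. 3
m_rho, m_Dstar = 0.770, 2.010    # vector-meson masses in GeV

def tau(x, m_V):
    return 1.0 / (M_N * x) / (1.0 + m_V**2 / Q2)

x = 0.01                         # illustrative small-x value
ratio = tau(x, m_rho) / tau(x, m_Dstar)
print(ratio)   # > 1: the rho fluctuation lives longer than the D*
```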
Our results justify the assumption that shadowing corrections
approximately cancel in the ratio of dimuon and single muon cross sections.
\subsection{Estimate of Parton CSV Contribution}
It would appear that a likely explanation for the deviation of
the charge ratio of Eq.\ \ref{rc} from unity is a difference
between the strange and antistrange quark densities.
To test this hypothesis, we combined the
data in dimuon production, averaged over both neutrino and
antineutrino events, with the difference between the
structure functions in neutrino and charged lepton scattering
(Eq.(\ref{diff})).
In combining the neutrino and antineutrino events, one measures a
flux-weighted average of the strange and antistrange quark distributions.
If we define $\alpha^\prime= N_\nu/(N_\nu+N_{\bar\nu})$, where
$N_\nu =5,030 $, $N_{\bar\nu}=1,060$ ($\alpha^\prime \approx 0.83$)
are the number of neutrino and antineutrino
events of the dimuon production experiment \cite{CCFRNLO},
we have for the measured distribution $x s(x)^{\mu\mu}$
\begin{equation}
x s^{\mu\mu}(x) = \frac{1}{2}\, x \,[s(x) + \bar s(x)] + \frac{1}{2}
(2\alpha^\prime -1 )\, x\, [s(x) - \bar s(x)].
\label{s2}
\end{equation}
Now, this equation together with Eq.(\ref{diff})
forms a pair of linear equations which can be solved for
$\frac{1}{2} x [s(x)+\bar s(x)]$ and $\frac{1}{2} x [s(x)-\bar s(x)]$.
In this way we can also test the compatibility of the two
experiments. In addition we have the sum rule that the nucleon
contains no net strangeness,
\begin{equation}
\int_0^1 [s(x) - \overline{s}(x)]\, dx = 0 \, .
\label{smrule}
\end{equation}
In the following expressions, we have not enforced the sum rule
requirement on the antistrange quark distributions.
Compatibility of the two experiments requires that physically
acceptable solutions for
$\frac{1}{2} x [s(x)+\bar s(x)]$ and $\frac{1}{2} x [s(x)-\bar s(x)]$,
satisfying both Eq. \ref{diff} and Eq. \ref{s2}, can be found.
Using the experimental values $\alpha = \alpha^\prime \approx 0.83$,
we can write $x [s(x)-\bar s(x)]=\Delta(x)/\delta$,
where $\Delta(x) =\frac{5}{6}F_2^{CCFR}(x) -3 F_2^{NMC}(x)-
s^{\mu\mu}(x)$, and $\delta = (2\alpha -1)/3 \approx 0.22$.
Consequently even rather small values for $\Delta(x)$ can lead to
large differences between $s$ and $\bar s$. Note that the value
of the relative neutrino flux, $\alpha$, depends on the incident
neutrino energy. While $\alpha\approx 0.83$ for small $x$, $\alpha$
is somewhat smaller for higher $x$-values. However, smaller
$\alpha$ would lead to an even smaller $\delta$ and
would require even larger differences between $s$ and $\bar s$.
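The size of this amplification can be made explicit with a two-line check; the value of $\Delta(x)$ below is an illustrative number, not a data point:

```python
# delta = (2*alpha - 1)/3 multiplies x[s - sbar], so a small measured
# discrepancy Delta(x) is amplified by a factor 1/delta ~ 4.5.
# Delta_x below is illustrative, not a data point.
alpha = 0.83
delta = (2 * alpha - 1) / 3       # ~ 0.22
Delta_x = 0.01                    # illustrative small discrepancy
x_s_minus_sbar = Delta_x / delta  # large on the scale of x*s(x)
print(delta, x_s_minus_sbar)
```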
In Fig.\ \ref{fig4} we show the results obtained for $x s(x)$ (open
circles) and $x \bar s(x)$ (solid circles) by solving the resulting
linear equations, Eqs.\ \ref{diff} and \ref{s2}
using the values $\alpha =\alpha^\prime =0.83$. The results are
completely unphysical, since the antistrange
quark distribution is negative, which is not possible since the
distribution is related to a probability. In Fig.\ \ref{fig5} we
show the corresponding results
for the linear combinations $\frac{1}{2}x[s(x)+\bar s(x)]$ (solid
circles) and $\frac{1}{2}x[s(x)-\bar s(x)]$ (open circles).
The unphysical nature of the solution is demonstrated by the fact
that $\frac{1}{2}x[s(x)-\bar s(x)]$ is larger than
$\frac{1}{2}x[s(x)+\bar s(x)]$.
We also solved the equations using
the values $\alpha = 0.83$ and $\alpha^\prime =1$
which corresponds to
using a subsample of the dimuon data containing only
neutrino events. In this case, even the {\it sum} of the strange and
antistrange distributions is negative. This is shown in Fig.\ \ref{fig6}.
Thus, our analysis strongly suggests that the discrepancy between
$F_2^{CCFR}(x,Q^2)$ and $F_2^{NMC}(x,Q^2)$ cannot be completely
attributed to differences between the strange and antistrange
quark distributions. In other words, assuming parton charge
symmetry the two experiments are incompatible with each other,
even if the antistrange quark distribution is allowed
to be different from the strange distribution. (Note that
absolutely {\it no} restrictions were placed on the
antistrange quark distribution, aside from the condition
that, since it represents a
probability density, it must be non-negative.)
We stress that our conclusion is quite different from that
of Brodsky and Ma \cite{Brodsky},
who suggested that allowing
$s(x)\ne \bar s(x)$ could account for the difference between the two
determinations of the strange quark distribution.
However, they treated the
CCFR structure functions
as an average between the neutrino and the antineutrino structure
functions, which corresponds to setting $\alpha =0.5$.
At this point there are two possibilities to explain the
low-$x$ discrepancy observed between the CCFR neutrino and
the NMC muon structure functions. Either one of the experimental
structure functions (or the strange quark distributions) is incorrect
at low $x$, or parton charge symmetry is violated in this
region, since we have shown that neither neutrino shadowing
corrections nor an inequality between strange and antistrange
quark distributions can explain this experimental anomaly.
If we include the possibility of parton CSV, then we
can combine the dimuon data for the strange quark distribution,
Eq.\ \ref{s2}, with the relation between neutrino and muon
structure functions, Eq.\ \ref{diff}, to obtain the relation
\begin{eqnarray}
\frac{5}{6} F_2^{CCFR}(x,Q^2) &-&3 F_2^{NMC}(x,Q^2)
-x s^{\mu\mu}(x) = \frac{(2\alpha -1)\,x}{3}
[s(x) -\bar{s}(x)] \nonumber \\
&+& \frac{(3-5\beta)\,x}{12} \, [\delta d(x) + \delta\bar d(x)] -
\frac{(3+5\beta)\,x}{12} \, [\delta u(x) + \delta\bar u(x)] \nonumber \\
&-& \frac{5(1+\beta)(2\alpha-1)\,x}{12}[\delta u_v(x) -\delta d_v(x)]
\label{csv1}
\end{eqnarray}
In Eq.\ \ref{csv1} we have used the experimental value
$\alpha = \alpha^\prime$, and we have defined the valence quark CSV
terms $\delta q_v(x)\equiv \delta q(x) -\delta \bar q(x)$. We have
neglected the effects of possible CSV on the extraction of
$s(x)$ from the dimuon data, or on the identification of the structure
functions from the neutrino data. This will be discussed below.
Since the discrepancy between CCFR and NMC data lies
primarily in the very small $x$-region, where the valence quark
distribution is much smaller than the sea quark, the charge
symmetry violation should be predominantly
in the sea quark distributions. If we set
$\delta q_v(x) \approx 0$ in
this region, Eq.(\ref{csv1}) can be written as
\begin{eqnarray}
\frac{5}{6} F_2^{CCFR}(x,Q^2) &-& 3 F_2^{NMC}(x,Q^2)
- x s^{\mu\mu}(x) \approx \frac{(2\alpha -1)\,x}{3}[s(x)
-\bar{s}(x)] \nonumber\\
&+& \frac{x}{2} \, [\delta \bar d(x) -\delta\bar u(x)]
- \frac{5\beta\,x}{6} \, [\delta \bar d(x) +\delta\bar u(x)]~~.
\label{csv2}
\end{eqnarray}
Since $\beta \approx 0.06$ is quite small, CSV arising from the
non-isoscalar nature of the iron target can be neglected, so in
the following we neglect the last term of Eq.\ \ref{csv2}.
Using the experimental data we find that
the left hand side of Eq.\ \ref{csv2} is positive. Consequently,
the smallest value for charge symmetry violation will be obtained
if we set $\bar{s}(x) = 0$ \cite{smrul}. In Fig.\ \ref{fig7} we show the
magnitude of charge symmetry violation needed to satisfy the
experimental values in Eq.\ \ref{csv2}. The open circles
are obtained if we set $\bar{s}(x) = 0$, and the solid circles
result from setting $\bar{s}(x) = s(x)$.
If we use only the neutrino-induced dimuon events
(i.e., we set $\alpha^\prime =1$),
the coefficient of $x[s(x)-\bar s(x)]$, $(5\alpha -3\alpha^\prime -1)/3$,
is still positive but smaller in
magnitude. Consequently, the influence of the uncertainty in $\bar s(x)$
on the extracted CSV is smaller.
This is shown as open triangles in Fig. \ref{fig7}.
In obtaining these results,
both the structure functions and the strange quark distribution
have been integrated over the overlapping kinematic regions and
we used the CTEQ4L parametrization for $s^{\mu\mu}$
\cite{Lai}.
In Fig.\ \ref{fig8} we show the sensitivity of the extracted CSV
to the parametrization used for $s^{\mu\mu}$.
The uncertainty due to different parametrizations
has been partly taken into account since the calculated errors already
include the uncertainty of the dimuon measurement and most of the
parametrizations lie within the experimental
errors of the dimuon data (except for LO-CCFR $s(x)$).
We note that the magnitude of the observed charge symmetry
violation in the sea quark distributions is
independent of whether we use a pure neutrino or antineutrino
structure function or a linear combination of neutrino and
antineutrino structure functions. This is quite different from strange
and antistrange quark effects which are sensitive to the relative
weighting of neutrino and antineutrino events in the data sample.
Thus, effects due to CSV are independent of the precise value of the
relative neutrino flux, $\alpha$.
The CSV effect required
to account for the NMC-CCFR discrepancy is extraordinarily large.
It is roughly the same size as the strange quark distribution at
small $x$ (compare the open circles in Fig.\ \ref{fig7} with the
solid line in the same figure). The charge symmetry violation necessary to
provide agreement with the experimental data is about 25\% of the
light sea quark distributions for $x < 0.1$.
The level of CSV required is
two to three orders of magnitude larger than the theoretical estimates
of charge symmetry violation \cite{Ben98,Lond,Sather,Ben97}.
Note that, if $\bar s(x) < s(x)$ in this region,
as suggested in Ref.\cite{Brodsky},
we would need an even larger CSV to account for the
CCFR-NMC discrepancy.
Theoretical considerations suggest that $\delta\bar d(x) \approx
-\delta\bar u(x)$ \cite{Ben98,Lond}. In fact, since
charge symmetry violation seems to be surprisingly large, it is
reasonable to assume that these distributions have opposite
signs.
We note that with this sign, CSV effects also require large flavor
symmetry violation. One might ask whether such large CSV
effects would be seen in other experiments. For example, CSV
in the nucleon sea could contribute to the observed violation of the
Gottfried sum rule
\cite{Ma1,Lond,Ben98} and could explain the Fermilab Drell-Yan experiment
\cite{Ma1}. This will be discussed in section IV.
Clearly, CSV effects of this magnitude need further experimental
verification. The NuTeV-experiment at Fermilab \cite{NuTeV} is able to
operate either with pure neutrino or pure antineutrino
beams. The extracted structure functions can be used to build
different linear combinations, proportional to
various combinations of the $\delta\bar q$'s and $s$-$\bar s$.
This will be useful to separate CSV from $s$-$\bar s$ effects.
At small $x$, our results can be summarized by
\begin{eqnarray}
\delta \bar{d}(x) - \delta \bar{u}(x) &\approx & {1\over 2} (s(x)
+ \bar{s}(x) ) \approx {1\over 4} \left({\bar{u}(x) + \bar{d}(x)
\over 2} \right) \nonumber \\
\delta \bar{d}(x) + \delta \bar{u}(x) &\approx & 0 .
\label{csvappr}
\end{eqnarray}
From Eq.\ \ref{eq2} we note that such a CSV effect would have
little or no effect on the $F_2$ structure functions of
isoscalar targets, for either neutrinos or antineutrinos. The major
effect for isoscalar targets would be a
significant positive contribution to $F_3^{\nu N_0}(x,Q^2)$ at small $x$,
and an equally large negative contribution to
$F_3^{\bar\nu N_0}(x,Q^2)$.
However, if CSV effects of this magnitude are really present
at small $x$, then we should include charge symmetry violating
amplitudes in parton phenomenology from the outset, and re-analyze
the extraction of
all parton distributions. Given the experimental values
$\kappa = 2S/(U+D) \approx 0.5$, where $S$, $U$ and $D$ are the
probabilities for strange, up and down quarks averaged over $x$,
and the size of CSV effects suggested by the preceding analysis,
we would predict that at small $x$, $\bar{d}^n(x) \approx
1.25\,\bar{u}^p(x)$ and $\bar{u}^n(x) \approx 0.75\,\bar{d}^p(x)$.
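The arithmetic behind these small-$x$ estimates can be sketched as follows. This is a sketch only: it assumes $\bar u \approx \bar d$ in the proton, the CSV relations of Eq.\ \ref{csvappr}, and the usual convention $\delta\bar u = \bar u^p - \bar d^n$, $\delta\bar d = \bar d^p - \bar u^n$.

```python
# Small-x arithmetic behind  dbar^n ~ 1.25*ubar^p  and  ubar^n ~ 0.75*dbar^p.
# Assumes ubar ~ dbar in the proton, kappa = 2S/(U+D) = 0.5,
# delta_ubar = -delta_dbar, and delta_dbar - delta_ubar ~ (s + sbar)/2.
ubar = 1.0                               # arbitrary normalization
dbar = ubar
kappa = 0.5
s_plus_sbar = kappa * (ubar + dbar)      # s + sbar at the sea level
delta_dbar = 0.25 * s_plus_sbar          # from 2*delta_dbar = (s+sbar)/2
delta_ubar = -delta_dbar
dbar_n = ubar - delta_ubar               # dbar^n = ubar^p - delta_ubar
ubar_n = dbar - delta_dbar               # ubar^n = dbar^p - delta_dbar
print(dbar_n, ubar_n)                    # -> 1.25 0.75
```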
\section{Effects of Parton CSV on Other Observables}
If there is substantial CSV, it should also affect
other observables. In the following we review the effects which such
large CSV terms might have on three quantities; first, the recent
search for parton ``flavor symmetry violation'' [FSV] by the Fermilab
Drell-Yan experiment E866; second, the extraction of the strange
quark distribution; and third, experimental determination of the
Weinberg angle $\sin^2(\theta_W)$.
\subsection{Flavor Symmetry Violation in the Proton Sea}
The results of the recent Fermilab Drell-Yan experiment \cite{E866} and
the comparison of the proton and neutron structure functions
measured by the NMC Collaboration \cite{NMCfsv} indicate substantial
flavor symmetry violation. However, both experimental observations
could be
attributed to charge symmetry violation, as pointed out by Ma \cite{Ma1}
(see also \cite{Steffens}).
Furthermore, both CSV and FSV could be present, as suggested by our analysis
of the CCFR-NMC discrepancy. Therefore, it is
important to examine the effects of CSV
on the interpretation of the Fermilab and NMC experiments.
First, we discuss the Drell-Yan experiment which measures
the ratio of the dimuon cross sections from proton-deuteron
and proton-proton scattering. Since CSV is significant in the small
$x$ region, it is a reasonable first approximation
to keep only the contributions to the Drell-Yan cross sections
which come from the annihilation of
quarks of the projectile and antiquarks of the target \cite{seacomm}.
In this approximation, the ratio $R\equiv \sigma^{pD}/(2\sigma^{pp})$
is given by
\begin{equation}
\frac{\sigma^{pD}}{2\sigma^{pp}}
\approx \frac{[1+\frac{\bar d_2}{\bar u_2} -
\frac{\delta\bar d_2}{\bar u_2}] +\frac{R_1}{4}[1+\frac{\bar d_2}{\bar u_2}
-\frac{\delta\bar u_2}{\bar u_2}]}{2\left( 1+\frac{R_1}{4}\frac{\bar d_2}
{\bar u_2}\right)} .
\end{equation}
Here, we introduced the notation $q_{j}\equiv q(x_j)$ for the quark
distributions ($x_1$ is the projectile $x$ value and $x_2$ refers to
the target), and $R_1\equiv \frac{d_1}{u_1}$.
For large $x_F$, which corresponds to large $x_1$, the quantity
$R_1$ is small; if we ignore it, we have the approximate result
\begin{equation}
R =\frac{\sigma^{pD}}{2\sigma^{pp}}
\approx \frac{1}{2} \{ 1 + \frac{(\bar d_2 -
\delta\bar d_2)}{\bar u_2} \} .
\label{r}
\end{equation}
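The interplay between FSV and CSV in Eq.\ \ref{r} can be illustrated with a short sketch; the ratios used below are illustrative values, not fitted numbers:

```python
# Eq. (r):  R ~ (1/2)*(1 + (dbar - delta_dbar)/ubar)  at large x_F.
# A positive delta_dbar lowers R, so reproducing a measured R > 1
# requires a larger true dbar/ubar. Values are illustrative.
def R(dbar_over_ubar, delta_dbar_over_ubar=0.0):
    return 0.5 * (1.0 + dbar_over_ubar - delta_dbar_over_ubar)

R_no_csv = R(1.2)            # charge symmetry assumed
R_with_csv = R(1.45, 0.25)   # 25% CSV term: same R needs larger dbar/ubar
print(R_no_csv, R_with_csv)
```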
If charge symmetry is valid and $\bar d_2 = \bar u_2$, then we
would have $R=1$. The experimental values give $R > 1$ at small
$x_2$; from Eq.\ \ref{r}, this could be satisfied if either
$\bar d_2 > \bar u_2$ or $\delta\bar d_2$ was large and negative.
However, the value of $\delta\bar d(x)$ extracted from the existing
neutrino and muon experiments, as discussed in the preceding
section, was large and positive at small $x$. The enhancement is on
the order of
$25\%$ in the small $x$ region where CSV could be important.
In Fig. \ref{fig9} the solid circles show the ratio
$\bar{d}(x)/\bar{u}(x)$ extracted from the Drell-Yan experiment
if we assume the validity of charge symmetry. The open circles
in Fig. \ref{fig9} show
the result for $\bar{d}(x)/\bar{u}(x)$ if we include
the CSV term which was extracted from the CCFR--NMC data (this
is shown in Fig.\ \ref{fig7}).
Inclusion of parton charge symmetry violation
suggested by the CCFR-NMC discrepancy plays an important role in the
extraction of the FSV ratio $\bar d/\bar u$, in the region $x<0.1$.
The flavor symmetry violation in the sea has to be substantially
larger to overcome the CSV term which goes in the opposite direction.
In particular, the ratio $\bar d(x)/\bar u(x)$ does not approach 1 for
small $x$ values.
We can invert the extracted ratio to obtain the difference
$[(\bar d -\delta\bar d) -\bar u]$
\begin{equation}
(\bar d -\delta\bar d) -\bar u =\frac{(\bar d - \delta\bar d)/\bar u -1}
{(\bar d - \delta\bar d)/\bar u+1}
[(\bar d - \delta\bar d)+ \bar u] .
\label{inv}
\end{equation}
As a rough approximation, we could neglect $\delta \bar d$ in the
sum on the right hand side of Eq. \ref{inv}
and keep it in the difference between $\bar d$ and $\bar u$ on the left
hand side. For $\bar u + \bar d$ one could use a parametrization. This is
exactly the way that $\bar d -\bar u$ has been extracted from the Drell-Yan
data, so that in fact the extracted quantity corresponds to
$(\bar d -\delta\bar d) -\bar u$ if CSV is present.
The difference, $\bar d -\bar u$, can also be extracted
from the difference between the proton and neutron structure
functions measured by the NMC Collaboration \cite{NMCfsv} using
muon deep inelastic scattering. In this case we have
\begin{equation}
\frac{1}{2}(u_v(x)-d_v(x))-\frac{3}{2x}(F_2^p(x)-F_2^n(x)) =
(\bar d(x) -\bar u(x) ) -\frac{2}{3}(\delta d(x) +\delta \bar d(x))
-\frac{1}{6} (\delta u(x) +\delta\bar u(x)).
\end{equation}
We can make the approximations $\delta q(x) \approx \delta\bar{q}(x)$
and $\delta\bar{d}(x) \approx -\delta\bar{u}(x)$, (the latter may not be
a good approximation since we have FSV), and obtain
\begin{equation}
\frac{1}{2}(u_v(x)-d_v(x))-\frac{3}{2x}(F_2^p(x)-F_2^n(x)) \approx
[(\bar{d}(x) -\delta\bar{d}(x))-\bar{u}(x)].
\label{nmcdiff}
\end{equation}
Comparing this with Eq. \ref{inv} we see that, in a first approximation,
the quantities extracted from the two experiments are the same even if
both CSV and FSV are present. However, if CSV is present, the
term $\delta\bar d$ has to be subtracted from the measured
quantity to obtain the difference $\bar d -\bar u$.
We inverted Eq. \ref{nmcdiff} by dividing both sides by
$\bar d-\delta \bar d +\bar u \equiv \bar u (r_2+1)$, approximating
$\bar d-\delta \bar d +\bar u$ on the left hand side of Eq. \ref{nmcdiff}
by a parametrization of $\bar d + \bar u$ and solving for $r_2 =
\bar{d}(x_2)/\bar{u}(x_2)$.
The structure functions and the parton distribution
are integrated for each data point
over the same $Q^2$ regions as in the analysis of the charge ratio.
The result is shown in Fig. \ref{fig9} as solid triangles.
If we subtract the contribution of CSV from the ratio $r_2$ we obtain
the result shown as open triangles in Fig. \ref{fig9}.
We see that charge symmetry violation, as
suggested by the CCFR-NMC discrepancy, considerably enhances
the FSV ratio $\bar d/\bar u$ in the region $x<0.1$.
It is interesting to investigate the influence of CSV on the
Gottfried sum rule. If both CSV and FSV are present
the Gottfried sum rule can be expressed as
\begin{equation}
S_G=\frac{1}{3} - \frac{2}{3} \int_0^1 dx [\bar d(x) - \bar u(x)]
+\frac{2}{9} \int_0^1 dx [4\delta\bar d(x) +\delta\bar u(x)].
\end{equation}
Now, if $\delta\bar d(x)\approx -\delta\bar u(x)$
we have
\begin{equation}
S_G=\frac{1}{3} - \frac{2}{3} \int_0^1 dx \{[\bar d(x) -
\delta\bar d(x)] - \bar u(x)\},
\end{equation}
so that, although the CSV suggested by the CCFR experiment does influence
the magnitude of the extracted
FSV, it does not change the experimental value of the Gottfried sum rule
since the extracted quantities appear in exactly the same form in the
Gottfried sum rule as in the Drell-Yan and NMC experiments.
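That the two forms of $S_G$ above coincide when $\delta\bar d(x)\approx -\delta\bar u(x)$ is a simple algebraic identity, which can be checked numerically; the integrated values below are illustrative, not fitted numbers:

```python
# Check that the two forms of the Gottfried sum S_G agree when
# delta_ubar = -delta_dbar:
#   (2/9)*(4*delta_dbar + delta_ubar) = (2/3)*delta_dbar.
# Integrated values are illustrative only.
dbar, ubar, delta_dbar = 0.16, 0.10, 0.03
delta_ubar = -delta_dbar
SG_1 = 1/3 - (2/3)*(dbar - ubar) + (2/9)*(4*delta_dbar + delta_ubar)
SG_2 = 1/3 - (2/3)*((dbar - delta_dbar) - ubar)
print(abs(SG_1 - SG_2) < 1e-12)
```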
\subsection{Extraction of Strange Quark Distributions}
The differential cross section for the production of opposite-sign
dimuons, for neutrino and antineutrino deep inelastic
scattering from an isoscalar target, are proportional to the quark
distributions, the
CKM-matrix elements, the fragmentation function $D(z)$ of the struck
quark and the weighted average of the
semi-leptonic branching ratios of the charmed hadrons
$B_c$
\begin{equation}
\frac{d\sigma (\nu N_0\rightarrow \mu^-\mu^+ X)}
{d\xi dy dz} \sim \{
[u(\xi ) + d (\xi ) -\delta u (\xi )] |V_{cd}|^2
+ 2 s(\xi ) |V_{cs}|^2 \} D(z) B_c(c \rightarrow \mu^+ X).
\label{dimuon}
\end{equation}
For antineutrino scattering the quark distributions should be
replaced by the corresponding antiquark distributions.
In this equation, $\xi$ is the rescaling variable
defined by $\xi=x(1+\frac{m^2_c}{Q^2})$, with $m_c$
the mass of the produced charm quark.
The CCFR-Collaboration used
this expression together with
the parametrization of the quark distributions extracted
from their structure function data to determine the strange quark
distributions.
The strange quark component of the quark sea was allowed
to have a different magnitude and shape from the
non-strange component. These two degrees of freedom
were parametrized by two free parameters $\kappa$ and $\alpha$,
respectively.
Further, they treated $B_c$ and the mass of the charm quark, $m_c$,
as free parameters and performed a $\chi^2$ minimization
to find the four free parameters by fitting to
distributions of the measured number densities.
We note first that, provided $\delta \bar u=-\delta \bar d$
(see Eqs.(\ref{eq2}) and (\ref{csvappr})), charge symmetry violation
does not affect the extraction of the non-strange parton distributions
from the structure function data for small $x$-values.
For an isoscalar target, these distributions
can be determined quite accurately, even if charge symmetry is broken
in the manner given by Eq.\ \ref{csvappr}.
However, in extracting the strange quark distribution,
charge symmetry violation plays an important role.
The distribution extracted by the CCFR-Collaboration is {\it
not} the strange quark distribution, but a linear combination
of the true strange quark distribution and
the term in Eq.(\ref{dimuon}) coming from CSV. Hence,
the distribution measured in the experiment,
$s^{CCFR}(x)$, is related to the ``true'' strange quark distribution
$s(x)$ by
\begin{equation}
s(x)=s^{CCFR}(x) +
\frac{1}{2} \frac{|V_{cd}|^2}{|V_{cs}|^2} \, \delta \bar u(x).
\end{equation}
Since $\frac{|V_{cd}|^2}{|V_{cs}|^2}\approx 0.05$, the
error one makes is roughly two per cent,
if $\delta \bar u$ is of the same order of magnitude
as $s(x)$, as the experimental data suggest. $\delta\bar u(x)$ is
negative and hence the true strange quark distribution
should be smaller than that determined by CCFR neglecting
charge symmetry violation. Note that we have neglected all other
contributions of CSV to the extraction of any other parton
distributions, and
we neglect higher order corrections, which could be sizable
\cite{Barone,Reya}.
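As a rough numerical illustration of the two-per-cent estimate above (our own sketch; the CKM magnitudes $|V_{cd}|\approx 0.22$, $|V_{cs}|\approx 0.97$ are assumed, approximate PDG-style values not quoted in the text):

```python
# Size of the CSV correction to the extracted strange quark distribution.
# CKM magnitudes are assumed (approximate PDG-style values).
V_cd, V_cs = 0.22, 0.97
ratio = V_cd**2 / V_cs**2    # |V_cd|^2/|V_cs|^2, quoted as ~0.05 in the text
prefactor = 0.5 * ratio      # coefficient of delta u-bar in s(x)
print(f"|V_cd|^2/|V_cs|^2 = {ratio:.3f}, relative shift = {prefactor:.3f}")
```

For $|\delta\bar u|$ of the order of $s(x)$ this gives a shift of a few per cent, consistent with the estimate above.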
\subsection{Determination of $\mbox{sin}^2(\theta_W )$}
It might appear that the precision measurement of $\mbox{sin}^2(\theta_W )$
from neutrino deep-inelastic scattering, carried out by the CCFR
Collaboration \cite{SWeinberg}, rules out the possibility of large CSV
in the parton sea distributions. Sather \cite{Sather} has previously
pointed out that the measurement of $\mbox{sin}^2(\theta_W )$ is sensitive
to possible CSV effects in parton distributions.
If charge symmetry is valid the ratio of the differences of neutrino
and antineutrino neutral-current and charged-current
cross sections is given by the Paschos-Wolfenstein
relation \cite{Paschos}
\begin{equation}
R^- \equiv \frac{\sigma^{\nu N_0}_{NC}
-\sigma^{\bar\nu N_0}_{NC}}
{\sigma^{\nu N_0}_{CC}
-\sigma^{\bar\nu N_0}_{CC}}=\frac{1}{2}
- \mbox{sin}^2(\theta_W )\, .
\end{equation}
The CCFR Collaboration used this relation to extract
$\mbox{sin}^2(\theta_W )$ and obtained the value
$\mbox{sin}^2(\theta_W )=0.2255\pm 0.0018(\mbox{stat})\pm 0.0010
(\mbox{syst})$ \cite{SWeinberg} which is in very good agreement with
the Standard Model prediction of $0.2230\pm 0.0004$ based on
measured Z, W and top masses \cite{Data}. The precision of this result
puts strong constraints on CSV in parton distributions.
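The agreement between the two numbers can be quantified by combining the quoted uncertainties in quadrature; the following short evaluation (our own illustration of the quoted values) puts the difference at roughly one standard deviation.

```python
import math

# CCFR sin^2(theta_W) versus the Standard Model prediction,
# with the quoted uncertainties combined in quadrature.
ccfr, stat, syst = 0.2255, 0.0018, 0.0010
sm, sm_err = 0.2230, 0.0004
sigma = math.sqrt(stat**2 + syst**2 + sm_err**2)
pull = (ccfr - sm) / sigma
print(f"combined error = {sigma:.4f}, difference = {pull:.1f} sigma")
```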
However, since the measurement of $\mbox{sin}^2(\theta_W )$
based on the Paschos-Wolfenstein relation is only sensitive
to CSV in {\it valence } quark distributions,
the substantial charge symmetry violation
in sea-quark distributions found in this analysis does not
contradict the precision measurement of $\mbox{sin}^2(\theta_W )$.
This can be seen as follows.
The difference between the neutrino and antineutrino charged-current
cross sections is proportional to the difference between
$F_2^\nu$ and $F_2^{\bar\nu}$ and to the sum of
$xF_3^\nu$ and $xF_3^{\bar\nu}$. We see that
these linear combinations of
the structure functions are only sensitive
to $\delta q(x)-\delta \bar{q}(x)$ i.e. CSV in {\it valence} quark
distributions (see Eq. \ref{eq2}).
The neutral-current neutrino cross section on an isoscalar target,
omitting second-generation quark contributions, is given by
\begin{eqnarray}
\frac{d\sigma^{\nu N_0}_{NC}}{dxdQ^2}
= \frac{G_F^2}{2\pi x} \,\,\, & \{ &
a_u \, [u(x)+d(x)-\delta d(x)]x +
a_d \,[u(x)+d(x)-\delta u(x)]x+ \nonumber \\
& + & b_u \,[\bar u(x)+\bar d(x)-\delta\bar d(x)]x +
b_d \, [\bar u(x)+\bar d(x)-\delta\bar u(x)]x \,\, \}
\label{nc}
\end{eqnarray}
Here, we defined $a_f = l_f^2+r_f^2(1-y)^2$ and
$b_f = l_f^2 (1-y)^2 +r_f^2$ with $f=u,d$ and the couplings
of the quarks to the neutral-currents are $l_u=1/2-2/3 \,
\mbox{sin}^2(\theta_W)$,
$r_u=-2/3\, \mbox{sin}^2(\theta_W)$ and
$l_d=-1/2+1/3\,\mbox{sin}^2(\theta_W)$,
$r_d=1/3\, \mbox{sin}^2(\theta_W)$, respectively.
The antineutrino cross section can be obtained by interchanging
quarks with antiquarks in Eq. \ref{nc}.
We immediately see that the difference between neutrino and
antineutrino neutral-current cross sections is
only sensitive to CSV in valence quark distributions.
Thus the large CSV effects in the nucleon sea quark distributions,
suggested by the CCFR-NMC discrepancy, do
not influence the measurement of $\mbox{sin}^2(\theta_W)$
based on the Paschos-Wolfenstein relation.
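As a consistency check of the couplings listed below Eq. (\ref{nc}), one can verify that they reproduce the Paschos-Wolfenstein relation: in the $\nu-\bar\nu$ difference the $y$-dependent factors cancel against the charged-current denominator, leaving $\sum_{f=u,d}(l_f^2-r_f^2)$. The snippet below (our own check, in exact rational arithmetic) confirms $R^-=\frac{1}{2}-\mbox{sin}^2(\theta_W )$.

```python
from fractions import Fraction

# The nu - nubar difference of neutral-current cross sections leaves
# sum_f (l_f^2 - r_f^2); the charged-current difference normalizes it.
# Verify sum_f (l_f^2 - r_f^2) = 1/2 - s, with s = sin^2(theta_W).
def R_minus(s):
    l_u, r_u = Fraction(1, 2) - 2*s/3, -2*s/3
    l_d, r_d = -Fraction(1, 2) + s/3, s/3
    return (l_u**2 - r_u**2) + (l_d**2 - r_d**2)

# both sides are quadratic in s, so agreement at three points is a proof
for s in (Fraction(223, 1000), Fraction(1, 4), Fraction(3, 10)):
    assert R_minus(s) == Fraction(1, 2) - s
print("R^- = 1/2 - sin^2(theta_W) reproduced")
```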
\section{Test of Parton CSV from W Production at Hadron Colliders}
Clearly, it is important that the charge symmetry violating
distributions, $\delta\bar d$ and $\delta\bar u$,
enter with different weights
in any observable. Otherwise effects due to CSV are not measurable.
In this connection we also note that
most of the measured physical observables are proportional
to the {\it sum} rather than the {\it difference}
of the charge symmetry violating quark distributions.
However, $\delta\bar d$ and $\delta\bar u$ are weighted with
the charges of the quarks in electro-magnetic interactions,
such as deep inelastic scattering
with charged leptons and Drell-Yan processes. In fact, a comparison
between charged lepton and neutrino induced structure functions
was necessary to detect CSV as we have shown in this paper.
We also discussed the implications of CSV on the Drell-Yan process.
In the following, we show that $W$-boson production in proton
deuteron collisions can also be used to test the CSV found in this
paper, if we define a suitable observable. Such measurements could be
carried out at RHIC and LHC. Vigdor \cite{Vig97} originally
suggested that asymmetry in $W$--boson production could be used
as a test of parton charge symmetry.
The cross sections for $pD\rightarrow W^+ X$
and $pD\rightarrow W^- X$ are given by
\begin{eqnarray}
\frac{d\sigma}{dx_F}(pD\rightarrow W^+ X )
&\sim & \{u(x_1) [\bar u(x_2)+\bar d(x_2)-\delta\bar u(x_2)] +
\bar d(x_1) [u(x_2)+d(x_2)-\delta d(x_2)]\}
\mbox{cos}^2\Theta_c \nonumber \\
&+& \{u(x_1)\bar s(x_2) + \bar s(x_1)[u(x_2)+d(x_2)-\delta d(x_2)]\}
\mbox{sin}^2\Theta_c\\
\frac{d\sigma}{dx_F}(pD\rightarrow W^- X )
&\sim & \{ d(x_1) [\bar u(x_2)+ \bar d(x_2)-\delta\bar d(x_2)] +
\bar u(x_1) [u(x_2)+d(x_2)-\delta u(x_2)]\}
\mbox{cos}^2\Theta_c \nonumber\\
&+& \{\bar u(x_1) s(x_2) + s(x_1)
[\bar u(x_2)+\bar d(x_2)-\delta\bar d(x_2)]\}
\mbox{sin}^2\Theta_c \,\,.
\label{wfull}
\end{eqnarray}
We note that the Cabibbo favored terms in the
sum of the $W^+$ and $W^-$ cross sections are invariant
under the interchange of $x_1$ and $x_2$, if charge symmetry is
valid. However,
the Cabibbo suppressed part of the sum contains terms which are not
invariant under $x_1\leftrightarrow x_2$, even if charge symmetry is a good
symmetry.
Thus, if we define the forward-backward asymmetry as
\begin{equation}
A(x_F) =
\frac{(\frac{d\sigma}{dx_F})^{W^+} (x_F) +
(\frac{d\sigma}{dx_F})^{W^-}(x_F)
-(\frac{d\sigma}{dx_F})^{W^+}(-x_F)
-(\frac{d\sigma}{dx_F})^{W^-}(-x_F)}
{(\frac{d\sigma}{dx_F})^{W^+}(x_F)+
(\frac{d\sigma}{dx_F})^{W^-}(x_F)
+(\frac{d\sigma}{dx_F})^{W^+}(-x_F)
+(\frac{d\sigma}{dx_F})^{W^-}(-x_F)} ,
\label{asym}
\end{equation}
we see that it
will be proportional to charge symmetry violating terms and terms
containing strange quarks.
Assuming $s(x)=\bar s(x)$ the numerator of Eq. \ref{asym},
$\Delta(\frac{d\sigma}{dx_F})(x_F)$
is given by
\begin{eqnarray}
\Delta(\frac{d\sigma}{dx_F})(x_F) = \{
& - &[u(x_1)\delta\bar u(x_2)+d(x_1)\delta\bar d(x_2) +
\bar u(x_1)\delta u(x_2)+\bar d(x_1)\delta d(x_2)]
\, \mbox{cos}^2\Theta_c \nonumber \\
& +& [s(x_1) [d(x_2)+\bar d(x_2) -\delta d(x_2)-\delta\bar d(x_2)]
\,\mbox{sin}^2\Theta_c \} - (x_1\leftrightarrow x_2) \,.
\end{eqnarray}
In the following, we use
$\delta\bar u \approx -\delta\bar d$, as suggested by our analysis,
and note that as the charge symmetry violating distribution
is of the same order of magnitude as the strange quark distribution,
terms proportional to sin$^2\Theta_c$ can be neglected.
Further, we make the approximations $\delta\bar q(x_2)\approx\delta q(x_2)$
for $x_2\le 0.1$ and $\delta \bar q(x_1)\approx 0$ for large $x_1$.
We then obtain
\begin{eqnarray}
\Delta(\frac{d\sigma}{dx_F})(x_F) &=&
\{ [u(x_1)+\bar u(x_1) -d(x_1)-\bar d(x_1)] \delta\bar d(x_2)
\nonumber\\
& + & [\delta u(x_1) \bar u(x_2) +\delta d(x_1)\bar d(x_2)]
\} \mbox{cos}^2\Theta_c \,\,.
\label{W}
\end{eqnarray}
For large $x_F$,
the forward-backward asymmetry (due to the first term in Eq. \ref{W})
is proportional
to $\delta\bar d$ times the difference between the
up and down valence quark distributions. The second term is
sensitive to CSV in valence quark distributions.
However, if $\delta d\approx -\delta u$ for valence quarks,
as suggested by theoretical
considerations \cite{Lond}, the second term of Eq. \ref{W} is
approximately $[\bar d(x_2) - \bar u(x_2)] \delta d(x_1)$
and is only non-zero if we have FSV. Further, if
$\delta d(x_1)$ is positive for large $x_1$, as theoretical calculations
suggest \cite{Lond,Sather} then
the second term will contribute positively to the asymmetry, since
$\bar d -\bar u >0$, so that it would enhance any asymmetry
expected on the basis of CSV in the sea quark distributions suggested
by the NMC-CCFR data.
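The reduction of the exact numerator to Eq. \ref{W} under these approximations can be verified numerically; the check below is our own sketch, with the parton distribution values at $x_1$ and $x_2$ replaced by independent random numbers.

```python
import random

# Check that the exact cos^2(Theta_c) part of the forward-backward
# numerator reduces to the approximate form once we impose
# delta u-bar = -delta d-bar, delta qbar(x2) = delta q(x2) (small x2)
# and delta qbar(x1) = 0 (large x1).  Distribution values are random.
rng = random.Random(1)
u1, d1, ub1, db1 = (rng.uniform(0, 1) for _ in range(4))   # at x1
u2, d2, ub2, db2 = (rng.uniform(0, 1) for _ in range(4))   # at x2
du1, dd1 = rng.uniform(-1, 1), rng.uniform(-1, 1)          # valence CSV at x1
ddb2 = rng.uniform(-1, 1)                                  # sea CSV at x2
du2, dd2 = -ddb2, ddb2        # delta q(x2) = delta qbar(x2), dub = -ddb
dub2, dub1, ddb1 = -ddb2, 0.0, 0.0

def bracket(u_a, d_a, ub_a, db_a, dub_b, ddb_b, du_b, dd_b):
    # -[u(a) dub(b) + d(a) ddb(b) + ub(a) du(b) + db(a) dd(b)]
    return -(u_a*dub_b + d_a*ddb_b + ub_a*du_b + db_a*dd_b)

exact = bracket(u1, d1, ub1, db1, dub2, ddb2, du2, dd2) \
    - bracket(u2, d2, ub2, db2, dub1, ddb1, du1, dd1)
approx = (u1 + ub1 - d1 - db1)*ddb2 + du1*ub2 + dd1*db2
print(abs(exact - approx) < 1e-12)  # -> True
```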
We calculated the expected asymmetry $A(x_F)$
for $\sqrt{s}=500$ GeV and $\sqrt{s}=1000$ GeV
using the values of $\delta\bar d$ extracted in section II.
The results are shown in Fig. \ref{fig10}. The error bars represent
the errors associated with $\delta\bar d$ and do not include
the errors of the $W$ experiment.
In the calculation, we
retained all terms in Eq. \ref{wfull}.
The result obtained by using the approximation in Eq. \ref{W}
differs only by a few percent from the full calculation.
We predict considerable asymmetries for large $x_F$.
\section{Conclusions}
In conclusion, we have examined in detail the discrepancy at
small $x$ between the CCFR neutrino and NMC muon structure
functions. Assuming that both the structure functions and
strange quark distributions have been accurately determined
in this region, we explored the possible reasons for this
discrepancy. First, we re-examined
the shadowing corrections to neutrino deep inelastic scattering
and concluded that shadowing cannot account for more than half
of the difference between the CCFR and NMC structure functions.
Next, we compared two determinations of the strange quark
distributions: the ``direct'' method, obtained by measuring
opposite sign dimuon production from neutrino and antineutrino
reactions, and by comparing the CCFR and NMC structure
functions. The strange quark distributions extracted by these
two methods are incompatible with each other, even if we allow
the antistrange quark distribution to differ
from the strange distribution in an unconstrained fashion.
The only way we can make these data compatible is by assuming
charge symmetry violation in the sea quark distributions. The
CSV amplitudes necessary to obtain agreement with experiment
are extremely large -- they are of the same order of magnitude
as the strange quark distributions, or roughly 25\% the size of the
nonstrange sea quark distributions at small $x$. Such CSV
contributions are surprisingly large: at least two orders
of magnitude greater than theoretical predictions of charge
symmetry violation. We discussed their influence on other observables,
such as the FSV ratio measured recently in a proton deuteron
Drell-Yan experiment, on the Gottfried sum rule and on the
experimental determination of the Weinberg angle
sin$^2\theta_W$.
We showed that such large CSV effects could be tested by measuring
asymmetries in $W$ boson production at hadron colliders such as RHIC
or LHC. Such experiments could detect sea quark CSV effects, if they
were really as large as is suggested by current experiments.
\vspace*{0.5cm}
\noindent
{\bf {ACKNOWLEDGMENTS}}
\vspace*{0.5cm}
This work was supported in part by the Australian Research Council.
One of the authors [JTL] was supported in part by the National
Science Foundation research contract PHY--9722706. JTL wishes
to thank the Special Research Centre for the Subatomic Structure
of Matter for its hospitality during the period when this
research was carried out.
\references
\bibitem{Miller} G. A. Miller, B. M. K. Nefkens and I. Slaus, Phys. Rep.
{\bf 194}, 1 (1990).
\bibitem{Henley} E. M. Henley and G. A. Miller in {\it Mesons
in Nuclei}, eds M. Rho and D. H. Wilkinson
(North-Holland, Amsterdam 1979).
\bibitem{Lon98} J. T. Londergan and A. W. Thomas,
in {\it Progress in Particle and Nuclear Physics},
Volume\ 41, p.\ 49,
ed.\ A. Faessler (Elsevier Science, Amsterdam, 1998).
\bibitem{NMCfsv} NMC-Collaboration, P. Amaudruz {\it et al.},
Phys. Rev. Lett. {\bf 66}, 2712 (1991);
Phys. Lett. {\bf B295}, 159 (1992).
\bibitem{Gottfried} K. Gottfried, Phys. Rev. Lett. {\bf 18}, 1174 (1967).
\bibitem{Na51} NA51-Collaboration, A. Baldit {\it et al.},
Phys. Lett. {\bf B332}, 244 (1994).
\bibitem{E866} E866-Collaboration, E. A. Hawker {\it et al.},
Phys.\ Rev.\ Lett.\ {\bf 80}, 3715 (1998).
\bibitem{Ma1} B. Q. Ma, Phys. Lett. {\bf B274}, 433 (1992);
B. Q. Ma, A. W. Sch\"afer and W. Greiner, Phys. Rev.
{\bf D47}, 51 (1993).
\bibitem{Steffens} F. M. Steffens and A. W. Thomas, Phys. Lett.
{\bf B389}, 217 (1996).
\bibitem{Tim1} J. T. Londergan, S. M. Braendler and A. W. Thomas,
Phys. Lett. {\bf B424}, 185 (1998).
\bibitem{Tim2} J. T. Londergan, Alex Pang and A. W. Thomas, Phys. Rev
{\bf D54}, 3154 (1996).
\bibitem{NMC} NMC-Collaboration, M. Arneodo et al., Nucl. Phys.
{\bf B483}, 3 (1997).
\bibitem{CCFR} CCFR-Collaboration, W. G. Seligman et al.,
Phys. Rev. Lett. {\bf 79}, 1213 (1997).
\bibitem{Bor98} C. Boros, J. T. Londergan and A. W. Thomas,
Phys.\ Rev.\ Lett., to be published
(preprint {\it hep-ph/}9805011).
\bibitem{Sel97} W.G. Seligman, Ph.D. Thesis, Nevis Report 292, 1997.
\bibitem{Whi90b} L.W. Whitlow {\it et al.}, Phys.\ Lett.\ {\bf B250},
193 (1990).
\bibitem{Arneodo} M. Arneodo, Phys. Rep. {\bf 240}, 301 (1994), and the
references given therein.
\bibitem{Boros} C. Boros, J. T. Londergan and A. W. Thomas,
Phys.\ Rev.\ {\bf D}, to be published
(preprint {\it hep-ph/}9804410).
\bibitem{Badelek} J. Kwiecinski and B. Badelek, Phys. Lett.
{\bf B208}, 508 (1988), and Rev. Mod. Phys. {\bf 68}, 445 (1996).
\bibitem{Melni} W. Melnitchouk and A. W. Thomas, Phys. Lett.
{\bf B317}, 437 (1993) and Phys. Rev. {\bf C52}, 311 (1995).
\bibitem{Stodolsky} C. A. Piketty and L. Stodolsky,
Nucl. Phys. {\bf B15}, 571 (1970).
\bibitem{VMD} T. H. Bauer, R. D. Spital, D. R. Yennie and
F.M. Pipkin, Rev. Mod. Phys. {\bf 50}, 261 (1978).
\bibitem{Bell} J. S. Bell, Phys. Rev. Lett. {\bf 13}, 57 (1964).
\bibitem{Boris} B. Z. Kopeliovich and P. Marage, Int. J. Mod. Phys.
{\bf A 8}, 1513 (1993).
\bibitem{Adler} S. L. Adler, Phys. Rev. {\bf B135}, 963 (1964).
\bibitem{CCFRNLO} CCFR-Collaboration, A. O. Bazarko et al.,
Z. Phys. {\bf C65}, 189 (1995).
\bibitem{Signal} A. I. Signal and A. W. Thomas,
Phys. Lett. {\bf B191}, 205 (1987).
\bibitem{Melni97} W. Melnitchouk and M. Malheiro,
Phys. Rev. {\bf C55}, 431 (1997).
\bibitem{JT} X. Ji and J. Tang,
Phys. Lett. {\bf B362}, 182 (1995).
\bibitem{HSS} H. Holtmann, A. Szczurek and J. Speth,
Nucl. Phys. {\bf A596}, 631 (1996).
\bibitem{CCFRLO} S. A. Rabinowitz et al., CCFR-Collaboration,
Phys. Rev. Lett. {\bf 70}, 134 (1993).
\bibitem{Data} Particle Data Group, Phys. Rev. {\bf D50}, 1173 (1994).
\bibitem{Brodsky} S. J. Brodsky and B. Q. Ma, Phys. Lett.
{\bf B381}, 317 (1996).
\bibitem{smrul} The assumption $\bar{s}(x) =0$ violates the sum
rule constraint of Eq.\ \protect\ref{smrule}.
We use this unphysical assumption only to illustrate a
range of possibilities for the parton CSV contribution.
\bibitem{Lai} H. L. Lai et al., Phys. Rev. {\bf D55}, 1280 (1997).
\bibitem{Ben98} C. J. Benesh and J. T. Londergan, Phys.\ Rev.\
{\bf C58}, 1218 (1998).
\bibitem{Lond} E. Rodionov, A. W. Thomas and J. T. Londergan,
Mod. Phys. Lett. {\bf A9}, 1799 (1994).
\bibitem{Sather} E. Sather, Phys. Lett. {\bf B274}, 433 (1992).
\bibitem{Ben97} C. J. Benesh and T. Goldman, Phys.\ Rev.\ {\bf C55},
441 (1997).
\bibitem{NuTeV}
NuTeV Collaboration, T. Bolton {\it et al.},
1990, {\it ``Precision Measurements of Neutrino Neutral Current
Interactions Using a Sign Selected Beam'',} Fermilab Proposal P-815.
\bibitem{seacomm} In general, one should retain contributions from
antiquarks in the projectile and quarks in the target. However, since
the CSV terms we extract seem to be so large, in this case one can neglect
these contributions for large Feynman $x_F$.
\bibitem{Barone} V. Barone, M. Genovese, N. N. Nikolaev,
E. Predazzi and B. G. Zahkarov, Phys. Lett. {\bf B317}, 433 (1993);
Phys. Lett. {\bf B328}, 143
(1994).
\bibitem{Reya} M. Gluck, S. Kretzer and E. Reya,
Phys. Lett.
{\bf B380}, 171 (1996); Erratum-ibid. {\bf B405}, 391 (1997)
and Phys. Lett. {\bf B398}, 381 (1997).
\bibitem{SWeinberg} CCFR Collaboration, K. S. McFarland, {\it et al.},
hep-ex/9806013.
\bibitem{Paschos} E. A. Paschos and L. Wolfenstein, Phys. Rev. {\bf D7},
91 (1973).
\bibitem{Vig97} S. Vigdor, {\it Second International Symposium
on Symmetries in Subatomic Physics}, Seattle, WA, June
1997 (unpublished).
\begin{figure}
\epsfig{figure=fig1c.eps,height=12.cm}
\caption{The ``charge ratio'' $R_c$ of Eq.\ \protect\ref{rc} vs.\ $x$
calculated using CCFR \protect\cite{CCFR} data for neutrino
and NMC \protect\cite{NMC} data for muon
structure functions. Open triangles: no heavy target
corrections; open circles: $\nu$ data corrected for heavy
target effects using corrections from charged lepton scattering;
solid circles: $\nu$ shadowing corrections calculated in the
``two phase'' model.
Both statistical and systematic errors are shown. }
\label{fig1}
\end{figure}
\begin{figure}
\epsfig{figure=fig2c.eps,height=12.cm}
\caption{The differential cross sections for neutrino
(solid circles) and antineutrino (open circles)
deep inelastic scattering as a function of the variable
$\xi^2\equiv (1-y)^2$ for $x=0.03$ and $Q^2=4$ GeV$^2$.
The solid and dotted lines are the results with and without the
Callan-Gross relation, respectively. The statistical errors are
estimated using the experimental fluxes of neutrinos
and antineutrinos.}
\label{fig2}
\end{figure}
\begin{figure}
\epsfig{figure=fig3c.eps,height=14.cm}
\caption{Shadowing corrections in
the (a) charm and (b) non-charm producing parts of the
neutrino structure function, as a function
of $x$, for a fixed $Q^2=5$ GeV$^2$. The dashed (dash-dotted)
lines stand for VMD (Pomeron) contributions. The solid lines
represent the total shadowing.}
\label{fig3}
\end{figure}
\begin{figure}
\epsfig{figure=fig4c.eps,height=14.cm}
\caption{The strange quark distribution $x\,s(x)$ (open circles) and
antistrange distribution $x\,\bar{s}(x)$ (solid circles)
extracted from the CCFR and NMC structure functions.
The difference between the CCFR neutrino and NMC muon
structure functions $\frac{5}{6} F_2^{CCFR}-3F_2^{NMC}$
(see Eq.\ \protect\ref{diff}) is shown as solid
triangles. The strange quark distribution extracted
by CCFR in a LO-analysis \protect\cite{CCFRLO} is shown as solid
stars, while that from a NLO-analysis \protect\cite{CCFRNLO} is
represented by the solid line, with a band indicating $\pm 1\sigma$
uncertainty in the distribution. Statistical and systematic errors
are added in quadrature.}
\label{fig4}
\end{figure}
\begin{figure}
\epsfig{figure=fig5c.eps,height=14.cm}
\caption{$\frac{1}{2}x[s(x)+\bar s(x)]$ (solid circles) and
$\frac{1}{2}x[s(x)-\bar s(x)]$ (open circles)
as extracted from the CCFR and NMC structure functions
and from the dimuon production data. See Fig.\
\protect\ref{fig4} for the
definition of the other quantities.
The strange quark distribution extracted by the CCFR
Collaboration is shown as a solid line with a band
indicating $\pm 1\sigma$ uncertainty.
Statistical and systematic errors are added in
quadrature. }
\label{fig5}
\end{figure}
\begin{figure}
\epsfig{figure=fig6c.eps,height=14.cm}
\caption{$\frac{1}{2}x[s(x)+\bar s(x)]$ (solid circles) and
$\frac{1}{2}x[s(x)-\bar s(x)]$ (open circles)
as extracted from the CCFR and NMC structure functions
and from the dimuon production data using $\alpha^\prime =1$.}
\label{fig6}
\end{figure}
\begin{figure}
\epsfig{figure=fig7c.eps,height=14.cm}
\caption{
Charge symmetry violating distributions $x(\delta\bar{d}(x) -
\delta\bar{u}(x))/2$ extracted
from the CCFR and NMC structure function data and
the CCFR dimuon production data under the assumption
that $s(x)=\bar s(x)$ (solid circles) and
$\bar s(x)\approx 0$ (open circles) for
$\alpha^\prime =0.83$, and $s(x)=\bar s(x)$ (solid circles) and
$\bar s(x)\approx 0$ (open triangles) for $\alpha^\prime =1$.
(For the latter only statistical errors are shown.)
$xs(x)$ at $Q^2=4$ GeV$^2$ obtained by the CCFR
Collaboration in a NLO analysis
\protect\cite{CCFRNLO} is shown for comparison (solid curve, with
$1\sigma$ error band).}
\label{fig7}
\end{figure}
\begin{figure}
\epsfig{figure=fig8c.eps,height=14.cm}
\caption{Uncertainty in the extracted parton CSV term $x(\delta\bar{d}(x) -
\delta\bar{u}(x))/2$ due to the parametrization used
for the dimuon data on charge symmetry violation.
Open circles: LO CCFR distribution, solid circles:
CTEQ4L parton distribution \protect\cite{Lai}; solid rectangles:
CTEQ4D parton distribution; solid triangles: NLO CCFR distribution.
Here, except for the most ``critical'' point,
only statistical errors are shown.}
\label{fig8}
\end{figure}
\begin{figure}
\epsfig{figure=fig9c.eps,height=14.cm}
\caption{Solid circles: the ratio $\bar{d}(x)/\bar{u}(x)$ vs.\ $x$,
extracted from the Drell-Yan data of FNAL experiment E866
\protect\cite{E866} assuming the validity of charge symmetry.
If CS is violated this ratio corresponds to
$(\bar{d}(x)-\delta \bar{d}(x))/\bar{u}(x)$. The result obtained by
correcting for CSV is shown as open circles.
The ratio $\bar{d}(x)/\bar{u}(x)$ extracted from the difference of
proton and deuteron structure functions measured by the NMC group
\protect\cite{NMCfsv} is shown as solid and
open triangles, without and with CSV, respectively.}
\label{fig9}
\end{figure}
\begin{figure}
\epsfig{figure=fig10c.eps,height=14.cm}
\caption{
The forward-backward asymmetry for $W$ production,
as defined in Eq.
\protect\ref{asym}. The solid and open triangles
are calculated for $\protect\sqrt{s}=500$ GeV and
$\protect\sqrt{s}=1000$ GeV,
respectively. For $\delta\bar d$, the values extracted
from the comparison of the NMC and CCFR structure function
are used. The errors are the errors of $\delta\bar d$ and do not
include the errors of the $W$ experiment. }
\label{fig10}
\end{figure}
\end{document}
\section{Introduction}
During the past two decades significant progress in the control and manipulation of separate
electrons within solid-state devices has been made.
A single electron was trapped within a quantum dot \cite{tarucha} in a localized state.
Monitoring the flow of current by resolving the passage of separate electrons
has been achieved. \cite{onthefly} Ultrafast single-electron pumping in a system of quantum dots
connected in series was realized.\cite{pump}
Single-electron Aharonov-Bohm interference was demonstrated \cite{gustav} using a Coulomb-blockaded quantum dot as a valve
injecting separate carriers into the channel via cotunneling events.
Recently, single-electron transfer in a channel placed above the Fermi energy of the reservoirs
was reported \cite{ondemand} with surface acoustic waves used to trap the moving carrier.
A single electron moving within
the channel can be scattered inelastically and pass its energy to
the environment. On the other hand, for conventional experiments with the electron gas, inelastic scattering of the Fermi-level electrons is forbidden
by the Pauli exclusion principle. The electron transport is strictly a Fermi level property in the linear
regime, where the current $I$ is necessarily an even function of the external magnetic field $B$, i.e. $I(B,V)=G(B)V$,
where $G$ is the linear conductance and $V$ the applied bias. The Landauer-B\"uttiker approach
derives the linear conductance $G(B)=\frac{e^2}{h}T(B)$ out of the electron transfer probability $T$,
and the latter is an even function of the magnetic field $T(B)=T(-B)$. The Onsager-Casimir \cite{OCo} symmetry $G(B)=G(-B)$ \cite{OC} does not hold for the non-linear transport,\cite{mb} where a finite energy window participates in the
current flow. Asymmetry of the conductance due to non-linear currents carried by the {\it electron gas} was studied both experimentally\cite{glja,wei,let,zum,ch,bk,na} and theoretically\cite{mb,sz,bs,dsz,pb,dm,ag,sk,ar,li} in a number of papers.
Here we consider { a single} electron injected into a quantum wire and its probability
to pass through an interaction range of another electron confined in
a quantum ring placed in neighborhood, close enough to allow the capacitive coupling \cite{onthefly,gustav,ondemand} between the carriers.
We find that this probability is asymmetric in $B$.
We investigate the relation of the magnetic asymmetry
with the inelastic scattering effects.
We indicate that the magnetic symmetry of the electron transfer is restored when the inelastic
backscattering is excluded. The latter is achieved by inserting a narrow band-pass energy filter in form
of a double barrier structure into the channel
with the resonant energy fixed at the energy of the incoming electron. We show that the energy filter introduced
into the channel restores the magnetic symmetry of the transfer probability only for the electrons
traveling in one direction but not the other; hence, the turnstile character of the system is present
with or without the energy filter.
An appearance of the magnetic asymmetry of the single electron transfer probability was previously discussed
in a bent quantum wire \cite{kalina} or a cavity \cite{szafran} asymmetrically connected to terminals.
Both papers \cite{kalina,szafran} used
time-dependent wave-packet approaches and indicated that the asymmetry of the transfer probability arises when the
channel electron interacts with the surrounding environment.
The present study of the role
of the inelastic scattering requires a discussion of the incoming electron of a definite energy rather than the wave packet dynamics.
We develop such an approach
below and explain its relation to wave packet scattering. The results of this paper are based on a solution
of the two-electron Hamiltonian eigenequation with an exact account taken for the interaction and the electron-electron correlations.
This paper is organized as follows.
In the next section we first sketch the two-electron Hamiltonian used in this paper
in strictly one-dimensional models of both the wire and the ring.
Next, we present a time-dependent approach to the scattering problem and then the time-independent treatment.
We demonstrate that the results of the latter can be understood as the limit
of monoenergetic wave packet scattering.
Section III contains the results and Section IV the discussion. Summary and conclusions are given in Section V.
\begin{figure}[ht!]
\centering
\hbox{\rotatebox{0}{
\includegraphics[bb=0 0 230 360, width=50mm] {UkladFig1.eps}
}}
\caption{Schematics of the considered system: the electron $e_2$ travels along a straight channel and is scattered
on the potential of $e_1$ electron that is confined in a quantum ring of radius 30 nm placed at a distance of 35 nm from the channel. The top plot shows the energy spectrum of the electron in the ring.}
\label{schemat}
\end{figure}
\section{Theory}
The system considered in this paper is schematically depicted in Fig. \ref{schemat}.
An electron is confined in a circular quantum ring of radius $R=30$ nm.
Initially, this electron is in its ground-state, with a definite angular momentum
and circularly symmetric charge distribution. Another electron injected from outside travels along the straight channel, interacts with the ring-confined electron and is partially backscattered. The total energy of the two-electron system is a conserved quantity. The incoming electron is scattered inelastically when the ring absorbs a part of its energy.
The Hamiltonian of the electron in the circular ring with center
in point $(x_c,y_c,0)$ is given by $h_r=\frac{1}{2m^*}({\bf p}+e{\bf A})^2+V(r_c)$
with $r_c^2=(x-x_c)^2+(y-y_c)^2$. The magnetic field $(0,0,B)$ is oriented perpendicular to the plane of electron confinement. For the symmetric gauge
${\bf A}_s=\frac{B}{2}(-(y-y_c),x-x_c,0)$ the Hamiltonian of the ring electron takes the form
$h_r=-\frac{\hbar ^2}{2m^*}\nabla^2+V(r_c)+\frac{e^2B^2}{8m^*}r_c^2+\frac{eB}{2m^*}l_c$,
where $l_c$ is the operator of the angular momentum $z$-component with respect to the ring center.
Operators $h_r$ and $l_c$ have common eigenstates $\phi^c_l=f_l(r_c)\exp(il\theta)$, with the angular momentum quantum number $l$. In the limit of a thin ring
the radial wave function $f_l$ tends to the ground-state of a particle confined in an infinite quantum well
and loses its dependence on $l$. The energy spectrum is then given by
$\varepsilon_l=E_r+\frac{\hbar^2}{2m^*R^2}(l+\frac{\Phi}{\Phi_0})^2$ (see the inset to Fig. 1), where
$\Phi_0=\frac{h}{e}$ is the flux quantum, $\Phi=B\pi R^2$ and $E_r$ is the ground-state energy of the radial confinement.
The latter is independent of $l$ and as such is irrelevant for the scattering process. We skip $E_r$ in the following formulae.
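The flux dependence of this spectrum is straightforward to evaluate numerically. In the sketch below (our own illustration) the effective mass is an assumption (GaAs, $m^*=0.067\,m_e$); the text fixes only $R=30$ nm and drops $E_r$.

```python
import math

# Thin-ring spectrum: epsilon_l = hbar^2/(2 m* R^2) (l + Phi/Phi_0)^2,
# with E_r dropped.  Effective mass (GaAs) is assumed.  Energies in meV.
HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg
E_CH = 1.602176634e-19    # C

def ring_levels(B, R=30e-9, m_star=0.067*M_E, lmax=3):
    phi_ratio = B * math.pi * R**2 / (2*math.pi*HBAR/E_CH)   # Phi/Phi_0
    pref = HBAR**2 / (2*m_star*R**2) / E_CH * 1e3            # meV
    return {l: pref*(l + phi_ratio)**2 for l in range(-lmax, lmax + 1)}

levels = ring_levels(B=0.0)
print(f"epsilon_0 = {levels[0]:.3f} meV, epsilon_1 = {levels[1]:.3f} meV")
```

For these parameters the $l=\pm 1$ levels at $B=0$ lie near $0.63$ meV, and the ground-state angular momentum changes whenever $\Phi/\Phi_0$ passes a half-integer, i.e. at field intervals of about $1.46$ T, the first crossing occurring near $0.73$ T.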
For the scattering problem it is most convenient to use another gauge ${\bf A}=B(0,x,0)$, since
then the diamagnetic term produced by the kinetic energy operator ($\frac{e^2B^2}{8m^*}x^2$) vanishes
at the axis of the channel $x=0$. In the following we assume that the channel is so thin that the electron in its
motion along the channel
is in its lowest state of lateral quantization.
For the strictly 1D channel with $x=0$ axis the kinetic momentum $\pi_y={\bf p}_y+eBx$
is independent of $B$, and thus the wave vector $q$ of the motion along the lead corresponds to the same energy
and probability current flux for any $B$.
In order to replace ${\bf A}_s$ by ${\bf A}$ the gauge transformation ${\bf A}={\bf A}_s+\nabla \chi(x,y)$ is performed with $\chi=\frac{B}{2} (xy+x_cy-y_cx)$. Upon the transformation the ring wave functions change to \begin{equation} \phi_l=\phi^c_l(r_c,\theta) \exp(-\frac{ie}{\hbar}\chi(x,y)), \label{gt}\end{equation}
where the phase factor introduced by $\chi$ is independent of $l$.
Although with {\bf A} the angular momentum with respect to the ring center does not commute with the Hamiltonian, $l$
still remains a good quantum number for the description of the ring eigenstates.
With the assumptions explained above the two-electron Hamiltonian used in this work reads
\begin{equation}
H=h_c({\bf r}_1)+h_r({\bf r}_2)+W(|{\bf r}_1-{\bf r}_2|), \label{ww}
\end{equation}
where $h_c=-\frac{\hbar^2}{2m^*}\frac{\partial^2}{\partial y^2}$ is the channel electron Hamiltonian and $W$ is the interaction
potential. The latter is taken in the screened Coulomb form
\begin{equation}
W(r)=\frac{e^2}{4\pi\epsilon \epsilon_0 r}\exp(-r/\lambda),
\end{equation}
with dielectric constant $\epsilon=12.9$ and the screening length $\lambda=500$ nm.
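For orientation, the resulting interaction energy scale can be evaluated directly; a sketch in SI units (the 100 nm channel--ring distance used here is purely illustrative):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e = 1.602176634e-19       # elementary charge, C
eps_r = 12.9              # dielectric constant
lam = 500e-9              # screening length, m

def W(r):
    """Screened Coulomb interaction energy (joules) at distance r (m)."""
    return e**2 / (4 * math.pi * eps_r * eps0 * r) * math.exp(-r / lam)

W_meV = W(100e-9) / e * 1e3   # about 0.91 meV at 100 nm
```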
\subsection{Time-dependent scattering picture}
The general form of the two-electron wave function can, without loss of generality, be developed in the basis of products
of single-particle eigenstates with definite angular momentum $l$ for the ring and wave vector $q$ within the channel
\begin{eqnarray}
\Psi({\bf r}_1,{\bf r}_2,t)&=&\sum_{ql} c_{ql} (t) \Phi_q({\bf r}_1)\phi_l({\bf r}_2) \label{pw}\\ &=& \sum_l \psi_l({\bf r}_1,t)\phi_l({\bf r}_2),\label{dw}
\end{eqnarray}
where the partial wave packets are defined as
\begin{equation} \psi_l({\bf r}_1,t)\equiv\sum_q c_{ql}(t)\Phi_q({\bf r}_1).\label{aas} \end{equation}
The electrons occupying separate regions in space (the wire and the ring) are essentially distinguishable. Anti-symmetrization
of Eq. (\ref{aas}) does not affect any of the results presented below due to the complete separability of the electron wave functions.\cite{dudziak}
For that reason we skip the anti-symmetrization in the following.
One puts the wave function (\ref{dw}) into the Schr\"odinger equation $i\hbar \frac{\partial \Psi}{\partial t}=H\Psi$
and projects the result on the ring eigenstates, which leads to a set of equations for the partial wave packets
\begin{equation}
i\hbar \frac{\partial \psi_k(y_1,t)}{\partial t}=\sum_l \left([\varepsilon_l+h_c] \delta(k,l)+W_{kl}(y_1)\right)\psi_l(y_1,t),
\end{equation}
where $W_{kl}({\bf r}_1)=\langle \phi_k ({\bf r}_2)|W (|{\bf r}_1-{\bf r}_2|)| \phi_l({\bf r}_2)\rangle$.
Note, that the phase factor due to the gauge transformation ($\ref{gt}$) is canceled in the evaluation of the interaction matrix $W_{kl}$.
In the time-dependent calculation we take for the initial condition a Gaussian wave packet $\psi_l(y,0)=\left(\frac{\Delta k^2}{2\pi}\right)^{1/4}\exp(-\frac{\Delta k^2}{4}(y-y_0)^2+iqy)$,
where $l$ corresponds to the ground-state angular quantum number, the average momentum is $q>0$ and $y_0$ is far below the ring. For $k\neq l$ the initial condition $\psi_k=0$ is applied.
Calculations are performed with a finite difference scheme for the channel of length 16 $\mu m$ with $\Delta y=2$ nm.
The results converge when $|l|\leq 3$ ring eigenstates are included into the basis.
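The sketch below illustrates this type of propagation (the paper does not specify its finite-difference scheme in this excerpt; Crank--Nicolson is one standard norm-conserving choice). It propagates a single free partial wave with the couplings $W_{kl}$ switched off, in units $\hbar=m^*=1$, and with grid and packet parameters chosen for speed rather than the 16 $\mu$m / 2 nm values above:

```python
import cmath, math

# One free partial wave (coupling W_kl switched off), hbar = m* = 1.
# Grid and packet parameters are illustrative, not the paper's values.
N, dy, dt = 600, 0.5, 0.1
dk, q, y0 = 0.05, 1.0, 100.0

A = (dk**2 / (2 * math.pi))**0.25            # L2-normalized Gaussian
psi = [A * cmath.exp(-dk**2 * (i*dy - y0)**2 / 4 + 1j*q*i*dy)
       for i in range(N)]

def norm(p):
    return sum(abs(z)**2 for z in p) * dy

def step(p):
    """One Crank--Nicolson step for i dpsi/dt = -(1/2) psi'' (Dirichlet ends)."""
    r = 1j * dt / (4 * dy**2)
    # rhs = (1 - i dt H/2) psi with H = -(1/2) d^2/dy^2
    rhs = [p[i] + r * ((p[i-1] if i else 0) - 2*p[i] + (p[i+1] if i < N-1 else 0))
           for i in range(N)]
    # Thomas algorithm for the tridiagonal system (1 + i dt H/2) p_new = rhs
    a, b, c = -r, 1 + 2*r, -r
    cp, dp = [0j]*N, [0j]*N
    cp[0], dp[0] = c / b, rhs[0] / b
    for i in range(1, N):
        den = b - a * cp[i-1]
        cp[i] = c / den
        dp[i] = (rhs[i] - a * dp[i-1]) / den
    out = [0j]*N
    out[N-1] = dp[N-1]
    for i in range(N-2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i+1]
    return out

for _ in range(50):                          # propagate to t = 5
    psi = step(psi)
```

The Cayley form $(1+iH\Delta t/2)\psi^{n+1}=(1-iH\Delta t/2)\psi^{n}$ is unitary for Hermitian $H$, so the discrete norm is conserved to machine precision while the packet drifts at its group velocity.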
\subsection{Stationary description of the scattering}
The time-independent approach described in this section is suitable for treating the scattering for the incident electron of a definite energy.
The stationary approach is also more computationally effective and does not require
a very large computational box since transparent boundary conditions can readily be applied. For $\Delta k=0$ the incoming electron has a definite momentum $\hbar q$
and a definite energy $E_i=\frac{\hbar^2 q^2}{2m^*}$, hence the total energy $E_{tot}$ of the system is also a well-defined quantity, $E_{tot}=E_i+\varepsilon_l$, where $\varepsilon_l$ is the ring ground-state energy.
Therefore, the two-electron wave function for the scattering satisfies the time-independent Schr\"odinger equation
\begin{equation}
H \Psi({\bf r}_1,{\bf r}_2)=E_{tot}\Psi({\bf r}_1,{\bf r}_2). \label{ti}
\end{equation}
We use the form of the function
\begin{equation}
\Psi({\bf r}_1,{\bf r}_2)=\sum_l \psi_l({\bf r}_1)\phi_l({\bf r}_2),\label{tidw}
\end{equation}
which is a time-independent counterpart of Eq. (\ref{dw}).
Insertion of Eq. (\ref{tidw}) into Eq. (\ref{ti}) followed by projection on a ring eigenstate gives
a system of eigenequations for $\psi_l$,
\begin{equation}
\sum_l \left([\varepsilon_l +h_c]\delta(k,l)+W_{kl}(y)\right)\psi_l(y)=E_{tot} \psi_k(y). \label{eqs}
\end{equation}
The electron in the ring is initially in its ground-state with angular momentum $l$ -- as in the
time-dependent picture.
Therefore, the partial wave $\psi_l$ at the input side is a superposition
of the incoming and backscattered waves $a\exp(i q_l y)+b\exp(-iq_ly)$. Since $\Psi$ is defined
up to a normalization constant, at the bottom of the computational box ($3\,\mu$m long)
we simply set $\psi_l(y=0)=a+b=1$ as the boundary condition. After the solution
of Eqs. (\ref{eqs}) the values of the incoming $a$ and the backscattered $b$ amplitudes
are extracted from the form of $\psi_l$ along the lead.
The partial waves for $k\neq l$ appear only due to the interaction of the incoming electron with the ring,
and they all correspond to the electron flow from the ring to the ends of the channels.
Thus, far above [below] the ring the partial wave function corresponding to the $k$-th angular momentum quantum number describes
the transferred [backscattered] electron and has the form $c_k \exp(i q_k y)$
[$d_k \exp(-i q_k y)$], with $q_k=\sqrt{\frac{2m^*}{\hbar^2}\left(E_{tot}-\varepsilon_k\right)}$.
For $E_{tot}>\varepsilon_k$ the wave vector $q_k$ is real and the boundary condition $\psi_k(y+\Delta y)=\psi_k(y) \exp(i q_k \Delta y)$
[$\psi_k(y+\Delta y)=\psi_k(y) \exp(-i q_k \Delta y)$] is applied at the top [bottom] end of the computational channel.
For $E_{tot}<\varepsilon_k$ the wave vector $q_k$ is imaginary and the wave function vanishes
exponentially along the lead. The partial waves with imaginary $q_k$ are counterparts of the evanescent modes \cite{EM} for scattering
in two-dimensional channels. For imaginary wave vectors we put zero for $\psi_k$ at the ends of the computational box.
Upon solution of Eq. ($\ref{eqs}$), the amplitudes $a,b,c_k,d_k$ are calculated. The total transfer probability
is given by $T=\sum_k T_k$ with $T_k=\frac{|c_k|^2}{|a|^2} \frac{q_k}{q_l}$, similarly
the backscattering probability is $R=\sum_k R_k$ with $R_k=\frac{|d_k|^2}{|a|^2} \frac{q_k}{q_l}$.
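The flux factors $q_k/q_l$ in $T_k$ and $R_k$ can be illustrated on a toy problem (a 1D potential step, not the ring system; $\hbar=m=1$): the transmitted wave carries a wave vector different from the incident one, and only the flux-normalized probabilities sum to unity.

```python
import math

def step_TR(E, V0):
    """Flux-normalized T and R for a 1D potential step, E > V0, hbar = m = 1."""
    q = math.sqrt(2 * E)           # incident wave vector
    qp = math.sqrt(2 * (E - V0))   # transmitted wave vector
    r = (q - qp) / (q + qp)        # reflection amplitude
    t = 2 * q / (q + qp)           # transmission amplitude
    T = abs(t)**2 * qp / q         # flux factor qp/q, as in T_k
    R = abs(r)**2                  # backscattered wave keeps q here
    return T, R

T, R = step_TR(2.0, 1.0)
```

Dropping the factor $q'/q$ would give $|t|^2\approx 1.37>1$ in this example, which is why the bare amplitudes are not probabilities.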
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=100 250 600 800, width=50mm] {EmptyFig1b.eps}
}}
\end{tabular}
\caption{Transfer probability of the electron through the channel as a function of the incoming electron energy for
three values of the magnetic field. The inset shows the charge density as obtained by the stationary transport description
for three values of the magnetic field.}
\label{model}
\end{figure}
\begin{figure}[ht!]
\centering
\hbox{\rotatebox{0}{
\includegraphics[bb=100 250 400 800, width=30mm] {EmptyFig.eps}
}}
\caption{The green, blue and black lines show the packet transfer probability through the system of Fig. 1,
as calculated by the wave packet simulation for several wave vector dispersions $\Delta k\leq 10^{-3}$/nm, as functions
of the external magnetic field. The red line shows the result of the time-independent scattering problem ($\Delta k=0$).
The horizontal line shows the transfer probability for fixed charge of the ring.
}
\label{packets}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
(a) \hbox{\rotatebox{0}{
\includegraphics[bb=100 250 600 780, width=50mm] {EmptyTk.eps} }}
\\ (b)
\hbox{\rotatebox{0}{
\includegraphics[bb=100 250 600 780, width=50mm] {EmptyRk.eps} }}
\end{tabular}
\caption{Transfer (a) and backscattering (b) probability associated with angular momentum $k$ of the ring
in the final scattering process (see text).}
\label{cnt}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=200 250 350 600, width=15mm] {BarierPot.eps}
}}
\end{tabular}
\caption{Double barrier structure used as the energy filter. The inset shows the transmission probability through the barrier as a function of energy,
with a peak at $1.6$ meV -- the incident electron energy. The ring center is set at $y_c=1500$ nm.}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=200 250 350 700, width=20mm] {BarieraFig1.eps}
}}
\end{tabular}
\caption{Electron transfer probability for the DBS placed above (red curve)
or below (blue curve) the ring for the electron incident from the lower end of the wire.
The dashed curve shows the transfer probability for the electron going down with the
DBS placed below the ring.
}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=200 250 350 600, width=20mm] {BarieraFig1d.eps}
}}
\end{tabular}
\caption{Density of partial waves for the DBS placed above the ring. Location of the ring ($y=1500$ nm) is marked
by a circle on the horizontal axis.}\label{xox}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
(a)
\hbox{\rotatebox{0}{
\includegraphics[bb=100 250 600 800, width=60mm] {BarieraFig1a.eps}
}} \\ (b)
\hbox{\rotatebox{0}{
\includegraphics[bb=100 250 600 800, width=60mm] {BarieraFig1b.eps}
}}
\end{tabular}
\caption{ Same as Fig. \ref{xox}, only for the DBS placed below the ring. }
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=200 250 350 700, width=15mm] {AA.eps}
}}
\end{tabular}
\caption{Electron transfer probability for the DBS placed both below
and above the ring for the electron going up (blue curve) or down (red dashed curve) the wire.}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{l}
\hbox{\rotatebox{0}{
\includegraphics[bb=200 250 350 700, width=17mm] {AAdens.eps}
}}
\end{tabular}
\caption{ Same as Fig. \ref{xox}, only for two DBS: one below and one above the ring. }
\end{figure}
\section{Results}
In Fig. 2 we plotted the electron transfer probability obtained by the time-independent method as a function of the incident electron energy,
for three values of the magnetic field. For $E_i<1$ meV the transfer probability vanishes
and for $E_i>3$ meV the value of $T$ becomes close to 1 independently of $B$. Around $E_i=1.6$ meV a distinct asymmetry of $T$ as a function of $B$ is found.
The inset displays the charge density within the ring calculated as $\rho({\bf r}_2)=\int d{\bf r}_1 |\Psi({\bf r}_1,{\bf r}_2)|^2$.
For $B=0.4$ T the density is shifted off the channel (to the right of the ring), and consistently $T$ is larger.
The results of the time-dependent simulation for the packet {\it average} energy $\langle E_i\rangle=\frac{\hbar ^2 q^2}{2m^*}=1.6$ meV are plotted in Fig. 3 as functions of $B$ for a number of initial dispersions of the wave vector $\Delta k$.
The horizontal line shows the result obtained for a rigid charge of the ring which is independent of $B$.
All the $B$ dependence of the transfer probabilities given in Fig. 3 is due to the properties of the ring as an inelastic scatterer
which change with the magnetic field. The discontinuities present in the transfer probabilities at $B=\pm B_0=\pm 0.73$ T result from ground-state angular
momentum transitions within the ring [see the top inset to Fig. 1]. With $\Delta k$ decreasing to 0 the results converge to the
result of the stationary description of the scattering for the
incoming electron of {\it definite} energy $E_i=\frac{\hbar ^2 q^2}{2m^*}=1.6$ meV, which is plotted with
the red line in Fig. 3.
The rest of the results presented in this work were obtained with the stationary description of the transport.
The electron transfer probability as depicted in Fig. 3 is a distinctly asymmetric function of $B$. The asymmetry along with the character of the discontinuities
at the ring ground-state transformations can be understood as due to the relation of the backscattering to the angular momentum absorption by the ring. The incoming (backscattered) electron has a positive (negative) angular momentum with
respect to the center of the ring. When the ring electron compensates for the loss of angular momentum the backscattering is more probable.
Let us concentrate on the magnetic field interval $[-B_0,B_0 ]$ in which the ring ground state corresponds to $l=0$.
The absorption of the angular momentum by the ring is associated with transition from $l=0$ to $l=1$ energy level.
This is less energetically expensive when $B$ becomes negative due to decreasing energy spacing
between the ground state energy and the $l=1$ energy level (see the inset to Fig. 1).
Consistently, the contribution of $l=1$ energy level to the total backscattering probability grows
as $B$ decreases below 0 -- see Fig. 4(b). Fig. 4(a) shows that for $B$ just above the ring-state transition the $l=1$ ring state dominates also in the transfer probability.
Below the ground-state angular momentum transition which occurs at $B=-0.73$ T
the ring ground state is $l=1$ and the absorption of angular momentum by the ring requires an appearance
of the $l=2$ wave function in the final scattering process. This becomes energetically expensive for $B<-B_0$,
hence the jump of $T$ that is observed in Fig. 3 at the ring ground-state transformation. As $B$ is decreased further, $T$ drops and $l=2$ starts to dominate in the backscattering probability [see Fig. 4(b)].
Our results for the single-electron scattering indicate that the energy absorption is associated both with the electron transfer [Fig. 4(a)] and backscattering [Fig. 4(b)], which is accompanied by magnetic symmetry violation for the electron transfer probability.
We found that one can eliminate selectively the effects of inelastic scattering in the transferred or backscattering waves by a proper tailoring of the potential profile along the channel.
For that purpose we used a double barrier structure (DBS) with center placed on the channel far (1200 nm) below the ring.
Figure 5 shows the applied potential profile, and the inset to the figure the electron transfer probability
through the DBS. We can see the resonant peak at the electron energy of 1.6 meV.
The resonant energy was set equal to the energy of the incoming electron, so that the DBS acts like an energy filter -- it is opaque to an electron that has lost a part of its energy, i.e. to the partial waves with $k\neq l$.
In Fig. 6 we plotted with the red line the transfer probability for the DBS energy filter placed above the ring.
Fig. 7 shows the plot of partial waves along the channel. Above the DBS
one finds only the partial wave associated with $l=0$, i.e. with the ground-state of the ring.
The electron can transfer across the structure only provided that it preserves its initial energy.
Therefore, no excitation of the ring electron is possible when the channel electron transfers across the structure.
In Fig. 7 we can see that far below the ring we have an interference of $l=0$ incoming and backscattered waves.
No interference is observed in the partial wave with $l=1$ near $y=0$ ($|\psi_1|$ is constant), since
there is no incoming wave with $l=1$. Nevertheless an oscillation of the $l=1$ wave is observed between the
DBS and the ring. The potential of the ring and the DBS form
a wide quantum well in which the partial waves [for instance $l=1$ in Fig. 7] oscillate back and forth. The presence of the wide well
is also responsible for the resonances appearing in the $T(B)$ dependence in Fig. 6. $T(B)$ for the DBS placed above the ring remains an asymmetric function of $B$.
The transfer probability $T$ becomes an even function of $B$ (blue curve in Fig. 6) when the DBS energy filter
is placed below the ring, which removes inelastically scattered partial waves of the total backscattered wave function.
The partial wave function plots given in Fig. 8(a) and Fig. 8(b) show that below the DBS only the partial wave with $l=0$ is found,
but above the structure the partial waves with $l\neq 0$ appear.
For $B>0$ just below $B_0$ we found that $T(B)$ is nearly the same for the double barrier structure
placed either below or above the ring [see the blue and red curves which nearly coincide in Fig. 6 just below $B_0$].
Note that for the DBS below the ring at $B=0.6$ T we find that the contribution of $l\neq 0$ to the transferred wave function
is negligible [Fig. 8(b)]. The absorption of the angular momentum by the ring is weak for $B\rightarrow B_0$ due to the large
energy cost of this ring excitation [see the discussion of Fig. 2], hence the similar results found for both locations
of the DBS.
In Fig. 6 with the dashed curve we plotted the electron transfer probability for the DBS below the ring
and the electron incident from the upper end of the wire. In this case the electron is first scattered by
the ring and then by the DBS. We can see that for a single DBS present within the wire the transfer probability from one
end of the wire to the other differs from the one in the opposite direction (the dashed curve in Fig. 6 can be obtained from
the red one by the inversion $B\rightarrow -B$), i.e. the system acts like a turnstile.
Figure 9 gives the electron transfer probability for two DBS, one placed below and one above the ring.
The inelastic scattering is switched off for both the transferred and backscattered trajectories.
The partial waves given in Fig. 10 show that the ring does get excited, but only while the channel
electron stays between the two DBS.
We find that the transfer probability is symmetric with respect to both the magnetic field and the direction
from which the electron comes to the ring. The small deviations from these symmetries, visible at a closer inspection of Fig. 9,
are due to the small but finite width of the resonance peak (see the inset to Fig. 5).
Inelastic scattering with energy losses smaller than the width of the peak is still allowed.
\section{Discussion}
Results of Figs. 3 and 4 indicate that the asymmetry of the transfer probability as a function of the magnetic field
is a result of (1) the geometrical asymmetry of the system, (2) the inelastic electron scattering
-- the absorption of the angular momentum by the ring,
which is necessarily accompanied by energy absorption -- and (3) the fact that the energy transfer occurs through the electron-electron interaction.
For systems with the two-dimensional electron gas
it was pointed out \cite{mb,bs} that the magnetic asymmetry of conductance may result from the potential
landscape within the device not being an even function of $B$ -- the potential produced by charges
at the edges of the channel in the Hall effect \cite{mb} as the most basic example. In this case
the asymmetry of the charge distribution is translated to the asymmetry of the transport by the electron-electron
interaction.
The role of the electron-electron interaction for the magnetic asymmetry of the transport in the electron gas was also indicated in Refs. \cite{bs,dsz,sk}.
In the present study of the single-electron transport the asymmetry is due to the properties of the ring -- the enhancement
of the backscattering accompanied by absorption of the angular momentum of the channel electron -- which are not
an even function of $B$ due to the form of the ring energy spectrum. Here, the backscattering is only due to the electron-electron interaction.
Although in the linear transport regime the inelastic scattering of the electrons at the Fermi level is blocked by
the fact that the states of lower energies are occupied, in the non-linear transport the inelastic scattering
is not only allowed but necessary for thermalization of the carriers passing between electron reservoirs of
unequal electrochemical potentials. The asymmetry that we find in this work results from the energy transferred
by the channel electron to the ring, i.e. it occurs due to the inelastic scattering.
The magnetic symmetry is restored when the inelastic backscattering is excluded. The invariance of the backscattering
is invoked in explaining $T(B)=T(-B)$ symmetry when
the transfer kinetics is very different for the two magnetic field orientations -- see
the deflection of the electron trajectories by the Lorentz force in Ref. \onlinecite{kalina}.
In the present work the Lorentz force is excluded by the strict 1D approximation for the channel width.
Nevertheless, the different kinetics resulting in the same transfer probability was also found
in Figs. 8(a) and 8(b).
A single DBS placed below the ring restores the magnetic symmetry of the transfer, still only
for the electron injected from one side of the channel and not the other (the microreversibility is not restored -- see Fig. 6).
Thus, for a single DBS present within the wire, the electron transfer probabilities from one end of the wire to the other
and in the opposite direction are unequal. The turnstile character of the system is also a result of the inelastic scattering.
The conditions present in the linear transport regime -- with the inelastic scattering
excluded at both the transfer and the backscattering -- were simulated with two DBS placed
at both sides of the ring. This configuration of energy filters
restores the magnetic symmetry of the system.
The transfer probability becomes
an even function of $B$,
although the kinetics of the electron transfer is not identical for $\pm B$ [Fig. 10].
Moreover, the microreversibility is also restored [Fig. 9], although
the system with two DBS is still not spatially symmetric under a point inversion.
\section{Summary and Conclusions}
We have studied the single-electron scattering process on an electron localized in a quantum ring placed off the electron transport channel.
We developed for that purpose a time-independent approach based on an expansion of the two-electron function in a basis
of ring eigenstates and explained its relation to the numerically exact time-dependent scattering picture.
We have found that the electron transfer probability is an asymmetric function of $B$ and that the asymmetry
results from the energy cost of the angular momentum absorption by the ring which is not an even function of $B$.
We have demonstrated that the symmetry is restored when the electron backscattering with the energy loss is excluded.
The exclusion was performed by a double barrier structure with the resonant state set at the energy of the incoming electron.
In order to remove the turnstile character of the ring as a scatterer one needs to employ a pair of double barrier
structures at both the entrance and the exit to the ring interaction range.
{\bf Acknowledgements}
This work was supported by the Polish Ministry of Science and Higher Education (MNiSW) within research project N N202 103938 for 2010-2013.
Calculations were performed in ACK--CYFRONET--AGH on the RackServer Zeus.
\section{Description of the results}
We consider a $(n+1+m)$--dimensional vector--field $N$ which, expressed in local coordinates $({\rm I}, y, {{\psi}})\in{\mathbb P}={\mathbb I}\times {\mathbb Y}\times {\mathbb T}^m$ (where ${\mathbb I}\subset{\mathbb R}^n$, ${\mathbb Y}\subset{\mathbb R}$ are open and connected; ${\mathbb T}={\mathbb R}/(2\pi{\mathbb Z})$ is the standard torus), has the form
\begin{align}\label{X0}N({\rm I}, y) =v({\rm I}, y)\partial_y+\omega({\rm I}, y) \partial_{{\psi}}\, . \end{align}
The motion equations of $N$
\begin{align}\label{unperturbed vectorfield}\left\{\begin{array}{l}
\displaystyle \dot {\rm I}=0 \\\\
\displaystyle \dot y= v({\rm I}, y)\\\\
\displaystyle \dot{{\psi}}=\omega({\rm I}, y)
\end{array}\right.\end{align}
can be integrated in cascade:
\begin{align}\label{motions1}\left\{\begin{array}{l}
\displaystyle {\rm I}(t)={\rm I}_0 \\\\
\displaystyle y(t)=\eta({\rm I}_0, t)\\\\
\displaystyle {{\psi}}(t)={{\psi}}_0+\int_{t_0}^t \omega({\rm I}_0, \eta({\rm I}_0, t'))d t'
\end{array}\right.\end{align}
with $\eta({\rm I}_0, \cdot)$ being the general solution of the one--dimensional equation $\dot y(t)= v({\rm I}_0, y)$.
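The cascade structure of~\eqref{motions1} is easy to check numerically. A minimal sketch with an illustrative choice of the fields (not taken from the paper): $v=1+{\rm I}^2$, which is bounded away from zero as assumed below, and $\omega={\rm I}+y$, for which the cascade solution is available in closed form.

```python
# Illustrative fields (not from the paper): v = 1 + I^2 >= 1 is bounded
# away from zero, omega = I + y, so the cascade is explicit:
#   y(t)   = y0 + (1 + I0^2) t
#   psi(t) = psi0 + I0 t + y0 t + (1 + I0^2) t^2 / 2
def field(s):
    I, y, psi = s
    return [0.0, 1.0 + I**2, I + y]

def rk4(f, s, dt):
    """One classical Runge--Kutta step for the autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([x + dt/2*k for x, k in zip(s, k1)])
    k3 = f([x + dt/2*k for x, k in zip(s, k2)])
    k4 = f([x + dt*k for x, k in zip(s, k3)])
    return [x + dt/6*(a + 2*b + 2*c + d)
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

I0, y0, psi0, dt, nsteps = 0.3, 0.0, 0.0, 1e-3, 2000
s = [I0, y0, psi0]
for _ in range(nsteps):
    s = rk4(field, s, dt)
t = nsteps * dt   # final time t = 2
```

The action component of the field vanishes identically, so ${\rm I}(t)={\rm I}_0$ holds exactly along the numerical trajectory as well.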
This formula shows that along the solutions of $N$
the coordinates ${\rm I}$ (``actions'') remain constant, while the motion of the coordinates ${{\psi}}$ (``angles'') is coupled with the motion of the ``driving'' coordinate $y$. We assume that $v$ is suitably far from vanishing (for the problem considered in the paper $|v|$ has a positive lower bound). It is to be noted that, without further assumptions on the function $v$ (like, for example, being ``small'', or having a stationary point), nothing prevents the $y$ coordinate from moving {\it fast}. For this reason -- with a slight abuse, due to the fact that fastness may nowise occur -- we refer to the solutions in~\eqref{motions1} as a {\it fast driven system}. The main risk with such a kind of system is that
the solution $q(t)=({\rm I}(t), y(t), {{\psi}}(t))$ of $N$ in~\eqref{motions1}
leaves the domain ${\mathbb P}$ in a finite time.
It is then convenient to define the {\it exit time from ${\mathbb P}$ under $N$} or, more generally, the {\it exit time from a given $W\subseteq{\mathbb P}$ under $N$},
and denote it as $t^{N, W}_{\rm ex}$: the (possibly infinite) first time that $q(t)$
leaves $W$.\\ Let us now replace the vector--field $N({\rm I}, y)$ with a new vector--field of the form
\begin{align}\label{perturbed}X({\rm I}, y, {{\psi}})=N({\rm I}, y)+P({\rm I}, y, {{\psi}}) \end{align}
where the ``perturbation''
\begin{align}P=P_{ 1}({\rm I}, y, {{\psi}})d{\rm I}+P_{ 2}({\rm I}, y, {{\psi}})dy+P_{ 3}({\rm I}, y, {{\psi}})d{{\psi}}\end{align}
is, in some sense, ``small'' (see the next section for precise statements). Let $t^{X, W}_{\rm ex}$ be the exit time from $W$ under $X$, and let $\epsilon$ be a uniform upper bound for the absolute value of $P_{ 1}$ on $W$. Then one has a linear--in--time {\it a--priori} bound for the variations of ${\rm I}$, as follows
\begin{align}\label{aprioribound}|{\rm I}(t)-{\rm I}(0)|\le \epsilon t\quad \forall\ t:\ |t|< t^{X, W}_{\rm ex}\qquad W\subseteq{\mathbb P}\,.\end{align}
We are interested in improving the bound~\eqref{aprioribound}. To readers familiar with Kolmogorov--Arnold--Moser ({\sc kam}) or Nekhorossev theories, this kind of problem is well known: see~\cite{arnold63, nehorosev77, poschel93, guzzoCB16}, or
\cite{cellettiC98, giorgilliLS09, locatelliG07, guzzoEP2020, volpiLS18} for applications to realistic models.
Those theories were originally formulated for Hamiltonian vector--fields (and next extended to more general ODEs), hence, in particular, with $n=m$ and the coordinate $y$ absent. In those cases the unperturbed motions of the coordinates $({\rm I}, {{\psi}})$ are \begin{align}\label{unperturbedKAM}{\rm I}(t)={\rm I}_0\,,\quad {{\psi}}(t)={{\psi}}_0+\omega({\rm I}_0)t\end{align} and the properties of the motions after the perturbing term is switched on depend on the arithmetic properties of the frequency vector $\omega({\rm I}_0)$. Under suitable non--commensurability assumptions on $\omega({\rm I}_0)$ (referred to as ``Diophantine conditions''), {\sc kam} theory ensures the possibility of continuing the unperturbed motions \eqref{unperturbedKAM} for all times. Conversely, if $\omega({\rm I})$ satisfies, on an open set, an analytic property known as ``steepness'' (which is satisfied, e.g., if $\omega$ does not vanish and, moreover, is the gradient of a convex function), Nekhorossev theory allows one to infer -- for {\it all} orbits -- a bound as in~\eqref{aprioribound}, with $e^{-C/\epsilon^{a}}$ replacing $\epsilon $ and $t_{\rm ex}^{X, W}=e^{C/\epsilon^{b}}$, with suitable $a$, $b$, $C>0$. It is to be remarked that in the Nekhorossev regime the exponential scale of $t_{\rm ex}^{X, W}$ is an intrinsic consequence of steepness, responsible for a process known as ``capture in resonance''.
In the case considered in this paper such a phenomenon does not seem to exist and hence the exit time $t_{\rm ex}^{X, W}$ has no reason to be long. Nevertheless, motivated by an application to celestial mechanics described below, we are interested in replacing $\epsilon$ in~\eqref{aprioribound} with a smaller number.
We shall prove the following result (note that steepness conditions are not needed here).
\vskip.2in
\noindent
{\bf Theorem A} {\it Let $X=N+P$ be real--analytic, where $N$ is as in~\eqref{X0}, with $v\not\equiv 0$. Under suitable ``smallness'' assumptions involving $\omega$, $\partial\omega$, $\partial v$ and $P$, the bound in~\eqref{aprioribound} holds with $e^{-C/\epsilon^{a}}$ replacing $\epsilon$, with a suitable $a$, $C>0$.}
\vskip.1in
\noindent
A quantitative statement of Theorem A is given in Theorem~\ref{Normal Form Lemma} below. In addition, in view of our application, we also discuss a version for the case when analyticity in ${{\psi}}$ fails; this is Theorem~\ref{Normal Form LemmaNEW}.
\noindent
To describe how we shall use Theorem A (more precisely, Theorem~\ref{Normal Form LemmaNEW}), we make a digression on the three--body problem and the {\it renormalizable integrability} of the simply averaged Newtonian potential~\cite{pinzari19}. The Hamiltonian governing the motions of a three--body problem in the plane where the masses are $1$, $\mu$ and $\kappa$ is (see, e.g.,~\cite{fejoz04})
\begin{align} {\rm H}_{\rm 3b}&=\left(1+\frac{1}{{\kappa}}\right)\frac{\|{\mathbf y}\|^2}{2}+\left(1+\frac{1}{\mu}\right)\frac{\|{\mathbf y}'\|^2}{2}-\frac{{\kappa}}{\|{\mathbf x}\|}-\frac{\mu}{\|{\mathbf x}'\|}-\frac{{\kappa}\mu}{\|{\mathbf x}-{\mathbf x}'\|}+{{\mathbf y}\cdot{\mathbf y}'}\end{align}
where ${\mathbf y}$, ${\mathbf y}'\in {\mathbb R}^2$; ${\mathbf x}$, ${\mathbf x}'\in {\mathbb R}^2$, with ${\mathbf x}\ne 0\ne {\mathbf x}'$ and ${\mathbf x}\ne {\mathbf x}'$, are impulse--position coordinates; $\|\cdot\|$ denotes the Euclidean norm and the gravity constant has been chosen equal to $1$, by a proper choice of the units system.
We rescale
\begin{align}
({\mathbf y}', {\mathbf y})\to \frac{{{\kappa}}^2}{1+{{\kappa}}}({\mathbf y}', {\mathbf y})\, ,\quad ({\mathbf x}', {\mathbf x})\to \frac{1+{{\kappa}}}{{{\kappa}}^2}({\mathbf x}', {\mathbf x})\end{align}
multiply the Hamiltonian by $\frac{1+{{\kappa}}}{{{\kappa}}^3}$ and obtain
\begin{align}\label{HH}{\rm H}_{\rm 3b}({\mathbf y}', {\mathbf y}, {\mathbf x}', {\mathbf x})=\frac{\|{\mathbf y}\|^2}{2}-\frac{1}{\|{\mathbf x}\|}
+\delta\left(\frac{\|{\mathbf y}'\|^2}{2}-\frac{\alpha}{\|{\mathbf x}-{\mathbf x}'\|}-\frac{\beta}{\|{\mathbf x}'\|}\right)
+\gamma {\mathbf y}\cdot{\mathbf y}'\end{align}
with \begin{align}
\alpha:=\frac{\mu^2(1+{\kappa})}{{\kappa}(1+\mu)}\, ,\quad
\beta:=\frac{\mu^2(1+{\kappa})}{{\kappa}^2(1+\mu)}\, ,\quad \gamma :=\frac{{\kappa}}{1+{\kappa}}\, ,\quad \delta:=\frac{{\kappa}(1+\mu)}{\mu(1+{\kappa})}\,.
\end{align}
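As a quick consistency check on these definitions (the mass values below are illustrative), note the exact identities $\alpha\delta=\mu$ and $\alpha=\kappa\beta$, which follow directly:

```python
kappa, mu = 1e-3, 1e-6   # illustrative mass ratios, not from the paper

alpha = mu**2 * (1 + kappa) / (kappa * (1 + mu))
beta  = mu**2 * (1 + kappa) / (kappa**2 * (1 + mu))
gamma = kappa / (1 + kappa)
delta = kappa * (1 + mu) / (mu * (1 + kappa))
```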
In order to simplify the analysis a little bit, we introduce a main assumption.
The Hamiltonian
${\rm H}_{\rm 3b}$ in~\eqref{HH} includes the Keplerian term
\begin{align}\label{Kep}\frac{\|\mathbf y\|^2}{2}-\frac{1}{\|\mathbf x\|}=-\frac{1}{2\Lambda^2}\, . \end{align}
We assume that this term is ``leading'' in the Hamiltonian. By averaging theory, this assumption allows us to replace
(at the cost of a small error) ${\rm H}_{\rm 3b}$ by its $\ell$--average
\begin{align}\label{ovlH}\overline{\rm H}=-\frac{1}{2\Lambda^2}+\delta{\rm H}\ \end{align}
where $\ell$ is the mean anomaly associated to~\eqref{Kep}, and\footnote{ Remark that
${\mathbf y}(\ell)$ has vanishing $\ell$--average so that the last term in~\eqref{HH} does not survive.}
\begin{align}\label{secular}
{\rm H}:=\frac{\|{\mathbf y}'\|^2}{2 }-\alpha{\rm U}-\frac{\beta}{\|{\mathbf x}'\|}
\end{align}
with
\begin{align}\label{Usb}{\rm U}:=\frac{1}{2\pi}\int_0^{2\pi}\frac{d\ell}{\|{\mathbf x}'-{\mathbf x}(\ell)\|}\end{align}
being the ``simply\footnote{Here, ``simply'' is used as opposed to the more familiar ``doubly'' averaged Newtonian potential, most often encountered in the literature; e.g.~\cite{laskarR95, fejoz04, pinzari-th09, chierchiaPi11b, chierchiaPi11c}.} averaged Newtonian potential''. We recall that the mean anomaly $\ell$ is defined as the area spanned by ${\mathbf x}$ on the Keplerian ellipse generated by \eqref{Kep}, relative to the perihelion ${\mathbf P}$ of the ellipse, in $2\pi$ units.
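The average~\eqref{Usb} can be computed numerically by switching to the eccentric anomaly $E$: since $\ell=E-e\sin E$, one has $d\ell=(1-e\cos E)\,dE$, and with unit semi-major axis ($\Lambda=1$, as assumed later in the text) the ellipse can be parametrized as ${\mathbf x}(E)=(\cos E-e,\,\sqrt{1-e^2}\sin E)$. A sketch (the eccentricity and the positions ${\mathbf x}'$ below are illustrative):

```python
import math

def U(xp, yp, e, n=2000):
    """Midpoint quadrature of (1/2pi) int dl / |x' - x(l)| over one period,
    via the eccentric anomaly: dl = (1 - e cos E) dE, unit semi-major axis."""
    b = math.sqrt(1 - e**2)
    s = 0.0
    for i in range(n):
        E = 2 * math.pi * (i + 0.5) / n
        x, y = math.cos(E) - e, b * math.sin(E)
        s += (1 - e * math.cos(E)) / math.hypot(xp - x, yp - y)
    return s / n
```

Far from the ellipse, $U$ approaches the monopole $1/\|{\mathbf x}'\|$, with the leading correction coming from the $\ell$-averaged dipole of the orbit.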
From now on we focus on the motions of the averaged Hamiltonian~\eqref{secular}, bypassing any quantitative statement concerning the averaging procedure, as this would lead much beyond the purposes of the paper\footnote{As we consider a region in phase space where ${\mathbf x}'$ is very close to the instantaneous Keplerian orbit of ${\mathbf x}$, quantifying the values of the mass parameters and the distance which allow for the averaging procedure is a delicate (even though crucial) question, which, by its nature, demands a careful use of regularisations. Due to the non--trivial underlying analysis, we choose to limit ourselves to
point out that
the renormalizable integrability of the Newtonian potential has a nontrivial dynamical impact on the simply averaged three--body problem, and explains the existence of the motions discussed here, which would not be justified otherwise.}.
Neglecting the first term in~\eqref{ovlH}, which is an inessential additive constant for $\overline{\rm H}$, and reabsorbing the constant $\delta$ with a time change, we are led to look at the Hamiltonian ${\rm H}$ in~\eqref{secular}. We denote as $\mathbb E$ the Keplerian ellipse generated by the Hamiltonian~\eqref{Kep}, for negative values of the energy. Without loss
of generality,
we assume that $\mathbb E$ is not a circle and\footnote{We can do this as the Hamiltonian ${\rm H}_{\rm 3b}$ rescales by a factor $\beta^{-2}$ as $({\mathbf y}', {\mathbf y})\to\beta^{-1}({\mathbf y}', {\mathbf y})$ and $({\mathbf x}', {\mathbf x})\to\beta^2({\mathbf x}', {\mathbf x})$.}
that $\Lambda=1$.
Remark that, as the mean anomaly $\ell$ is averaged out, we lose all information concerning the position of ${\mathbf x}$ on $\mathbb E$, so we shall only need two couples of coordinates to determine the shape of
$\mathbb E$ and the vectors ${\mathbf y}'$, ${\mathbf x}'$. These are:
\begin{itemize}
\item[\tiny\textbullet] the ``Delaunay couple'' $({\rm G}, {\rm g})$, where ${\rm G}$ is the Euclidean length of ${\mathbf x}\times {\mathbf y}$ and ${\rm g}$ detects the perihelion. We remark that ${\rm g}$ is measured with respect to ${\mathbf x}'$ (instead of with respect to a fixed direction), as the SO(2) reduction we use relies on
a rotating frame which moves with ${\mathbf x}'$ (compare the formulae in~\eqref{coord} below);
\item[\tiny\textbullet] the ``radial--polar couple'' $({\rm R}, {\rm r})$, where ${\rm r}:=\|{\mathbf x}'\|$ and ${\rm R}:=\frac{{\mathbf y}'\cdot{\mathbf x}'}{\|{\mathbf x}'\|}$.
\end{itemize}
\noindent
Using the coordinates above, the Hamiltonian in~\eqref{secular} becomes
\begin{align}\label{3bpav}{\rm H}({\rm R}, {\rm G}, {\rm r}, {\rm g})=\frac{{\rm R}^2}{2}+\frac{({\rm C}-{\rm G})^2}{2{\rm r}^2}-\alpha{\rm U}({\rm r}, {\rm G}, {\rm g})
-\frac{\beta}{{\rm r}}\end{align}
where ${\rm C}=\|{\mathbf x}\times{\mathbf y}+{\mathbf x}'\times{\mathbf y}'\|$ is the total angular momentum of the system, and we have assumed ${\mathbf x}\times{\mathbf y}\parallel{\mathbf x}'\times{\mathbf y}'$, so that $\|{\mathbf x}'\times{\mathbf y}'\|={\rm C}-\|{\mathbf x}\times{\mathbf y}\|={\rm C}-{\rm G}$.
\noindent
The Hamiltonian~\eqref{3bpav} now has two degrees of freedom. As the energy is conserved, its motions evolve on the $3$--dimensional manifolds ${\cal M}_c=\{{\rm H}=c\}$. On each such manifold the evolution is associated to a $3$--dimensional vector--field $X_c$, given by the velocity field of some triple of coordinates on
${\cal M}_c$. As an example, one can take the triple $({\rm r}, {\rm G}, {\rm g})$, even though a more convenient choice will be made below.
To describe the motions we are looking for, we need to recall a remarkable property of the function ${\rm U}$, pointed out in~\cite{pinzari19}. First of all, one has to note that ${\rm U}$ is integrable, as it is a function of $({\rm r}, {\rm G}, {\rm g})$ only. But the main point is that there exists a function ${\rm F}$ of two arguments such that
\begin{align}\label{relation***}{\rm U}({\rm r}, {\rm G}, {\rm g})={\rm F}({\rm E}({\rm r}, {\rm G}, {\rm g}), {\rm r})\end{align}
where
\begin{align}\label{E}{\rm E}({\rm r}, {\rm G}, {\rm g})={\rm G}^2+{\rm r}\sqrt{1-{\rm G}^2}\cos{\rm g}\,.\end{align}
The function ${\rm E}$ is referred to as the {\it Euler integral}, and we express~\eqref{relation***} by saying that ${\rm U}$ is {\it renormalizably integrable via the Euler integral}. This circumstance implies that the level sets of ${\rm E}$, namely the curves \begin{align}\label{level curves}{\rm G}^2+{\rm r}\sqrt{1-{\rm G}^2}\cos{\rm g}={\cal E}\end{align} are also level sets of ${\rm U}$. On the other hand, the phase portrait of~\eqref{level curves} at fixed ${\rm r}$ is completely explicit and has been studied in~\cite{pinzari20b}. We recall it now. Let us fix (by periodicity of ${\rm g}$) the strip $[-\pi, \pi]\times [-1, 1]$.
For $0<{\rm r}<1$ or $1<{\rm r}<2$ it includes two minima $(\pm\pi, 0)$ on the ${\rm g}$--axis; two symmetric maxima on the ${\rm G}$--axis and one saddle point at $(0, 0)$.
When ${\rm r}>2$ the saddle point disappears and $(0, 0)$ turns into a maximum.
The phase portrait includes two separatrices when $0<{\rm r}<1$ or $1<{\rm r}<2$; one separatrix if ${\rm r}>2$.
These are the level sets $$\left\{\begin{array}{l}{\cal S}_0({\rm r})=\{{\cal E}={\rm r}\}\,,\quad 0<{\rm r}<1\,,\ 1<{\rm r}<2\\\\
{\cal S}_1({\rm r})=\{{\cal E}=1\}\,,\quad 0<{\rm r}<1\,,\ 1<{\rm r}<2\,,\ {\rm r}>2
\end{array}\right.$$
with ${\cal S}_0({\rm r})$ being the separatrix through the saddle; ${\cal S}_1({\rm r})$ the level set through circular orbits.
Rotational motions in between ${\cal S}_0({\rm r})$ and ${\cal S}_1({\rm r})$ exist only for $0<{\rm r}<1$. The minima and the maxima are surrounded by librational motions, and different motions (librations about different equilibria, or rotations) are separated by ${\cal S}_0({\rm r})$ and ${\cal S}_1({\rm r})$. All of this is represented in Figure~\ref{figure1}.
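The phase portrait just described can be verified by an elementary computation, sketched here for the reader's convenience (a routine check, not taken verbatim from~\cite{pinzari20b}). From~\eqref{E},

```latex
% Critical points of E(r, ., .) at fixed r:
\begin{align*}
\partial_{\rm g}{\rm E}=-{\rm r}\sqrt{1-{\rm G}^2}\,\sin{\rm g}\,,\qquad
\partial_{\rm G}{\rm E}=\Big(2-\frac{{\rm r}\cos{\rm g}}{\sqrt{1-{\rm G}^2}}\Big)\,{\rm G}\,,
\end{align*}
% so (0, 0) and (±π, 0) are critical points. At (0, 0) the Hessian is
% diag(2 − r, −r): indefinite (a saddle) for 0 < r < 2, negative definite
% (a maximum) for r > 2. At (±π, 0) it is diag(2 + r, r): positive definite
% (minima). Evaluating E at these points gives E(0, 0) = r, while on the
% circular orbits G = ±1 one has E = 1, whence the levels
% S_0(r) = {E = r} and S_1(r) = {E = 1}.
```

This reproduces the saddle/maximum transition at ${\rm r}=2$ and identifies the levels of the two separatrices.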
\begin{figure}[htbp]
\subfigure[\label{a}]{\includegraphics[width=0.30\textwidth]{fig1.jpg}}
\hfill
\subfigure[\label{b}]{\includegraphics[width=0.30\textwidth]{fig2.jpg}}
\hfill
\subfigure[\label{c}]{\includegraphics[width=0.30\textwidth]{fig3.jpg}}
\caption{Sections, at ${\rm r}$ fixed, of the level surfaces of ${\rm E}$. (a): $0<{\rm r}<1$; (b): $1<{\rm r}< 2$; (c): ${\rm r}>2$.}
\label{figure1}
\subfigure[\label{d}]{\includegraphics[width=0.30\textwidth]{fig4.jpg}}\hfill
\subfigure[\label{e}]{\includegraphics[width=0.30\textwidth]{fig5.jpg}}\hfill
\subfigure[\label{f}]{\includegraphics[width=0.30\textwidth]{fig6.jpg}}
\caption{Plots of the level surfaces of ${\rm E}$ in the space $({\rm g}, {\rm G}, {\rm r})$. (a): $0<{\rm r}<1$; (b): $1<{\rm r}< 2$; (c): ${\rm r}>2$.}
\label{figure2}
\end{figure}
In Figure~\ref{figure2} the same level sets are drawn in the 3--dimensional space $({\rm r}, {\rm G}, {\rm g})$. The spatial visualisation turns out to be useful for the purposes of the paper, as the coordinate ${\rm r}$, which stays fixed under ${\rm E}$, is instead moving under ${\rm H}$, due to its dependence on ${\rm R}$; see~\eqref{3bpav}. We denote as ${\cal S}_0$ the union of all the ${\cal S}_0({\rm r})$ with $0\le {\rm r}\le 2$.
It is to be noted that, while ${\rm E}$ is perfectly well defined along ${\cal S}_0$, ${\rm U}$ is not. Indeed, as
\begin{align}\label{S0}{\cal S}_0({\rm r})=\Big\{({\rm G}, {\rm g}):\quad {\rm G}^2+{\rm r}\sqrt{1-{\rm G}^2}\cos{\rm g}={\rm r}\, ,\ -1\le {\rm G}\le 1\, ,\quad {\rm g}\in {\mathbb T}\Big\}\qquad 0\le{\rm r}<2\end{align}
we have\footnote{Rewriting~\eqref{S0} as
\begin{align}\label{relation1}
{\rm r}=\frac{{\rm G}^2}{1-\sqrt{1-{\rm G}^2}\cos{\rm g}}
\end{align}
tells us that $({\rm G}, {\rm g})\in {\cal S}_0({\rm r})$ if and only if
${\mathbf x}'$ occupies in the ellipse ${\mathbb E}$ the position with true anomaly $\nu=\pi-{\rm g}$.} ${\rm U}({\rm r}, {\rm G}, {\rm g})=\infty$ for $({\rm G}, {\rm g})\in{\cal S}_0({\rm r})$, for all $0\le{\rm r}\le2$.
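The divergence of ${\rm U}$ along ${\cal S}_0$ has a transparent geometric reason, which can be checked directly (a sketch, under the normalisation $\Lambda=1$, so that ${\mathbb E}$ has semi--major axis $1$ and eccentricity $e=\sqrt{1-{\rm G}^2}$):

```latex
% Focal polar equation of the Keplerian ellipse E:
\begin{align*}
\|{\mathbf x}(\nu)\|=\frac{1-e^2}{1+e\cos\nu}
=\frac{{\rm G}^2}{1+\sqrt{1-{\rm G}^2}\,\cos\nu}\,.
\end{align*}
% Setting ν = π − g turns this into \eqref{relation1}: the condition
% (G, g) ∈ S_0(r) says precisely that the ellipse E meets the sphere
% ‖x'‖ = r at the point with true anomaly π − g. At the corresponding
% value of ℓ the integrand in \eqref{Usb} has a non--integrable
% singularity, whence U = ∞ on S_0(r).
```

In other words, ${\cal S}_0$ collects exactly the collision configurations between ${\mathbf x}'$ and the instantaneous ellipse of ${\mathbf x}$.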
\noindent
The natural question now arises whether
any of the ${\cal E}$--levels in Figure~\ref{figure2} is an ``approximate'' invariant manifold for the Hamiltonian ${\rm H}$ in~\eqref{3bpav}. In~\cite{pinzari20a} and~\cite{diruzzaDP20} a positive answer has been given for the case ${\rm r}>2$, corresponding to panels (c).
In this paper, we focus on motions close to
${\cal S}_0$ with ${\rm r}$ in a left neighbourhood of $2$ (panels (b)). This portion of phase space is denoted as ${\cal C}$. By the discussion above, motions in ${\cal C}$ are to be understood as ``quasi--collisional''.
\noindent
To state our result, we denote as ${\rm r}_{\rm s}(A)$ the value of ${\rm r}$ such that the area encircled by ${\cal S}_0({\rm r}_{\rm s}(A))$ is $A$. Then the set $\{\exists\ A:\ {\rm r}={\rm r}_{\rm s}(A)\}$ corresponds to ${\cal S}_0$. We prove:
\vskip.1in
\noindent
{\bf Theorem B} {\it Inside the region ${\cal C}$ there exists an open set $W$ such that, along any motion with initial datum in $W$ and for all $t$ with $|t|\le t_{\rm ex}^{X, W}$, the ratio between the absolute variation of the Euler integral ${\rm E}$ from time $0$ to time $t$ and the {\it a--priori} bound $\epsilon t$ (where $\epsilon :=|P_1|_\infty$, with $P_1$ being the action component of the vector--field)
does not exceed $C e^{-L^3/C}$, provided that the initial value of ${\rm r}$ is $e^{-L}$ away from ${\rm r}_{\rm s}(A)$, with $L>0$ sufficiently large.
}
\vskip.1in
\noindent
The proof of Theorem B, fully given in the next section, relies
on a careful choice of coordinates $(A, y, \psi)$ on ${\cal M}_c$, where $y$ is diffeomorphic to ${\rm r}$, while $(A, \psi)$ are the action--angle coordinates of ${\rm E}({\rm r}, \cdot, \cdot)$, such that the associated vector--field has the form in~\eqref{perturbed} with $n=m=1$. The diffeomorphism ${\rm r}\to y$ allows $X_c$ to keep its regularity on ${\cal S}_0$.
\vskip.1in
\noindent
Before switching to proofs, we recall how the theme of collisions in $N$--body problems (with $N\ge 3$) has been treated so far.
As the literature in the field is vast, we by no means claim completeness.
In the late 1890s H. Poincar\'e~\cite{Poincare:1892} conjectured the existence of special solutions in a model of the three--body problem usually referred to as planar, circular, restricted three--body problem ({\sc pcrtbp}).
According to Poincar\'e's conjecture, when
one of the primaries has a small mass $\mu$, the orbit of an infinitesimal body approaching a close encounter with the small primary consists of
two Keplerian arcs glued together so as to form a cusp. He named these solutions {\it second species solutions}, and their existence was subsequently proved in~\cite{bolotin2005, bolotin2006a, bolotin2006b, bolotin2006c, bolotinN2013, marcoN1995, henrard1980}.
In the early 1900s, J.~Chazy classified all the possible final motions of the three--body problem, including the possibility of collisions
~\cite{chazy1922}. The study was reconsidered in~\cite{alekseev1971, alekseev1981}.
After the advent of {{\sc kam}} theory, the existence of almost--collisional quasi--periodic orbits was proven~\cite{chenciner1988, fejoz02, lei15}.
The papers~\cite{saari1971, saari1973, fleischerK2019a, fleischerK2019b, moeckel1980, moeckel1981, moeckel1989, moeckel2007}
deal with the rare occurrence of collisions or with the existence of chaos in the proximity of collisions.
In
\cite{guardiaKZ19} it is proved that for {\sc pcrtbp} there exists an open set in phase space of fixed measure, where the set of initial points which lead to collision is
${\rm O}(\mu^\alpha)$ dense with some $0<\alpha<1$. In \cite{levicivita06} it is proved that, after collision regularisation, {\sc pcrtbp} is integrable in a neighbourhood of collisions.
In~\cite{cardinG18a, cardinG18} the result has been recently extended to the spatial version, often denoted {\sc scrtbp}.
\section{A Normal Form Theorem for fast driven systems}
In the next Sections~\ref{Weighted norms}--\ref{Proof of NFL} we state and prove a Normal Form Theorem ({\sc nft}) for real--analytic systems. For the purposes of the paper, in Section~\ref{A generalisation when the dependence} we generalise the result, allowing the dependence on the angular coordinate $\psi$ to be just $C^{\ell_*}$ ($\ell_*\in{\mathbb N}$), rather than holomorphic. In all cases, we restrict to the case $n=m=1$. Generalisations to $n$, $m\ge 1$ are straightforward.
\subsection{Weighted norms}\label{Weighted norms}
Let us consider a 3--dimensional vector--field
\begin{align}\label{vectorfield} ({\rm I}, y, \psi)\in{\mathbb P}_{r, \sigma, s}:={\mathbb I}_r \times{\mathbb Y}_\sigma \times {{\mathbb T}}_s\to X=(X_1, X_2, X_3)\in{\mathbb C}^3\end{align}
where ${\mathbb I}\subset {\mathbb R}$, ${\mathbb Y}\subset {\mathbb R}$ are open and connected; ${\mathbb T}= {\mathbb R}/(2\pi {\mathbb Z})$,
which has the form~\eqref{perturbed}.
As usual, if $A\subset {\mathbb R}$ and $r$, $s>0$, the symbols $A_r$, ${\mathbb T}_s$ denote the complex $r$-- and $s$--neighbourhoods of $A$ and ${\mathbb T}$:
\begin{align}A_r:=\bigcup_{x\in A}B_r(x)\,,\qquad {\mathbb T}_s:=\big\{\psi=\psi_1+{\rm i}\psi_2:\ \psi_1\in {\mathbb T}\,,\ \psi_2\in {\mathbb R}\,,\ |\psi_2|<s\big\}\,,\end{align} with $B_r(x)$ being the complex ball centred at $x$ with radius $r$.
We assume each $X_i$ to be holomorphic in
${\mathbb P}_{r, \sigma, s}$, meaning that it has a finite weighted norm, as defined below. If this holds, we simply write $X\in {\cal O}^3_{r, \sigma, s}$.
\noindent
For functions $f:\ ({\rm I}, y, \psi)\in{\mathbb I}_r\times {\mathbb Y}_\sigma\times {\mathbb T}_s\to {\mathbb C}$, we write $f\in{\cal O}_{r, \sigma, s}$ if $f$ is holomorphic in ${\mathbb P}_{r, \sigma, s}$. We let
\begin{align}\label{normfOLD}\|f\|_{u}:=\sum_{k\in {\mathbb Z}}\,\sup_{{\mathbb I}_r\times{\mathbb Y}_\sigma}
|f_{k}({\rm I}, y)|
\,e^{|k|s}\qquad u=(r, \sigma, s)
\end{align}
where
\begin{align}f=\sum_{k\in {\mathbb Z}} f_k({\rm I}, y)e^{{\rm i} k\psi}\end{align}
is the Fourier series associated to $f$ relatively to the $\psi$--coordinate. For $\psi$--independent
functions or vector--fields we simply write $\|\,\cdot\,\|_{r, \sigma}$.
\\
For vector--fields $X:\ ({\rm I}, y, \psi)\in{\mathbb I}_r\times {\mathbb Y}_\sigma\times {\mathbb T}_s\to X=(X_1, X_2, X_3)\in{\mathbb C}^3$, we write $X\in{\cal O}^3_{r, \sigma, s}$ if $X_i\in{\cal O}_{r, \sigma, s}$ for $i=1$, $2$, $3$. We
define the {\it weighted norms}
\begin{align}\label{normXOLD}\VERT X \VERT_{u}^{w}:=\sum_i w^{-1}_i\|X_i\|_{u}\end{align}
where
$w=(w_1$, $w_2$, $w_3)\in {\mathbb R}_+^3$ are the {\it weights}.
The weighted norm enjoys the following properties.
\begin{itemize}
\item[{\tiny\textbullet}] {Monotonicity:}
\begin{align}\label{monotonicity}
\VERT X \VERT_{u}^{w}\le \VERT X \VERT_{u'}^{w}\, ,\quad \VERT X \VERT_{u}^{w'}\le \VERT X \VERT_{u}^{w}\quad \forall\ u\le u'\, ,\ w\le w'
\end{align}
where $u\le u'$ means $u_i\le u_i'$ for $i=1$, $2$, $3$.
\item[{\tiny\textbullet}] {Homogeneity:}
\begin{align}\label{homogeneity}\VERT X \VERT_{u}^{\alpha w}=\alpha^{-1}\VERT X \VERT_{u}^{w}\qquad \forall\ \alpha>0\, . \end{align}
\end{itemize}
\subsection{The Normal Form {Theorem}}
We now state the main result of this section. Observe that, due to the nature of the system, no non--resonance condition or ultraviolet cut--off is needed.
We name Normal Form {Theorem} the following
\begin{theorem}[{\sc nft}]\label{Normal Form Lemma}
Let $u=(r, \sigma, s)$; $X=N+P\in {\cal O}^3_{u}$ and let $w=(\rho$, $\tau$, $t)\in {\mathbb R}_+^3$.
Put
\begin{align}\label{delta1}
Q:=3\,{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\end{align}
and\footnote{${\rm diam}({\cal A})$ denotes the diameter of the set ${\cal A}$.} assume that for some ${p}\in {\mathbb N}$, $s_2\in {\mathbb R}_+$, the following inequalities are satisfied:
\begin{align}\label{NEWu+positive}
0<\rho<\frac{r}{8}\, ,\quad 0<\tau< e^{-s_2}\frac{\sigma}{8}\, ,\quad 0<t<\frac{s}{10}
\end{align}
and
\begin{align}\label{NEWnewcond2}\chi&:= \frac{{\rm diam}({\mathbb Y}_\sigma)}{s_2}\left\|\frac{\partial_y v}{v}\right\|_{r, \sigma}
\le 1 \\\label{theta1}\theta_1&:= 2\,e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{\tau}{t}\le 1\\
\label{theta2}\theta_2&:= 4\,{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{\rho}{\tau}\le 1\\
\label{theta3}\theta_3&:= 8\,{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}\frac{\rho}{t}\le 1\\
\label{NEWnewsmallness}\eta^2&:= \max\left\{\frac{{\rm diam}({\mathbb Y}_\sigma)}{t}\left\|\frac{\omega}{v}\right\|_{r, \sigma}\, ,\ 2^7\,e^{2 s_2}Q^2 (\VERT P\VERT_{u}^{w})^2\right\}<\frac{1}{{p}}\, . \end{align}
Then, with
\begin{align}u_*=(r_\star, \sigma_\star, s_\star)\, ,\quad r_\star:=r-8\rho\, , \quad\sigma_\star=\sigma-8 e^{s_2}\tau\, , \quad s_\star=s-10 t\end{align}
there exists a real--analytic change of coordinates $\Phi_\star$
such that $X_\star:=\Phi_\star X\in {\cal O}^3_{u_\star}$ and
$X_\star=N+P_\star$,
with
\begin{align}\label{P*}\VERT P_\star\VERT^w_{u_\star}<2^{-({p}+1)}\VERT P\VERT^w_{u}\, . \end{align}
\end{theorem}
\begin{remark}[Proof of Theorem A]\rm
{Theorem}~\ref{Normal Form Lemma} immediately implies Theorem~A, with $C=\min\{2^{-7}Q^{-2}e^{-2 s_2} \varrho^2 \log 2\,,\ t/{\rm diam}({\mathbb Y}_\sigma)\}$, $a=2$, provided that $\varrho:=\frac{\epsilon^2}{(\VERT P\VERT_u^w)^2}$ is of ``order one'' with respect to $\epsilon$. The mentioned ``smallness assumptions'' correspond to
conditions~\eqref{NEWu+positive}--\eqref{theta3} and $\left\|\frac{\omega}{v}\right\|_{r, \sigma}\ll (\VERT P\VERT_u^w)^2$.
\end{remark}
\subsection{The Step Lemma}
\noindent
We denote as
\begin{align}\label{Lie}e^{{\cal L}_Y}=\sum_{{k}\ge 0}\frac{{\cal L}^{{k}}_Y}{{k}!}\end{align}
the formal Lie series associated to $Y$, where \begin{align}[Y, X]=J_X Y-J_Y X\, ,\quad (J_Z)_{ij}:=\partial_j Z_i\end{align} denotes Lie brackets of two vector--fields, with
\begin{align}{\cal L}_Y:=[Y, \cdot]\end{align}
being the Lie operator.
\begin{lemma}\label{iteration lemma}
Let $X=N
+P\in {\cal O}^3_{u}$, with $u=(r, \sigma, s)$, $N$ as in~\eqref{NZ}, $s_1$, $s_2>0$. Assume \begin{align}\label{existence}
\frac{{\rm diam}({\mathbb Y}_\sigma)}{s_1}\left\|\frac{\omega}{v}\right\|_{r, \sigma}\le 1\, ,\quad \frac{{\rm diam}({\mathbb Y}_\sigma)}{s_2}\left\|\frac{\partial_y v}{v}\right\|_{r, \sigma}
\le 1\end{align}
and that
$P$ is so small that
\begin{align}\label{smallcond}
Q \VERT P\VERT^{w} _{u}<1\qquad Q:=3{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\, ,\quad w=(\rho, \tau, t)
\end{align}
Let $\rho_*$, $\tau_*$, $t_*$ be defined via
\begin{align}\label{bounds1}
\frac{1}{\rho_*}&=\frac{1}{\rho}-{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\left(\frac{1}{\tau}-e^{s_2}{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{1}{t}
\right)\nonumber\\
&- {\rm diam}({\mathbb Y}_\sigma)
\left(\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma} \left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\right)\frac{1}{t}\nonumber\\
\frac{1}{\tau_*}&=\frac{e^{-s_2}}{\tau}-{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{1}{t}\nonumber\\
t_*&=t
\end{align}
and assume
\begin{align}\label{w*positive}w_*=(\rho_*, \tau_*, t_*)\in {\mathbb R}_+^3\, ,\qquad u_*=(r-2\rho_*, \sigma-2\tau_*, s-3s_1-2t_*)\in {\mathbb R}_+^3\, . \end{align} Then there exists $Y\in {\cal O}^3_{u_*+w_*}$ such that
$X_+:=e^{{\cal L}_Y}X\in {\cal O}^3_{u_*}$ and
$X_+=N+
P_+$, with
\begin{align}\label{P+}
\VERT P_+\VERT^{w_*}_{u_*}
\le \frac{2Q
\left(\VERT P\VERT^w_u\right)^2
}{1-Q
\left\VERT P\right\VERT^w_u}
\end{align}
\end{lemma}
\noindent
In the next section, we shall use {Lemma}~\ref{iteration lemma} in the following ``simplified'' form.
\begin{lemma}[Step Lemma]\label{simplifiedsteplemma}
If~\eqref{existence},~\eqref{smallcond} and~\eqref{w*positive} are replaced with
\begin{align}\label{newcond1}\begin{split}
&2\,e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{\tau}{t}\le 1\\
&4\,{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{\rho}{\tau}\le 1\\
&8\,{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}\frac{\rho}{t}\le 1
\end{split}
\end{align}
\begin{align}\label{newcond2}
\frac{{\rm diam}({\mathbb Y}_\sigma)}
{t}\left\|\frac{\omega}{v}\right\|_{r, \sigma}\le 1\, ,\quad \frac{{\rm diam}({\mathbb Y}_\sigma)}{s_2}\left\|\frac{\partial_y v}{v}\right\|_{r, \sigma}
\le 1
\end{align}
\begin{align}
\label{u+positive}
0<\rho<\frac{r}{4}\, ,\quad 0<\tau<\frac{\sigma}{4}e^{-s_2}\, ,\quad 0<t<\frac{s}{5}
\end{align}
\begin{align}
\label{newsmallness}
2Q\VERT P\VERT ^w_u<1
\end{align}
then $X_+=N+P_+\in {\cal O}^3_{u_+}$ and \begin{align}\label{finalineq}\VERT P_+\VERT_{u_+}^{w}\le 8 e^{s_2}Q
(\VERT P \VERT_u^w)^2\, . \end{align}
with
\begin{align}u_+:=(r-4\rho, \sigma-4\tau e^{s_2}, s-5t)\, . \end{align}
\end{lemma}
{\bf Proof\ }The inequalities in~\eqref{newcond2} guarantee that~\eqref{existence} holds with $s_1=t$, while the inequalities in~\eqref{newcond1} and~\eqref{u+positive} imply
\begin{align}\frac{1}{\rho_*}\ge \frac{1}{2\rho}\, ,\quad \frac{1}{\tau_*}\ge \frac{e^{-s_2}}{2\tau}\end{align}
whence, as $t_*=t$,
\begin{align}w_*<2 e^{s_2} w\, ,\qquad u_*\ge u_+>0\, . \end{align}
Then~\eqref{finalineq} is implied by~\eqref{P+}, monotonicity and homogeneity~\eqref{monotonicity}--\eqref{homogeneity}, and the inequality in~\eqref{newsmallness}. $\square$
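For the reader's convenience, the first display in the proof can be checked by expanding~\eqref{bounds1}: the mixed terms cancel, and the inequalities in~\eqref{newcond1} bound what remains (an elementary verification, with $d:={\rm diam}({\mathbb Y}_\sigma)$):

```latex
\begin{align*}
\frac{1}{\rho_*}&=\frac{1}{\rho}
-d\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{1}{\tau}
-d\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}\frac{1}{t}
\ \ge\ \frac{1}{\rho}-\frac{1}{4\rho}-\frac{1}{8\rho}\ \ge\ \frac{1}{2\rho}\,,\\
\frac{1}{\tau_*}&=\frac{e^{-s_2}}{\tau}
-d\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{1}{t}
\ \ge\ \frac{e^{-s_2}}{\tau}-\frac{e^{-s_2}}{2\tau}\ =\ \frac{e^{-s_2}}{2\tau}\,.
\end{align*}
```

Hence $\rho_*\le 2\rho$ and $\tau_*\le 2e^{s_2}\tau$, which, together with $t_*=t$, give $w_*<2e^{s_2}w$.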
\noindent
{To prove {Lemma}~\ref{iteration lemma}, we look for a change of coordinates which conjugates the vector--field $X=N+P$ to a new vector--field $X_+=N_++P_+$, where $P_+$ is of higher order. The procedure we follow is reminiscent of classical techniques of normal form theory, where one chooses the transformation so that $X_+=e^{{\cal L}_Y} X$, with the operator $e^{{\cal L}_Y}$ being defined as in \eqref{Lie}. As in the classical case, $Y$ will be chosen as the solution of a certain
``homological equation'' which allows one to eliminate the first--order $\psi$--dependent terms of $P$.
However, as stated in Lemma \ref{iteration lemma}, differently from the classical situation, one can take $N=N_+$, which is another way of saying that
it is possible to choose $Y$ in such a way as to solve
\begin{align}\label{homeq111}
{\cal L}_N[Y]=P
\end{align}
regardless of whether $P$ has vanishing average or not -- in other words, {\it also the resonant terms} of the perturbation are killed. Note also that no ``ultraviolet cut--off'' is used.
Equation \eqref{homeq111} is precisely what is discussed in Lemma~\ref{estimates} and Proposition~\ref{homeq1} below.
}
\noindent
Fix $y_0\in {\mathbb Y}$; $v$, $\omega:{\mathbb I}\times {\mathbb Y}\to {\mathbb R}$, with $v\not\equiv 0$. We define, formally, the operators ${\cal F}_{v, \omega}$ and ${\cal G}_{v, \omega}$
as acting on functions $g:{\mathbb I}\times {\mathbb Y}\times {\mathbb T}\to {\mathbb R}$ as
\begin{align}\label{FandG}
{\cal F}_{v, \omega}[g]({\rm I}, y, \psi)&:= \int_{y_0}^y\frac{g\left({\rm I}, \eta, \psi+\int_y^\eta \frac{\omega({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta' \right)}{v({\rm I}, \eta)}d\eta\nonumber\\
{\cal G}_{v, \omega}[g]({\rm I}, y, \psi)&:= \int_{y_0}^y\frac{g\left({\rm I}, \eta, \psi+\int_y^\eta \frac{\omega({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta' \right)e^{-\int_y^\eta \frac{\partial_y v({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta'}}{v({\rm I}, \eta)}d\eta
\end{align}
\noindent
Observe that, when they exist,
${\cal F}_{v, \omega}$ and ${\cal G}_{v, \omega}$ send zero--average functions to zero--average functions.
\noindent
The existence of ${\cal F}_{v, \omega}$ and ${\cal G}_{v, \omega}$ is established by the following
\begin{lemma}\label{estimates}
If inequalities~\eqref{existence} hold,
then
\begin{align}{\cal F}_{v, \omega}\, ,\ {\cal G}_{v, \omega}:\quad {\cal O}_{r, \sigma, s}\to {\cal O}_{r, \sigma, s-s_1}\end{align}
and
\begin{align}\|{\cal F}_{v, \omega}[g]\|_{r, \sigma, s-s_1}\le {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{g}{v}\right\|_{r, \sigma, s}\, ,\quad \|{\cal G}_{v, \omega}[g]\|_{r, \sigma, s-s_1}\le e^{s_2}\,{\rm diam}({\mathbb Y}_\sigma) \left\|\frac{g}{v}\right\|_{r, \sigma, s}\end{align}
\end{lemma}
The proof of Lemma~\ref{estimates} is obvious from the definitions~\eqref{FandG}.
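More precisely (a sketch, assuming the integration path from $y$ to $\eta$ can be chosen inside ${\mathbb Y}_\sigma$ with length at most ${\rm diam}({\mathbb Y}_\sigma)$), the inequalities~\eqref{existence} control the two new ingredients appearing in~\eqref{FandG}:

```latex
\begin{align*}
\left|\int_y^\eta \frac{\omega({\rm I}, \eta')}{v({\rm I}, \eta')}\,d\eta'\right|
\le {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\omega}{v}\right\|_{r, \sigma}\le s_1\,,
\qquad
\left|e^{-\int_y^\eta \frac{\partial_y v({\rm I}, \eta')}{v({\rm I}, \eta')}\,d\eta'}\right|
\le e^{\,{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y v}{v}\right\|_{r, \sigma}}
\le e^{s_2}\,.
\end{align*}
```

Hence the shifted angle $\psi+\int_y^\eta\frac{\omega}{v}\,d\eta'$ stays within ${\mathbb T}_{s}$ whenever $\psi\in{\mathbb T}_{s-s_1}$, which explains the loss $s\to s-s_1$, while the exponential factor in ${\cal G}_{v, \omega}$ costs at most $e^{s_2}$.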
\begin{proposition}\label{homeq1}
Let
\begin{align}\label{NZ}N=(0, v({\rm I}, y), \omega({\rm I}, y))\, ,\qquad Z=(Z_1({\rm I}, y, \psi), Z_2({\rm I}, y, \psi), Z_3({\rm I}, y, \psi))\end{align}
belong to ${\cal O}^3_{r, \sigma, s}$ and assume~\eqref{existence}.
Then the ``homological equation''
\begin{align}\label{homeq}{\cal L}_N[Y]=Z\end{align}
has a solution $Y\in {\cal O}_{r, \sigma, s-3s_1}$ verifying
\begin{align}\label{bounds}
\VERT Y\VERT_{r, \sigma, s-3 s_1}^{\rho_*, \tau_*, t_*}\le {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\VERT Z\VERT_{r, \sigma, s}^{\rho, \tau, t}
\end{align}
with $\rho_*$, $\tau_*$, $t_*$ as in~\eqref{bounds1}.
\end{proposition}
{\bf Proof\ }We expand $Y_j$ and $Z_j$ along the Fourier basis
\begin{align}Y_j({\rm I}, y, \psi)=\sum_{k\in {\mathbb Z}} Y_{j, k}({\rm I}, y)e^{{\rm i} k\psi}\, ,\quad Z_j({\rm I}, y, \psi)=\sum_{k\in {\mathbb Z}} Z_{j, k}({\rm I}, y)e^{{\rm i} k\psi}\, ,\quad j=1,\ 2,\ 3\end{align}
Using \begin{align}{\cal L}_N[Y]=[N, Y]=J_Y N-J_N Y\end{align}
where
$(J_Z)_{ij}=\partial_j Z_i$
are the Jacobian matrices,
we rewrite~\eqref{homeq} as
\begin{align}\label{equations}
Z_{1, k}({\rm I}, y)&=v({\rm I}, y)\partial_y Y_{1, k} +{\rm i} k \omega({\rm I}, y) Y_{1, k} \nonumber\\
Z_{2, k}({\rm I}, y)&=v({\rm I}, y)\partial_y Y_{2, k} +({\rm i} k \omega({\rm I}, y)-\partial_y v({\rm I}, y)) Y_{2, k}-\partial_{\rm I} v({\rm I}, y) Y_{1, k}\nonumber\\
Z_{3, k}({\rm I}, y)&=v({\rm I}, y)\partial_y Y_{3, k} +{\rm i} k \omega({\rm I}, y) Y_{3, k}-\partial_{\rm I} \omega({\rm I}, y) Y_{1, k}-\partial_y \omega({\rm I}, y) Y_{2, k}\,.
\end{align}
Regarding~\eqref{equations} as equations for $Y_{j, k}$, we find the solutions
\begin{align}
Y_{1, k}&=\int_{y_0}^y\frac{Z_{1, k}({\rm I}, \eta)}{v({\rm I}, \eta)}e^{{\rm i} k\int_y^\eta \frac{\omega({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta'}d\eta\nonumber\\
Y_{2, k}&=\int_{y_0}^y\frac{Z_{2, k}({\rm I}, \eta)+\partial_{\rm I} v
Y_{1, k}
}{v({\rm I}, \eta)}e^{\int_y^\eta \frac{{\rm i} k \omega({\rm I}, \eta')-\partial_y v({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta'}d\eta\nonumber\\
Y_{3, k}&=\int_{y_0}^y\frac{Z_{3, k}({\rm I}, \eta)+\partial_{\rm I} \omega({\rm I}, \eta) Y_{1, k}+\partial_y \omega({\rm I}, \eta) Y_{2, k}}{v({\rm I}, \eta)}e^{{\rm i} k\int_y^\eta \frac{\omega({\rm I}, \eta')}{v({\rm I}, \eta')}d\eta'}d\eta\,.
\end{align}
Multiplying by $e^{{\rm i} k\psi}$ and summing over $k\in {\mathbb Z}$, we find \begin{align}\label{Yi}
Y_1&={\cal F}_{v, \omega}[Z_1]\nonumber\\
Y_2&={\cal G}_{v, \omega}[Z_2]+{\cal G}_{v, \omega}[\partial_{\rm I} v\,Y_1]\, ,\nonumber\\
Y_3&={\cal F}_{v, \omega}[Z_3]+{\cal F}_{v, \omega}[\partial_{\rm I} \omega\,Y_1]+{\cal F}_{v, \omega}[\partial_y \omega\,Y_2]\, . \end{align}
Then, by Lemma~\ref{estimates},
\begin{align}
&\|Y_1\|_{r, \sigma, s-s_1}\le {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\left\|Z_1\right\|_{r, \sigma, s}\nonumber\\
&\|Y_2\|_{r, \sigma, s-2s_1}
\le e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\left\|Z_2\right\|_{r, \sigma, s-s_1}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{r, \sigma}\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\left\|Z_1\right\|_{r, \sigma, s}\nonumber\\
&\|Y_3\|_{r, \sigma, s-3s_1}
\le {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\left\|Z_3\right\|_{r, \sigma, s-2s_1}
+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{r, \sigma}\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}
\left\|Z_2\right\|_{r, \sigma, s-s_1}
\nonumber\\
&\qquad+{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{r, \sigma}
\left(\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma} \left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\right)\left\|Z_1\right\|_{r, \sigma, s}
\end{align}
Multiplying the inequalities above by $\rho_*^{-1}$, $\tau_*^{-1}$, $t_*^{-1}$ respectively and taking the sum, we find~\eqref{bounds}, with
\begin{align}
\frac{1}{\rho}&=\frac{1}{\rho_*}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{1}{\tau_*}+{\rm diam}({\mathbb Y}_\sigma)
\left(\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma} \left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\right)\frac{1}{t_*}\nonumber\\
\frac{1}{\tau}&=\frac{e^{s_2}}{\tau_*}+e^{s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\frac{1}{t_*}\nonumber\\
\frac{1}{t}&=\frac{1}{t_*}\, .
\end{align}
We recognise that, under conditions~\eqref{w*positive}, $\rho_*$, $\tau_*$, $t_*$ in~\eqref{bounds1} solve the equations above.
$\qquad \square$
\begin{lemma}\label{Lie brackets}
Let $w<u\le u_0$;
$Y\in {\cal O}^3_{u_0}$, $W\in {\cal O}^3_{u}$. Then
\begin{align}\VERT{\cal L}_Y[W]\VERT^{u_0-u+w}_{u-w}\le \VERT Y\VERT^{w}_{u-w}\VERT W\VERT ^{u_0-u+w}_{u}+\VERT W\VERT^{u_0-u+w}_{u-w}\VERT Y\VERT^{u_0-u+w}_{u_0}\, . \end{align}
\end{lemma}
{\bf Proof\ }
One has
\begin{align}
\VERT{\cal L}_Y[W]\VERT^{u_0-u+w}_{u-w}&=\VERT J_W Y-J_Y W\VERT^{u_0-u+w}_{u-w}\nonumber\\
&\le \VERT J_W Y\VERT^{u_0-u+w}_{u-w}+\VERT J_Y W\VERT^{u_0-u+w}_{u-w}
\end{align}
Now, $(J_W Y)_i=\partial_{{\rm I}} W_i Y_1+\partial_{y} W_i Y_2+\partial_{\psi} W_i Y_3$, so, using Cauchy inequalities,
\begin{align}
\|(J_W Y)_i\|_{u-w}&\le \|\partial_{{\rm I}} W_i\|_{u-w}\| Y_1\|_{u-w}+\|\partial_{y} W_i\|_{u-w} \|Y_2\|_{u-w}+\|\partial_{\psi} W_i\|_{u-w} \|Y_3\|_{u-w}\nonumber\\
&\le w_1^{-1}\| W_i\|_{u}\| Y_1\|_{u-w}+w_2^{-1}\|W_i\|_{u} \|Y_2\|_{u-w}+w_3^{-1}\|W_i\|_{u} \|Y_3\|_{u-w}\nonumber\\
&=\VERT Y\VERT^w_{u-w}\|W_i\|_{u}
\end{align}
Similarly,
\begin{align}\|(J_Y W)_i\|_{u-w}\le \VERT W\VERT_{u-w}^{u_0-u+w}\|Y_i\|_{u_0}\, . \end{align}
Taking the $u_0-u+w$--weighted norms, the claim follows. $\quad \square$
\begin{lemma}\label{iterateL}
Let $0<w<u\in {\mathbb R}^3$, $Y\in {\cal O}^3_{u+w}$, $W\in {\cal O}^3_{u}$. Then
\begin{align}\VERT{\cal L}^{{k}}_Y[W]\VERT^{w}_{u-w}\le 3^{{k}} {{k}}!\left(\VERT Y\VERT ^w_{u+w}\right)^{{k}}\VERT W\VERT^w _{u-w}\, . \end{align}
\end{lemma}
{\bf Proof\ } We apply Lemma~\ref{Lie brackets} with $W$ replaced by ${\cal L}^{i-1}_Y[W]$,
$u$ replaced by $u-(i-1)w/{{k}}$,
$w$ replaced by $w/{{k}}$ and, finally, $u_0=u+w$. Setting $\VERT\cdot\VERT_i^{w}:=\VERT\cdot\VERT^{w}_{u-i\frac{w}{{{k}}}}$ for $0\le i\le {{k}}$, so that $\VERT\cdot\VERT^w_0=\VERT\cdot\VERT^w_{u}$ and $\VERT\cdot\VERT^w_{{k}}=\VERT\cdot\VERT^w_{u-w}$, we obtain
\begin{align}
\VERT{\cal L}^i_Y[W]\VERT^{w+w/{{k}}}_i&=\left\VERT\left[Y, {\cal L}^{i-1}_Y[W]\right]\right\VERT^{w+w/{{k}}}_i\nonumber\\
&\le \VERT Y\VERT^{w/{{k}}}_{i}\VERT{\cal L}^{i-1}_Y[W]\VERT^{w+w/{{k}}}_{i-1}+
\VERT Y\VERT^{w+w/{{k}}}_{u+w}\VERT{\cal L}^{i-1}_Y[W]\VERT^{w+w/{{k}}}_{i}\,.
\end{align}
Hence, de--homogenising,
\begin{align}
\frac{{{k}}}{{{k}}+1}\VERT{\cal L}^i_Y[W]\VERT^{w}_i&\le {{k}} \frac{{{k}}}{{{k}}+1}\VERT Y\VERT^{w}_{i}\VERT{\cal L}^{i-1}_Y[W]\VERT^{w}_{i-1}+
\frac{{{k}}^2}{({{k}}+1)^2}\VERT Y\VERT^{w}_{u+w}\VERT{\cal L}^{i-1}_Y[W]\VERT^{w}_{i}\nonumber\\
&\le \frac{{{k}}^2}{{{k}}+1}\left(1+\frac{1}{{{k}}+1}\right)\VERT Y\VERT^{w}_{u+w}\VERT{\cal L}^{i-1}_Y[W]\VERT^{w}_{i-1}
\end{align}
Eliminating the common factor $\frac{{{k}}}{{{k}}+1}$ and
iterating ${{k}}$ times from $i={{k}}$, by Stirling, we get
\begin{align}
\VERT {\cal L}^{{k}}_Y[W]\VERT^w _{u-w}
&\le {{k}}^{{k}}\left(1+\frac{1}{{k}}\right)^{{k}}\left(\VERT Y\VERT ^w_{u+w}\right)^{{k}}\VERT W\VERT^w _{u-w}\nonumber\\
&\le e^{{k}} {{k}}!\left(\VERT Y\VERT ^w_{u+w}\right)^{{k}}\VERT W\VERT^w _{u-w}\nonumber\\
&< 3^{{k}} {{k}}!\left(\VERT Y\VERT ^w_{u+w}\right)^{{k}}\VERT W\VERT^w _{u-w}
\end{align}
as claimed. $\quad \square$
\begin{proposition}\label{Lie Series}
Let $0<w<u$, $Y\in {\cal O}^3_{u+w}$,
\begin{align}\label{q}q:=3\VERT Y\VERT^w_{u+w}<1\, . \end{align}
Then the Lie series $e^{{\cal L}_Y}$
defines an operator
\begin{align}e^{{\cal L}_Y}:\quad {\cal O}^3_{u}\to {\cal O}^3_{u-w}\end{align}
and its tails
\begin{align}e^{{\cal L}_Y}_m=\sum_{{{k}}\ge m}\frac{{\cal L}^{{k}}_Y}{{{k}}!}\end{align}
verify
\begin{align}\left\VERT e^{{\cal L}_Y}_m W\right\VERT^w_{u-w}\le \frac{q^m}{1-q}\VERT W\VERT_{u}^w\qquad \forall\ W\in {\cal O}^3_{u}\, . \end{align}
\end{proposition}
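\noindent
Although elementary, it is perhaps worth pointing out how Proposition~\ref{Lie Series} follows from Lemma~\ref{iterateL}: for any $W\in {\cal O}^3_{u}$,
\begin{align}\left\VERT e^{{\cal L}_Y}_m W\right\VERT^w_{u-w}\le \sum_{{{k}}\ge m}\frac{\VERT {\cal L}^{{k}}_Y[W]\VERT^w_{u-w}}{{{k}}!}\le \sum_{{{k}}\ge m}q^{{k}}\,\VERT W\VERT^w _{u-w}=\frac{q^m}{1-q}\VERT W\VERT^w _{u-w}\le \frac{q^m}{1-q}\VERT W\VERT^w _{u}\end{align}
the last step by monotonicity, the convergence of the series being guaranteed by~\eqref{q}.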
\paragraph{\bf Proof of {Lemma}~\ref{iteration lemma}} We look for $Y$ such that $X_+:=e^{{\cal L}_Y} X$ has the desired properties.
\begin{align}
e^{{\cal L}_Y} X&=e^{{\cal L}_Y}\left(N
+P\right)=N
+P+{\cal L}_Y N+e_2^{{\cal L}_Y} N
+e_1^{{\cal L}_Y}P\nonumber\\
&=N
+P-{\cal L}_N Y+P_+
\end{align}
with
$P_+=e_2^{{\cal L}_Y} N
+e_1^{{\cal L}_Y}P$.
We
choose $Y$ so that the homological equation
\begin{align}{\cal L}_N Y=P\end{align}
is satisfied. By Proposition~\ref{homeq1}, this equation has a solution $Y\in {\cal O}^3_{r, \sigma, s-3s_1}$ verifying
\begin{align} q:=3\VERT Y\VERT^{w_*} _{r, \sigma, s-3s_1}&\le
3{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\VERT P\VERT^w_u=Q \VERT P\VERT^w_u<1\, . \end{align}
By Proposition~\ref{Lie Series}, the Lie series $e^{{\cal L}_Y}$
defines an operator
\begin{align}e^{{\cal L}_Y}:\quad W\in {\cal O}_{u_*+w_*}\to {\cal O}_{u_*}\end{align}
and its tails $e^{{\cal L}_Y}_m$
verify
\begin{align}
\left\VERT e^{{\cal L}_Y}_m W\right\VERT^{w_*}_{u_*}&\le \frac{q^m}{1-q}\VERT W\VERT^{w_*}_{u_*+w_*}\nonumber\\
&\le \frac{\left(Q\VERT P\VERT^w_u\right)^m}{1-Q\VERT P\VERT^w_u}\VERT W\VERT^{w_*}_{u_*+w_*}
\end{align}
for all $W\in {\cal O}^3_{u_*+w_*}$.
In particular, $e^{{\cal L}_Y}$ is well defined on ${\cal O}^3_{u}\subset {\cal O}^3_{u_*+w_*}$, hence $P_+\in {\cal O}^3_{u_*}$. The bounds on $P_+$ are obtained as follows. Using the homological equation, one finds
\begin{align}\label{pertbound1}
\VERT e_2^{{\cal L}_Y}N\VERT^{w_*}_{u_*}&= \left\VERT \sum_{{{k}}=1}^{\infty}\frac{{\cal L}^{{{k}}+1}_Y N}{({{k}}+1)!}\right\VERT^{w_*}_{u_*}\nonumber\\
&\le
\sum_{{{k}}=1}^{\infty}\frac{1}{({{k}}+1)!}\left\VERT {\cal L}^{{{k}}+1}_Y N\right\VERT^{w_*}_{u_*}\nonumber\\
&=\sum_{{{k}}=1}^{\infty}\frac{1}{({{k}}+1)!}\left\VERT {\cal L}^{{{k}}}_Y P\right\VERT^{w_*}_{u_*}\nonumber\\
&\le \sum_{{{k}}=1}^{\infty}\frac{1}{{{k}}!}\left\VERT {\cal L}^{{{k}}}_Y P\right\VERT^{w_*}_{u_*}\nonumber\\
&\le \frac{Q
\left(\VERT P\VERT^w_u\right)^2
}{1-Q
\left\VERT P\right\VERT^w_u}
\end{align}
The bound
\begin{align}\label{pertbound2}
\VERT e_1^{{\cal L}_Y}P\VERT^{w_*}_{u_*}
\le \frac{Q
\left(\VERT P\VERT^w_u\right)^2
}{1-Q
\left\VERT P\right\VERT^w_u}\end{align}
is even more straightforward. $\quad \square$
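\noindent
Indeed, applying the tail estimate above with $m=1$ directly to $P$ one finds
\begin{align}\VERT e_1^{{\cal L}_Y}P\VERT^{w_*}_{u_*}\le \frac{Q\VERT P\VERT^w_u}{1-Q\VERT P\VERT^w_u}\,\VERT P\VERT^{w_*}_{u_*+w_*}\le \frac{Q
\left(\VERT P\VERT^w_u\right)^2
}{1-Q
\left\VERT P\right\VERT^w_u}\end{align}
the last inequality following from monotonicity, since $u\ge u_*+w_*$ and $w_*\ge w$.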
\subsection{Proof of the Normal Form Theorem}\label{Proof of NFL}
The proof of {\sc nft} is obtained -- following~\cite{poschel93} -- via iterated applications of the Step Lemma.
At the base step, we let\footnote{{With slight abuse of notation, here and during the proof of Theorem~\ref{Normal Form LemmaNEW}, the subscript $j$ will denote the value of a given quantity at the $j^{\rm th}$ step of the iteration.}}
\begin{align}X=X_0:=N+P_0\, ,\quad w=w_0:=(\rho,\tau, t)\, ,\quad u=u_0:=(r,\sigma, s)\end{align}
with $X_0=N+P_0\in {\cal O}^3_{u_0}$. We let
\begin{align}
Q_0:=3\,{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\end{align}
Conditions~\eqref{newcond1}--\eqref{newsmallness} are implied by the assumptions~\eqref{theta1}--\eqref{NEWnewsmallness}. We then conjugate $X_0$ to $X_1=N+P_1\in {\cal O}^3_{u_1}$, where
\begin{align}u_1=(r-4\rho, \sigma-4\tau e^{s_2}, s-5t)=:(r_1, \sigma_1, s_1)\, . \end{align}
Then we have
\begin{align}\label{base step}\VERT P_1\VERT_{u_1}^{w_0}\le 8 e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2\le \frac{1}{2} \VERT P_0\VERT_{u_0}^{w_0}\, . \end{align}
We assume, inductively, that, for some $1\le j\le {{p}}$,
we have
\begin{align}\label{induction}X_{j}=N+P_{j}\in {\cal O}^3_{u_j}\, ,\qquad \VERT P_{j}\VERT_{u_j}^{w_0}<2^{-(j-1)}
\VERT P_{1}\VERT _{u_{1}}^{w_0}
\end{align} where \begin{align}\label{uj}u_j=(r_j, \sigma_j, s_j)\end{align} with
\begin{align}r_j:=r_1-4 (j-1)\frac{\rho}{ {{p}}}\, ,\quad \sigma_j:=\sigma_1-4 e^{s_2}(j-1)\frac{\tau}{ {{p}}}\, ,\quad s_j:=s_1-5 (j-1)\frac{t}{ {{p}}}\, . \end{align} The case $j=1$ trivially reduces to the identity $\VERT P_{1}\VERT _{u_{1}}^{w_0}=\VERT P_{1}\VERT _{u_{1}}^{w_0}$.
We aim to apply {Lemma}~\ref{simplifiedsteplemma} with $u=u_j$ as in~\eqref{uj} and \begin{align}w=w_1:=\frac{w_0}{{{p}}}\, ,\qquad \forall\ 1\le j\le {{p}}\, . \end{align}
Conditions~\eqref{newcond1},~\eqref{newcond2} and~\eqref{u+positive} are easily seen to be implied by~\eqref{theta1},~\eqref{NEWnewcond2},~\eqref{NEWu+positive} and the first condition in~\eqref{NEWnewsmallness} combined with the inequality ${{p}}\eta^2<1$, implied by the choice of ${{p}}$. We check condition~\eqref{newsmallness}.
By homogeneity,
\begin{align}\label{eq}\VERT P_j\VERT_{u_j}^{w_1}={{p}}\VERT P_j\VERT_{u_j}^{w_0}\le {{p}}\VERT P_1\VERT_{u_1}^{w_0}\le 8{{p}} e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2\end{align}
whence, using
\begin{align}
Q_j=3\,{\rm diam}({\mathbb Y}_{\sigma_j})\left\|\frac{1}{v}\right\|_{r_j, \sigma_j}\le Q_0\end{align}
we see that condition~\eqref{newsmallness} is met:
\begin{align}2 Q_j\VERT P_j\VERT_{u_j}^{w_1}\le 16{{p}} e^{s_2}Q^2_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2<1\, . \end{align}
Then the Step Lemma can be applied and we get $X_{j+1}=N+P_{j+1}\in {\cal O}^3_{u_{j+1}}$, with \begin{align}\VERT P_{j+1}\VERT _{u_{j+1}}^{w_1}\le 8 e^{s_2} Q_j \left(\VERT P_j\VERT _{u_j}^{w_1}\right)^2\le 8 e^{s_2} Q_0 \left(\VERT P_j\VERT _{u_j}^{w_1}\right)^2\,.\end{align}
Applying homogeneity again to both sides of this inequality and combining it with~\eqref{induction},~\eqref{base step} and~\eqref{NEWnewsmallness}, we get
\begin{align}\label{iterate}\VERT P_{j+1}\VERT _{u_{j+1}}^{w_{0}}&\le 8 {{p}} e^{s_2} Q_0 \left(\VERT P_j\VERT _{u_j}^{w_0}\right)^2
\le 8 {{p}} e^{s_2} Q_0 \VERT P_1\VERT _{u_1}^{w_0}\VERT P_j\VERT _{u_j}^{w_0}\nonumber\\
&\le
64{{p}}e^{2s_2} Q_0^2\left(\VERT P_0\VERT _{u_0}^{w_0}\right)^2
\VERT P_j\VERT _{u_j}^{w_0}
\le
\frac{1}{2}\VERT P_j\VERT _{u_j}^{w_0}\nonumber\\
&<2^{-j}\VERT P_1\VERT _{u_1}^{w_0}\, .
\end{align}
After ${{p}}$ iterations,
\begin{align}\VERT P_{{{p}}+1}\VERT _{u_{{{p}}+1}}^{w_{0}}<2^{-{{p}}}\VERT P_1\VERT _{u_1}^{w_0}<2^{-({{p}}+1)}\VERT P_0\VERT _{u_0}^{w_0}\end{align}
so we can take $X_\star=X_{{{p}}+1}$, $P_\star=P_{{{p}}+1}$, $u_\star=u_{{{p}}+1}$. $\qquad \square$
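\noindent
For the reader's convenience, we note that the final domain is obtained by telescoping the losses of the ${{p}}$ steps on top of the base step:
\begin{align}r_{{{p}}+1}=r_1-4\,{{p}}\,\frac{\rho}{{{p}}}=r-8\rho\, ,\quad \sigma_{{{p}}+1}=\sigma_1-4\,e^{s_2}{{p}}\,\frac{\tau}{{{p}}}=\sigma-8\,e^{s_2}\tau\, ,\quad s_{{{p}}+1}=s_1-5\,{{p}}\,\frac{t}{{{p}}}=s-10\,t\, . \end{align}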
\subsection{A generalisation when the dependence on $\psi$ is smooth}\label{A generalisation when the dependence}
\begin{definition}\rm We denote by ${\cal C}^3_{u, \ell_*}$, with $u=(r, \sigma)$, the class of vector--fields
\begin{align} X=(X_1, X_2, X_3):\ {\mathbb P}_{u}:={\mathbb I}_r \times{\mathbb Y}_\sigma \times {{\mathbb T}}\to {\mathbb C}^3\end{align}
where each $X_i\in {\cal C}_{u, \ell_*}$, meaning that $X_i$ is $C^{\ell_*}$ in ${\mathbb P}:={\mathbb I} \times{\mathbb Y} \times {{\mathbb T}}$ and $X_i(\cdot, \cdot, \psi)$ is holomorphic in ${\mathbb I}_r \times{\mathbb Y}_\sigma$ for each fixed $\psi\in {\mathbb T}$.
\end{definition}
\noindent
In this section we generalise {Theorem}~\ref{Normal Form Lemma} to the case where $X\in {\cal C}^3_{u, \ell_*}$. We use techniques going back to J. Nash and J. Moser~\cite{nash1956, moser1961, moser1962}.
\noindent
First of all, we need a different definition of norms\footnote{The series in~\eqref{normfOLD} is in general divergent when $f\in {\cal C}_{u, \ell_*}$.} and, especially, {\it smoothing} operators.
\paragraph{1. Generalised weighted norms}
We let
\begin{align}\label{normX}
\VERT X \VERT_{u, \ell}^{w}&:= \sum_i w^{-1}_i\|X_i\|_{u, \ell}\ ,\qquad 0\le \ell\le \ell_*\end{align}
where
$w=(w_1, w_2, w_3)\in {\mathbb R}_+^3$ and, if
$f:\ {\mathbb P}_{r, \sigma}:={\mathbb I}_r \times{\mathbb Y}_\sigma \times {{\mathbb T}}\to {\mathbb C}$,
then
\begin{align}\label{normf}\|f\|_{u}:=\sup_{{\mathbb I}_r\times{\mathbb Y}_\sigma\times{\mathbb T}}|f|
\ ,\quad \|f\|_{u, \ell}:=\max_{0\le j\le \ell}\{\|\partial_\psi^j\, f\|_u\}\qquad u=(r, \sigma)\,.
\end{align}
Clearly, the class ${\cal O}^3_{r, \sigma, s}$ defined in Section~\ref{Weighted norms} is a proper subset of ${\cal C}^3_{u, \ell_*}$.
\noindent
Observe that the norms~\eqref{normX} still verify monotonicity and
homogeneity in
\eqref{monotonicity} and~\eqref{homogeneity}.
\paragraph{2. Smoothing}
We call {\it smoothing} a family of operators
\begin{align}T_K:\qquad f\in {\cal C}_{u, \ell_*}\to T_Kf\in {\cal C}_{u, \ell_*}\ ,\quad K\in {\mathbb N}\end{align}
verifying the following. Let $R_K:=I-T_K$. There exist $c_0>0$, $\delta\ge 0$ such that for all $f\in {\cal C}_{u, \ell_*}$, for all $K$, $0\le {j}\le \ell\le {\ell_*}$,
\begin{itemize}\item[\tiny\textbullet] $\|T_K\,f\|_{u, \ell}\le c_0\,K^{(\ell-{j}+\delta)}\|f\|_{u, {j}}\quad \forall\,0\le \ell\le \ell_*$\\
\item[\tiny\textbullet] $\|R_K\,f\|_{u, {j}}\le c_0\,K^{-(\ell-{j}-\delta)}\|f\|_{u, \ell}\quad \forall\,0\le \ell\le \ell_*$
\end{itemize}
As an example, as suggested in~\cite{arnold63}, one can take
\begin{align}T_K\,f({\rm I}, y, \psi):=\sum_{k\in {\mathbb Z}, |k|_1\le K} f_k({\rm I}, y)e^{{\rm i} k\psi}\end{align}
which, with the definitions~\eqref{normX}--\eqref{normf}, verifies the inequalities above with $\delta=2$.
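\noindent
Although we do not carry out the details, let us sketch why. Writing $f=\sum_{k\in{\mathbb Z}} f_k({\rm I}, y)e^{{\rm i} k\psi}$, integration by parts gives $|f_k|\le \|f\|_{u, {j}}|k|^{-{j}}$ for $k\ne 0$, whence
\begin{align}\|T_K\,f\|_{u, \ell}\le \|f\|_{u, {j}}\Big(1+2\sum_{1\le k\le K}k^{\ell-{j}}\Big)\le 3\, K^{\ell-{j}+1}\|f\|_{u, {j}}\end{align}
while, when $\ell-{j}\ge 2$, comparison with an integral gives
\begin{align}\|R_K\,f\|_{u, {j}}\le 2\,\|f\|_{u, \ell}\sum_{k> K}k^{{j}-\ell}\le 2\, K^{-(\ell-{j}-1)}\|f\|_{u, \ell}\, . \end{align}
Both exponents are within the margin allowed by $\delta=2$; for $\ell-{j}< 2$ one simply uses $\|R_K\,f\|_{u, {j}}\le \|f\|_{u, {j}}+\|T_K\,f\|_{u, {j}}\le 4\,K\,\|f\|_{u, \ell}\le 4\,K^{-(\ell-{j}-2)}\|f\|_{u, \ell}$.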
\noindent
We name Generalised Normal Form Theorem ({\sc gnft}) the following
\begin{theorem}[{\sc gnft}]\label{Normal Form LemmaNEW}
Let $u=(r, \sigma)$; $X=N+P\in {\cal C}^3_{u, \ell_*}$; ${{p}}$, $\ell$, $K\in {\mathbb N}$; let $w_K=\left(\rho, \tau, \frac{1}{c_0\,K^{1+\delta}}\right)\in {\mathbb R}_+^3$ and assume that, for some $s_1$, $s_2\in {\mathbb R}_+$, the following inequalities are satisfied.
Put
\begin{align}\label{delta1NEW}
Q:=3\, e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\end{align}
and assume:
\begin{align}\label{NEWu+positiveNEW}
0<\rho<\frac{r}{8}\, ,\quad 0<\tau< e^{-s_2}\frac{\sigma}{8}\end{align}
and
\begin{align}\label{NEWnewcond2NEW}\chi&:= \max\left\{\frac{{\rm diam}({\mathbb Y}_\sigma)}{s_1}\left\|\frac{\omega}{v}\right\|_{r, \sigma}\, ,\ \frac{{\rm diam}({\mathbb Y}_\sigma)}{s_2}\left\|\frac{\partial_y v}{v}\right\|_{r, \sigma}\right\}
\le 1 \\
\label{theta1NEW}\theta_1&:= 2\,e^{s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\tau\le 1\\
\theta_2&:= \label{theta2NEW}4\,e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{\rho}{\tau}\le 1\\
\theta_3&:= \label{theta3NEW}8\,{e^{s_1}}{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\rho\le 1\\
\label{NEWnewsmallnessNEW}\eta&:= 2^4\,e^{s_2}Q \VERT P\VERT_{u}^{{w_K}}<\frac{1}{\sqrt{{{p}}}}\, . \end{align}
Then, with
\begin{align}u_*=(r_\star, \sigma_\star)\, ,\quad r_\star:=r-8\rho\, , \quad\sigma_\star=\sigma-8 e^{s_2}\tau\end{align}
there exists a real--analytic change of coordinates $\Phi_\star$
such that $X_\star:=\Phi_\star X\in {\cal C}^3_{u_\star, \ell_*}$ and
$X_\star=N+P_\star$,
with
\begin{align}\label{P*NEW}\VERT P_\star\VERT^{w_K}_{u_\star}\le \max\left\{2^{-({{p}}+1)}\VERT P\VERT^{w_K}_{u}\, ,\ 2 c_0\,K^{-\ell+\delta}\VERT P\VERT^{w_K}_{u, \ell} \right\} \qquad \forall\ 0\le \ell\le \ell_*\,. \end{align}
\end{theorem}
\noindent
The result generalising {Lemma}~\ref{iteration lemma} is
\begin{lemma}\label{iteration lemmaNEW}
Let $X=N
+P\in {\cal C}^3_{u, \ell_*}$, with $u=(r, \sigma)$, $N$ as in~\eqref{NZ}, $\ell$, $K\in {\mathbb N}$. Assume~\eqref{existence}
and that
$P$ is so small that
\begin{align}\label{smallcondNEW}
Q \VERT P\VERT^{w_K} _{u}<1\qquad Q:=3e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{r, \sigma}\, ,\quad w_K=\left(\rho, \tau, \frac{1}{c_0\,K^{1+\delta}}\right)
\end{align}
Let $\rho_*$, $\tau_*$ be defined via
\begin{align}\label{bounds1NEW}
\frac{1}{\rho_*}&=\frac{1}{\rho}-{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\left(\frac{e^{s_1}}{\tau}-e^{2s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}
\right)\nonumber\\
&- {\rm diam}({\mathbb Y}_\sigma)
\left(e^{s_1}\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}+e^{{2s_1+s_2}}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma} \left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}\right)c_0\,K^{1+\delta}\nonumber\\
\frac{1}{\tau_*}&=\frac{e^{-s_2}}{\tau}-e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}
\end{align}
assume
\begin{align}\label{w*positiveNEW}\hat w_*=(\rho_*, \tau_*)\in {\mathbb R}_+^2\, ,\qquad u_*=(r-2\rho_*, \sigma-2\tau_*)\in {\mathbb R}_+^2\end{align}
and put
\begin{align}w_{*, K}:=\left(\hat w_*, \frac{1}{c_0\,K^{1+\delta}}\right)\, . \end{align}
Then there exists $Y\in T_K{\cal C}^3_{u_*+\hat w_*, \ell_*}$ such that
$X_+:=e^{{\cal L}_Y}X\in {\cal C}^3_{u_*, \ell_*}$ and
$X_+=N+
P_+$, with
\begin{align}\label{P+NEW} \VERT P_+\VERT^{w_{*, K}}_{u_*}
\le \frac{2Q
\left(\VERT P\VERT^{w_K}_u\right)^2
}{1-Q
\left\VERT P\right\VERT^{w_K}_u}+c\,K^{-\ell+\delta}\VERT P\VERT^ {w_{K}}_{u, \ell}\qquad \forall\ 0\le \ell\le \ell_*
\end{align}
\end{lemma}
The simplified form of
{Lemma}~\ref{iteration lemmaNEW}, corresponding to {Lemma}~\ref{simplifiedsteplemma}, is
\begin{lemma}[Generalised Step Lemma]\label{simplifiedsteplemmaNEW}
Assume~\eqref{existence} and replace~\eqref{smallcondNEW} and~\eqref{bounds1NEW} with
\begin{align}\label{newcond1NEW}&2\,e^{s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\tau\le 1\nonumber\\
&4\,e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{\rho}{\tau}\le 1\nonumber\\
&8\,{e^{s_1}}{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\rho\le 1\\
\label{u+positiveNEW}
&0<\rho<\frac{r}{4}\, ,\quad 0<\tau<\frac{\sigma}{4}e^{-s_2}\\
\label{newsmallnessNEW}
&2Q\VERT P\VERT ^{w_K}_u<1
\end{align}
then $X_+=N+P_+\in {\cal C}^3_{u_+, \ell_*}$ and \begin{align}\label{finalineqNEW}\VERT P_+\VERT_{u_+}^{w_K}\le 8 e^{s_2}Q
(\VERT P \VERT_u^{w_K})^2+c\,K^{-\ell+\delta}\VERT P\VERT^ {w_{K}}_{u, \ell}
\end{align}
with
\begin{align}u_+:=(r-4\rho, \sigma-4\tau e^{s_2})\, . \end{align}
\end{lemma}
{\bf Proof\ }The inequalities in~\eqref{newcond1NEW} guarantee
\begin{align}\frac{1}{\rho_*}\ge \frac{1}{2\rho}\, ,\quad \frac{1}{\tau_*}\ge \frac{e^{-s_2}}{2\tau}\end{align}
whence
\begin{align}w_{*, K}<2 e^{s_2} w_K\, ,\qquad u_*\ge u_+>0\, . \end{align}
Then~\eqref{finalineqNEW} follows from~\eqref{P+NEW}, monotonicity, homogeneity and the inequality in~\eqref{newsmallnessNEW}. $\quad\square$
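\vskip.1in
\noindent
In more detail, the term subtracted in the expression of $\frac{1}{\rho_*}$ in~\eqref{bounds1NEW} is bounded by the sum of three contributions, which the inequalities in~\eqref{newcond1NEW} control as follows. The second and third inequalities give
\begin{align}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\frac{e^{s_1}}{\tau}\le \frac{1}{4\rho}\, ,\qquad e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\le \frac{1}{8\rho}\end{align}
while, combining the first and the second,
\begin{align}e^{2s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{\partial_{\rm I} v}{v}\right\|_{r, \sigma}\left\|\frac{\partial_y\omega}{v}\right\|_{r, \sigma}c_0\,K^{1+\delta}\le \frac{1}{4\rho}\cdot\frac{1}{2}=\frac{1}{8\rho}\end{align}
whence $\frac{1}{\rho_*}\ge \frac{1}{\rho}-\frac{1}{2\rho}=\frac{1}{2\rho}$. Analogously, the first inequality in~\eqref{newcond1NEW} bounds the term subtracted in the expression of $\frac{1}{\tau_*}$ by $\frac{e^{-s_2}}{2\tau}$.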
\vskip.2in
\noindent
Let now
${\cal F}_{v, \omega}$ and ${\cal G}_{v, \omega}$ be as in~\eqref{FandG}. First of all, observe that
${\cal F}_{v, \omega}$, ${\cal G}_{v, \omega}$ take $ T_K{\cal C}_{u, \ell_*}$ to itself. Moreover, the following generalisation of Lemma~\ref{estimates} holds.
\begin{lemma}\label{estimatesNEW}
If inequalities~\eqref{existence} hold,
then
\begin{align}{\cal F}_{v, \omega}\, ,\ {\cal G}_{v, \omega}:\quad {\cal C}_{u, \ell_*}\to {\cal C}_{u, \ell_*}\end{align}
and
\begin{align}\|{\cal F}_{v, \omega}[g]\|_{r, \sigma}\le e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{g}{v}\right\|_{r, \sigma}\, ,\quad \|{\cal G}_{v, \omega}[g]\|_{r, \sigma}\le e^{s_1+s_2}\,{\rm diam}({\mathbb Y}_\sigma) \left\|\frac{g}{v}\right\|_{r, \sigma}\, . \end{align}
\end{lemma}
\begin{proposition}\label{homeq1NEW}
Let
\begin{align}\label{NZNEW}N=(0, v({\rm I}, y), \omega({\rm I}, y))\, ,\qquad Z=(Z_1({\rm I}, y, \psi), Z_2({\rm I}, y, \psi), Z_3({\rm I}, y, \psi))\end{align}
belong to ${\cal C}^3_{u, \ell_*}$ and assume~\eqref{existence}.
Then the ``homological equation''
\begin{align}\label{homeqNEW}{\cal L}_N[Y]=Z\end{align}
has a solution $Y\in {\cal C}_{u, \ell_*}$ verifying
\begin{align}\label{boundsNEW}
\VERT Y\VERT_{u}^{\rho_*, \tau_*, t_*}\le e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u}\VERT Z\VERT_{u}^{\rho, \tau, t}\quad u=(r, \sigma)
\end{align}
with $\rho_*$, $\tau_*$, $t_*$ defined via
\begin{align}\label{bounds1NEW*}
\frac{1}{\rho_*}&=\frac{1}{\rho}-{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u}\left(\frac{e^{s_1}}{\tau}-e^{2s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)
\left\|\frac{\partial_y\omega}{v}\right\|_{u}\frac{1}{t}
\right)\nonumber\\
&- {\rm diam}({\mathbb Y}_\sigma)
\left(e^{s_1}\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{u}+e^{{2s_1+s_2}}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u} \left\|\frac{\partial_y\omega}{v}\right\|_{u}\right)\frac{1}{t}\nonumber\\
\frac{1}{\tau_*}&=\frac{e^{-s_2}}{\tau}-e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{u}\frac{1}{t}\nonumber\\
t_*&=t
\end{align}
and provided that
\begin{align}\label{w*positiveNEW*}
(\rho_*, \tau_*)\in {\mathbb R}_+^2\, .
\end{align}
In particular, if $Z\in T_{K}{\cal C}^3_{u, \ell_*}$ for some $K\in {\mathbb N}$, then also $Y\in T_{K}{\cal C}^3_{u, \ell_*}$.
\end{proposition}
{\bf Proof\ }The solution~\eqref{Yi}
satisfies
\begin{align}
\|Y_1\|_{u}&\le e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u}\left\|Z_1\right\|_{u}\nonumber\\
\|Y_2\|_{u}&\le e^{s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u}\left\|Z_2\right\|_{u}+e^{2s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{u}\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u}\left\|Z_1\right\|_{u}\nonumber\\
\|Y_3\|_{u}&\le e^{s_1} {\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u}\left\|Z_3\right\|_{u}
+e^{2s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{u}\left\|\frac{\partial_y\omega}{v}\right\|_{u}
\left\|Z_2\right\|_{u}
\nonumber\\
&+ {\rm diam}({\mathbb Y}_\sigma)^2\left\|\frac{1}{v}\right\|_{u}
\left(e^{2s_1}\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{u}+e^{{3s_1+s_2}}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u} \left\|\frac{\partial_y\omega}{v}\right\|_{u}\right)\left\|Z_1\right\|_{u}
\end{align}
Multiplying the inequalities above by $\rho_*^{-1}$, $\tau_*^{-1}$, $t_*^{-1}$ respectively and taking the sum, we find~\eqref{boundsNEW}, with
\begin{align}
\frac{1}{\rho}&=\frac{1}{\rho_*}+e^{s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u}\frac{1}{\tau_*}+{\rm diam}({\mathbb Y}_\sigma)
\left(e^{s_1}\left\|\frac{\partial_{\rm I}\omega}{v}\right\|_{u}+e^{{2s_1+s_2}}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_{\rm I} v}{v}\right\|_{u} \left\|\frac{\partial_y\omega}{v}\right\|_{u}\right)\frac{1}{t_*}\nonumber\\
\frac{1}{\tau}&=\frac{e^{s_2}}{\tau_*}+e^{s_1+s_2}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{\partial_y\omega}{v}\right\|_{u}\frac{1}{t_*}\nonumber\\
\frac{1}{t}&=\frac{1}{t_*}\, .
\end{align}
We recognise that, under condition~\eqref{w*positiveNEW*}, $\rho_*$, $\tau_*$, $t_*$ in~\eqref{bounds1NEW*} solve the equations above. Observe that if $Z\in T_{K}{\cal C}^3_{u, \ell_*}$, then also $Y\in T_{K}{\cal C}^3_{u, \ell_*}$, since ${\cal F}_{v, \omega}$ and ${\cal G}_{v, \omega}$ preserve $T_{K}{\cal C}_{u, \ell_*}$.
$\qquad \square$
\begin{lemma}\label{Lie bracketsNEW}
Let $u_0\ge u>w\in {\mathbb R}_+^2\times \{0\}$;
$Y\in T_K{\cal C}^3_{u_0, \ell_*}$, $W\in T_K{\cal C}^3_{u, \ell_*}$. Put $w_K:=\left(w_1, w_2, \frac{1}{c_0\,K^{1+\delta}}\right)$. Then
\begin{align}\VERT{\cal L}_Y[W]\VERT^{u_0-u+w_K}_{u-w}\le \VERT Y\VERT^{w_K}_{u-w}\VERT W\VERT ^{u_0-u+w_K}_{u}+\VERT W\VERT^{u_0-u+w_K}_{u-w}\VERT Y\VERT ^{u_0-u+w_K}_{u_0}\, . \end{align}
\end{lemma}
{\bf Proof\ }By Cauchy inequalities, the definitions~\eqref{normX}--\eqref{normf} and the smoothing properties,
\begin{align}
\|(J_W Y)_i\|_{u-w}&\le \|\partial_{{\rm I}} W_i\|_{u-w}\| Y_1\|_{u-w}+\|\partial_{y} W_i\|_{u-w} \|Y_2\|_{u-w}+\|\partial_{\psi} W_i\|_{u-w} \|Y_3\|_{u-w}\nonumber\\
&\le w_1^{-1}\| W_i\|_{u}\| Y_1\|_{u-w}+w_2^{-1}\|W_i\|_{u} \|Y_2\|_{u-w}+\|W_i\|_{u, 1} \|Y_3\|_{u-w}\nonumber\\
&\le w_1^{-1}\| W_i\|_{u}\| Y_1\|_{u-w}+w_2^{-1}\|W_i\|_{u} \|Y_2\|_{u-w}+c_0\,K^{1+\delta}\|W_i\|_{u} \|Y_3\|_{u-w}\nonumber\\
&=\VERT Y\VERT_{u-w}^{w_K}\|W_i\|_{u}
\end{align}
Similarly,
\begin{align}\|(J_Y W)_i\|_{u-w}\le \VERT W\VERT_{u-w}^{u_0-u+w_K}\|Y_i\|_{u_0}\, . \end{align}
Taking the $u_0-u+w_K$--weighted norms, the claim follows. $\quad \square$
\begin{lemma}\label{iterateLNEW}
Let $0<w<u\in {\mathbb R}^2_+\times\{0\}$, $w_K:=\left(w_1, w_2, \frac{1}{c_0\,K^{1+\delta}}\right)$;
$Y\in T_K{\cal C}^3_{u+w, \ell_*}$, $W\in T_K{\cal C}^3_{u, \ell_*}$. Then
\begin{align}\VERT{\cal L}^n_Y[W]\VERT^{w_K}_{u-w}\le 3^n n!\left(\VERT Y\VERT ^{w_K}_{u+w}\right)^n\VERT W\VERT^{w_K} _{u-w}\, . \end{align}
\end{lemma}
{\bf Proof\ } The proof is the same as that of Lemma~\ref{iterateL}, invoking Lemma~\ref{Lie bracketsNEW} in place of Lemma~\ref{Lie brackets} and hence replacing the upper weights $w$ with $w_K$. $\qquad \square$
\begin{proposition}\label{Lie SeriesNEW}
Let $0<w<u\in {\mathbb R}_+^2\times\{0\}$, $w_K:=\left(w_1, w_2, \frac{1}{c_0\,K^{1+\delta}}\right)$, $Y\in T_K{\cal C}^3_{u+w, \ell_*}$,
\begin{align}\label{q}q:=3\VERT Y\VERT^{w_K}_{u+w}<1\, . \end{align}
Then the Lie series $e^{{\cal L}_Y}$
defines an operator
\begin{align}e^{{\cal L}_Y}:\quad T_K{\cal C}^3_{u, \ell_*}\to T_K{\cal C}^3_{u-w, \ell_*}\end{align}
and its tails
\begin{align}e^{{\cal L}_Y}_m=\sum_{{{k}}\ge m}\frac{{\cal L}^{{k}}_Y}{{{k}}!}\end{align}
verify
\begin{align}\left\VERT e^{{\cal L}_Y}_m W\right\VERT^{w_K}_{u-w}\le \frac{q^m}{1-q}\VERT W\VERT_{u}^{w_K}\qquad \forall\ W\in T_K{\cal C}^3_{u, \ell_*}\, . \end{align}
\end{proposition}
\paragraph{\bf Proof of {Lemma}~\ref{iteration lemmaNEW}} {All the remarks preceding Lemma}~\ref{estimates}
{continue to hold in this case, except that,} differently from {Lemma}~\ref{iteration lemma}, here we need an ``ultraviolet cut--off'' of the perturbing term. Namely, we split
\begin{align}
e^{{\cal L}_Y} X&=e^{{\cal L}_Y}\left(N
+P\right)=N
+P+{\cal L}_Y N+e_2^{{\cal L}_Y} N
+e_1^{{\cal L}_Y}P\nonumber\\
&=N
+T_KP-{\cal L}_N Y+P_+
\end{align}
with
$P_+=e_2^{{\cal L}_Y} N
+e_1^{{\cal L}_Y}P+R_KP$.
We
choose $Y$ so that the homological equation
\begin{align}{\cal L}_N Y=T_KP\end{align}
is satisfied. By Proposition~\ref{homeq1NEW}, this equation has a solution $Y\in T_K{\cal C}^3_{u, \ell_*}$ verifying
\begin{align} q:=3\VERT Y\VERT^{w_*} _{u}&\le
3{e^{s_1}}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u}\VERT P\VERT^{w_K}_u=Q \VERT P\VERT^{w_K}_u<1\, . \end{align}
with $w_*=(\rho_*, \tau_*, t_*)$ as in~\eqref{bounds1NEW*}. As
$t_*=t=\frac{1}{c_0\,K^{1+\delta}}$,
we let \begin{align}w_{*, K}:=w_*\ ,\quad \hat w_{*}:=(\rho_*, \tau_*)\end{align}
with
$(\rho_*, \tau_*)$ as in~\eqref{bounds1NEW}.
By Proposition~\ref{Lie SeriesNEW}, the Lie series $e^{{\cal L}_Y}$
defines an operator
\begin{align}e^{{\cal L}_Y}:\quad W\in T_K{\cal C}_{u_*+\hat w_*, \ell_*}\to T_K{\cal C}_{u_*, \ell_*}\end{align}
and its tails $e^{{\cal L}_Y}_m$
verify
\begin{align}
\left\VERT e^{{\cal L}_Y}_m W\right\VERT^ {w_{*, K}}_{u_*}\le\frac{\left(Q\VERT P\VERT^{w_K}_u\right)^m}{1-Q\VERT P\VERT^{w_K}_u}\VERT W\VERT^ {w_{*, K}}_{u_*+\hat w_*}
\end{align}
for all $W\in T_K{\cal C}^3_{u_*+\hat w_*, \ell_*}$.
In particular, $e^{{\cal L}_Y}$ is well defined on $T_K{\cal C}^3_{u, \ell_*}\subset T_K{\cal C}^3_{u_*+\hat w_*, \ell_*}$, hence $P_+\in {\cal C}^3_{u_*, \ell_*}$. The bounds on $P_+$ are obtained as follows. The terms $\VERT e_2^{{\cal L}_Y}N\VERT^ {w_{*, K}}_{u_*}$ and $\VERT e_1^{{\cal L}_Y}P\VERT^{w_{*, K}}_{u_*}$ are treated quite similarly to~\eqref{pertbound1} and~\eqref{pertbound2}:
\begin{align}
\VERT e_2^{{\cal L}_Y}N\VERT^ {w_{*, K}}_{u_*}\le \frac{Q
\left(\VERT P\VERT^{w_K}_u\right)^2
}{1-Q
\left\VERT P\right\VERT^{w_K}_u}\ ,\quad \VERT e_1^{{\cal L}_Y}P\VERT^{w_{*, K}}_{u_*}
\le \frac{Q
\left(\VERT P\VERT^{w_K}_u\right)^2
}{1-Q
\left\VERT P\right\VERT^{w_K}_u}
\end{align}
Moreover, here we also have the term $R_KP$, which is obviously bounded as
\begin{align}
\VERT R_KP\VERT^ {w_{*, K}}_{u_*}\le c\,K^{-\ell+\delta}\VERT P\VERT^ {w_{*, K}}_{u_*, \ell}\le c\,K^{-\ell+\delta}\VERT P\VERT^ {w_{K}}_{u, \ell}\, . \quad \square
\end{align}
\noindent
We are finally ready for the
\paragraph{Proof of {Theorem~\ref{Normal Form LemmaNEW}}}
Analogously to the proof of {\sc nft}, we proceed by iterated applications of the Generalised Step Lemma.
At the base step, we let
\begin{align}X=X_0:=N+P_0\, ,\quad w_0:=w_{0, K}:=\left(\rho,\tau, \frac{1}{c_0 K^{1+\delta}}\right)\, ,\quad u_0:=(r,\sigma)\end{align}
with $X_0=N+P_0\in {\cal C}^3_{u_0, \ell_*}$. We let
\begin{align}
Q_0:=3\,e^{s_1}{\rm diam}({\mathbb Y}_\sigma)\left\|\frac{1}{v}\right\|_{u_0}\end{align}
Conditions~\eqref{newcond1NEW}--\eqref{newsmallnessNEW} are implied by the assumptions~\eqref{delta1NEW}--\eqref{NEWnewsmallnessNEW}. We then conjugate $X_0$ to $X_1=N+P_1\in {\cal C}^3_{u_1, \ell_*}$, where
\begin{align}u_1=(r-4\rho, \sigma-4\tau e^{s_2})=:(r_1, \sigma_1)\, . \end{align}
Then we have
\begin{align}\VERT P_1\VERT_{u_1}^{w_0}\le
8 e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2+ c_0\,K^{-\ell+\delta}
\VERT P_0\VERT_{u_0, \ell}^{w_0}
\, . \end{align}
If $8 e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2\le
c_0\,K^{-\ell+\delta}
\VERT P_0\VERT_{u_0, \ell}^{w_0} $,
the proof finishes here. So, we assume the opposite inequality, which gives
\begin{align}\label{base stepNEW}\VERT P_1\VERT_{u_1}^{w_0}\le
16 e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2\le \frac{1}{2} \VERT P_0\VERT_{u_0}^{w_0}
\, . \end{align}
We assume, inductively, that, for some $1\le j\le {{p}}$,
we have
\begin{align}\label{inductionNEW}X_{j}=N+P_{j}\in {\cal C}^3_{u_j, \ell_*}\, ,\qquad \VERT P_{j}\VERT_{u_j}^{w_0}<2^{-(j-1)}
\VERT P_{1}\VERT _{u_{1}}^{w_0}
\end{align} where \begin{align}\label{ujNEW}u_j=(r_j, \sigma_j)\end{align} with
\begin{align}r_j:=r_1-4 (j-1)\frac{\rho}{ {{p}}}\, ,\quad \sigma_j:=\sigma_1-4 e^{s_2}(j-1)\frac{\tau}{ {{p}}}\, . \end{align} The case $j=1$ is trivially true because it is the identity $\VERT P_{1}\VERT _{u_{1}}^{w_0}=\VERT P_{1}\VERT _{u_{1}}^{w_0}$.
We aim to apply {Lemma}~\ref{simplifiedsteplemmaNEW} with $u=u_j$ as in~\eqref{ujNEW} and \begin{align}w=w_1:=\frac{w_0}{{{p}}}\, ,\qquad \forall\ 1\le j\le {{p}}\, . \end{align}
Conditions~\eqref{newcond1NEW} correspond to~\eqref{NEWnewcond2NEW}--\eqref{theta3NEW}, while~\eqref{u+positiveNEW} is implied by~\eqref{NEWu+positiveNEW}. We check condition~\eqref{newsmallnessNEW}.
By homogeneity,
\begin{align}\label{eqNEW}\VERT P_j\VERT_{u_j}^{w_1}={{p}}\VERT P_j\VERT_{u_j}^{w_0}\le {{p}}\VERT P_1\VERT_{u_1}^{w_0}\le 16{{p}} e^{s_2}Q_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2\end{align}
whence, using
\begin{align}
Q_j=3\,{\rm diam}({\mathbb Y}_{\sigma_j})\left\|\frac{1}{v}\right\|_{r_j, \sigma_j}\le Q_0\end{align}
we see that condition~\eqref{newsmallnessNEW} is met:
\begin{align}2 Q_j\VERT P_j\VERT_{u_j}^{w_1}\le 32\,{{p}} e^{s_2}Q^2_0 \left(\VERT P_0\VERT_{u_0}^{w_0}\right)^2<1\, . \end{align}
Then the Generalised Step Lemma can be applied and we get $X_{j+1}=N+P_{j+1}\in {\cal C}^3_{u_{j+1}, \ell_*}$, with \begin{align}\VERT P_{j+1}\VERT _{u_{j+1}}^{w_1}\le 8 e^{s_2} Q_j \left(\VERT P_j\VERT _{u_j}^{w_1}\right)^2\le 8 e^{s_2} Q_0 \left(\VERT P_j\VERT _{u_j}^{w_1}\right)^2\,.\end{align}
Applying homogeneity again to both sides of this inequality and combining it with~\eqref{inductionNEW},~\eqref{base stepNEW} and~\eqref{NEWnewsmallnessNEW}, we get
\begin{align}\label{iterateNEW}\VERT P_{j+1}\VERT _{u_{j+1}}^{w_{0}}&\le 8 {{p}} e^{s_2} Q_0 \left(\VERT P_j\VERT _{u_j}^{w_0}\right)^2
\le 8 {{p}} e^{s_2} Q_0 \VERT P_1\VERT _{u_1}^{w_0}\VERT P_j\VERT _{u_j}^{w_0}\nonumber\\
&\le
128\,{{p}}e^{2s_2} Q_0^2\left(\VERT P_0\VERT _{u_0}^{w_0}\right)^2
\VERT P_j\VERT _{u_j}^{w_0}
\le
\frac{1}2\VERT P_j\VERT _{u_j}^{w_0}\nonumber\\
&<2^{-j}\VERT P_1\VERT _{u_1}^{w_0}\, .
\end{align}
After ${{p}}$ iterations,
\begin{align}\VERT P_{{{p}}+1}\VERT _{u_{{{p}}+1}}^{w_{0}}<2^{-{{p}}}\VERT P_1\VERT _{u_1}^{w_0}<2^{-({{p}}+1)}\VERT P_0\VERT _{u_0}^{w_0}\end{align}
so we can take $X_\star=X_{{{p}}+1}$, $P_\star=P_{{{p}}+1}$, $u_\star=u_{{{p}}+1}$. $\qquad \square$
\section{Symplectic tools}\label{Tools}
In this section we describe various sets of canonical coordinates that are needed for our application. We remark that, during the proof of Theorem B, we shall not use any single one of these sets in full, but rather a ``mix'' of action--angle and regularising coordinates, described below.
\subsection{Starting coordinates}
We begin with the coordinates
\begin{align}\label{coord}
\left\{\begin{array}{l}\displaystyle {\rm C}=\|{\mathbf x}\times {\mathbf y}+{\mathbf x}'\times {\mathbf y}'\|\\\\
\displaystyle {\rm G}=\|{\mathbf x}\times {\mathbf y}\|\\\\
\displaystyle {\rm R}=\frac{\mathbf y'\cdot \mathbf x'}{\|\mathbf x'\|}\\\\
\displaystyle \Lambda= \sqrt{ a}
\end{array}\right.\qquad\qquad \left\{\begin{array}{l}\displaystyle \gamma =\alpha_{\mathbf k}(\mathbf i, \mathbf x')+\frac{\pi}{2}\\\\
\displaystyle {\rm g}=\alpha_{\mathbf k}({\mathbf x'},\mathbf P)+\pi\\\\
\displaystyle {\rm r}=\|\mathbf x'\|\\\\
\displaystyle \ell={\rm mean\ anomaly\ of\ {\mathbf x}\ in\ \mathbb E}
\end{array}\right.
\end{align}
where:
\begin{itemize}
\item[{\tiny\textbullet}] ${\mathbf i}=\left(
\begin{array}{lll}
1\\
0\\
0
\end{array}
\right)$, $ {\mathbf j}=\left(
\begin{array}{lll}
0\\
1\\
0
\end{array}
\right)$ is an orthonormal frame in ${\mathbb R}^2\times\{\mathbf 0\}$ and ${\mathbf k}={\mathbf i}\times {\mathbf j}$ (``$\times$'' denoting, as usual, the vector product);
\item[{\tiny\textbullet}] after
fixing a set of values of $({\mathbf y}, {\mathbf x})$ where the Kepler Hamiltonian~\eqref{Kep}
takes negative values, ${\mathbb E}$ denotes the elliptic orbit with initial values $({\mathbf y}_0, {\mathbf x}_0)$ in such a set;
\item[{\tiny\textbullet}]
$a$ is the semi--major axis of ${\mathbb E}$;
\item[{\tiny\textbullet}] ${\mathbf P}$, with $\|{\mathbf P}\|=1$, is the direction of the perihelion of ${\mathbb E}$, assuming ${\mathbb E}$ is not a circle;
\item[{\tiny\textbullet}] $\ell$ is the mean anomaly of ${\mathbf x}$ on ${\mathbb E}$, defined, mod $2\pi$, as the area of the elliptic sector spanned from ${\mathbf P}$ to ${\mathbf x}$, normalized to $2\pi$;
\item[{\tiny\textbullet}] $\alpha_{\mathbf w}({\mathbf u}, {\mathbf v})$ is the oriented angle from ${\mathbf u}$ to ${\mathbf v}$ relatively to the positive orientation established by ${\mathbf w}$, if ${\mathbf u}$, ${\mathbf v}$ and ${\mathbf w}\in {\mathbb R}^3\setminus\{\mathbf 0\}$, with ${\mathbf u}$, ${\mathbf v}\perp{\mathbf w}$.
\end{itemize}
The canonical\footnote{Namely, the change of coordinate~\eqref{coord} satisfies $\sum_{i=1}^2(d{\mathbf y}_i\wedge d{\mathbf x}_i+d{\mathbf y}_i'\wedge d{\mathbf x}_i')=dC\wedge d\gamma +d{\rm G}\wedge d{\rm g}+d{\rm R}\wedge d{\rm r}+d\Lambda\wedge d\ell$.} character of the coordinates~\eqref{coord} has been discussed, in a more general setting, in~\cite{pinzari19}. The shifts $\frac{\pi}{2}$ and $\pi$ in~\eqref{coord} serve only to be consistent with the spatial coordinates of~\cite{pinzari19}.
\subsection{Energy--time coordinates}\label{Energy--time coordinates}
We now describe the ``energy--time'' change of coordinates
\begin{align}\label{transf}\phi_{\rm et}:\qquad ({\cal R}, {\cal E}, {\rm r}, \tau)\to ({\rm R}, {\rm G}, {\rm r}, {\rm g})=({\cal R}+\rho({\cal E}, {\rm r}, \tau),\ \widetilde{\rm G}({\cal E}, {\rm r}, \tau),\ {\rm r},\ \widetilde{\rm g}({\cal E}, {\rm r}, \tau))\end{align}
which integrates the function ${\rm E}({\rm r}, {\rm G}, {\rm g})$ in~\eqref{E}, where ${\cal E}$ (``energy'') denotes the generic level--set of ${\rm E}$, while $\tau$ is its conjugate (``time'') coordinate. The domain of the coordinates~\eqref{transf} is
\begin{align}\label{range}{\cal R}\in {\mathbb R}\, ,\quad 0\le {\rm r}<2\, ,\quad -{\rm r}<{\cal E}<1+\frac{{\rm r}^2}{4}\, ,\quad \tau\in {\mathbb R}\, ,\quad {\cal E}\notin\{{\rm r}, 1\}\, . \end{align}
The extremal values of ${\cal E}$ are taken to be the minimum and the maximum of the function ${\rm E}$ for $0\le {\rm r}<2$. The values ${\rm r}$ and $1$ have been excluded because they correspond, in the $({\rm g}, {\rm G})$--plane, to the curves ${\cal S}_0({\rm r})$ and ${\cal S}_1({\rm r})$ in Figure~\ref{figure1}, where periodic motions do not exist.
\noindent
The functions $\widetilde{\rm G}({\cal E}, {\rm r}, \cdot)$, $\widetilde{\rm g}({\cal E}, {\rm r}, \cdot)$ and $\rho({\cal E}, {\rm r}, \cdot)$ appearing in~\eqref{transf} are, respectively, $2\tau_{\rm p}$--periodic, $2\tau_{\rm p}$--periodic and $2\tau_{\rm p}$--quasi--periodic, meaning that they satisfy
\begin{align}\label{full real}
{\cal P}_{er}:\quad \left\{\begin{array}{l}\displaystyle \widetilde{\rm G}({\cal E}, {\rm r}, \tau+2j\tau_{\rm p})=\widetilde{\rm G}({\cal E}, {\rm r}, \tau)\\\\ \displaystyle \widetilde{\rm g}({\cal E}, {\rm r}, \tau+2j\tau_{\rm p})=\widetilde{\rm g}({\cal E}, {\rm r}, \tau)\\\\
\displaystyle \rho({\cal E}, {\rm r}, \tau+2j\tau_{\rm p})=\rho({\cal E}, {\rm r}, \tau)+2j\rho({\cal E}, {\rm r}, \tau_{\rm p})
\end{array}\right.\qquad \forall\ \tau\in {\mathbb R}\, ,\ \forall\ j\in {\mathbb Z}
\end{align}
with $\tau_{\rm p}=\tau_{\rm p}({\cal E}, {\rm r})$ the period, defined below. Note that one can find a unique splitting
\begin{align}\label{split}\rho({\cal E}, {\rm r}, \tau)={\cal B}({\cal E}, {\rm r}) \tau+\widetilde\rho({\cal E}, {\rm r}, \tau)\end{align}
such that $\widetilde\rho({\cal E}, {\rm r}, \cdot)$ is $2\tau_{\rm p}$--periodic. It is obtained by taking
\begin{align}\label{B}{\cal B}({\cal E}, {\rm r})=\frac{\rho({\cal E}, {\rm r}, \tau_{\rm p}({\cal E}, {\rm r}))}{\tau_{\rm p}({\cal E}, {\rm r})}\, ,\quad \widetilde\rho({\cal E}, {\rm r}, \tau)=\rho({\cal E}, {\rm r}, \tau)-\frac{\rho({\cal E}, {\rm r}, \tau_{\rm p}({\cal E}, {\rm r}))}{\tau_{\rm p}({\cal E}, {\rm r})}\tau\,.\end{align}
\noindent
The transformation~\eqref{transf} turns out to satisfy also the following ``half--parity'' symmetry:
\begin{align}\label{full period}{\cal P}_{1/2}:\quad \left\{\begin{array}{l}\displaystyle \widetilde{\rm G}({\cal E}, {\rm r}, \tau)=\widetilde{\rm G}({\cal E}, {\rm r}, -\tau)\\\\
\displaystyle \widetilde{\rm g}({\cal E}, {\rm r}, \tau)=
2\pi-\widetilde{\rm g}({\cal E}, {\rm r}, -\tau)\\\\
\displaystyle \rho({\cal E}, {\rm r}, \tau)=-\rho({\cal E}, {\rm r}, -\tau)\end{array}\right.\qquad \forall\ -\tau_{\rm p}<\tau<\tau_{\rm p}\, . \end{align}
\noindent
In addition, when $-{\rm r}<{\cal E}<{\rm r}$, one has the following ``quarter--parity''
\begin{align}\label{half domain}{\cal P}_{1/4}:\quad \left\{\begin{array}{l}\displaystyle \widetilde{\rm G}({\cal E}, {\rm r}, \tau)=-\widetilde{\rm G}\left({\cal E}, {\rm r}, \tau_{\rm p}-\tau\right)\\\\
\displaystyle \widetilde{\rm g}({\cal E}, {\rm r}, \tau)=\widetilde{\rm g}\left({\cal E}, {\rm r}, \tau_{\rm p}-\tau\right)\\\\
\displaystyle \rho({\cal E}, {\rm r}, \tau)=\rho\left({\cal E}, {\rm r}, \tau_{\rm p}\right)-\rho\left({\cal E}, {\rm r}, \tau_{\rm p}-\tau\right)\end{array}\right.\quad \forall\ 0\le \tau\le \tau_{\rm p}\, . \end{align}
\noindent
The change~\eqref{transf} will be constructed using, as generating function, a solution of the Hamilton--Jacobi equation
\begin{align}\label{HJ}{\rm E}({\rm r}, {\rm G}, \partial_{\rm G} S_{\rm et})={\rm G}^2+{\rm r}\sqrt{1-{\rm G}^2}\cos\left(\partial_{\rm G} S_{\rm et}\right)={\cal E}\,.\end{align}
We choose the solution
\begin{align}S_{\rm et}^+({\cal R}, {\cal E},{\rm r}, {\rm G})=\left\{\begin{array}{l}\displaystyle \pi\sqrt{\alpha_+({\cal E}, {\rm r})}-\int_{{\rm G}}^{\sqrt{\alpha_+({\cal E}, {\rm r})}} \cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma +{\cal R}{\rm r}\quad -{\rm r}\le{\cal E}<1\\\\
\displaystyle \pi-\int_{{\rm G}}^{\sqrt{\alpha_+({\cal E}, {\rm r})}} \cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma +{\cal R}{\rm r}\quad\quad 1\le {\cal E}\le 1+\frac{{\rm r}^2}{4}
\displaystyle
\end{array}\right.
\end{align}
where we denote as
\begin{align}\label{alphapm}
\alpha_\pm({\cal E}, {\rm r})={\cal E}-\frac{{\rm r}^2}{2}\pm{\rm r}\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}
\end{align}
the real roots
of
\begin{align}\label{equation}
x^2-2\left({\cal E}-\frac{{\rm r}^2}{2}\right)x+{\cal E}^2-{\rm r}^2=0\,.\end{align}
Note that the equation in~\eqref{equation} always has a positive real root for all ${\rm r}$, ${\cal E}$ as in~\eqref{range}, so $\alpha_+({\cal E}, {\rm r})$ is positive.
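As an informal numerical sanity check (not part of the formal argument; the sample values of $({\cal E}, {\rm r})$ are chosen arbitrarily inside~\eqref{range}), one can verify in a few lines of Python that the expressions~\eqref{alphapm} do solve~\eqref{equation} and that $\alpha_+$ is positive:

```python
import math

def alpha_pm(E, r):
    """Roots alpha_+/- of x^2 - 2(E - r^2/2) x + E^2 - r^2 = 0, cf. (alphapm)."""
    d = r * math.sqrt(1.0 + r * r / 4.0 - E)
    b = E - r * r / 2.0
    return b + d, b - d            # (alpha_+, alpha_-)

def poly(x, E, r):
    """The quadratic of (equation), evaluated at x."""
    return x * x - 2.0 * (E - r * r / 2.0) * x + E * E - r * r

# sample values in the admissible range (range): 0 <= r < 2, -r < E < 1 + r^2/4
for E, r in [(0.3, 0.8), (-0.5, 1.2), (1.5, 1.9)]:
    ap, am = alpha_pm(E, r)
    assert abs(poly(ap, E, r)) < 1e-12 and abs(poly(am, E, r)) < 1e-12
    assert ap > 0.0                # alpha_+ is always positive on this range
```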
$S_{\rm et}^+$ generates the following equations
\begin{align}\label{ftau}\left\{\begin{array}{l}
\displaystyle {\rm g}=-\cos^{-1}\frac{{\cal E}-{\rm G}^2}{{\rm r}\sqrt{1-{\rm G}^2}}\\\\
\displaystyle \tau=+\int_{\widetilde{\rm G}({\cal E}, {\rm r}, \tau)}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}\\\\
\displaystyle {\rm R}={\cal R}-\frac{1}{{\rm r}}\int_{\widetilde{\rm G}({\cal E}, {\rm r}, \tau)}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{({\cal E}-\Gamma ^2)d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}=:{\cal R}+\rho({\cal E}, {\rm r}, \tau)\\\\
\displaystyle {\rm r}={\rm r}\end{array}\right.\end{align}
The equations for ${\rm g}$ and ${\rm r}$ are immediate. We check the equation for $\tau$. Letting, for short, $\sigma({\cal E}, {\rm r}):=\sqrt{\alpha_+({\cal E}, {\rm r})}$, we have
\begin{align}\label{Ederivative}
\tau&=\partial_{\cal E} S^+_{\rm et}({\cal R}, {\cal E}, {\rm r}, {\rm G})\nonumber\\
&=\left\{
\begin{array}{llll}
\displaystyle \pi\partial_{\cal E}\sigma({\cal E}, {\rm r})-{\partial_{\cal E}\sigma({\cal E}, {\rm r})}{\rm g}_+({\cal E}, {\rm r}) -\int_{\rm G}^{{\sigma({\cal E}, {\rm r})} }\partial_{\cal E}\cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma \ &-{\rm r}\le {\cal E}< 1\\\\
\displaystyle -{\partial_{\cal E}\sigma({\cal E}, {\rm r})}{\rm g}_+({\cal E}, {\rm r})-\int_{\rm G}^{{\sigma({\cal E}, {\rm r})} }\partial_{\cal E}\cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma &1\le {\cal E}\le 1+\frac{{\rm r}^2}{4}
\end{array}
\right.\nonumber\\
&=-\int_{\rm G}^{{\sigma({\cal E}, {\rm r})} }\partial_{\cal E}\cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma \nonumber\\
&=
\int_{\widetilde{\rm G}({\cal E}, {\rm r}, \tau)}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}
\end{align}
having let ${\rm g}_+({\cal E}, {\rm r}):=\cos^{-1}\frac{{\cal E}-\sigma({\cal E}, {\rm r})^2}{{\rm r}\sqrt{1-\sigma({\cal E}, {\rm r})^2}}$ and used, by~\eqref{alphapm},
\begin{align}\label{gplus}
{\rm g}_+({\cal E}, {\rm r})&=\cos^{-1}{\, \rm sign\, }\left(\frac{{\rm r}}{2}-\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}\right)=\left\{\begin{array}{l}\displaystyle \pi\quad -{\rm r}\le {\cal E}< 1\\\\
\displaystyle 0\quad 1\le {\cal E}\le 1+\frac{{\rm r}^2}{4}\end{array}\right.
\end{align}
Observe that $({\rm g}_+, \sigma)$ are the coordinates of the point where ${\rm E}$ attains its maximum on each level set (Figure~\ref{figure1}).
The equation for ${\rm R}$ is analogous.
\noindent
Equations~\eqref{ftau} define the segment of the transformation~\eqref{transf} with $0\le \tau\le \tau_{\rm p}$, where
\begin{align}\label{T}\tau_{\rm p}({\cal E}, {\rm r}):=\int_{\beta({\cal E}, {\rm r})}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}\end{align}
is the half--period,
with
\begin{align}\label{beta}\beta({\cal E}, {\rm r})=\left\{\begin{array}{l}-\sqrt{\alpha_+({\cal E}, {\rm r})}\quad {\rm if}\quad \alpha_-({\cal E}, {\rm r})<0\\\\
\phantom{-}\sqrt{\alpha_-({\cal E}, {\rm r})}\quad {\rm if}\quad \alpha_-({\cal E}, {\rm r})>0\, .
\end{array}\right.\end{align}
The transformation is prolonged to $-\tau_{\rm p}<\tau<0$ choosing the solution \begin{align}S_{\rm et}^-:=-2\pi{\rm G}-S_{\rm et}^+\end{align} of~\eqref{HJ}. It can be checked that this choice provides the symmetry relation described in~\eqref{full period}.
Considering next the functions $S_k^\pm=S_{\rm et}^\pm+2 k \Sigma({\cal E}, {\rm r})$, where $\Sigma$ solves\footnote{The existence of the function $\Sigma({\cal E}, {\rm r})$ follows from the arguments of the next section: compare the formula in~\eqref{A1A2}.}
\begin{align}\partial_{\cal E}\Sigma=
\tau_{\rm p}({\cal E}, {\rm r})\, ,\quad \partial_{\rm r}\Sigma=
\rho({\cal E}, {\rm r}, \tau_{\rm p}({\cal E}, {\rm r}))\end{align}
one obtains the extension of the transformation to $\tau\in {\mathbb R}$ verifying~\eqref{full real}.
\noindent
Observe that the quarter--parity symmetry~\eqref{half domain}, holding in the case $-{\rm r}<{\cal E}<{\rm r}$, is an immediate consequence of the definitions in~\eqref{ftau}.
\noindent
The coordinates $({\cal R}, {\cal E}, {\rm r}, \tau)$ are referred to as {\it energy--time coordinates}.
\noindent
The regularity of the functions $\widetilde{\rm G}({\cal E}, {\rm r}, \tau)$, $\widetilde\rho({\cal E}, {\rm r}, \tau)$, ${\cal B}({\cal E}, {\rm r})$ and $\tau_{\rm p}({\cal E}, {\rm r})$, which are relevant for the paper,
are studied in detail in Section~\ref{Regularity of the energy--time coordinates}. Their holomorphy is not discussed.
\subsection{Action--angle coordinates}
We look at the transformation
\begin{align}\phi_{\rm aa}:\qquad ({\cal R}_*, A_*, {\rm r}_*, \varphi_*)\to ({\cal R}, {\cal E}, {\rm r}, \tau)\end{align}
defined by equations
\begin{align}\label{actionangle}\left\{\begin{array}{l}\displaystyle A_*={\cal A}({\cal E}, {\rm r})\\\\
\displaystyle \varphi_*=\pi\frac{\tau}{\tau_{\rm p}({\cal E}, {\rm r})}\\\\
\displaystyle {\rm r}_*={\rm r}\\\\
\displaystyle {\cal R}_*={\cal R}+{\cal B}({\cal E}, {\rm r})\tau
\end{array}\right.
\end{align}
with ${\cal B}({\cal E}, {\rm r})$ as in~\eqref{B}, $\tau_{\rm p}({\cal E}, {\rm r})$ as in~\eqref{T} and ${\cal A}({\cal E}, {\rm r})$ the ``action function'', defined as
\begin{align}{\cal A}({\cal E}, {\rm r}):=\left\{\begin{array}{l}\displaystyle \sqrt{\alpha_+({\cal E}, {\rm r})}-\frac{1}{\pi}\int_{\beta({\cal E}, {\rm r})}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma \qquad -{\rm r}\le {\cal E}\le1\\\\
\displaystyle 1-\frac{1}{\pi}\int_{\beta({\cal E}, {\rm r})}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\cos^{-1}\frac{{\cal E}-\Gamma ^2}{{\rm r}\sqrt{1-\Gamma ^2}}d\Gamma \qquad 1< {\cal E}\le1+\frac{{\rm r}^2}{4}
\end{array}\right.\end{align}
with $\alpha_+({\cal E}, {\rm r})$ and $\beta({\cal E}, {\rm r})$ being defined in~\eqref{alphapm},~\eqref{beta}. \\
Geometrically, ${\cal A}({\cal E}, {\rm r})$ represents the area of the region encircled by the level curves of ${\rm E}$ in Figure~\ref{figure1} in the former case, and the area of its complement in the latter, both divided by $2\pi$.
\\
The canonical character of the transformation~\eqref{actionangle} is recognised by looking at the generating function
\begin{align}\label{S}S_{\rm aa}({\cal R}, {\cal E}, {\rm r}_*, \varphi_*)= \varphi_*{\cal A}({\cal E}, {\rm r}_*)+{\cal R}{\rm r}_*\end{align}
and using the following relations (compare the formulae in~\eqref{ftau} and~\eqref{T})
\begin{align}\label{A1A2}{\cal A}_{{\rm r}}({\cal E}, {\rm r})&=-\frac{1}{\pi{\rm r}}\int_{\beta({\cal E}, {\rm r})}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{({\cal E}-\Gamma ^2)d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}\nonumber\\
&=\frac{1}{\pi}\rho({\cal E}, {\rm r}, \tau_{\rm p})\nonumber\\
{\cal A}_{{\cal E}}({\cal E}, {\rm r})&=\frac{1}{\pi}\int_{\beta({\cal E}, {\rm r})}^{\sqrt{\alpha_+({\cal E}, {\rm r})}}\frac{d\Gamma }{\sqrt{(\Gamma ^2-\alpha_-({\cal E}, {\rm r}))(\alpha_+({\cal E}, {\rm r})-\Gamma ^2)}}\nonumber\\
&=\frac{1}{\pi}\tau_{\rm p}({\cal E}, {\rm r})
\end{align}
which allow us to rewrite~\eqref{actionangle}
as the transformation generated by~\eqref{S}:
\begin{align}\label{generating equations}\left\{\begin{array}{l}\displaystyle A_*={\cal A}({\cal E}, {\rm r})\\\\
\displaystyle \varphi_*=\frac{\tau}{{\cal A}_{{\cal E}}({\cal E}, {\rm r})}\\\\
\displaystyle {\rm r}_*={\rm r}\\\\
\displaystyle {\cal R}_*={\cal R}+\frac{{\cal A}_{{\rm r}}({\cal E}, {\rm r})}{{\cal A}_{{\cal E}}({\cal E}, {\rm r})}\tau\,.
\end{array}\right.
\end{align}
The coordinates $({\cal R}_*, A_*, {\rm r}_*, \varphi_*)$ are referred to as {\it action--angle coordinates}.
\begin{remark}\rm
We conclude this section by observing a non--negligible advantage of the {\it action--angle coordinates}
over the {\it energy--time} ones -- besides the obvious one of dealing with a constant period.
It is the law that relates ${\rm R}$ to ${\cal R}_*$, which is (see~\eqref{transf},~\eqref{split} and~\eqref{actionangle})
\begin{align}\label{good relation}{\rm R}={\cal R}_*+\rho_*(A_*, {\rm r}_*, \varphi_*)\, ,\quad {\rm with}\quad \rho_*(A_*, {\rm r}_*, \varphi_*):=\widetilde \rho\circ \phi_{\rm aa}(A_*, {\rm r}_*, \varphi_*)\end{align} where $\widetilde\rho$ is as in~\eqref{split}. Here $\rho_*(A_*, {\rm r}_*, \varphi_*)$ is a {\it periodic function} because so is the function $\widetilde\rho$. This benefit is evident comparing with the corresponding formula with {\it energy--time} coordinates:
\begin{align}{\rm R}={\cal R}+{\cal B}({\cal E}, {\rm r})\tau+\widetilde\rho({\cal E}, {\rm r}, \tau)\end{align}
which would include the inconvenient linear term ${\cal B}({\cal E}, {\rm r})\tau$. Incidentally, such a term would unnecessarily complicate the computations we present in the next Section~\ref{Proof of Theorem}.
\end{remark}
\subsection{Regularising coordinates}\label{Regularising coordinates}
In this section we define the {\it regularising coordinates}. First of all we rewrite ${\cal S}_0({\rm r})$ in~\eqref{S0} in terms of $(A_*, \varphi_*)$:
\begin{align}{\cal S}_0({\rm r}_*)=\Big\{(A_*, \varphi_*):\quad A_*={\cal A}_{\rm s}({\rm r}_*)\, ,\ \varphi_*\in {\mathbb R}\Big\}\qquad 0<{\rm r}_*<2\end{align}
with ${\cal A}_{\rm s}({\rm r}_*)$ the limiting value of ${\cal A}({\cal E}, {\rm r}_*)$ as ${\cal E}\to{\rm r}_*$:
\begin{align}{\cal A}_{\rm s}({\rm r}_*)=\left\{\begin{array}{lll}\displaystyle \sqrt{{\rm r}_*(2-{\rm r}_*)}-\frac{1}{\pi}\int_{0}^{\sqrt{{\rm r}_*(2-{\rm r}_*)}}\cos^{-1}\frac{{\rm r}_*-\Gamma ^2}{{\rm r}_*\sqrt{1-\Gamma ^2}}d\Gamma \quad &0<{\rm r}_*<1\\\\
\displaystyle 1-\frac{1}{\pi}\int_{0}^{\sqrt{{\rm r}_*(2-{\rm r}_*)}}\cos^{-1}\frac{{\rm r}_*-\Gamma ^2}{{\rm r}_*\sqrt{1-\Gamma ^2}}d\Gamma &1<{\rm r}_*<2
\end{array}
\right.\end{align}
\noindent
We observe that
the function ${\cal A}_{\rm s}({\rm r}_*)$ is continuous in $[0, 2]$ (in particular, ${\cal A}_{\rm s}(1^-)={\cal A}_{\rm s}(1^+)$), with
\begin{align}{\cal A}_{\rm s}(0)=0\, ,\quad {\cal A}_{\rm s}(2)=1\end{align}
and increases smoothly between those two values, as follows from the analysis of its derivative. Indeed, letting, for short, $\sigma_0({\rm r}_*):=\sqrt{{\rm r}_*(2-{\rm r}_*)}$ and proceeding analogously to~\eqref{Ederivative}, we get
\begin{align}\label{As}
{\cal A}_{\rm s}'({\rm r}_*)
&=-\frac{1}{\pi}\int_{0}^{{\sigma_0({\rm r}_*)} }\partial_{{\rm r}_*}\cos^{-1}\frac{{\rm r}_*-\Gamma ^2}{{\rm r}_*\sqrt{1-\Gamma ^2}}d\Gamma \nonumber\\
&=\frac{1}{\pi{\rm r}_*}\int_{0}^{{\sigma_0({\rm r}_*)} }\frac{\Gamma d\Gamma }{\sqrt{\sigma_0({\rm r}_*)^2-\Gamma ^2}}\nonumber\\
&=\frac{1}{\pi}\sqrt{\frac{2-{\rm r}_*}{{\rm r}_*}}\qquad \forall\ 0< {\rm r}_*< 2
\end{align}
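The closed form just obtained for ${\cal A}_{\rm s}'$ can also be checked numerically against a central finite difference of the defining integral (branch $0<{\rm r}_*<1$). The following Python sketch is illustrative only; the quadrature rule, grid and step sizes are arbitrary choices, not part of the argument:

```python
import math

def A_s(r, n=200_000):
    """Separatrix action A_s(r), branch 0 < r < 1, by the midpoint rule."""
    s0 = math.sqrt(r * (2.0 - r))          # sigma_0(r)
    h = s0 / n
    acc = 0.0
    for i in range(n):
        g = (i + 0.5) * h
        u = (r - g * g) / (r * math.sqrt(1.0 - g * g))
        acc += h * math.acos(max(-1.0, min(1.0, u)))   # clamp against rounding
    return s0 - acc / math.pi

r, h = 0.5, 1e-4
fd = (A_s(r + h) - A_s(r - h)) / (2.0 * h)   # central difference
exact = math.sqrt((2.0 - r) / r) / math.pi   # the closed form (As)
assert abs(fd - exact) < 1e-3
```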
We denote as $A_*\to{\rm r}_{\rm s}(A_*)$ the inverse function \begin{align}\label{inverseA}{\rm r}_{\rm s}:={\cal A}_{\rm s}^{-1}\end{align} and
we define two different changes of coordinates
\begin{align}\phi^k_{\rm rg}:\quad (Y_k, A_k, y_k, \varphi_k)\to ({\cal R}_*, A_*, {\rm r}_*, \varphi_*)\qquad k=\pm 1\end{align}
via the formulae
\begin{align}\label{reg}\left\{\begin{array}{l}\displaystyle {\cal R}_*=Y_k e^{ky_k }\\\\
\displaystyle A_*=A_k\\\\
\displaystyle {\rm r}_*=-ke^{-ky_k }+{\rm r}_{\rm s}(A_k)\\\\
\displaystyle \varphi_*=\varphi_k+Y_k e^{ky_k }{\rm r}_{\rm s}'(A_k)\end{array}\right.\end{align}
The transformations~\eqref{reg} are canonical, being generated by
\begin{align}
S^k_{\rm rg}(Y_k, A_k, {\rm r}_*, \varphi_*):=-\frac{Y_k}{k}\log\left|\frac{{\rm r}_{\rm s}(A_k)-{\rm r}_*}{k}\right|+A_k\varphi_*\, .
\end{align}
The coordinates $(Y_k, A_k, y_k, \varphi_k)$ with $k=\pm 1$ are called {\it regularising coordinates}.
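The canonicity of~\eqref{reg} holds for an arbitrary smooth ${\rm r}_{\rm s}$, since the map is generated by $S^k_{\rm rg}$. As an informal illustration (not part of the formal argument), the Python sketch below checks the symplectic condition $J^T\Omega J=\Omega$ on a finite--difference Jacobian of~\eqref{reg}, using a hypothetical smooth stand--in for ${\rm r}_{\rm s}$ rather than the actual inverse~\eqref{inverseA}:

```python
import math

def phi_rg(Y, A, y, phi, k, rs, rsp):
    """The map (reg) for a given smooth r_s (rs) with derivative rsp.
    rs here is an arbitrary stand-in, not the actual inverse (inverseA)."""
    R = Y * math.exp(k * y)
    return (R, A, -k * math.exp(-k * y) + rs(A), phi + R * rsp(A))

def jacobian(f, x, h=1e-6):
    """4x4 Jacobian of f at x by central differences."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return J

# hypothetical stand-in r_s and its derivative (for illustration only)
rs = lambda A: 1.0 + 0.3 * A * A
rsp = lambda A: 0.6 * A

# variables ordered momenta-first, (Y, A, y, phi) -> (R_*, A_*, r_*, phi_*),
# so the standard symplectic matrix is Omega = [[0, I], [-I, 0]]
Omega = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

for k in (1, -1):
    J = jacobian(lambda x: phi_rg(x[0], x[1], x[2], x[3], k, rs, rsp),
                 [0.7, 0.4, 0.2, 1.1])
    # check J^T Omega J = Omega, i.e. dR_*^dr_* + dA_*^dphi_* = dY^dy + dA^dphi
    for i in range(4):
        for j in range(4):
            s = sum(J[a][i] * Omega[a][b] * J[b][j]
                    for a in range(4) for b in range(4))
            assert abs(s - Omega[i][j]) < 1e-6
```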
\section{A deeper insight into energy--time coordinates}
\label{Regularity of the energy--time coordinates}
In this section we study the functions $\widetilde{\rm G}({\cal E}, {\rm r}, \tau)$, $\widetilde\rho({\cal E}, {\rm r}, \tau)$, ${\cal B}({\cal E}, {\rm r})$ and $\tau_{\rm p}({\cal E}, {\rm r})$, described in Section~\ref{Energy--time coordinates}. We prove that $\widetilde{\rm G}({\cal E}, {\rm r}, \tau)$, $\widetilde\rho({\cal E}, {\rm r}, \tau)$ are $C^\infty$ provided that $({\cal E}, {\rm r})$ vary in a compact subset of
\eqref{range} and we study the behaviour of ${\cal B}({\cal E}, {\rm r})$ and $\tau_{\rm p}({\cal E}, {\rm r})$ close to ${\cal S}_0({\rm r})$.
\noindent
It turns out to be useful to perform this study via suitable auxiliary functions $\breve{\rm G}({\kappa}, \theta)$, $\breve\rho({\kappa}, \theta)$, ${\cal A}({\kappa})$ and $T_0({\kappa})$, which we now define.
We rewrite
\begin{align}\label{changeofcoord}\widetilde{\rm G}({\cal E}, {\rm r}, \tau)=\sigma({\cal E}, {\rm r})\breve{\rm G}\big({\kappa}({\cal E}, {\rm r}), \theta({\cal E}, {\rm r}, \tau)\big)\ ,\quad \tau_{\rm p}({\cal E}, {\rm r})=\frac{T_{\rm p}\big({\kappa}({\cal E}, {\rm r})\big)}{\sigma({\cal E}, {\rm r})}\end{align}
and
\begin{align}\label{changeofcoord1}
\rho({\cal E}, {\rm r}, \tau)=-\frac{{\cal E}\tau}{{\rm r}}+\frac{\sigma({\cal E}, {\rm r})}{{\rm r}}\widehat\rho({\kappa}({\cal E}, {\rm r}), \theta({\cal E}, {\rm r}, \tau))\qquad 0\le \theta\le T_{\rm p}({\kappa})
\end{align}
where (changing, in the integrals in~\eqref{ftau}, the integration variable $\Gamma ={\sigma}\xi$)
$\breve{\rm G}({\kappa}, \theta)$ is the unique solution of
\begin{align}\label{f(k, theta)}\int^{1}_{\breve{\rm G}({\kappa}, \theta)}\frac{d\xi}{{\sqrt{(1-\xi^2)(\xi^2-{\kappa})}}}=\theta\ ,\qquad 0\le \theta\le T_{\rm p}({\kappa})\end{align}
\begin{align}\label{hatrho}
&\widehat\rho({\kappa}, \theta)=\int^{1}_{\breve{\rm G}({\kappa}, \theta)}\frac{\xi^2 d\xi}{{\sqrt{(1-\xi^2)(\xi^2-{\kappa})}}}\qquad 0\le \theta\le T_{\rm p}({\kappa})\end{align}
and
\begin{align}\label{tmax} T_{\rm p}({\kappa})=\left\{\begin{array}{l}\displaystyle T_{0}({\kappa})\quad 0<{\kappa}<1\\\\
\displaystyle 2{T_{0}({\kappa})}\quad {\kappa}<0
\end{array}\right.\end{align}
with
\begin{align}\label{f0T0}T_0({\kappa}):=\int_{{\rm G}_0({\kappa})}^1\frac{d\xi}{{\sqrt{(1-\xi^2)(\xi^2-{\kappa})}}}\ ,\quad{\rm where}\quad {\rm G}_0({\kappa}):=\left\{\begin{array}{l}\displaystyle {\sqrt{\kappa}}\quad 0<{\kappa}<1\\\\
\displaystyle 0\quad {\kappa}<0
\end{array}\right.\end{align}
The function $\widehat\rho({\kappa}, \theta)$ in~\eqref{hatrho} is further split as
\begin{align}\label{hatrhosplit}\widehat\rho({\kappa}, \theta)={\cal A}({\kappa})\theta+\breve\rho({\kappa}, \theta)\end{align}
where
\begin{align}\label{A(k)NEW}{\cal A}({\kappa})=\frac{\widehat\rho({\kappa}, T_{\rm p}({\kappa}))}{T_{\rm p}({\kappa})}\ ,\quad \breve\rho({\kappa}, \theta)=\widehat\rho(\kappa, \theta)-{\cal A}(\kappa)\theta\,.\end{align}
Finally, $\sigma({\cal E}, {\rm r})$, $\kappa({\cal E}, {\rm r})$ and $\theta({\cal E}, {\rm r}, \tau)$ are given by
\begin{align}\label{sigmakappa}
&\sigma({\cal E}, {\rm r}):=\sqrt{\alpha_+({\cal E}, {\rm r})}=\sqrt{{\cal E}-\frac{{\rm r}^2}{2}+{\rm r}\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}}\nonumber\\
& \kappa({\cal E}, {\rm r}):=\frac{\alpha_-({\cal E}, {\rm r})}{\alpha_+({\cal E}, {\rm r})}=\frac{{\cal E}^2-{\rm r}^2}{\left({\cal E}-\frac{{\rm r}^2}{2}+{\rm r}\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}\right)^2}\nonumber\\
&\theta({\cal E}, {\rm r}, \tau):=\tau\sqrt{{\cal E}-\frac{{\rm r}^2}{2}+{\rm r}\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}}\,.
\end{align}
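As a quick numerical sanity check of the rescaling~\eqref{changeofcoord} (an illustration, not part of the proofs; sample values arbitrary), one can evaluate $\tau_{\rm p}({\cal E}, {\rm r})$ and $T_{\rm p}(\kappa)/\sigma$ by the same elliptic--type substitutions and compare:

```python
import math

def quad(ap, am, n=100_000):
    """int_beta^{sqrt(ap)} dG / sqrt((G^2 - am)(ap - G^2)), evaluated after
    G^2 = am cos^2 + ap sin^2 (am > 0) or G = sqrt(ap) sin (am < 0)."""
    h = math.pi / 2.0 / n
    s = 0.0
    for i in range(n):
        sn2 = math.sin((i + 0.5) * h) ** 2
        if am > 0.0:
            s += h / math.sqrt(am + (ap - am) * sn2)
        else:
            s += 2.0 * h / math.sqrt(ap * sn2 - am)
    return s

def alpha_pm(E, r):
    d = r * math.sqrt(1.0 + r * r / 4.0 - E)
    return E - r * r / 2.0 + d, E - r * r / 2.0 - d

for E, r in [(0.2, 0.8), (0.9, 0.5)]:
    ap, am = alpha_pm(E, r)
    sigma, kappa = math.sqrt(ap), am / ap           # cf. (sigmakappa)
    # tau_p(E, r) computed directly vs. T_p(kappa)/sigma, cf. (changeofcoord)
    assert abs(quad(ap, am) - quad(1.0, kappa) / sigma) < 1e-9
```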
The periodicity of $\breve\rho(\kappa, \cdot)$ (see equation~\eqref{extendhatrho} below), the uniqueness of the splitting~\eqref{split} and the formulae in~\eqref{changeofcoord1} and~\eqref{hatrhosplit} imply that ${\cal A}(\kappa)$ and $\breve\rho(\kappa, \theta)$ are related to ${\cal B}({\cal E}, {\rm r})$ and $\widetilde\rho({\cal E}, {\rm r}, \tau)$ in~\eqref{split} via
\begin{align}\label{tilderhoB}{\cal B}({\cal E}, {\rm r})=-\frac{{\cal E}}{{\rm r}}+\frac{\sigma({\cal E}, {\rm r})^2}{{\rm r}}{\cal A}(\kappa)\ ,\quad \widetilde\rho({\cal E}, {\rm r}, \tau)=\frac{\sigma({\cal E}, {\rm r})}{{\rm r}}\breve\rho(\kappa({\cal E}, {\rm r}), \theta({\cal E}, {\rm r}, \tau))\,.\end{align}
\vskip.2in
\noindent
In view of relations~\eqref{changeofcoord},~\eqref{tmax} and~\eqref{tilderhoB}, we focus on the functions $\breve{\rm G}(\kappa, \theta)$, $\breve\rho(\kappa, \theta)$, ${\cal A}(\kappa)$ and $T_0(\kappa)$.
The proofs of the following statements are postponed to the end of the section.
\noindent
Let us denote
$\breve{\rm G}_{ij}(\kappa, \theta):=\partial^{i+j}_{\kappa^i\theta^j}\breve{\rm G}(\kappa, \theta)$, $\breve\rho_{ij}(\kappa, \theta):=\partial^{i+j}_{\kappa^i\theta^j}\breve\rho(\kappa, \theta)$.
\begin{proposition}\label{Gderiv}
Let $0\ne\kappa<1$ be fixed. The functions $\breve{\rm G}_{ij}(\kappa, \cdot)$ and $\breve\rho_{ij}(\kappa, \cdot)$
are continuous for all $\theta\in {\mathbb R}$.
\end{proposition}
This immediately implies
\begin{corollary}
Let ${\mathbb K}\subset{\mathbb R}$ be a compact set, with $0$, $1\notin{\mathbb K}$. Then $\breve{\rm G}$ and $\breve\rho$ are $C^\infty({\mathbb K}\times{\mathbb T})$.
\end{corollary}
\noindent
Concerning $T_0(\kappa)$, we have
\begin{proposition}\label{real period}
Let $0\ne \kappa<1$, and let $T_0(\kappa)$ be as in~\eqref{f0T0}.
Then one can find three real numbers $C^*$, ${\cal R}^*$, ${\cal S}^*$ and two functions ${\cal R}(\kappa)$, ${\cal S}(\kappa)$ verifying
\begin{align}{\cal R}(0)=1={\cal S}(0)\, ,\quad 0\le {\cal R}(\kappa)\le {\cal R}^*\, ,\quad 0\le {\cal S}(\kappa)\le {\cal S}^*\qquad \forall\ \kappa\in (-1, 1)\end{align}
such that
\begin{align}\label{T0preal}T_0'(\kappa)={-\frac{{\cal R}(\kappa)}{2\kappa}}\, ,\quad T_0''(\kappa)={\frac{{\cal S}(\kappa)}{4\kappa^2}}\, , \qquad \forall\ 0\ne \kappa<1\end{align}
In particular,
\begin{align}\label{T0}|T_0(\kappa)|\le
\frac{{\cal R}^*}{2}\Big|\log|\kappa|\Big|+C^*\,,\quad|T'_0(\kappa)|\le
\frac{{\cal R}^*}{2}\Big|\kappa\Big|^{-1}\,,\quad|T''_0(\kappa)|\le
\frac{{\cal S}^*}{4}\Big|\kappa\Big|^{-2}\, . \end{align}
\end{proposition}
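The logarithmic growth of $T_0$ stated in~\eqref{T0} can be observed numerically: since $T_0'(\kappa)=-{\cal R}(\kappa)/(2\kappa)$ with ${\cal R}(0)=1$, one expects $T_0(\kappa/100)-T_0(\kappa)\approx\frac{1}{2}\log 100$ for small $|\kappa|$. The Python sketch below (an illustration only, restricted to the branch $\kappa<0$, where $T_0(\kappa)=\int_0^{\pi/2}d\varphi/\sqrt{\sin^2\varphi-\kappa}$ after the substitution $\xi=\sin\varphi$) checks this:

```python
import math

def T0(kappa, n=400_000):
    """T_0 of (f0T0) for kappa < 0, after xi = sin(phi)."""
    h = math.pi / 2.0 / n
    return sum(h / math.sqrt(math.sin((i + 0.5) * h) ** 2 - kappa)
               for i in range(n))

# T_0' = -R(kappa)/(2 kappa), R(0) = 1  =>  slope 1/2 per unit of log|kappa|
d = T0(-1e-6) - T0(-1e-4)
assert abs(d - 0.5 * math.log(100.0)) < 1e-2
```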
\noindent
Finally, as for ${\cal A}(\kappa)$, we have
\begin{proposition}\label{calA} Let $0\ne \kappa<1$, and let ${\cal A}(\kappa)$ be as in
\eqref{A(k)NEW}.
Then one can find $C^*>0$
such that
\begin{align}|{\cal A}(\kappa)|\le
C^*\Big|\log|\kappa|\Big|^{-1}\,,\quad|{\cal A}'(\kappa)|\le
C^*\Big|\kappa\Big|^{-1}\,,\quad|{\cal A}''(\kappa)|\le
C^*\Big|\kappa\Big|^{-2}\, . \end{align}
\end{proposition}
\paragraph{Proofs of Propositions~\ref{Gderiv},~\ref{real period} and~\ref{calA}}
Relations
~\eqref{full real},~\eqref{full period} and~\eqref{half domain}
provide
\begin{align}\label{fsym}
& \left\{\begin{array}{llll}
\displaystyle \breve{\rm G}(\kappa, \theta+2j T_{\rm p})=\breve{\rm G}(\kappa, \theta)\qquad &\forall\ \theta\in {\mathbb R}\, ,\ j\in {\mathbb Z}&\forall\ 0\ne \kappa<1\\\\
\displaystyle \breve{\rm G}(\kappa, -\theta)=\breve{\rm G}(\kappa, \theta)&\forall\ 0\le \theta\le T_{\rm p}(\kappa)&\forall\ 0\ne \kappa<1\\\\
\displaystyle \breve{\rm G}(\kappa, T_{\rm p}-\theta)=-\breve{\rm G}(\kappa, \theta)&\forall\ 0\le \theta\le T_0(\kappa)\quad &\forall\ \kappa<0.
\end{array}
\right.
\\\nonumber\\\nonumber\\\label{extendhatrho}
&\left\{\begin{array}{llll}
\displaystyle \breve\rho(\kappa, \theta+2j T_{\rm p})=\breve\rho(\kappa, \theta)
\, ,\qquad\ &\forall\ \theta\in{\mathbb R}\, ,\ j\in {\mathbb Z}&\forall\ 0\ne \kappa<1\\\\
\displaystyle \breve\rho(\kappa, -\theta)=-\breve\rho(\kappa, \theta)\, &\forall\ 0\le \theta\le T_{\rm p}(\kappa)&\forall\ 0\ne \kappa<1\\\\
\displaystyle \breve\rho(\kappa, T_{\rm p}-\theta)
=- \breve\rho(\kappa, \theta) & \forall\ 0\le \theta\le T_0(\kappa) & \forall\ \kappa<0.
\end{array}
\right.
\end{align}
The following lemmata are obvious.
\begin{lemma}\label{dercont}
Let $g(\kappa, \cdot)$ verify~\eqref{fsym} with $T_{\rm p}(\kappa)=\pi$ for all $\kappa$ and $T_0$ as in~\eqref{tmax}. Then the functions $g_{ij}(\kappa, \theta):=\partial^{i+j}_{\kappa^i,\theta^j} g(\kappa, \theta)$ are continuous on ${\mathbb R}$ if and only if they are continuous in $[0, T_0]$ and verify
\begin{align}\label{continuity conditions}
\left\{ \begin{array}{lllll}{\rm no\ further\ condition}\quad &{\rm if}\quad &j\in 2{\mathbb N}\,,\quad &0<\kappa<1\\\\
g_{ij}(\kappa, \frac{\pi}{2})=0&{\rm if}\quad &j\in 2{\mathbb N}\,,\quad &\kappa<0\\\\
g_{ij}(\kappa, 0)=0=g_{ij}(\kappa, \pi)&{\rm if}\quad &j\in 2{\mathbb N}+1\,,\quad &0<\kappa<1\\\\
g_{ij}(\kappa, 0)=0&{\rm if}\quad &j\in 2{\mathbb N}+1\,,\quad &\kappa<0
\end{array}
\right.
\end{align}
\end{lemma}
\begin{lemma}\label{periodicity}
Let $g(\kappa, \cdot)$ verify~\eqref{extendhatrho} with $T_{\rm p}(\kappa)=\pi$ for all $\kappa$ and $T_0$ as in~\eqref{tmax}. Then $g_{ij}(\kappa, \cdot)$, where $g_{ij}(\kappa, \theta):=\partial^{i+j}_{\kappa^i,\theta^j} g(\kappa, \theta)$, are continuous on ${\mathbb R}$ if and only if they are continuous in $[0, T_0(\kappa)]$ and verify
\begin{align}\label{continuity condition2}
\left\{
\begin{array}{llllll}g_{ij}(\kappa, 0)= g_{ij}(\kappa, \pi)=0\quad&{\rm if}\ &j\in 2{\mathbb N}\,,\ &0<\kappa<1\\\\
g_{ij}(\kappa, 0)= g_{ij}(\kappa, \frac{\pi}{2})=0&{\rm if}&j\in 2{\mathbb N}&\kappa<0\\\\
\displaystyle {\rm no\ further\ condition}&{\rm if}&j\in 2{\mathbb N}+1&
\end{array}
\right.
\end{align}
\end{lemma}
\subparagraph{\it Proof of Proposition~\ref{Gderiv}}
(i) The function $\breve{\rm G}(\kappa, \cdot)$ is $C^\infty({\mathbb R})$ for all $0\ne \kappa<1$~\cite{freitagB2005}. Then so is the function
$g(\kappa, \cdot)$, where
$g(\kappa, \theta):=\breve{\rm G}(\kappa, \frac{T_{\rm p}(\kappa)}{\pi}\theta )$. Then~\eqref{continuity conditions} hold true for $g(\kappa, \theta)$ with $i=0$. Hence, the derivatives $g_{ij}(\kappa, \theta)$, which exist for all $0\ne \kappa<1$, also verify~\eqref{continuity conditions}. Then $g_{ij}(\kappa, \cdot)$ are continuous for all $0\ne \kappa<1$ and so are the $\breve{\rm G}_{ij}(\kappa, \cdot)$.
\noindent
(ii) We check conditions~\eqref{continuity condition2} for the function $g(\kappa, \theta):=\breve\rho(\kappa, \frac{T_{\rm p}(\kappa)}{\pi}\theta)$, in the case
$j=0$.
Using~\eqref{hatrho},~\eqref{f(k, theta)} and~\eqref{A(k)NEW}, we get, for $0<\kappa<1$,
\begin{align}\label{id1}
g\left(\kappa, 0\right)=\breve\rho(\kappa, 0)=0\ ,\quad g\left(\kappa, \pi\right)= {\breve\rho(\kappa, T_{\rm p}(\kappa))=\widehat\rho(\kappa, T_{\rm p})-\frac{\widehat\rho(\kappa, T_{{\rm p}})}{T_{\rm p}} T_{\rm p}=0\,,}
\end{align}
while, for $\kappa<0$,
\begin{align}\label{id2}
g\left(\kappa, 0\right)=\breve\rho(\kappa, 0)=0\ ,\quad g\left(\kappa, \frac{\pi}{2}\right)= {\breve\rho(\kappa, T_0(\kappa))=\widehat\rho(\kappa, T_0)-\frac{\widehat\rho(\kappa, T_{0})}{T_0} T_0=0.}
\end{align}
The identities~\eqref{id1} and~\eqref{id2} still hold when $g$ is replaced by any $g_{i0}(\kappa, \theta)$, with $i\in{\mathbb N}$; therefore,
any $g_{i0}(\kappa, \theta)$ satisfies~\eqref{continuity condition2}.
Let us now consider the case $j\ne 0$. Again by~\eqref{hatrho},~\eqref{f(k, theta)} and~\eqref{A(k)NEW},
\begin{align}\label{breverhotheta}
\breve\rho_\theta(\kappa, \theta)&=\breve{\rm G}(\kappa, \theta)^2-{\cal A}(\kappa)
\end{align}
so, for any $j\ne 0$,
\begin{align}
\breve\rho_{ij}(\kappa, \theta)=\partial^{i+j-1}_{\kappa^i\theta^{j-1}}\Big(\breve{\rm G}(\kappa, \theta)^2\Big)
\end{align}
Then the $\breve\rho_{ij}(\kappa, \cdot)$ with $j\ne 0$ are continuous because so are the $\breve{\rm G}_{ij}(\kappa, \cdot)$. $\qquad \square$
\subparagraph{\it Proof of Proposition~\ref{real period}}
The function $T_0(\kappa)$ in~\eqref{f0T0} is studied in detail in Appendix~\ref{elliptic integrals}. Combining Lemma~\ref{splitperiod} and Proposition~\ref{period} and taking the $\kappa$--primitive of such relations, one obtains Proposition~\ref{real period}.
\subparagraph{\it Proof of Proposition~\ref{calA}}
By~\eqref{hatrho},~\eqref{f(k, theta)},~\eqref{f0T0} and~\eqref{A(k)NEW},
\begin{align}\label{hatrho1}
{\cal A}(\kappa)&=\frac{1}{T_0(\kappa)}\int^1_{{\rm G}_{0}(\kappa)}\frac{\sqrt{\xi^2-\kappa}}{{\sqrt{1-\xi^2}}}d\xi+\kappa
\end{align}
whence, differentiating and using~\eqref{T0preal},
\begin{align}\label{Ap}
{{\cal A}'(\kappa)}&=\frac{1}{2}+(\kappa-{\cal A}(\kappa))\frac{T'_0(\kappa)}{T_0(\kappa)}=\frac{1}{2}-(\kappa-{\cal A}(\kappa))\frac{{\cal R}(\kappa)}{2\kappa T_0(\kappa)}\nonumber\\
&=\frac{1}{2}-\frac{{\cal R}(\kappa)}{2T_0(\kappa)}+\frac{{\cal A}(\kappa){\cal R}(\kappa)}{2\kappa T_0(\kappa)}
\end{align}
and
\begin{align}\label{cApprime}
{{\cal A}''(\kappa)}&=(1-{\cal A}'(\kappa))\frac{T'_0(\kappa)}{T_0(\kappa)}+(\kappa-{\cal A}(\kappa))\left(\frac{T''_0(\kappa)}{T_0(\kappa)}-\frac{\big(T'_0(\kappa)\big)^2}{\big(T_0(\kappa))^2}\right)\nonumber\\
&=\frac{T'_0(\kappa)}{2T_0(\kappa)}-2(\kappa-{\cal A}(\kappa))\frac{\big(T'_0(\kappa)\big)^2}{\big(T_0(\kappa))^2}+(\kappa-{\cal A}(\kappa))\frac{T''_0(\kappa)}{T_0(\kappa)}\nonumber\\
&=-\frac{{\cal R}(\kappa)}{4 \kappa T_0(\kappa)}-2(\kappa-{\cal A}(\kappa))\frac{{\cal R}(\kappa)^2}{4\kappa^2 T_0(\kappa)^2}+(\kappa-{\cal A}(\kappa))\frac{{\cal S}(\kappa)}{4\kappa^2 T_0(\kappa)}\qquad \square
\end{align}
\section{The function ${\rm F}({\cal E}, {\rm r})$}\label{appB}
In this section we study the function ${\rm F}({\cal E}, {\rm r})$ in~\eqref{relation***}. Specifically, we aim to prove the following
\begin{proposition}\label{Ffunct}
${\rm F}({\cal E}, {\rm r})$ is well defined and smooth for all $({\cal E}, {\rm r})$ with $0\le {\rm r}<2$ and $-{\rm r}\le {\cal E}<1+\frac{{\rm r}^2}{4}$, ${\cal E}\ne {\rm r}$. Moreover, there exists a number $C>0$ and a neighbourhood ${\cal O}$ of $0\in {\mathbb R}$ such that, for all $0\le {\rm r} <2$ and all $-{\rm r}\le {\cal E}<1+\frac{{\rm r}^2}{4}$ such that ${\cal E}-{\rm r}\in {\cal O}$,
\begin{align}\label{FFineq}|{\rm F}({\cal E}, {\rm r})|\le C\log|{\cal E}-{\rm r}|^{-1}\,,\quad |\partial_{{\cal E}, {\rm r}}{\rm F}({\cal E}, {\rm r})|\le C|{\cal E}-{\rm r}|^{-1}\,,\quad |\partial^2_{{\cal E}, {\rm r}}{\rm F}({\cal E}, {\rm r})|\le C|{\cal E}-{\rm r}|^{-2}\, . \end{align}
\end{proposition}
\noindent
To prove Proposition~\ref{Ffunct} we need an analytic representation of the function ${\rm F}$, which we proceed to provide.
In terms of the coordinates~\eqref{coord}, the function ${\rm U}$ in~\eqref{3bpav} is given by (recall we have fixed $\Lambda=1$)
\begin{align}\label{U}{\rm U}({\rm r}, {\rm G}, {\rm g})&=\frac{1}{2\pi}\int_0^{2\pi}\nonumber\\
&\frac{\big(1-\sqrt{1-{\rm G}^2}\cos\xi\big)d\xi}{\sqrt{
(1-\sqrt{1-{\rm G}^2}\cos\xi)^2+2{\rm r}\Big(
(\cos\xi-\sqrt{1-{\rm G}^2})\cos{\rm g}-{\rm G}\sin\xi\sin{\rm g}
\Big)+{\rm r}^2}
} \nonumber\\\end{align}
where $\xi$ is the eccentric anomaly.
By~\cite{pinzari19}, ${\rm U}$ remains constant along the level curves, at ${\rm r}$ fixed, of the function ${\rm E}({\rm r}, \cdot, \cdot)$ in~\eqref{E}. Therefore, the function ${\rm F}({\cal E}, {\rm r})$ which realises~\eqref{relation***} is nothing else than the value that ${\rm U}({\rm r}, \cdot, \cdot)$ takes at a chosen point $({\rm G}_0({\cal E}, {\rm r}), {\rm g}_0({\cal E}, {\rm r}))$ of the level set ${\cal E}$ in Figure~\ref{figure1}. For the purposes\footnote{Compare~\eqref{F(E,r)} with the simpler formula proposed in~\cite{pinzari20a}, however valid only for values of ${\cal E}$ in the interval $[-{\rm r}, {\rm r})$.} of the paper, we choose such a point to be the one where the ${\cal E}$--level curve attains its maximum. It follows from the discussion in Section~\ref{Energy--time coordinates} that the coordinates of such a point are
\begin{align}\label{G+g+}\left\{\begin{array}{l}\displaystyle {\rm G}_+({\cal E}, {\rm r})=\sqrt{\alpha_+({\cal E}, {\rm r})}\\ \\
\displaystyle {\rm g}_+({\cal E}, {\rm r})=\left\{\begin{array}{l}\displaystyle \pi\quad -{\rm r}\le {\cal E}< 1\\\\
\displaystyle 0\quad 1\le {\cal E}\le 1+\frac{{\rm r}^2}{4}\end{array}\right.\end{array}\right.\end{align}
where $\alpha_+({\cal E}, {\rm r})$ is as in~\eqref{alphapm}. Replacing~\eqref{G+g+} into~\eqref{U}, we obtain
\begin{align}\label{F(E,r)}{\rm F}({\cal E}, {\rm r})
=\frac{1}{2\pi}\int_0^{2\pi}\frac{(1-|e({\cal E}, {\rm r})|\cos\xi)\,d\xi}{\sqrt{(1-|e({\cal E}, {\rm r})|\cos\xi)^2+2s({\cal E}, {\rm r}){\rm r}(\cos\xi-|e({\cal E}, {\rm r})|)+{\rm r}^2}}
\end{align}
with
\begin{align}e({\cal E}, {\rm r})=\frac{{\rm r}}{2}-\sqrt{1+\frac{{\rm r}^2}{4}-{\cal E}}\,,\quad s({\cal E}, {\rm r}):={\, \rm sign\, } \big(e({\cal E}, {\rm r})\big)=\left\{\begin{array}{l}\displaystyle -1\quad -{\rm r}\le {\cal E}< 1\\\\
\displaystyle +1\quad 1< {\cal E}\le 1+\frac{{\rm r}^2}{4}\end{array}\right.\end{align}
To study the regularity of ${\rm F}$, it turns out to be useful
to rewrite the integral~\eqref{F(E,r)} as twice the integral over the half period $[0, \pi]$ and then to perform two subsequent changes of variable. The first is $z= s({\cal E}, {\rm r})\cos\xi$. It gives the following formula, which will be used below.
\begin{align}\label{first formula}{\rm F}({\cal E}, {\rm r})
=\frac{1}{\pi}\int_{-1}^{1}\frac{1}{\sqrt{1-z^2}}\frac{(1-e({\cal E}, {\rm r})z)\,d z}{\sqrt{(1-e({\cal E}, {\rm r})z)^2+2{\rm r}(z-e({\cal E}, {\rm r}))+{\rm r}^2}}
\end{align}
We denote as
\begin{align}\label{zetapm}z_{\pm}({\cal E}, {\rm r}):=\frac{e({\cal E}, {\rm r})-{\rm r}}{e({\cal E}, {\rm r})^2}\pm\frac{\sqrt{
{\rm r}({\rm r}-2e({\cal E}, {\rm r}))(1-e({\cal E}, {\rm r})^2)
}}{e({\cal E}, {\rm r})^2}\end{align}
the roots of the polynomial under the square root in~\eqref{first formula}, which, as we shall see below, are real under conditions~\eqref{range}. As a second change, we let $z=\frac{1-\beta^2 t^2}{1+\beta^2 t^2}$. This allows us to write ${\rm F}({\cal E}, {\rm r})$ as
\begin{align}\label{reprF}{\rm F}({\cal E}, {\rm r})&=\frac{2(1-e({\cal E}, {\rm r})) }{\pi|e({\cal E}, {\rm r})|\sqrt{(z_-({\cal E}, {\rm r})+1)(z_+({\cal E}, {\rm r})-1)}}\left(\frac{1+e({\cal E}, {\rm r})}{1-e({\cal E}, {\rm r})}j_0(\kappa({\cal E}, {\rm r}))\right.\nonumber\\
&\left.-\frac{2e({\cal E}, {\rm r})}{1-e({\cal E}, {\rm r})}j_{\beta({\cal E}, {\rm r})}(\kappa({\cal E}, {\rm r}))\right)\end{align}
where
$j_\beta(\kappa)$ is the elliptic integral
\begin{align}\label{h} j_\beta (\kappa):=\int_{0}^{+\infty}\frac{1}{1+\beta t^2}\frac{dt}{\sqrt{\big(1+ t^2\big)\big(1+\kappa t^2\big)}}
\end{align}
and $\beta$, $\kappa$ are taken to be
\begin{align}\label{betagammakappa}\beta({\cal E}, {\rm r}):=\frac{z_-({\cal E}, {\rm r})-1}{1+z_-({\cal E}, {\rm r})}
\,,\quad \kappa({\cal E}, {\rm r}):=\frac{(1+z_+({\cal E}, {\rm r}))(
z_-({\cal E}, {\rm r})-1)}{(1+z_-({\cal E}, {\rm r}))(z_+({\cal E}, {\rm r})-1)}\, . \end{align}
The elliptic integrals $j_\beta (\kappa)$ in~\eqref{h} are studied in Appendix~\ref{elliptic integrals}: compare Proposition~\ref{period}.
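For orientation (a standard computation, not needed in the sequel), the substitution $t=\tan\theta$ identifies $j_0$ with a complete elliptic integral of the first kind:
\begin{align*}
j_0(\kappa)=\int_0^{\pi/2}\frac{d\theta}{\sqrt{\cos^2\theta+\kappa\sin^2\theta}}=\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-(1-\kappa)\sin^2\theta}}\,,
\end{align*}
which diverges logarithmically as $\kappa\to 0^+$; this is the mechanism behind the logarithmic bound in~\eqref{FFineq}.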
\noindent In terms of $(e, {\rm r})$, the inequalities in~\eqref{range} become
\begin{align}\label{rhate}{\rm r}\in [0,\ 2]\ ,\quad e\in \left[-1, \ \frac{{\rm r}}{2}\right]\setminus\{0,\ {\rm r}-1\}\subset [-1, 1] \end{align}
where
$\{e=-1\}$ corresponds to the minimum level $\{{\cal E}=-{\rm r}\}$;
$\{e={\rm r}-1\}$
corresponds to the separatrix level ${\cal S}_0({\rm r})$; $\{e=0\}$
corresponds to the separatrix level ${\cal S}_1({\rm r})$ and, finally,
$\{e=\frac{{\rm r}}{2}\}$ corresponds to the maximum level $\{{\cal E}=1+\frac{{\rm r}^2}{4}\}$. It is then evident that the discriminant in~\eqref{zetapm} is non--negative under conditions~\eqref{rhate}, so $z_\pm ({\cal E}, {\rm r})$ are real under~\eqref{range}, as claimed.
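Explicitly (a direct computation filling in the steps above): since $\sqrt{1+\frac{{\rm r}^2}{4}\pm{\rm r}}=1\pm\frac{{\rm r}}{2}$ for $0\le {\rm r}<2$, the definition of $e$ gives
\begin{align*}
e(-{\rm r}, {\rm r})=-1\,,\qquad e({\rm r}, {\rm r})={\rm r}-1\,,\qquad e(1, {\rm r})=0\,,\qquad e\Big(1+\frac{{\rm r}^2}{4}, {\rm r}\Big)=\frac{{\rm r}}{2}\,,
\end{align*}
in agreement with the correspondences just listed. Moreover, writing $e=e({\cal E},{\rm r})$, the radicand in~\eqref{first formula} is the quadratic
\begin{align*}
(1-ez)^2+2{\rm r}(z-e)+{\rm r}^2=e^2z^2+2({\rm r}-e)z+\big(1-2{\rm r}e+{\rm r}^2\big)\,,
\end{align*}
whose reduced discriminant factorises as
\begin{align*}
(e-{\rm r})^2-e^2\big(1-2{\rm r}e+{\rm r}^2\big)={\rm r}({\rm r}-2e)(1-e^2)\ge 0
\end{align*}
under~\eqref{rhate}, because ${\rm r}\ge 0$, $e\le\frac{{\rm r}}{2}$ and $|e|\le 1$; this yields precisely the roots~\eqref{zetapm}.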
In addition, one can easily verify that, for any $({\rm r}, e)$ as in~\eqref{rhate},
one has $e^2+e-{\rm r}\le0$. This implies
\begin{align}
\displaystyle z_++1=\frac{e({\cal E}, {\rm r})^2+e({\cal E}, {\rm r})-{\rm r}}{e({\cal E}, {\rm r})^2}+\frac{\sqrt{
{\rm r}({\rm r}-2e({\cal E}, {\rm r}))(1-e({\cal E}, {\rm r})^2)
}}{e({\cal E}, {\rm r})^2}<0\quad\forall\ e\ne {\rm r}-1\,.
\end{align}
Moreover, since
\begin{align}\label{zpm0}z_-({\cal E}, {\rm r})<z_+({\cal E}, {\rm r})\quad \forall\ {\rm r}\ne 0\ ,\ {\cal E}\ne 1+\frac{{\rm r}^2}{4}\,,\ {\cal E}\ne -{\rm r}\,,\ ({\cal E}, {\rm r})\ne (2,2)\end{align}
we have
\begin{align}\beta({\cal E},{\rm r})>0\quad \forall\ ({\cal E}, {\rm r})\ {\rm as\ in\ }\eqref{zpm0}\end{align}
and
\begin{align}0<\kappa({\cal E},{\rm r})<1\quad \forall\ ({\cal E}, {\rm r})\ {\rm as\ in\ }\eqref{zpm0}\quad {\rm and}\quad {\cal E}\ne {\rm r}\, . \end{align}
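These sign properties follow at once from the previous inequalities: since $z_-<z_+$ and $z_++1<0$, the factors $1+z_-<1+z_+<0$ and $z_--1<z_+-1<0$ are all negative, whence $\beta=\frac{z_--1}{1+z_-}>0$ and $\kappa>0$; moreover
\begin{align*}
\kappa-1=\frac{(1+z_+)(z_--1)-(1+z_-)(z_+-1)}{(1+z_-)(z_+-1)}=\frac{2(z_--z_+)}{(1+z_-)(z_+-1)}<0\,,
\end{align*}
the denominator being positive and the numerator negative.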
\noindent Combining this information with the formula in~\eqref{reprF} and with Proposition~\ref{period}, we conclude that ${\rm F}({\cal E}, {\rm r})$ is smooth for all ${\rm r}\ne 0$, ${\cal E}\ne 1$, ${\cal E}\ne 1+\frac{{\rm r}^2}{4}$, ${\cal E}\ne \pm{\rm r}$, $({\cal E}, {\rm r})\ne (2,2)$, and that~\eqref{FFineq} holds.
However, the representation in~\eqref{first formula} allows us to extend the regularity of ${\rm F}({\cal E}, {\rm r})$ to the domain $0\le {\rm r}<2$, $-{\rm r}\le {\cal E}<1+\frac{{\rm r}^2}{4}$, ${\cal E}\ne {\rm r}$, as claimed. $\qquad \square$
\section{Proof of Theorem B}\label{Proof of Theorem}
In this section we state and prove a more precise statement of Theorem B, which is Theorem~\ref{main*} below.
\noindent
The framework is as follows:
\begin{itemize}
\item[\tiny\textbullet] fix an energy level $c$;
\item[\tiny\textbullet] change the time via \begin{align}\label{tau}\frac{dt}{d{t'}}=e^{- 2k y}\qquad k=\pm 1\end{align} where $t'$ is the new time and $t$ the old one. The new time $t'$ is immediately renamed $t$;
\item[\tiny\textbullet] look at the ODE
\begin{align}\label{Xk}\partial_{t} q_k=X^{{ (k) }}(q_k; c)\end{align}
for the triple $q_k=(A_k, y_k, \psi)$ where $A_k$, $y_k$ are as in~\eqref{reg}, while $\psi=\varphi_*$, with $\varphi_*$ as in~\eqref{actionangle} in ${\mathbb P}_k$, where
\begin{align}{\mathbb P}_k(\varepsilon_-, \varepsilon_+, L_-, L_+, \xi)&:= \Big\{(A_k, y_k, \psi):\ 1-2\varepsilon_+<A_k\le 1-2\varepsilon_-\ ,\ L_-+2\xi\le ky_k\le L_+-2\xi,\nonumber\\
&\ \psi\in {\mathbb T}\Big\}
\end{align}
with $\xi<(L_+-L_-)/4$.
Observe that
\begin{itemize}
\item[{\tiny\textbullet}] the projection of ${\mathbb P}_+$ in the plane $({\rm g}, {\rm G})$ in Figure~\ref{figure1} is an
inner region of ${\cal S}_0({\rm r})$, with ${\rm r}$ varying in an $\varepsilon$--left neighbourhood of $2$;
\item[{\tiny\textbullet}] the projection of ${\mathbb P}_-$ in the plane $({\rm g}, {\rm G})$ in Figure~\ref{figure1} is an
outer region of ${\cal S}_0({\rm r})$, with ${\rm r}$ varying in an $\varepsilon$--left neighbourhood of $2$;
\item[{\tiny\textbullet}] the boundary of ${\mathbb P}_k$ includes ${\cal S}_0$
if $L_+=\infty$; it has a positive distance from it if $L_+<+\infty$.
\end{itemize}
\end{itemize}
We shall prove
\begin{theorem}\label{main*}
There exist a graph ${{\cal G}}_k\subset {\mathbb P}_k(\varepsilon_-, \varepsilon_+, L_-, L_+, \xi)$ and a number $L_\star>1$ such that for any $L_->L_\star$ there exist $\varepsilon_-$, $\varepsilon_+$, $L_+$, $\xi$ and an open neighbourhood $W_k\supset{{\cal G}}_k$
such that along any orbit $q_k(t)$ with $q_k(0)\in W_k$,
\begin{align}|A(q_k(t))-A(q_k(0))|\le C_0 \epsilon e^{-L_-^3}\, t\qquad \forall\ t:\ |t|< t_{\rm ex}\end{align}
where $t_{\rm ex}$ is the first $t$ such that $q_k(t)\notin W_k$ and $\epsilon$ is an upper bound for $\|P_1\|_{W_k}$ (with $P_1$ being the first component of $P$).\end{theorem}
{\bf Proof\ }
For definiteness, from now on we discuss the case $k=+1$ (outer orbits). The case $k=-1$ (inner orbits) is quite similar. We omit the subscript ``$+1$'' everywhere. As the proof is long and technical, we divide it into paragraphs. We shall take
\begin{align}{{\cal G}}=\Big\{(A, y, \psi_\circ(A, y)),\ 1-2\varepsilon_+\le A\le 1-2\varepsilon_-\, ,\ L_-+2\xi\le y\le L_+-2\xi\Big\}\subset {\mathbb P}\end{align}
with $\varepsilon_-$, $\varepsilon_+$, $L_-$, $L_+$, $\psi_\circ$ to be chosen below.
\paragraph{\it Step 1. The vector--field $X$}
As $\psi$ is one of the {\it action--angle coordinates}, while $A$, $y$ are two among the {\it regularising coordinates}, we need the expressions of the Hamiltonian~\eqref{3bpav} written in terms of those two sets.
The Hamiltonian~\eqref{3bpav} written in {\it action--angle coordinates} is
\begin{align}\label{H*}{\rm H}_{\rm aa}({\cal R}_*, A_*, {\rm r}_*, \varphi_*)=\frac{({\cal R}_*+\rho_*(A_*, {\rm r}_*, \varphi_*))^2}{2}+\alpha{\rm F}_*(A_*, {\rm r}_*)+\frac{({\rm C}-{\rm G}_*(A_*, {\rm r}_*, \varphi_*))^2}{2{\rm r}_*^2}-\frac{\beta}{{\rm r}_*}\end{align}
where
\begin{align}\label{G*r*}{\rm G}_*(A_*, {\rm r}_*, \varphi_*):={\rm G}\circ \phi_{\rm aa}(A_*, {\rm r}_*, \varphi_*)\, ,\qquad {\rm F}_*(A_*, {\rm r}_*):={\rm F}\circ \phi_{\rm aa}(A_*, {\rm r}_*)
\end{align} with $\phi_{\rm aa}$ as in~\eqref{actionangle}, while $\widetilde{\rm G}({\cal E}, {\rm r}, \tau)$ and ${\rm F}({\cal E}, {\rm r})$ are as in~\eqref{transf} and~\eqref{relation***}, respectively, and $\rho_*$ is as in~\eqref{good relation}. The Hamiltonian~\eqref{3bpav} written in {\it regularising coordinates} is
\begin{align}{\rm H}_{\rm rg}(Y, A, y, \varphi)&=\frac{(Y e^{y}+\rho_*(A, {\rm r}_\circ(A, y), \varphi_\circ(Y, A, y, \varphi)))^2}{2}+\alpha{\rm F}_*(A, {\rm r}_\circ(A, y))\nonumber\\
&+ \frac{({\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \varphi_\circ(Y, A, y, \varphi)))^2}{2{\rm r}_\circ(A, y)^2}-\frac{\beta}{{\rm r}_\circ(A, y)}\end{align}
where
${\rm r}_\circ(A, y)$, $\varphi_\circ(Y, A, y, \varphi)$ are the right hand sides of the equations for ${\rm r}_*$, $\varphi_*$ in~\eqref{reg}, with $k=+1$.\\
Taking the $\varphi_*$--projection of Hamilton equation of ${\rm H}_{\rm aa}$, and
the $(A, y)$--projection of Hamilton equation of ${\rm H}_{\rm rg}$,
changing the time as prescribed in~\eqref{tau} and reducing the energy via
\begin{align}
{\cal R}_*+\rho_{*}(A, {\rm r}_\circ(A, y), \psi)=Ye^{y }+\rho_{*}(A, {\rm r}_\circ(A, y), \psi)={\cal Y}(A, y, \psi; c)\nonumber\\
\end{align}
with
\begin{align}\label{Ycanc}{\cal Y}(A, y, \psi; c):=\pm\sqrt{2\left(
c-\alpha{\rm F}_*(A, {\rm r}_\circ(A, y))
-\frac{({\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi))^2}{2{\rm r}_\circ(A, y)^2}+\frac{\beta }{{\rm r}_\circ(A, y)}
\right)}\end{align}
we find that the evolution for the triple $q=(A, y, \psi)$ during the time $t$ is governed by the vector--field
\begin{align}\label{X}\left\{\begin{array}{l}\displaystyle X_1(A, y, \psi; c)=e^{-2y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 3}(A, {\rm r}_\circ(A, y), \psi)-e^{-2y}\rho_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\cal Y}(A, y, \psi; c)\\\\
\displaystyle X_2(A, y, \psi; c)=-e^{-y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\rm r}_{\rm s}'(A)\\\ \ \qquad+
\displaystyle e^{-y}\big(1+\rho_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\rm r}_{\rm s}'(A)\big){\cal Y}(A, y, \psi; c)\\\\
\displaystyle X_3(A, y, \psi; c)=\alpha\,e^{-2y}{\rm F}_{*, 1}(A, {\rm r}_\circ(A, y))-e^{-2y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 1}(A, {\rm r}_\circ(A, y), \psi)\\\ \ \displaystyle \qquad+e^{-2y}\rho_{*, 1}(A, {\rm r}_\circ(A, y), \psi){\cal Y}(A, y, \psi; c)
\end{array}\right.\nonumber\\
\end{align}
where we have used the notation, for $f=\rho_*$, ${\rm G}_*$, ${\rm F}_*$,
\begin{align}f_{1}(A, {\rm r}_*, \psi):=\partial_{A} f(A, {\rm r}_*, \psi)\, ,\quad f_{3}(A, {\rm r}_*, \psi):=\partial_{\psi} f(A, {\rm r}_*, \psi)\, . \end{align}
\paragraph{\it Step 2. Splitting the vector--field}
We write
\begin{align}\label{splitX}X(A, y, \psi; c)=N(A, y; c)+P(A, y, \psi; c)\end{align}
with
\begin{align}\label{N}\left\{\begin{array}{l}\displaystyle N_1(A, y; c)=0\\\\
\displaystyle N_2(A, y; c)=v(A, y; c):=e^{-y }\sqrt{2\big(c-\alpha{\rm F}_*(A, {\rm r}_\circ(A, y))\big)}\\\\
N_3(A, y; c)= \omega(A, y; c):=\alpha\,e^{-2y }{\rm F}_{*, 1}(A, {\rm r}_\circ(A, y))\end{array}\right.\end{align}
hence,
\begin{align}\label{P}
\left\{\begin{array}{l}\displaystyle P_1=e^{-2y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 3}(A, {\rm r}_\circ(A, y), \psi)-e^{-2y}\rho_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\cal Y}(A, y, \psi; c)\\\\
\displaystyle P_2=-e^{-y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\rm r}_{\rm s}'(A)+
e^{-y}\rho_{*, 3}(A, {\rm r}_\circ(A, y), \psi){\rm r}_{\rm s}'(A)\nonumber\\
\qquad \displaystyle \cdot{\cal Y}(A, y, \psi; c)
+
\displaystyle e^{-y}\left(
{\cal Y}(A, y, \psi; c)
-\sqrt{2\big(c-\alpha{\rm F}_*(A, {\rm r}_\circ(A, y))\big)}\right)\\\\
\displaystyle P_3=-e^{-2y}\frac{{\rm C}-{\rm G}_*(A, {\rm r}_\circ(A, y), \psi)}{{\rm r}_\circ(A, y)^2}{\rm G}_{*, 1}(A, {\rm r}_\circ(A, y), \psi)+e^{-2y}\rho_{*, 1}(A, {\rm r}_\circ(A, y), \psi){\cal Y}(A, y, \psi; c)
\end{array}\right.\nonumber\\
\end{align}
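As a consistency check, the decomposition~\eqref{splitX}--\eqref{P} indeed reproduces~\eqref{X}: for instance, for the second component,
\begin{align*}
N_2+P_2&=e^{-y}\sqrt{2\big(c-\alpha{\rm F}_*\big)}-e^{-y}\frac{{\rm C}-{\rm G}_*}{{\rm r}_\circ^2}\,{\rm G}_{*,3}\,{\rm r}_{\rm s}'+e^{-y}\rho_{*,3}\,{\rm r}_{\rm s}'\,{\cal Y}+e^{-y}\Big({\cal Y}-\sqrt{2\big(c-\alpha{\rm F}_*\big)}\Big)\\
&=-e^{-y}\frac{{\rm C}-{\rm G}_*}{{\rm r}_\circ^2}\,{\rm G}_{*,3}\,{\rm r}_{\rm s}'+e^{-y}\big(1+\rho_{*,3}\,{\rm r}_{\rm s}'\big)\,{\cal Y}=X_2\,,
\end{align*}
where the arguments $(A, {\rm r}_\circ(A,y), \psi)$ and $(A, y, \psi; c)$ have been suppressed for brevity.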
\noindent
The application of {\sc nft} relies on the smallness of the perturbing term $P$. In the case at hand, the largest term of $P$ is the component $P_2$, and precisely $\rho_{*, 3}$. This function is not uniformly small. For this reason, we need to look at its zeroes and localise around them. The localisation (described in detail below) carries the holomorphic perturbation $P$ to a perturbation $\widetilde P$, which is smaller, but {\it no longer holomorphic}. We shall apply {\sc gnft} to the new vector--field $\widetilde X=N+\widetilde P$.
\paragraph{\it Step 3. Localisation about non--trivial zeroes of $\rho_{*,3}$}
The following lemma gives an insight on the term $\rho_{*, 3}$, appearing in~\eqref{P}. It will be proved in Appendix~\ref{Technicalities}.
\begin{lemma}\label{zeroes}
For any
${\cal A}_{\rm s}({\rm r}_*)<A<1$ $(0<A<{\cal A}_{\rm s}({\rm r}_*))$
there exists $0<\psi_*(A, {\rm r}_*)<\pi$ {\rm(}respectively, $0<\psi_*(A, {\rm r}_*)<\pi/2${\rm)} such that $\rho_{*,3}(A, {\rm r}_*, \psi_*(A, {\rm r}_*))\equiv0$ {\rm(}and $\rho_{*,3}(A, {\rm r}_*, \pi-\psi_*(A, {\rm r}_*))\equiv0${\rm)}. Moreover, there exists $C>0$ such that, for any $\delta>0$, one can find a neighbourhood $V_*(A, {\rm r}_*; \delta)$ of $\psi_*(A, {\rm r}_*)$ {\rm(}and a neighbourhood $V'(A, {\rm r}_*; \delta)$ of $\pi-\psi_*(A, {\rm r}_*)${\rm)} such that
\begin{align}\label{boundrho3}
&|\rho_{*,3}(A, {\rm r}_*, \psi)|\le C\frac{\sigma_*(A, {\rm r}_*)}{{\rm r}_*}{\delta}\qquad \forall\ \psi\in V_*(A, {\rm r}_*; \delta)\nonumber\\
&\left(|\rho_{*,3}(A, {\rm r}_*, \psi)|\le C\frac{\sigma_*(A, {\rm r}_*)}{{\rm r}_*}{\delta}\qquad\forall\ \psi\in V_*(A, {\rm r}_*; \delta)\cup V'(A, {\rm r}_*; \delta)\, . \right)\end{align}
\end{lemma}
\noindent
We now let
\begin{align}\psi_\circ(A, y):=\psi_*(A, {\rm r}_\circ(A, y))\ ,\quad V_\circ(A, y; \delta):=V_*(A, {\rm r}_\circ(A, y); \delta)\, . \end{align}
\noindent
For definiteness, from now on, we focus on orbits with initial datum $(A_0, y_0, \psi_0)$ such that $\psi_0$ is close to $\psi_\circ(A_0, y_0)$. The symmetrical cases can be similarly treated.
\noindent
Let $W_\circ(A, y; \delta)\subset V_\circ(A, y; \delta)$ be an open set and let $g(A, y, \cdot)$ be a $C^\infty$, $2\pi$--periodic function which, in each period $[\psi_\circ(A, y)-\pi, \psi_\circ(A, y)+\pi)$, satisfies
\begin{align}\label{g}g(A, y, \psi; \delta)\left\{\begin{array}{l}\displaystyle \equiv 1\quad \forall\ \psi\in W_\circ(A, y; \delta)\\\\ \displaystyle \equiv 0\quad \forall\ \psi\in [\psi_\circ(A, y)-\pi, \psi_\circ(A, y)+\pi)\setminus V_\circ(A, y; \delta)\\\\
\displaystyle \in (0, 1) \quad \forall\ \psi\in V_\circ(A, y; \delta)\setminus W_\circ(A, y; \delta)
\end{array}\right.
\end{align}
The function $g$ is chosen so that
\begin{align}\label{g0}
\sup_{0\le \ell<\ell_*}\|g\|_{u, \ell}\le 1\,.
\end{align}
As an example, one can take $g(A, y, \psi; \delta)=\chi(\psi-\psi_\circ(A, y))$, with
\begin{align}
\chi(\theta)=\left\{
\begin{array}{lll}
1\quad & |\theta|\le a\\
1-\frac{\int_a^{\theta}e^{-\frac{\zeta}{(s-a)(b-s)}}d s}{\int_a^{b}e^{-\frac{\zeta}{(s-a)(b-s)}}d s}& a<\theta\le b\\
0& \theta>b\\
\chi(-\theta)& \theta<-a
\end{array}
\right.
\end{align}
with $0<a<b$ so small that $B_a(\psi_\circ(A, y))\subset W_\circ(A, y; \delta)$ and $B_b(\psi_\circ(A, y))\subset V_\circ(A, y; \delta)$. If the parameter $\zeta\in (0, 1)$ is sufficiently small (depending on $\ell_*$), then
\eqref{g0} is met.\\
Let
\begin{align}\label{tildeP}\widetilde P(A, y, \psi; \delta):= g(A, y, \psi;\delta) P(A, y, \psi)\, . \end{align}
We let
\begin{align}\widetilde X:=N+\widetilde P\end{align}
and
\begin{align}\label{domain}{\mathbb P}_{\varepsilon_-, \xi}={\mathbb A}_{\varepsilon_-}\times {\mathbb Y}_{\xi}\times {\mathbb T}\, ,\end{align}
where ${\mathbb A}_{\varepsilon_-}=[1-2\varepsilon_+, 1-2\varepsilon_-]$, ${\mathbb Y}_{\xi}=[L_-+2\xi, L_+-2\xi]$ and $\varepsilon_-<\varepsilon_+$, $\xi$ are sufficiently small,
and $u=(\varepsilon_-, \xi)$.
By construction, $\widetilde X$ and $\widetilde P\in {\cal C}^3_{u, \infty}$. In particular, $\widetilde P\in {\cal C}^3_{u, \ell_*}$, for all $\ell_*\in{\mathbb N}$. Below, we shall fix a suitably large $\ell_*$.
\paragraph{\it Step 4. Bounds}
The following uniform bounds follow rather directly from the definitions. Their proof is deferred to Appendix~\ref{Technicalities}, in order not to interrupt the flow.
\begin{align}\label{bounds2}&
\left\|\frac{1}{v}\right\|_u \le C \frac{e^{L_+}}{\alpha L^{\frac{1}{2}}_-}\ ,\quad
\left\|\frac{\partial_A v}{v}\right\|_u \le C \frac{e^{L_+}}{L_-\sqrt{\varepsilon_-}}\ ,\quad \left\|\frac{\partial_y v}{v}\right\|_u \le1+ C \frac{e^{L_+-L_-}}{L^2_-}\nonumber\\\nonumber\\
&\left\|\frac{\omega}{v}\right\|_u\le C\frac{e^{L_+-L_-}}{L_-^{3/2}}\ ,\quad \left\|\frac{\partial_A\omega}{v}\right\|_u\le C\frac{e^{2L_+-L_-}}{L_-^{3/2}\varepsilon_-^{\frac{1}2}}\ ,\quad \left\|\frac{\partial_y\omega}{v}\right\|_u\le C\frac{e^{2L_+-2L_-}}{L_-^{3/2}}\\\nonumber\\
\label{bounds3}
&\|\widetilde P_1\|_u\le C e^{-2 L_-}\max\Big\{
|{\rm C}|L_+\sqrt{\varepsilon_+},\ L_+\varepsilon_+,\ \delta\sqrt{\varepsilon_+}\sqrt{\alpha\,L_+}
\Big\}\nonumber\\\nonumber\\
&\|\widetilde P_2\|_u\le C e^{-L_-}\max\Big\{
|{\rm C}|L_+\sqrt{\frac{\varepsilon_+}{\varepsilon_-}},\ L_+\frac{\varepsilon_+}{\sqrt{\varepsilon_-}},\ \sqrt{\frac{\varepsilon_+}{\varepsilon_-}}\delta\sqrt{\alpha\,L_+}\, ,\ (\alpha{L_-})^{-\frac{1}{2}}\max\{|{\rm C}|^2,\ \varepsilon_+^2,\ \beta\}
\Big\}\nonumber\\\nonumber\\
&\|\widetilde P_3\|_u\le C e^{-2L_-}\max\Big\{
|{\rm C}|\frac{\sqrt{\varepsilon_+}}{\varepsilon_-},\ \frac{\varepsilon_+}{\varepsilon_-},\ \frac{\sqrt{\varepsilon_+}}{\varepsilon_-}\sqrt{\alpha\,L_+}
\Big\}
\end{align}
Here $C$ is a number not depending on $L_-$, $L_+$, $\xi$, $\varepsilon_-$, $\varepsilon_+$, $c$, $|{\rm C}|$, $\beta$, $\alpha$ and the norms are meant as in Section~\ref{A generalisation when the dependence}, in the domain~\eqref{domain}.
Remark that the validity of~\eqref{bounds3} is subject to the condition
\begin{align}\label{Lm}
L_-\ge C\alpha^{-1}\max\{|c|,\ |{\rm C}|^2,{\varepsilon_+}, \beta \}\, .
\end{align}
This condition will be verified below.
\paragraph{\it Step 5. Application of {\sc gnft} and conclusion}
Fix $s_1$, $s_2>0$. Define
\begin{align}\rho:=\frac{\varepsilon_-}{16}\ ,\quad \tau:=e^{-s_2}\frac{\xi}{16}\ ,\quad w_K:=\left(\frac{\varepsilon_-}{16}\,, \frac{e^{-s_2}\xi }{16}\,, \frac{1}{c_0 K^{1+\delta}}\right)\end{align}
so that~\eqref{NEWu+positiveNEW} are satisfied.
With these choices, as a consequence of the bounds in~\eqref{bounds2}--\eqref{bounds3}, one has
\begin{align}\label{bounds10}
\chi&\le C(L_+-L_-) \max\left\{\frac{e^{L_+-L_-}}{s_1L_-^{3/2}},\ \frac{1}{s_2}\left(1+ C \frac{e^{L_+-L_-}}{L^2_-}\right)\right\}\nonumber\\
\theta_1&\le C e^{s_1}(L_+-L_-)\xi K^{1+\delta}\frac{e^{2L_+-2L_-}}{L_-^{3/2}}\nonumber\\
\theta_2&\le C e^{s_1+s_2}(L_+-L_-)\frac{\sqrt{\varepsilon_-}}{\xi}\frac{e^{L_+}}{L_-}
\nonumber\\
\theta_3&\le C{e^{s_1}}(L_+-L_-)K^{1+\delta}\sqrt{\varepsilon_-}\frac{e^{2L_+-L_-}}{L_-^{3/2}}\nonumber\\
\eta&\le C e^{s_1+s_2}(L_+-L_-)\frac{e^{L_+-L_-}}{\alpha L_-^{\frac{1}{2}}}\max\left\{
e^{-L_-}\varepsilon_-^{-1}\max\Big\{
|{\rm C}|L_+\sqrt{\varepsilon_+},\ L_+\varepsilon_+,\ \delta\sqrt{\varepsilon_+}\sqrt{\alpha\,L_+}
\Big\}\right.\, ,\nonumber\\
&{e^{s_2}}\xi^{-1}\max\Big\{
|{\rm C}|L_+\sqrt{\frac{\varepsilon_+}{\varepsilon_-}},\ L_+\frac{\varepsilon_+}{\sqrt{\varepsilon_-}},\ \sqrt{\frac{\varepsilon_+}{\varepsilon_-}}\delta\sqrt{\alpha\,L_+}\, ,(\alpha{L_-})^{-\frac{1}{2}}\max\{|{\rm C}|^2,\ \varepsilon_+^2,\ \beta\}
\Big\}\, ,\nonumber\\
&\left. e^{-L_-}K^{1+\delta}\max\Big\{
|{\rm C}|\frac{\sqrt{\varepsilon_+}}{\varepsilon_-},\ \frac{\varepsilon_+}{\varepsilon_-},\ \frac{\sqrt{\varepsilon_+}}{\varepsilon_-}\sqrt{\alpha\,L_+}
\Big\}\right\}
\end{align}
\noindent
We now discuss inequalities~\eqref{NEWu+positiveNEW}--\eqref{NEWnewsmallnessNEW} and~\eqref{Lm}. We choose $s_i$, $L_\pm$, $\varepsilon_\pm$ and $K$ to be the following functions of $L$ and $\xi$, with $0<\xi<1<L$:
\begin{align}
&L_-=L\,,\quad \varepsilon_\pm=c_\pm L^2e^{-2L}\,,\qquad L_+=L+10\xi\,,\quad s_1=C_1 \xi L^{-\frac{3}{2}}\,,\quad s_2=C_1\xi\,,\quad K=\left[\left(\frac{c_1}{\xi\sqrt L}\right)^{\frac{1}{1+\delta}}\right]
\end{align}
with $0<c_1<1<C_1$ and $0<c_-<c_+<1$ suitably fixed, so as to have $K> 0$. A more stringent relation between $\xi$ and $L$ will be specified below.
We take
\begin{align}|{\rm C}|<c_1 L^2 e^{-2L}\,,\quad \beta<c_1 L^4 e^{-4L}\,,\quad \delta<c_1 L^{3/2} e^{-L}\end{align}
In view of~\eqref{bounds10}, it is immediate to check that there exist suitable numbers $0<c_1<1<C_1$ depending only on $c$, $c_+$, $c_-$ and $\alpha$ such that
inequalities~\eqref{NEWu+positiveNEW}--\eqref{theta3NEW} and~\eqref{Lm} are satisfied and
\begin{align}\eta<C_2 L^{-\frac{3}{2}}\end{align}
\noindent
An application of {\sc gnft} conjugates $\widetilde X=N+\widetilde P$ to a new vector--field $\widetilde X_\star=N+\widetilde P_\star$, with the first component of the vector $\widetilde P_\star$ being bounded as
\begin{align}
\|\widetilde P_{\star, 1}\|_{u_\star}\le &\varepsilon_{-}
\VERT \widetilde P_\star\VERT^{w_K}_{u_\star}\le \varepsilon_{-}\max\left\{2^{-c_2L^3}\VERT \widetilde P\VERT^{w_K}_{u}\, ,\ 2 c_0\,K^{-\ell+\delta}\VERT \widetilde P\VERT^{w_K}_{u, \ell} \right\}\end{align}
Using~\eqref{g0} and~\eqref{tildeP}, the fact that $\widetilde P$ vanishes outside $V_\circ$, the chain rule and the holomorphy of $P(A, y, \cdot)$, one finds
\begin{align}
\VERT\widetilde P\VERT^{w_K}_{u, \ell}\le 2^\ell\VERT P_{V_\circ}\VERT^{w_K}_{u, \ell}\le 2^\ell\, \frac{\ell!}{s^\ell}\,\VERT P_{(V_\circ)_s}\VERT^{w_K}_{u} \qquad \forall\ 0\le \ell\le \ell_*
\end{align}
where $P_{(V_\circ)_s}(A, y, \psi)$ denotes the restriction of $P(A, y, \cdot)$ on $(V_{\circ})_s$, while $s$ is the analyticity radius of $P(A, y, \cdot)$.
We take $s$ so small that
\begin{align}
\VERT P_{(V_\circ)_s}\VERT^{w_K}_{u}\le 2 \VERT P_{V_\circ}\VERT^{w_K}_{u}
\end{align}
Then we have
\begin{align}
\|\widetilde P_{\star, 1}\|_{u_\star}
\le &2 \varepsilon_{-}\max\left\{2^{-c_2L^3}\, ,\ c_0\,2^{\ell+1}\ell!s^{-\ell} K^{-\ell+\delta}\right\}\VERT P_{V_\circ}\VERT^{w_K}_{u}\le
2\varepsilon_{-}2^{-c_2L^3}\VERT P_{(V_\circ)_s}\VERT^{w_K}_{u}\le
2\varepsilon_{-}2^{-c_2L^3}Q^{-1}
\end{align}
where we have used the inequality
\begin{align}\label{bound4}
c_0\,2^{\ell+1}\ell!s^{-\ell} K^{-\ell+\delta}\le 2^{-c_2L^3}
\end{align}
which will be discussed below.
On the other hand, techniques analogous to the ones used to obtain~\eqref{bounds3} provide
\begin{align}c \epsilon\le \|{P_1}_{V}\|_u\le \epsilon
\,,\quad c L^{\frac{1}{2}} e^{-L}\le Q^{-1}\le C L^{\frac{1}{2}} e^{-L}\,.\end{align}
with $\epsilon:=C L^3 e^{-4L}$ and $0<c<1$.
So,
\begin{align}\|\widetilde P_{\star, 1}\|_{u_\star}
\le C_32^{-c_3 L^3}\epsilon\end{align}
which is what we wanted to prove.
It remains to discuss~\eqref{bound4}. By Stirling's formula, and provided that $\ell>2\delta$,~\eqref{bound4} is implied by
\begin{align}
K>1\,,\quad \left(\frac{4c_0\sqrt{2\pi}\ell^{\frac{3}{2}}}{es \sqrt K}\right)^\ell\le 2^{-c_2L^3}
\end{align}
These inequalities are satisfied by choosing $\ell$, $\ell_*$ and $\xi$ to be related to $L$ in such a way that
\begin{align}\ell&=
\max\left\{[c_2 L^3]+1\,, [2\delta]+1\,,\left[\left(\frac{1}{2\pi}\frac{e^2s^2}{64c_0^2}\right)^{\frac{1}{3}}\right]+1\right\}\,,\quad \ell_*>\ell\\ K&=\left[\left(\frac{c_1}{\xi\sqrt L}\right)^{\frac{1}{1+\delta}}\right]>2\pi\frac{64 c_0^2}{e^2s^2}\ell^3>1\,.\quad \square\end{align}
\section{Introduction}
Leptogenesis \cite{fy} is a very appealing explanation of the
baryon asymmetry of the Universe, the more so since it can also
incorporate the origin of neutrino masses and mixings. In triplet
see-saw models, a cosmological lepton asymmetry is produced by the
decay of a heavy Higgs triplet, in the presence of the Sakharov
conditions of i) lepton--number violation; ii) CP violation; iii)
out--of--equilibrium decay. The same Higgs triplet is also
responsible for the dimension--five operator that induces neutrino
masses through the see-saw mechanism once it acquires a vacuum
expectation value \cite{Tss}. The lepton asymmetry is then converted into a
baryon asymmetry through the mechanism of sphaleron conversion.
This model has already been discussed in the literature with
multiple Higgs triplets \cite{Tlepto} or with additional right-handed
neutrinos \cite{hybrids}. As a general feature, it presents some
interesting differences compared to the usual see-saw mechanism
where the decaying particles producing leptogenesis are assumed to
be right--handed neutrinos. In particular, the triplet see-saw is
more predictive \cite{anna,chun03,chun05}, since Yukawa matrices
can be fixed through neutrino masses, and no unknown Majorana mass
matrices are present. An important feature of the triplet
leptogenesis is that it suffers from a strong wash--out effect due
to gauge--triplet annihilations \cite{Hambye, chun_scopel}, which
cannot be neglected, particularly for low triplet masses.
Even with such a strong annihilation effect, triplets can develop
an appropriate lepton asymmetry during their thermal evolution,
and this, in the non-supersymmetric version of the model, has been
shown to lead to the possibility of a high efficiency for
leptogenesis production even for low triplet masses \cite{Hambye}.
This is an intriguing aspect that enables us to circumvent the
gravitino problem, which requires the upper bound
$T_{RH} \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 10^{8}$ GeV
on the reheating temperature of the Universe in supergravity theories.
In this paper, we want to extend the discussion of the properties
of supersymmetric triplet see--saw leptogenesis to a general
phenomenological scenario, with a minimal set of theoretical
assumptions, and to compare it to the non--supersymmetric version
of the model. In our study, we will cover the triplet mass down
to the TeV range, for which the model has a prospect of being
tested in future colliders \cite{chun03,tev}.
The plan of the paper is as follows. In Section
\ref{section:themodel} we introduce the Higgs triplet model and
the corresponding Boltzmann equations. In Section
\ref{section:discussion} we describe our minimal assumptions and
then discuss the produced lepton asymmetry in the
non--supersymmetric version of the model in Section
\ref{section:non_susy}, and in the supersymmetric one in Section
\ref{section:susy}. We devote Section \ref{section:conclusions} to
our conclusions.
\section{The model and Boltzmann equations}
\label{section:themodel}
In the supersymmetric form of the Higgs triplet model \cite{anna},
one needs to introduce vector-like pairs of
$\Delta=(\Delta^{++},\Delta^+,\Delta^0)$ and
$\Delta^c=(\Delta^{c--}, \Delta^{c-},\Delta^{c0})$ with
hypercharge $Y=1$ and $-1$, allowing for the renormalizable
superpotential as follows:
\begin{equation}
W= \lambda_L LL \Delta + \lambda_1 H_1 H_1 \Delta + \lambda_2 H_2
H_2 \Delta^c + M \Delta \Delta^c,
\end{equation}
where $\lambda_L LL \Delta$ contains the neutrino mass term,
$\lambda_L \nu \nu \Delta^0$. In the supersymmetric limit, the
Higgs triplet vacuum expectation value $\langle \Delta^0 \rangle=
\lambda_2 \langle H_2^0 \rangle^2/M$ gives the neutrino mass
\begin{equation}
m_\nu = 2 \lambda_L \lambda_2 {v_2^2 \over M},
\label{eq:neutrino_mass}
\end{equation}
\noindent with $v_2\equiv \langle H^0_2 \rangle$. Working in the
supersymmetric limit, we use the same notations for the bosonic
and fermionic degrees of the superfields. A heavy particle $X$,
which can be any component of $\Delta, \Delta^c$, decays to the
leptonic final states, ${L}{L}$, as well as the Higgs final
states, ${H}_1{H}_1$ and ${H}_2{H}_2$, the
out-of-equilibrium decay of which will lead to a lepton asymmetry of
the universe. The corresponding decay rate is $ \Gamma_X =
{|\lambda_L|^2 + |\lambda_1|^2 + |\lambda_2|^2 \over 8\pi } M$.
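To fix ideas (an illustration of ours, taking $v_2\simeq 174$ GeV), Eq.~(\ref{eq:neutrino_mass}) with $m_\nu=0.05$ eV requires, for $M=1$ TeV,
\begin{align*}
\lambda_L\lambda_2=\frac{m_\nu M}{2 v_2^2}\simeq\frac{(5\times 10^{-11}\ {\rm GeV})(10^{3}\ {\rm GeV})}{2\,(174\ {\rm GeV})^2}\simeq 8\times 10^{-13}\,,
\end{align*}
so that at least one of the two couplings must be quite small for a TeV--scale triplet.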
One of the important quantities in our analysis is $K \equiv {\Gamma_X
/ H(M)}$ which is given by
\begin{equation}
K = {|\lambda_L|^2 + |\lambda_1|^2 + |\lambda_2|^2 \over 16\pi
|\lambda_L| |\lambda_2|} {|m_\nu| M^2 \over v_2^2 H(M) } \simeq
{16 \over \sqrt{B_L B_2}}\, \left({|m_\nu| \over 0.05 \mbox{ eV}
}\right),
\label{eq:k}
\end{equation}
where $H(M)=1.66 \sqrt{g_*} M^2/m_{Pl}$ is the Hubble parameter at
the temperature $T=M$, and $B_{L,2}$ are the branching ratios of
the triplet decays to $LL$ and $H_2 H_2$, respectively.
For the
relativistic degrees of freedom in thermal equilibrium $g_*$, we
will use the Supersymmetric Standard Model value: $g_*=228.75$.
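The numerical coefficient in Eq.~(\ref{eq:k}) can be checked as follows (a worked substitution of ours, with $v_2\simeq 174$ GeV and $m_{Pl}\simeq 1.22\times 10^{19}$ GeV): writing $B_L=|\lambda_L|^2/\Sigma$ and $B_2=|\lambda_2|^2/\Sigma$ with $\Sigma:=|\lambda_L|^2+|\lambda_1|^2+|\lambda_2|^2$, one has $|\lambda_L||\lambda_2|=\Sigma\sqrt{B_L B_2}$, so that
\begin{align*}
K=\frac{|m_\nu|\, m_{Pl}}{16\pi\times 1.66\,\sqrt{g_*}\; v_2^2\,\sqrt{B_L B_2}}\simeq\frac{(5\times10^{-11})(1.22\times10^{19})}{16\pi\times 1.66\times\sqrt{228.75}\times(174)^2}\,\frac{1}{\sqrt{B_L B_2}}\simeq\frac{16}{\sqrt{B_L B_2}}
\end{align*}
in GeV units, for $m_\nu=0.05$ eV; the analogous substitution with $g_*=108.75$ and the non--supersymmetric decay rate reproduces the coefficient $11.6$ quoted below.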
The parameter $K$ takes the minimum value of $K_{min}=32$ for
$B_L=B_2=1/2$ and gets larger for $B_L$ or $B_2 \ll 1$. For our
discussion, we will fix $m_\nu=0.05$ eV, which corresponds to the
atmospheric neutrino mass scale.
The resulting
lepton asymmetry of the universe is determined by the interplay of
the three asymmetries developed in the decay channels $X \to f_i$
where $f_i=LL, H_1H_1, H_2H_2$ for $i=L,1,2$, respectively.
Their cosmological evolutions crucially depend on the corresponding
$K$-values $K_i$ and the CP asymmetries $\epsilon_i $ which are
defined by
\begin{equation}
K_i\equiv KB_i \quad\mbox{and}\quad
\epsilon_{i} \equiv {\Gamma(X\to f_i ) - \Gamma(\bar{X}\to
\bar{f}_i ) \over \Gamma_X } \,.
\label{eq:ki_epsiloni}
\end{equation}
\noindent The above CP asymmetries satisfy
the relation $\epsilon_L + \epsilon_1 + \epsilon_2 \equiv 0$.
Note here that the model contains non-trivial CP asymmetries
$\epsilon_i$ which can be generated after integrating out
additional triplets or right-handed neutrinos
\cite{Tlepto,hybrids} or from CP phases in the supersymmetry
breaking sector \cite{softL,softT,chun_scopel}.
Before discussing how to generate leptogenesis in this model, let us
introduce for comparison its non--supersymmetric version.
The simplest realization of the triplet model corresponds to the
following Lagrangian:
\begin{equation}
{\cal L} = {\cal L}_{\rm SM} + |D_\mu \Delta|^2- M^2 |\Delta|^2+
\left (\lambda_L L L \Delta +
M \lambda_H\, H H \Delta^*\, +\, {\rm h.c.}\right),
\label{eq:non_susy_model}
\end{equation}
\noindent with the hypercharge assignments: $Y_L=-1/2$, $Y_H=1/2$
and $Y_{\Delta}=1$. The neutrino mass term is still given by
Eq.~(\ref{eq:neutrino_mass}), with the substitutions
$v_2\rightarrow v$ and $\lambda_2\rightarrow \lambda_H$, where
$v\equiv \langle H^0 \rangle$. For the relativistic degrees of
freedom in thermal equilibrium $g_*$, the corresponding Standard
Model value is $g_*=108.75$, while the total decay rate is given
by $ \Gamma_X = {|\lambda_L|^2 + |\lambda_H|^2
\over 16\pi } M$, so that the parameter $K \equiv {\Gamma_X /
H(M)}$ is given by:
\begin{equation}
K \simeq {11.6 \over \sqrt{B_L B_H}}\, \left({|m_\nu| \over 0.05
\mbox{ eV} }\right). \label{eq:k_non_susy}
\end{equation}
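As a numerical cross-check of Eq.~(\ref{eq:k_non_susy}), the short Python sketch below evaluates $K=\Gamma_X/H(M)$ directly. The inputs not fixed in the text above are assumptions: $H(T)=1.66\sqrt{g_*}\,T^2/M_{\rm Pl}$ with $M_{\rm Pl}=1.22\times 10^{19}$ GeV, $v=246$ GeV, and the type-II relation $m_\nu=\lambda_L\lambda_H v^2/M$ used to trade the couplings for the neutrino mass; with these choices the quoted prefactor $\simeq 11.6$ is reproduced.

```python
import math

g_star = 108.75          # relativistic degrees of freedom quoted in the text
v = 246.0                # GeV, electroweak vev (assumption)
M_Pl = 1.22e19           # GeV, Planck mass (assumption)
m_nu = 0.05e-9           # GeV, atmospheric neutrino mass scale

def K_parameter(M, B_L, B_H):
    """K = Gamma_X / H(M) for the two triplet decay channels.

    Assumes m_nu = lambda_L lambda_H v^2 / M, which gives
    |lambda_L|^2 + |lambda_H|^2 = m_nu M / (v^2 sqrt(B_L B_H)).
    """
    lam2 = m_nu * M / (v**2 * math.sqrt(B_L * B_H))
    Gamma_X = lam2 * M / (16.0 * math.pi)
    H_M = 1.66 * math.sqrt(g_star) * M**2 / M_Pl
    return Gamma_X / H_M

# K is independent of M and equals ~11.6/sqrt(B_L B_H):
print(K_parameter(1e10, 0.5, 0.5))   # ~ 23
```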
\smallskip
In order to discuss how to generate a lepton asymmetry in the
supersymmetric triplet seesaw model let us first consider the
general case of a charged particle $X$ ($\bar{X}$) decaying to a
final state $j$ ($\bar{j}$) and generating tiny CP asymmetric
number densities, $n_X-n_{\bar{X}}$ and $n_j-n_{\bar{j}}$. The
relevant Boltzmann equations in the approximation of
Maxwell--Boltzmann distributions are
\begin{eqnarray} \label{boltzmann}
{d Y_X \over d z} &=& - z K \left[ \gamma_D (Y_X-Y_X^{eq}) +
\gamma_A {(Y_X^2-Y_X^{eq\,2})\over Y_X^{eq}} \right]
\nonumber\label{eq:boltzmann_X}
\\
{d Y_x \over d z} &=& - z K \gamma_D \left[ Y_x-
\sum_k 2 B_k {Y_X^{eq}\over Y_k^{eq}} Y_k \right]
\nonumber\label{eq:boltzmann_x}
\\
{d Y_j \over d z} &=& 2 z K \gamma_D\left[ \epsilon_j (Y_X-Y_X^{eq})
+ B_j ( Y_x - 2 {Y_X^{eq} \over Y_j^{eq} } Y_j ) \right],
\label{eq:boltzmann_asym}
\end{eqnarray}
where the $Y$'s are the number densities in units of the entropy
density $s$ as defined by $Y_X\equiv n_X/s \approx n_{\bar{X}}/s$,
$Y_x \equiv (n_X-n_{\bar{X}})/s$, $Y_j\equiv (n_j-n_{\bar{j}})/s$,
and $z=M/T$. The quantities $\epsilon_i$ are defined in
Eq.~(\ref{eq:ki_epsiloni}).
The evolution of the $X$ abundance is determined by the decay and
inverse decay processes, as well as by the annihilation effect,
which are accounted for by the functions $\gamma_D$ and $\gamma_A$,
respectively. Note that the triplets are charged under the
Standard Model gauge group and thus have a nontrivial gauge
annihilation effect which turns out to be essential in determining
the final lepton asymmetry. Moreover, as a consequence of
unitarity, the relation $2 Y_x + \sum_j Y_j\equiv 0$ holds, so
that one can drop the equation for $Y_x$ by making the
replacement:
\begin{equation}
Y_x=-{1\over2} \sum_j Y_j,
\label{eq:sum_asym}
\end{equation}
in the last of Eqs.~(\ref{boltzmann}).
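A minimal numerical sketch of Eqs.~(\ref{boltzmann}) is given below (Python, using SciPy). It is schematic, not the calculation of this paper: the annihilation term $\gamma_A$ is dropped (the decay-dominated, high-$M$ regime), only one pair of channels $L$/$H$ with $\epsilon_L=-\epsilon_H$ is kept, and the equilibrium abundance $Y_j^{eq}$ of the relativistic final states is approximated by a constant whose value, $2\times 10^{-3}$, is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn, kve

g_star = 108.75
Y_REL = 2e-3   # schematic constant Y_j^eq for relativistic final states (assumption)

def Y_X_eq(z):
    # Maxwell-Boltzmann equilibrium abundance for one triplet degree of freedom
    return 45.0 / (4.0 * np.pi**4 * g_star) * z**2 * kn(2, z)

def rhs(z, y, K, B_L, eps_L):
    """Eqs. (boltzmann) with gamma_A = 0 and Y_x eliminated through
    Eq. (sum_asym): Y_x = -(Y_L + Y_H)/2."""
    Y_X, Y_L, Y_H = y
    B_H, eps_H = 1.0 - B_L, -eps_L
    gD = kve(1, z) / kve(2, z)        # K_1(z)/K_2(z) without underflow
    Yeq = Y_X_eq(z)
    Y_x = -0.5 * (Y_L + Y_H)
    dY_X = -z * K * gD * (Y_X - Yeq)
    dY_L = 2*z*K*gD * (eps_L*(Y_X - Yeq) + B_L*(Y_x - 2*Yeq/Y_REL*Y_L))
    dY_H = 2*z*K*gD * (eps_H*(Y_X - Yeq) + B_H*(Y_x - 2*Yeq/Y_REL*Y_H))
    return [dY_X, dY_L, dY_H]

def efficiency(K, B_L, eps_L=1.0, z0=0.1, z1=50.0):
    """eta of Eq. (efficiency): |Y_L| / (|eps_L| * 2 Y_X) at T >> M."""
    y0 = [Y_X_eq(z0), 0.0, 0.0]       # thermal initial triplet abundance
    sol = solve_ivp(rhs, (z0, z1), y0, args=(K, B_L, eps_L),
                    method="Radau", rtol=1e-8, atol=1e-16)
    return abs(sol.y[1, -1]) / (abs(eps_L) * 2.0 * Y_X_eq(z0))
```

Even this toy system reproduces the qualitative point made below: for $K=20$ the symmetric case $B_L=1/2$ is strongly washed out, while a slow leptonic channel ($B_L=10^{-3}$, so $K_L\ll1$) keeps a large efficiency.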
In the supersymmetric version of the model, the heavy particle $X$ can
be any of the six components $X=\Delta^{\pm\pm},
\Delta^{\pm}$ or $\Delta^{0, \bar{0}}$ of the two triplets $(\Delta,
\Delta^c)$. Each of them follows the first Boltzmann equation in
Eq.~(\ref{boltzmann}) where $\gamma_D$ and $\gamma_A$ are given by
\begin{eqnarray}
\gamma_D &=& {K_1(z) \over K_2(z)} \label{eq:gamma_d}\\
\gamma_A &=& {\alpha_2^2 M \over \pi K H(M)}
\int^\infty_1\!\! dt\, \frac{K_1(2zt)}{K_2(z)}\, t^2 \beta(t)\, \sigma(t),
\label{eq:sigmat_int}
\end{eqnarray}
with
\begin{eqnarray}
&&\sigma(t)=(14+11 t_w^4)(3+\beta^2)+(4+ 4 t_w^2+t_w^4)\left [
16+4(-3-\beta^2 + \frac{\beta^4+3}{2\beta}\ln
\frac{1+\beta}{1-\beta})\right ]\nonumber \\ &&+4 \left
[-3+\left(4-\beta^2+\frac{(\beta^2-1)(2-\beta^2)}{\beta}\ln\frac{1+\beta}{1-\beta}\right)
\right ],\label{eq:sigmat}
\end{eqnarray}
where $t_w\equiv\tan(\theta_W)$ with $\theta_W$ the Weinberg
angle, and $\beta(t)\equiv \sqrt{1-t^{-2}}$. The function
$\gamma_D$ is the ratio of the modified Bessel functions of the
first and second kind which as usual takes into account the decay
and inverse decay effects in the Maxwell--Boltzmann limit. The
function $\gamma_A$ \cite{chun_scopel} accounts for the
annihilation cross-section of a triplet component $X$, summing over
all the annihilation processes $X\bar{X}^\prime \to $ Standard Model
gauge bosons/gauginos and fermions/sfermions where $X^\prime$ is
some triplet component or its fermionic partner.
The corresponding expression for $\sigma(t)$ in the
non-supersymmetric version of the model, accounting for the
annihilations of the triplets to the Standard Model gauge bosons
and fermions is given by \cite{Hambye}:
\begin{eqnarray}
&&\sigma(t)=\left (25+\frac{41}{2} t_w^4\right)\frac{\beta^2}{3}+(4+ 4 t_w^2+t_w^4)\left [
4+4(1-\beta^2 + \frac{\beta^4-1}{2\beta}\ln
\frac{1+\beta}{1-\beta})\right ]\nonumber \\ &&+4 \left
[-1+\left(2-\frac{5}{3}\beta^2+\frac{(\beta^2-1)^2}{\beta}\ln\frac{1+\beta}{1-\beta}\right)
\right ].\label{eq:sigmat_sm}
\end{eqnarray}
The r\^ole played by annihilation and decay in the determination of
the triplet density $Y_{X}$ can be understood in the following
way. When the branching ratios $B_i$ of the different decay channels are all
of the same order, inverse decays freeze out at a temperature $z_f$
determined by
\begin{equation}
K
z_f^{5/2} e^{-z_f}=1.
\label{eq:zf_K}
\end{equation}
At that temperature the thermal averages of
the annihilation and decay rates can be compared by considering the
following ratio \cite{fry}:
\begin{equation}
\frac{\langle\Gamma_A\rangle}{\langle\Gamma_D\rangle}(z_f)\simeq 2 {\alpha^2\over
\alpha_X} z_f^{-3/2}e^{-z_f},
\label{eq:ratio}
\end{equation}
\noindent where $\alpha_X = K H(M)/M$. If the quantity in
Eq.~(\ref{eq:ratio}) is bigger (smaller) than 1 the triplet
freeze--out is determined by annihilation (inverse--decay). Thus,
in this case, the annihilation effect becomes dominant for $$M
\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 10^{15} z_f e^{-2z_f} \mbox{ GeV},$$ and so it can be
neglected when $M\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 10^8$ GeV for $K=32$ and $M\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 1$ TeV for
$K\approx 4300$.
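These two numerical statements can be reproduced with a few lines of Python: `z_freeze` picks the large-$z$ root of Eq.~(\ref{eq:zf_K}), and the $10^{15}$ GeV prefactor is taken from the estimate above (the function names are ours, for illustration).

```python
import math
from scipy.optimize import brentq

def z_freeze(K):
    """Large-z root of K z^(5/2) exp(-z) = 1, Eq. (zf_K); valid for K > ~2."""
    return brentq(lambda z: K * z**2.5 * math.exp(-z) - 1.0, 2.5, 60.0)

def M_crossover(K):
    """Mass (GeV) below which annihilation dominates: M ~ 1e15 z_f exp(-2 z_f)."""
    zf = z_freeze(K)
    return 1e15 * zf * math.exp(-2.0 * zf)

print(z_freeze(32), M_crossover(32))       # z_f ~ 9,  M ~ 1.6e8 GeV
print(z_freeze(4300), M_crossover(4300))   # z_f ~ 15, M ~ 1e3 GeV
```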
However, if one has $K_i\equiv B_i K<<1$ in one channel, with the
same quantity bigger than 1 in the other channels, the condition
of Eq.~(\ref{eq:zf_K}) must be modified by using $K_i$ instead of
$K$. This leads to a smaller $z_f$ and shifts to higher masses the
transition between dominance of annihilation and inverse decay in
the determination of the triplet density.
\section{Set-up for the discussion}
\label{section:discussion}
In the following, we will discuss the phenomenology of thermal
leptogenesis within the more generic version of the framework
introduced in the previous section, i.e. by considering the
branching ratios $B_i$ and the CP asymmetries $\epsilon_i$ as free
parameters, with the additional constraints $\sum B_i=1$, $\sum
\epsilon_i=0$, and $|\epsilon_i|\le 2 B_i$ (this last condition
ensures that all physical amplitudes are positive, and simply
states that the amount of CP violation cannot exceed 100\% in each
channel). This choice implies 2 free parameters in the
non--supersymmetric version of the model and 4 in the
supersymmetric one, besides the triplet mass parameter $M$. In
order to show our results, we choose to discuss, for every
particular choice of the parameters, the amount of CP violation
which is needed to provide successful leptogenesis, which we
define as the value $\bar{\epsilon}$ that the biggest of the
$|\epsilon_i|$'s must have in order to provide $Y_L=10^{-10}$ for
the final lepton asymmetry (this amount of $Y_L$ leads to the
baryon asymmetry compatible with observations once
reprocessed by sphaleron interactions). Since sphaleron conversion
is suppressed at temperatures below electroweak symmetry
breaking, in our calculation we stop the evolution of $Y_L$ below
$T=m_Z$, with $m_Z$ the Z--boson mass.
The quantity $\bar{\epsilon}$ is inversely proportional to the usual
efficiency factor $\eta$, defined by the relation:
\begin{equation}
Y_L=\epsilon_L \eta (Y_X+Y_{\bar{X}})|_{T>>M},
\label{eq:efficiency}
\end{equation}
\noindent where $\eta$ is determined by the amount of wash-out due
to inverse decays and by the suppression of the number density $Y_X$ by
gauge annihilations. One gets $\eta=1$ in the limit where the
triplets decay strongly out-of-equilibrium when $T>>M$.
By fixing $\bar{\epsilon}$ we reduce by one the number of free
parameters. Moreover, since an overall minus sign for the
$\epsilon_i$'s implies a change of sign in the final value of $Y_L$,
we discuss $Y_L$ and $\bar{\epsilon}$ in absolute value. In this way
only the ratios among the $\epsilon_i$'s are relevant as input
parameters, and we can define the $\epsilon_i$'s in
such a way that $\max(|\epsilon_i|)=1$.
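The parameter-space conventions above can be encoded in a small helper (Python); the function names are ours, introduced only for illustration.

```python
def valid_parameters(B, eps, tol=1e-9):
    """Check branching ratios B_i and CP asymmetries eps_i against the
    constraints of the text: sum B_i = 1, sum eps_i = 0, |eps_i| <= 2 B_i."""
    if abs(sum(B) - 1.0) > tol or abs(sum(eps)) > tol:
        return False
    return all(abs(e) <= 2.0 * b + tol for b, e in zip(B, eps))

def normalize_eps(eps):
    """Rescale so that max |eps_i| = 1 (only the ratios matter)."""
    m = max(abs(e) for e in eps)
    return [e / m for e in eps]
```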
\section{The non-supersymmetric version of the model}
\label{section:non_susy}
We start by discussing the non--supersymmetric version of the
model. In this case the triplets have two decay channels,
$X\rightarrow LL, HH$, and the process of triplet annihilation is
governed by Eq.~(\ref{eq:sigmat_sm}). In our convention the CP
asymmetries are fixed to $\epsilon_L=-\epsilon_H=1$, so there are only
2 free parameters, the triplet mass $M$ and one of the two
branching ratios, for example $B_L$, with $B_L+B_H=1$. The result
of our calculation is shown in Figure
\ref{fig:contour_epsilon_bar}, which is consistent with the result
of Ref.~\cite{Hambye}. In our figure, the curves at constant
values of $\bar{\epsilon}$ are plotted in the $B_L$--$M$ plane (in
order to magnify the regions where $B_L<<B_H$ or $B_L>>B_H$, the
two complementary quantities $B_L$ and $1-B_L$ are plotted in
logarithmic scale in the range from 0 to 0.5). The marked regions in the
lower corners are excluded by the conditions $|\epsilon_i|\le 2
B_i$, $i=L,H$, while those in the upper corners are excluded
because one of the Yukawa couplings is non--perturbative due to
Eq.~(\ref{eq:neutrino_mass}). The figure is symmetric under the
exchange $B_L\rightarrow 1-B_L=B_H$, a feature that can be easily
explained by the fact that the parameter $K$ remains the same
(\ref{eq:k}) and because the identity (\ref{eq:sum_asym}) implies
$|Y_L|=|Y_H|$ at late times (when all the triplets have decayed
away).
\begin{figure}
\includegraphics[width=0.35\textwidth,angle=-90,bb = 126 31 397 766]{contour_join_last.ps}
\caption{\label{fig:contour_epsilon_bar} Contour plots of the
amount of CP violation $\bar{\epsilon}$ needed to provide the
observed baryon asymmetry in the Universe (as defined in Section
\protect\ref{section:discussion}), in the non--supersymmetric
version of the triplet see-saw model described in Section
\protect\ref{section:non_susy}, as a function of the triplet decay
branching ratio to leptons $B_L$, and of the triplet mass $M$. The
marked lower corners are excluded by the conditions
$|\epsilon_i|\le 2 B_i$, $i=L,H$, while in the upper corners one
of the Yukawa couplings becomes bigger than 1 due to
Eq.~(\protect\ref{eq:neutrino_mass}).}
\end{figure}
The change of behavior of the various curves between $M\simeq
10^{8}$ and $M\simeq 10^{10}$ GeV signals a transition in the
evolution of the triplet number density between
decays/inverse--decays and annihilations. When the freeze--out
temperature of the triplets is determined by
decays/inverse--decays, $\bar{\epsilon}$ is only a function of the
branching ratios (note that the parameter $K$ does not depend on
$M$, see Eq.~(\ref{eq:k})), so curves are parallel to the vertical
axis. On the other hand, when it is the annihilation process that
determines the triplet density freeze-out, this strongly
suppresses the efficiency $\eta$ at low values of $M$ so that
higher values of $\bar{\epsilon}$ are needed in order to obtain
successful leptogenesis.
Another important feature that can be seen in the figure is given
by the fact that the highest efficiencies (lower values for
$\bar{\epsilon}$) are reached whenever $B_L<<B_H$ or $B_H<<B_L$.
As already discussed in Ref.~\cite{Hambye}, this is due to the
fact that in these cases one of the two decay channels has
$K_i<<1$ (even if, due to Eq.~(\ref{eq:k}), $K\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 32$), and so
is ``slow'' compared to the Hubble expansion, while the other is
``fast''. As a consequence of this, the ``slow'' channel can decay
out of equilibrium with efficiency close to 1 and develop a
sizeable asymmetry $Y_{slow}$, while, at the same time, a
corresponding asymmetry with opposite sign $Y_x$ is left over in
the triplet density (since in this process $Y_x+Y_{slow}$ is
approximately conserved). The quantity $Y_x$ is eventually
converted into an asymmetry $Y_{fast}$ in the fast decay channel
(with $|Y_{slow}|=|Y_{fast}|$ at later times due to
Eq.~(\ref{eq:sum_asym})), when the triplets get out of kinetic
equilibrium and decay. So, the reason why $Y_{fast}$ is not
erased by the sizeable wash-out effect is clear: due to wash-out,
triplets and anti-triplets decay practically with the same rate,
but more final particles are produced than final antiparticles
because there are more triplets than antitriplets available for
decay in the first place. In this way an asymmetry in the triplet
density can be stored and eventually converted to a lepton
asymmetry, acting in practice as a lepton--number reservoir. This
very simple physical picture can be significantly modified if more
than two decay channels are present, as will be illustrated in the
following sections for the supersymmetric version of the model.
\begin{figure}
\includegraphics[width=0.65\textwidth,bb =41 196 516 634 ]{SM.ps}
\caption{\label{fig:SM} Amount $\bar{\epsilon}$ of CP violation
needed to provide the baryon asymmetry observed in the Universe,
as a function of the triplet mass $M$. Solid lines are calculated
in the non--supersymmetric triplet see-saw model of Section
\protect\ref{section:non_susy}, while dashed ones in the
supersymmetric model discussed in Section
\protect\ref{section:susy}, where $B_1=\epsilon_1=0$. Each curve
corresponds to a different choice of the parameter $B_L=0.5$,
$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$,
$10^{-7}$ and $10^{-8}$, from top to bottom. The range of each
curve is limited by the unitarity and perturbativity constraints
shown in Fig. \protect\ref{fig:contour_epsilon_bar}, as explained
in the text.}
\end{figure}
\medskip
The dependence of the quantity $\bar{\epsilon}$ as a function of
the triplet mass $M$ is discussed in Figure \ref{fig:SM}, where
the solid lines correspond to the non--supersymmetric model
discussed in this section, while the dashed ones show a
supersymmetric modification of the model discussed in the next
section. Several features of this figure are worth noticing, as
they outline quite general properties of triplet thermal
leptogenesis:
\begin{itemize}
\item As expected, $\bar{\epsilon}$ grows with $B_L$, and the
highest values (lowest efficiencies) are obtained when
$B_L=B_H=1/2$.
\item The lowest value of $\bar{\epsilon}$ is reached at
$\bar{\epsilon}\simeq 10^{-8}$. This value corresponds to the limit of
out-of-equilibrium decay, and can be easily obtained from
Eq.~(\ref{eq:efficiency}) by setting the efficiency $\eta=1$. In fact,
since $Y_X|_{T>>M}\simeq 1/g_*$, where $g_*\simeq 10^{2}$ is the
number of degrees of freedom in the Early Universe, from
$Y_L\simeq 10^{-10}\simeq\bar{\epsilon} 10^{-2}$ one gets
$\bar{\epsilon}\simeq 10^{-8}$. This minimal value for $\bar{\epsilon}$
is obtained on general grounds that do not depend on
the microphysics, so it is not expected to change
in the modifications of the model that will be discussed in the next Sections.
\item The available range for $M$ at fixed $\bar{\epsilon}$ is bounded
from below by the unitarity constraint, and from above by the
perturbativity limit. As shown in Figure
\ref{fig:contour_epsilon_bar}, the two bounds converge at low
$B_L$ or $1-B_L$ corresponding to small values of
$\bar{\epsilon}$, and eventually meet (outside the bounds of the
figure) for $B_L=1-B_L\simeq 10^{-8}$. That is why the range of
$M$ gets smaller for low values of $\bar{\epsilon}$, and
eventually a particular value of $M\simeq 10^{10}$ GeV is singled out
for which the efficiency reaches its maximum value.
\item Two different regimes for $M$ are clearly distinguishable. In
particular, the strong loss of efficiency at lower values of $M$ is
due to the effect of annihilations in the determination of the
triplet freeze--out temperature. This temperature is significantly
lowered, with a consequent suppression of the final lepton
asymmetry, compared to the case where decays/inverse--decays
dominate, which corresponds to the regime of higher values for
$M$.
\item One realizes that $K$ increases but $K_L=K B_L$ decreases from top to
bottom. When the annihilation dominates for lower $M$, the
Boltzmann equations show that the quantity $Y_X-Y_X^{eq}$ is
determined independently of $K$ and thus the final asymmetry $Y_L$
increases with $K$. As mentioned at the end of Section II, the
figure also shows that the dominance of inverse decay starts at
larger $M$ for smaller $K_L$.
\end{itemize}
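The floor $\bar{\epsilon}\simeq 10^{-8}$ quoted in the second item is pure arithmetic, reproduced below with the round numbers used in the text.

```python
# Out-of-equilibrium limit: eta = 1 and Y_X|_{T>>M} ~ 1/g_* with g_* ~ 1e2,
# so the target Y_L = 1e-10 fixes the minimal CP asymmetry.
g_star = 1e2
Y_L_target = 1e-10
eps_bar_min = Y_L_target / (1.0 / g_star)
print(eps_bar_min)   # 1e-8
```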
\section{The supersymmetric version of the model}
\label{section:susy} In the supersymmetric version of the model
the particle content is enlarged, both because of the additional
supersymmetric degrees of freedom (striplets, sleptons and Higgsinos),
and because one more Higgs (plus Higgsino) doublet is
included. Actually, this latter aspect will turn out to be more
relevant than the former for our discussion. In fact, barring possible
supersymmetry--breaking effects, which are expected to be suppressed for
values of $M$ above the supersymmetry--breaking scale, triplet decay
amplitudes to particles belonging to the same supermultiplets are the
same, and can be factored out in the Boltzmann equations for
asymmetries. This implies that including supersymmetric partners in
Eqs.~(\ref{eq:boltzmann_asym}) is as simple as multiplying by 2 all
the relevant degrees of freedom, and the branching ratios $B_L$, $B_1$
and $B_2$ and CP asymmetries $\epsilon_L$, $\epsilon_1$ and
$\epsilon_2$ will refer to a sum over all the members of each
supermultiplet. As far as the triplet density $Y_X$ is concerned, the
supersymmetric version of the annihilation cross section given in
Eq.~(\ref{eq:sigmat}) must be used, where annihilations to
supersymmetric particles, as well as coannihilations of the triplets
with their fermionic superpartners, are taken into
account \cite{chun_scopel} (in our equations we assume that triplets
and striplets are degenerate in mass). This implies a low--temperature
annihilation cross--section about a factor 8 bigger compared to the
non--supersymmetric case \cite{chun_scopel}, and a corresponding loss
of efficiency at low masses, where annihilation drives triplet
freeze--out.
\subsection{Standard Model-like case without ${\bf X\to H_1
H_1}$}
As long as only the Higgs supermultiplet $H_2$ is included
in the model (i.e., in the case with $B_1$=$\epsilon_1$=0), the
resulting phenomenology is not expected to change qualitatively
compared to the non-supersymmetric case: from the practical point
of view, the supersymmetric case with only $H_2$ corresponds just
to the non-supersymmetric one where the degrees of freedom are
multiplied by two and the annihilation cross section is about a
factor of 8 bigger (changing the degrees of freedom implies also a
slight modification of the $K$ parameter of about $\sqrt{2}$ at
fixed branching ratios). In order to show this point, in Figure
\ref{fig:SM} we show with dashed lines the result of a calculation
analogous to that shown by solid ones, where the supersymmetric
version of the model with $B_1$=$\epsilon_1$=0 is used. As can be
seen, the two models are qualitatively quite similar, the
supersymmetric scenario implying worse efficiencies compared to
the non--supersymmetric one over the whole range of $M$. This may
be explained by the fact that in supersymmetry both the
annihilation cross section (which lowers the efficiency at low
$M$) and the $K$ parameter (which reduces it at high $M$) are
bigger compared to the non-supersymmetric case.
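One way to read the $\sqrt{2}$ shift of $K$ quoted above (our interpretation, not spelled out in the text): at fixed couplings each decay channel gains its superpartner final state, doubling $\Gamma_X$, while $H(M)\propto\sqrt{g_*}$ grows because $g_*$ roughly doubles in the supersymmetric plasma. The $g_*$ values below are assumptions.

```python
import math

gamma_factor = 2.0                       # Gamma_X doubles (superpartner channels)
h_factor = math.sqrt(228.75 / 108.75)    # H(M) ratio for the assumed g_* values
k_factor = gamma_factor / h_factor
print(k_factor)                          # ~ 1.4, roughly sqrt(2)
```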
\subsection{Role of the third channel: ${\bf X\to H_1 H_1}$}
When non--vanishing $B_1$ and $\epsilon_1$ are considered, the number
of free parameters becomes 4 (two branching ratios and two
asymmetries) plus the triplet mass $M$. In this case, a qualitatively
different phenomenology arises compared to the previous cases.
\begin{figure}
\includegraphics[width=0.65\textwidth,bb =41 196 516 634 ]{high_m.ps}
\caption{\label{fig:high_M} Same as in Fig. \protect\ref{fig:SM}.
Thin dashed lines are calculated in the supersymmetric model discussed
in Section \protect\ref{section:susy}, where $B_1=\epsilon_1=0$,
while solid and thick--dashed lines show the same quantity for
$B_L=B_2\simeq 1/2$, $\epsilon_1=1$
and, from top to bottom, $B_1=10^{-2}$, $B_1=10^{-3}$, $B_1\le
10^{-4}$. For solid curves $\epsilon_L=0$, while
for thick-dashed ones $\epsilon_L=-1$.}
\end{figure}
A first remarkable difference is due to the fact that $B_1$ is not
constrained by Eq.~(\ref{eq:neutrino_mass}) and can be taken
arbitrarily small even for high values of $M$. This implies that
out-of-equilibrium decay and very low values of $\bar{\epsilon}$
are expected to be reached without encountering the upper bound
on $M$ observed in the curves of Fig. \ref{fig:SM}, due to the non
perturbativity of $\lambda_L$ or $\lambda_2$. This is shown in
Figure \ref{fig:high_M}, where $\bar{\epsilon}$ is plotted as a
function of $M$ for $B_L$=$B_2$=1/2, and for very low values of
$B_1$.
The other important feature of the model is that, as in all scenarios
with more than two decay channels, now a hierarchy in the CP violation
parameters is possible. This implies that, for instance, in some
particular channel CP violation can be suppressed compared to the
other two, or even absent. An example of this is again shown in
Fig. \ref{fig:high_M}, where the values $\epsilon_1$=1 and
$\epsilon_L=0,-1$ are assumed for each value of $B_1$. As can be seen
from Fig. \ref{fig:high_M}, even the case $\epsilon_L=0$ can provide
leptogenesis with a good efficiency. This fact at first sight might
seem quite surprising, since one could wonder how a CP--conserving decay
of the triplets to leptons may lead to any lepton asymmetry at
all. However the answer to this question is contained in the same
mechanism explained in Section \ref{section:non_susy}, where the
asymmetry in the triplet density produced by out-of-equilibrium decay
in a slow decay channel could be fully converted to an asymmetry in
the fast one even in the presence of a very strong wash--out effect. As
already pointed out previously, in the fast channel CP violation
produced by triplet decays is negligible even in the case where
$\epsilon_i\ne 0$, the final asymmetry being produced only by the fact
that the number of decaying triplets is different from the number of
decaying antitriplets. So, having $\epsilon_i=0$ or strongly
suppressed in this fast channel does not make any difference.
The existence of a hierarchy in the $\epsilon_i$ parameters can
however strongly affect the physical mechanism described above
whenever CP violation is suppressed in a slow channel, as can be
na\"ively expected since this is the channel that drives
leptogenesis. As a matter of fact, if the slow decay channel
has a small $\epsilon_i$ it cannot develop a sizeable asymmetry
$Y_{slow}$, while, at the same time, the corresponding asymmetry with
opposite sign $Y_x$ left over in the triplet density (due to the
approximate conservation of $Y_x+Y_{slow}$) is also suppressed,
thus leading to a suppression of the asymmetry also in the other,
fast channel.
\subsection{The possibility of cancellations}
In the fast channel another important fact may arise: the two
mechanisms of asymmetry production (i.e. direct CP violation in the
decay and asymmetry in the density of the triplets) may give rise to
effects of the same order of magnitude, and, if the sign of the
$\epsilon_i$ parameters is the same in the fast and in the slow
channels, even cancel out, thus leading to a vanishing final
asymmetry. In this scenario, which implies a numerical cancellation in
the Boltzmann equations, the more populated between $X$ and $\bar{X}$
decays with the lower rate to the corresponding final state $L$ or
$\bar{L}$, in such a way that no final asymmetry is produced.
\begin{figure}
\includegraphics[width=0.65\textwidth,bb =41 196 516 634 ]{peaks1.ps}
\caption{\label{fig:peaks1} Same as in
Fig. \protect\ref{fig:high_M}, solid lines, but with $\epsilon_L=1$
and $\epsilon_1=0.1$ (solid lines) and $\epsilon_1=-0.1$ (dashed
lines). The presence of peaks in $\bar{\epsilon}$ signals a vanishing
efficiency $\eta$.}
\end{figure}
In order to show this effect, in Fig.~\ref{fig:peaks1} the
parameter $\bar{\epsilon}$ is plotted as a function of $M$ for the
same choice of parameters as for the solid lines of Fig.
\ref{fig:high_M}, but assuming $\epsilon_L$=1 and $\epsilon_1=0.1$
(solid lines) and $\epsilon_1=-0.1$ (dashed lines). As can be
seen in the figure, peaks now arise for $\epsilon_1=0.1$,
signaling a vanishing efficiency $\eta$, while they are absent
for $\epsilon_1=-0.1$. As explained before, this happens because
the $\epsilon_i$ parameter corresponding to the slowest decay
channel is suppressed compared to the other ones, and the
cancellation mechanism described in the previous paragraph may set
in when $\epsilon_1$ and $\epsilon_L$ have the same sign.
\begin{figure}
\includegraphics[width=0.65\textwidth,bb =41 196 516 634 ]{epsilon2_0.ps}
\caption{\label{fig:epsilon2_0} Same as in Fig. \protect\ref{fig:SM},
but with $\epsilon_2$=0. From top to bottom, the solid lines correspond to the case when
$B_1$=$B_L=1/2$ and $B_2=10^{-4}$, $10^{-3}$, $10^{-2}$;
the dashed lines to $B_2=10^{-4}$ and $B_1=10^{-1}$,
$10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$;
the dotted lines to $B_2=10^{-4}$ and $B_L=10^{-1}$,
$10^{-2}$, $10^{-3}$, $10^{-4}$.}
\end{figure}
\subsection{Lepton asymmetry with vanishing ${\bf
\epsilon_2}$}
As discussed previously, when its CP--violating term is exactly
vanishing, the slow channel no longer drives leptogenesis. If its
branching ratio is also much smaller than the other two, the slow
channel may be neglected altogether. So, for instance, when
$\epsilon_1=0$, the case $B_1<<B_{L,2}$ is equivalent to taking
$B_1=0$ (so, the upper dashed curve in Fig. \ref{fig:SM}, where
$B_1=0$ and $B_L$=$B_2$=1/2, is equivalent to the case where $B_1$
is small but non--vanishing). Taking $\epsilon_{L,2}$=0 and very
small values for the corresponding $B_{L,2}$ is still equivalent
to neglecting the corresponding channel, although, due to
Eqs.~(\ref{eq:neutrino_mass},\ref{eq:k}), a higher value for the $K$
parameter, and a possible perturbativity constraint at high $M$
are induced.
In order to show this, in Fig. \ref{fig:epsilon2_0} we show with solid
lines the case $\epsilon_2$=0, $B_2<<B_L$=$B_1\simeq$1/2. As expected,
in this scenario the efficiency is very poor, since the slow channel,
having vanishing $\epsilon_i$, cannot drive leptogenesis through
out-of-equilibrium decay. Moreover, all curves show an upper bound on
$M$ due to the perturbativity constraint, that shifts to lower $M$ for
smaller $B_2$. As already mentioned, for these curves, due to
Eq.~(\ref{eq:k}), the parameter $K$ is very high. Moreover, as
expected, $\bar{\epsilon}$ scales with $K\propto 1/\sqrt{B_2}$ (the
efficiency is expected to scale as $1/(z_f K)$ for $K>>1$ and if the
inverse--decay effect is important \cite{fry}). It is worth noticing
now that the relative weight of the two competing effects of
annihilation and decay/inverse--decay in the determination of the
triplet freeze--out temperature depends on $K$, since the rate of
the latter effect grows with $K$ while the former does not. So, the net
consequence of a big $K$ is to suppress the annihilation effect,
which, in turn, is instrumental in lowering the efficiency $\eta$ at
low values of $M$. As a consequence of this, a larger $K$ reduces the
values of $M$ where annihilation starts to dominate, as is observable
in Fig. \ref{fig:epsilon2_0}, where the change of behavior of all the
solid curves at low $M$ is shifted to the left. At lower $M$, when
annihilation dominates in the determination of the (s)triplet density,
the efficiency grows with $K$.
If, on the other hand, in this same scenario a hierarchy between
$B_L$ and $B_1$ is assumed, the presence of another slow channel
whose corresponding CP--violation parameter $\epsilon$ is not
suppressed can in principle increase the efficiency, eventually
allowing the values of $\bar{\epsilon}$ typical of early
out-of-equilibrium decay to be reached. This effect, however, is only
possible in the case $B_1<B_L$; the alternative case $B_L<B_1$ has
a much bigger value of $K$, which implies a better efficiency at
low mass but a worse one at higher $M$. This is shown in Fig.
\ref{fig:epsilon2_0}, where the dashed curves, which have
$\epsilon_2$=0, $B_2$=$10^{-4}$, and $B_1$=$10^{-1}$, $10^{-2}$,
$10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, from top to bottom,
can reach values as low as $\bar{\epsilon}\simeq 10^{-8}$ when
$B_1$ is sufficiently small. On the other hand, the dotted curves
in the same figure, with $\epsilon_2$=0, $B_2$=$10^{-4}$, and
$B_L$=$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, from top to
bottom have a worse efficiency, as expected. Note, in this last
case, the {\it inverse} proportionality between $\bar{\epsilon}$
and $K$: this is due to the fact that the relevant parameter is
$K_L\propto \sqrt{B_L/B_2}$, so that when $K$ increases $K_L$ gets
smaller.
\begin{figure}
\includegraphics[width=0.65\textwidth,bb =41 196 516 634 ]{epsilonl_0.ps}
\caption{\label{fig:epsilonl_0} Same as in Fig. \protect\ref{fig:SM},
but with $\epsilon_L$=0. The upper dotted line corresponds to
$B_L=10^{-6}$ and $B_2=10^{-2}$. For the two solid lines
$B_1=10^{-2}$, $B_2=10^{-6}$ (left) and $B_2=10^{-2}$, $B_1=10^{-6}$
(right). This last curve also corresponds to the case $B_L=10^{-2}$,
$B_1=10^{-6}$. The dashed curve has $B_L=10^{-2}$ and $B_2=10^{-6}$.}
\end{figure}
\subsection{Lepton asymmetry with vanishing ${\bf
\epsilon_L}$}
As pointed out in the previous paragraph, the case $\epsilon_L$=0,
$B_L<<B_{1,2}$ is expected to give a very low efficiency for
leptogenesis. An example of this case is shown in Fig.
\ref{fig:epsilonl_0}, where the upper dotted line shows the case:
$\epsilon_L=0$, $B_L=10^{-6}$, $B_2=10^{-2}$. Eventually, taking
even smaller values of $B_L$ is equivalent to dropping $L$ from
the Boltzmann equation. On the other hand, having $\epsilon_L$=0,
$B_1<<B_{L,2}$ leads to high values of the efficiency, as already
discussed in Figure \ref{fig:high_M}. Apart from a higher value of
$K$, which implies a better efficiency at small masses but a worse
one overall, the case $\epsilon_L$=0, $B_2<<B_{L,1}$ is analogous,
since now it is the $H_2$ channel that drives leptogenesis,
decaying out-of-equilibrium with a non--suppressed $\epsilon_2$.
An example of this scenario is given by the dashed line in Fig.
\ref{fig:epsilonl_0}, where $\epsilon_L$=0, $B_2=10^{-6}$,
$B_L=10^{-2}$.
On the other hand, a qualitatively different situation is given
by: $\epsilon_L$=0, $B_{1,2}<<B_L$. In this case there are two
slow decay channels, and the corresponding $\epsilon_{1,2}$ are
not suppressed. However, since $\epsilon_1$=-$\epsilon_2$, and,
due to Eq.~(\ref{eq:sum_asym}), at late times $Y_L=-Y_1-Y_2$, a
cancellation between $Y_1$ and $Y_2$ may occur if both quantities
reach their out-of-equilibrium value, thus leading to a vanishing
$Y_L$. This implies that, in order to reach a good efficiency,
some hierarchy between $B_1$ and $B_2$ is needed, in order to have
only one slow channel. This is again shown in Fig.
\ref{fig:epsilonl_0} by the two solid lines, that correspond to:
$\epsilon_L$=0, $B_1=10^{-2}$, $B_2=10^{-6}$ (left) and
$\epsilon_L$=0, $B_1=10^{-6}$, $B_2=10^{-2}$ (right). In this case
both curves reach a good efficiency, with a less stringent
perturbativity limit for the latter due to a smaller value of $K$.
In both cases, since $K_{slow}\equiv B_{slow} K<<1$, inverse
decays in the slow channel freeze out very early, when
annihilation still dominates in the determination of the triplet
density. As a consequence of this the efficiency is a growing
function of $K$, and this explains why the solid curve on the left
($K\simeq$ 16000) is below the one on the right ($K\simeq$ 160).
Besides, in the latter case the curve would remain the same under the
exchange ($B_L \leftrightarrow B_2$), i.e.\ $B_1=10^{-6}$,
$B_L=10^{-2}$. This is due to the fact that the slow channel would
remain the same, as well as the corresponding $\epsilon_i$, while
also $K$ would be unchanged.
\section{Conclusions}
\label{section:conclusions} In this paper we have analyzed the
phenomenology of leptogenesis in the supersymmetric triplet
see--saw mechanism. Taking the branching ratios $B_i$ of the decay
rate of the triplets as free parameters, as well as the
CP--violation parameters $\epsilon_i$, with the additional
constraints $\sum_i B_i=1$ and $\sum_i \epsilon_i=0$, we have
calculated the amount $\bar{\epsilon}$ of CP violation which is
needed to provide successful leptogenesis. In the most favourable
case of early out-of-equilibrium decay of the triplets to leptons,
this number is of order $10^{-8}$. However, it is well known that
in this scenario inverse decays and annihilations of triplets
(the latter effect at lower values of $M$) contribute in general
to erase the asymmetry, basically keeping the triplets in thermal
equilibrium until late times, when $T<M$ and their number density
is exponentially suppressed. An exception to this, within the
framework of the non--supersymmetric version of the model, is
known to be the case when one branching ratio is much smaller than
the other, in such a way that one $K_i = B_i K \ll 1$ even if $K \gg 1$.
We have referred to this kind of decay channel as a slow one, as
opposed to the fast ones having $K_i\gg 1$. In this case it is
sufficient that just one slow channel produces a sizeable
asymmetry, since a corresponding asymmetry is also developed in
the triplet density, which is eventually converted into an
asymmetry of the fast channel when the triplets decay. In the
supersymmetric version of the model, this mechanism is still at
work. However, mainly because of the interplay of three decay
channels instead of two, a richer phenomenology arises:
\begin{itemize}
\item
the Yukawa coupling $\lambda_1$ of the additional Higgs doublet
can be made arbitrarily small at high triplet masses, allowing a
good efficiency also for $M\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 10^{12}$ GeV. In the presence of only
one Higgs doublet this is not possible due to the perturbativity
bound on $\lambda_L$ implied by Eq.~(\ref{eq:neutrino_mass}).
\end{itemize}
A hierarchy among the CP--violation parameters $\epsilon_i$ is
allowed. Defining them in such a way that $\max_i(|\epsilon_i|)= 1$,
and using the notation $\epsilon_{slow}$ and $\epsilon_{fast}$
for the CP--violating parameters in a slow ($K_i\ll 1$) and fast
($K_i\gg 1$) channel, respectively, this enriches the phenomenology,
because different combinations are possible:
\begin{itemize}
\item $\epsilon_{slow}=1$: The efficiency of leptogenesis reaches its
maximal value. Inverse decays in the slow channel freeze out
early, and annihilations turn out to dominate over inverse decays
in the determination of the triplet density up to quite high
values of the triplet mass $M$. In these cases the final asymmetry
is a growing function of the $K$ parameter. Moreover, the final
asymmetry is insensitive to the actual value of $\epsilon_{fast}$.
An apparently surprising example of this situation is when
$\epsilon_L=\epsilon_{fast}=0$, since in this case even a
vanishing $\epsilon_L$ can lead to efficient leptogenesis. An
exception to this case is given by the particular situation with
$\epsilon_{slow}=1$ in {\it two} channels, namely when
$\epsilon_L\ll 1$ and the decay channels to $H_1$ and to $H_2$ are
both slow with $\epsilon_1=-\epsilon_2=1$. In fact, if $B_1$ and
$B_2$ are comparable, a cancellation takes place between the
asymmetries in the two channels, leading to a vanishing lepton
asymmetry, $Y_L=-Y_{H_1}-Y_{H_2}\simeq 0$.
\item $\epsilon_{slow}<1$ and one slow channel: The final lepton
asymmetry is suppressed, as would be na\"ively expected, since
$\epsilon_i$ is small in the only available slow channel which is
supposed to drive leptogenesis through out-of-equilibrium decays.
In this case there are two fast channels with $\epsilon_{fast}=\pm
1$, and in the channel where $\epsilon_{fast}$ has the same sign
as $\epsilon_{slow}$ this may lead to a cancellation in the
Boltzmann equations, implying a vanishing final asymmetry.
Moreover, inverse decays freeze out late in this case ($z_f\sim
\ln K \gg 1$), and decay is typically dominant over annihilation in
the determination of the triplet density, except for very small
values of $M$. As a consequence, the efficiency scales as
$1/(z_f K)$ whenever $K\gg 1$.
\item $\epsilon_{slow}<1$ and two slow channels: Since only one
$\epsilon_i$ can be small, the other slow channel with
unsuppressed $\epsilon_i$ may drive leptogenesis with a good
efficiency. In this case, in practice the decay channel with
$\epsilon_{slow}<1$ drops out from the Boltzmann equation, and a
system with just two decay channels is recovered. However, if
$\epsilon_{slow}=\epsilon_{L,2}$, the phenomenology is different
compared to the non--supersymmetric case, because the $K$
parameter is much bigger, reducing the efficiency at high masses
and improving it at lower ones. Moreover, the unitarity limit is
more constraining at high $M$ compared to the non-supersymmetric
case.
\end{itemize}
In conclusion, the present analysis suggests that in the
supersymmetric Triplet Seesaw Model successful leptogenesis can be
attained in a wide range of scenarios, some of which appear to be
non-trivial or even counter-intuitive, provided that an asymmetry in the
decaying triplets can develop at early times and be eventually
converted into a lepton asymmetry, acting in practice as a
lepton--number reservoir.
\section{Introduction}
The quite deep issue of how to represent human knowledge
in a way that is most useful for applications has been
present in research for decades now.
Often, knowledge representation is necessary in a context of
incomplete information, whereby inductive processes are
required in addition.
As a result, two facets that are common to a great number of works in
knowledge representation, and particularly more so in
contexts of inductive inference, machine learning, or
data analysis, are logic and probability.
Adding probability-based mechanisms to already expressive
logics enhances their expressiveness and usefulness, but comes
at a heavy price in computational difficulty.
Even without probability, certain degrees of expressivity
and computational feasibility are known to be incompatible,
and this is reflected in the undecidability results for many logics.
In other cases, the balance between expressivity and feasibility
hinges on complexity-theoretic questions that often remain open.
To work only within logics known to be polynomially tractable
may imply serious expressiveness limitations.
Literally hundreds of studies have explored
this difficult balance. Even limiting ourselves somewhat
to the machine learning perspective, we could mention a
large number of references such as those cited in the book
\cite{deRaedt}, for one.
Both in machine learning and in data mining, one particularly
well-studied knowledge representation mechanism is given
by relaxed implication connectives: a relatively natural
abstract concept which can be made concrete in various ways.
The common idea is to relax the semantics of the implication
connective so as to allow for exceptions,
a feature actually mandatory in all applications in data analysis
or machine learning. However, this can be done in any of
a number of ways; and each form of endowing relaxed implications
with a precise meaning yields a different notion with, often,
very different properties. See the survey~\cite{GH}.
This paper focuses on one of the simplest forms of relaxed
implication, endowed with its most natural semantics: the one
given by conditional probability.
Syntactically, these
partial implications are pairs of conjunctions of positive
propositional literals.
For sets $X$ and $Y$ of propositional variables, we write
the corresponding implication as $X\to Y$.
Now, instead of the classical semantics, whereby a model
satisfies the implication if it either fails the antecedent
or fulfills the consequent, we want to quantify exceptions;
hence, instead of individual propositional models, our semantic
structures are, then, so-called ``transactional datasets'',
that is, multisets of propositional models. By mere counting,
we find, on each dataset, a frequentist probability for
$X$ and $Y$ seen as conjunctions (or, equivalently, as
events): then, the meaning of the implication is simply
that the conditional probability of the consequent, given
the antecedent, exceeds some fixed threshold,
here denoted~$\gamma\in(0,1)$.
In application-aware works, very often
that quantity, the frequentist conditional probability,
is called \emph{confidence} of the partial implication.
We also use this name here.
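As an illustrative aside (not part of the formal development), confidence can be computed by direct counting over a transactional dataset; the helper name and the toy dataset below are our own, chosen for illustration.

```python
from fractions import Fraction

def confidence(dataset, X, Y):
    """Frequentist confidence of the partial implication X -> Y:
    the fraction of transactions covering X that also cover X union Y.
    The dataset is a multiset, given as (transaction, multiplicity) pairs."""
    covers_x = sum(m for Z, m in dataset if X <= Z)
    covers_xy = sum(m for Z, m in dataset if (X | Y) <= Z)
    if covers_x == 0:
        return Fraction(1)  # vacuously satisfied when X is never covered
    return Fraction(covers_xy, covers_x)

# Toy dataset: 3 copies of {A,B}, 1 copy of {A}, 2 copies of {B}.
D = [({"A", "B"}, 3), ({"A"}, 1), ({"B"}, 2)]
print(confidence(D, {"A"}, {"B"}))  # 3/4, so A -> B holds at any gamma <= 3/4
```

Exact rational arithmetic avoids floating-point issues when comparing against the threshold $\gamma$.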
This probabilistic version
of implications has been proposed in different
research communities. For instance, \cite{Lux}
introduced them as ``partial implications''; much later,
\cite{AIS} defined ``association rules'' (see also
\cite{AMSTV} and the survey \cite{CegRod}):
these are partial implications that impose the additional
condition that the consequent is a single propositional
variable, and where additional related parameters are used
to assess their interest.
Actually, confidence does not seem to be the best
choice in practice for the meaning of a partial
implication, as discussed e.g.~in~\cite{GH}.
However, it is clearly the most natural choice and the
obvious step to start the logical study of partial
implications, many other preferable options being
themselves, actually, variations or sophistications
of confidence.
Motivated by practical issues, several works have analyzed
notions of redundancy among partial implications:
two proposals in \cite{AgYu} and \cite{KryszPAKDD} turned out
to be equivalent among them
and were, in turn, as described
in \cite{Balcazar}, equivalent to the natural notion of logical
entailment of one partial implication by another
(modulo minor details such as allowing
or disallowing empty antecedents or consequents).
This entailment means that any dataset
in which the premise reaches confidence at least~$\gamma$
must assign confidence at least $\gamma$ as well to the conclusion.
The contributions of \cite{Balcazar} that are relevant to
the present paper
are chiefly syntactic characterizations of one partial implication
entailing another, and of two partial implications
entailing another. Further details are provided below;
for the time being, we simply indicate that, whereas the
case of one premise is quite natural, the case of two
premises is quite complex. For perspective, let's briefly
consider here the case of transitivity. In contrast with
full implications, which obey it, here transitivity
fails: it is not difficult to see that,
if $X\to Y$ has confidence over $\gamma$, and
$Y\to Z$ as well, still most occurrences of $Y$ could be
without $X$, leaving low or even
zero confidence for $X\to Z$. Even
if we consider $X\to Y$ and $XY\to Z$, the probabilities
multiply together and leave just $\gamma^2 < \gamma$ as provable
threshold. (Cf.~\cite{Balcazar}.)
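The failure of transitivity just described can be replayed on a concrete toy dataset (ours, chosen for illustration), here with $\gamma = 0.9$:

```python
def conf(dataset, X, Y):
    # Frequentist confidence of X -> Y over (transaction, multiplicity) pairs;
    # by convention 1.0 when no transaction covers X.
    cx = sum(m for Z, m in dataset if X <= Z)
    cxy = sum(m for Z, m in dataset if (X | Y) <= Z)
    return 1.0 if cx == 0 else cxy / cx

# One copy of {A,B} and nine copies of {B,C}: B occurs mostly without A.
D = [({"A", "B"}, 1), ({"B", "C"}, 9)]
print(conf(D, {"A"}, {"B"}))  # 1.0 -> A -> B holds at gamma = 0.9
print(conf(D, {"B"}, {"C"}))  # 0.9 -> B -> C holds at gamma = 0.9
print(conf(D, {"A"}, {"C"}))  # 0.0 -> yet A -> C has zero confidence
```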
A tempting intuition is to generalize the
observation and jump to the statement that no nontrivial
consequence follows from two partial implications; however,
this statement is wrong, and an explicit example of proper
entailment from two premises is given
in the same reference and restated below in
Section~\ref{sec:uptotwopremises}.
Besides offering this observation, \cite{Balcazar} goes
beyond, and generalizes the example into a precise
characterization of when a partial implication is
entailed by two partial implications. The proof is
not deep, using just basic set-theoretic constructions;
but it is long, cumbersome, and of limited intuitive value.
Attempts at generalizing it directly to more than two premises
rapidly reach unmanageable difficulties, among which the
most important one is the lack of hints at a crucial
property that we will explain below in Section~\ref{sec:nice}.
Here, we identify an alternative, quite different approach, that turns
out to be successful in finding the right generalization. The new
ingredient is a connection with linear programming that is almost
identical to a technical lemma in \cite{ParisSimmonds}. Stated in our
language, the lemma asserts that $k$ partial implications entail
another if and only if the dual of a natural linear program associated
to the entailment is feasible.
We develop this tool and use it to get our main results:
1) for low enough values of the confidence threshold $\gamma$, we
use this
connection to show that $k$ partial implications
never entail nontrivially another one;
2) for high enough values of $\gamma$, we use it also to provide a
characterization of the cases in which $k$ partial implications
entail another one, but this one purely in terms of elementary
Boolean algebraic conditions among the sets of attributes that make
the partial implications;
3) for the intermediate values of $\gamma$, we explain how to compute
the exact threshold, if any, at which a specific set of
$k$ partial implications entails another one.
The characterizations
provide algorithms to
decide whether a given entailment holds. More concretely, under very
general conditions including the case that $\gamma$ is large, the
connection to linear programming
gives an algorithm that is polynomial in the
number of premises~$k$, but exponential in the number of attributes
$n$. Our subsequent characterization
reverses the situation: it gives an algorithm that
is polynomial in $n$ but exponential in~$k$. This may sound surprising
since the proof of this characterization is based on the
previous LP-based take;
but it merely reflects the fact that, in our proof of the main
characterization, the theory of linear programming was just used as a
technical tool.
At any rate,
our main characterization also shows that the decision
problem for entailments at large $\gamma$ is in NP, and this does not
seem to follow from the linear programming
formulation
by itself
(since the program is exponentially big in $n$), let alone the
definition of entailment (since the number of datasets on $n$
attributes is infinite). We discuss this in Section~\ref{sec:closing}.
\section{Preliminaries and notation} \label{sec:prelim}
Our expressions involve
propositional variables, which receive Boolean
values from propositional models; we define
their semantics through data-sets: simply,
multisets of propositional models.
However, we mostly follow a terminology
closer to the standard one in the data analysis
community, where our propositional variables are
called attributes or, sometimes, items; likewise,
a set of attributes (that is, a propositional model),
seen as an element of a dataset, is often called a
transaction.
Thus, attributes take Boolean values, true or false, and
a transaction is simply a subset of attributes, those that would be
set to true if we thought of it as a
propositional model.
Typically, our
set of attributes is simply~$[n] := \{1,\ldots,n\}$, for a natural
number $n$, so transactions are subsets of~$[n]$. Fix now such a set
of attributes.
If $Z$ is a transaction and $X$ is a set of attributes, we say that
$Z$ covers $X$ if $X \subseteq Z$. A data-set, as a multi-set of
transactions, is formally a mapping from the set of all transactions to
the natural numbers: their multiplicities as members of the data-set
(alternative formalizations exist in the literature).
If $\mathcal{D}$ is a data-set and $X$ is a set
of attributes, we write $\mathrm{C}_{\mathcal{D}}[\, X \,]$ for the
number of transactions in $\mathcal{D}$ that cover $X$, counted with
multiplicity.
A partial or probabilistic implication is made of a pair of finite
subsets $X$ and $Y$ of attributes. We write them as $X \to Y$. If $X$
and $Y$ are sets of attributes, we write $XY$ to denote their union $X
\cup Y$. This is fully customary and very convenient notation in this
context. Let $X \to Y$ be a partial implication with all its
attributes in $[n]$.
If $\mathcal{D}$ is a data-set on the set of attributes $[n]$, and
$\gamma$ is a real parameter in the interval $[0,1]$, we write
$\mathcal{D} \models_\gamma X \to Y$ if either
${\mathrm{C}}_{\mathcal{D}}[\, X\,] = 0$, or else
${\mathrm{C}_{\mathcal{D}}[\, XY \,]}/{\mathrm{C}_{\mathcal{D}}[\, X
\,]} \geq \gamma$. Thus, if we think of $\mathcal{D}$ as
specifying the probability distribution on the set of transactions
that assigns probabilities proportionally to their multiplicity in
$\mathcal{D}$, then $\mathcal{D} \models_\gamma X \to Y$ if and only
if the conditional probability of $Y$ given $X$ is at least~$\gamma$.
If $X_0 \to Y_0,\ldots,X_k \to Y_k$ are partial implications, we write
\begin{equation}
X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0 \to Y_0
\label{eqn:entailment}
\end{equation}
if for every data-set $\mathcal{D}$ for which $\mathcal{D}
\models_\gamma X_i \to Y_i$ holds for every $i \in [k]$, it also holds
that $\mathcal{D} \models_\gamma X_0 \to Y_0$. Note that the symbol
$\models_\gamma$ is overloaded much in the same way that the symbol
$\models$ is overloaded in propositional logic. In
case Expression~\eqref{eqn:entailment} holds, we say that the entailment holds, or
that the set $X_1 \to Y_1,\ldots,X_k \to Y_k$ entails $X_0 \to Y_0$ at
confidence threshold $\gamma$. If $\Sigma$ is a set of partial
implications for which $\Sigma \models_\gamma X_0 \to Y_0$ holds but
$\Gamma \models_\gamma X_0 \to Y_0$ does not hold for any proper
subset $\Gamma \subset \Sigma$, then we say that the entailment holds
properly. Note that entailments without premises vacuously hold
properly when they hold. The real number $\gamma$ is often referred
to as the confidence parameter.
A linear program (LP) is the following optimization problem: $\min \{
c^\mathrm{T} x : Ax \geq b,\, x \geq 0 \}$, where $x$ is a vector of
$n$ real variables, $b$ and $c$ are vectors in $\mathbb{R}^m$ and
$\mathbb{R}^n$, respectively, and $A$ is a matrix in
$\mathbb{R}^{m\times n}$. The program is feasible if there exists an
$x \in \mathbb{R}^n$ such that $Ax \geq b$ and $x \geq 0$. The program
is unbounded if there exist feasible solutions with arbitrarily small
values of the objective function $c^\mathrm{T} x$. If the goal were
$\max$ instead of $\min$, unboundedness would refer to arbitrarily
large values of the objective function. The dual LP is $\max\{
b^\mathrm{T} y : A^{\mathrm{T}} y \leq c,\, y \geq 0 \}$, where $y$ is
a vector of $m$ real variables. Both LPs together are called a
primal-dual pair. The duality theorem of linear programming states
that exactly one of the following holds: either both primal and dual
are infeasible, or one is unbounded and the other is infeasible, or
both are feasible and have optimal points with the same optimal value. See
Corollary~25 and Theorem~23 in \cite{Karloff}.
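For concreteness, here is a minimal primal-dual pair in the standard form above (a toy instance of our own, with both optima found by inspection), whose optimal values coincide as the duality theorem predicts:

```python
# Primal: min c^T x  s.t.  Ax >= b, x >= 0, with A = [[1]], b = [2], c = [1].
# Dual:   max b^T y  s.t.  A^T y <= c, y >= 0.
A, b, c = [[1.0]], [2.0], [1.0]

x_opt = [2.0]  # primal optimum by inspection: the smallest x with 1*x >= 2
y_opt = [1.0]  # dual optimum by inspection: the largest y with 1*y <= 1

primal_value = sum(ci * xi for ci, xi in zip(c, x_opt))
dual_value = sum(bi * yi for bi, yi in zip(b, y_opt))
print(primal_value, dual_value)  # equal optimal values, as duality asserts
```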
\section{Related Work and Technical Basis}
We review here connected existing work. We describe first
the results from \cite{Balcazar} on entailments among
partial implications with one or two premises. The study there starts
with a detailed comparison of entailment as defined in
Section~\ref{sec:prelim} with the notions of redundancy among partial
implications previously considered in the literature. Here we go
directly to the point and consider entailment as defined in
Section~\ref{sec:prelim} from the start.
Then, we
develop a variant of a result in \cite{ParisSimmonds},
adapted to our context and notation, on which our main
results are based, plus additional properties related
to that variant.
\subsection{Entailment with up to two premises} \label{sec:uptotwopremises}
We discuss here Expression~\eqref{eqn:entailment} for $k\leq 2$.
For this subsection and most of the paper we assume that the confidence
parameter $\gamma$ is strictly positive, since otherwise every partial
implication holds in every data-set, and strictly below $1$, since at
$\gamma=1$ we fall back to classical implication.
The case of zero premises, i.e.~tautological
partial implications, trivializes to the classical case:
$\models_\gamma X_0 \rightarrow Y_0$ if and only if $Y_0 \subseteq
X_0$, at any positive confidence threshold~$\gamma$. The first interesting
case is thus the entailment from one partial implication $X_1 \to Y_1$
to another $X_0 \to Y_0$. If $X_0 \to Y_0$ is tautological by itself,
there is nothing else to say. Otherwise, entailment is still
characterized by a simple Boolean algebraic condition on the sets
$X_0$, $Y_0$, $X_1$, and $Y_1$ as stated in the following theorem:
\begin{theorem}[\cite{Balcazar}] \label{th:case1}
Let $\gamma$ be a confidence parameter in $(0,1)$ and let $X_0 \to Y_0$
and $X_1 \to Y_1$ be two partial implications. Then the following are
equivalent:
\begin{enumerate}
\item $X_1 \to Y_1 \models_\gamma X_0 \to Y_0$,
\item either $Y_0 \subseteq X_0$, or $X_1 \subseteq X_0$ and $X_0Y_0 \subseteq X_1Y_1$.
\end{enumerate}
\end{theorem}
\noindent
Note that the second statement is independent of $\gamma$.
This shows that entailment at confidence $\gamma$ below $1$
differs from classical entailment. An example shows this equally well:
although it is obvious that $A \to B$ classically entails $AC \to BC$,
the entailment fails badly when both the premise and the conclusion
are considered as partial implications at some confidence $\gamma$ in
$(0,1)$: any data-set with many occurrences of $AB$, only one
occurrence of $AC$, and none at all of $BC$, ruins everything. Of
course, what fails is that $X_0Y_0$ is not included in $X_1Y_1$.
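The Boolean condition of Theorem~\ref{th:case1} is immediate to implement with set operations; the sketch below (the helper name is ours) confirms both that, by the theorem, $A \to BC$ entails $AB \to C$, and that $A \to B$ does not entail $AC \to BC$:

```python
def entails_one(X1, Y1, X0, Y0):
    """Condition 2 of Theorem th:case1, characterizing when
    X1 -> Y1 entails X0 -> Y0 at any confidence threshold in (0,1)."""
    return Y0 <= X0 or (X1 <= X0 and (X0 | Y0) <= (X1 | Y1))

# A -> BC entails AB -> C: {A} <= {A,B} and {A,B,C} <= {A,B,C}.
print(entails_one({"A"}, {"B", "C"}, {"A", "B"}, {"C"}))  # True
# A -> B does not entail AC -> BC: {A,B,C} is not included in {A,B}.
print(entails_one({"A"}, {"B"}, {"A", "C"}, {"B", "C"}))  # False
```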
The case of two partial implications entailing a third was also solved
in \cite{Balcazar}. The starting point for that study was a specific
example of a non-trivial entailment:
\begin{equation}
A\to BC, \, A\to BD \models_{1/2} ACD\to B. \label{eqn:exampleoftwo}
\end{equation}
Indeed, this entailment holds true at any $\gamma$ in the interval
$[1/2,1)$. This is often found counterintuitive. The intuition of many
is that combining two partial implications that only guarantee the
threshold $\gamma<1$ would lead to arithmetic operations leading to values
unavoidably below~$\gamma$. Classical transitivity as discussed in the
introduction is a good example. However, this
intuition is incorrect, as~\eqref{eqn:exampleoftwo} shows. The good
news is that a similar statement, when appropriately generalized,
covers all the cases of entailment from two partial implication
premises. We omit the proof of~\eqref{eqn:exampleoftwo} as it follows
from the next theorem, which will be generalized in our main result.
\begin{theorem}[\cite{Balcazar}] \label{th:case2}
Let $\gamma$ be a confidence parameter in $(0,1)$ and let $X_0 \to Y_0$,
$X_1 \to Y_1$ and $X_2 \to Y_2$ be three partial implications.
If $\gamma \geq 1/2$, then the following are equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\, X_2 \to Y_2\, \models_{\gamma} X_0 \to Y_0$,
\item either $Y_0 \subseteq X_0$, or $X_i \subseteq X_0$ and $X_0Y_0
\subseteq X_iY_i$ for some $i \in \{1,2\}$, or all seven
inclusions below hold simultaneously:
\begin{enumerate}
\renewcommand{\theenumii}{\roman{enumii}}
\item
$X_1 \subseteq X_2Y_2$ and $X_2 \subseteq X_1Y_1$,
\item
$X_1 \subseteq X_0$ and $X_2 \subseteq X_0$,
\item
$X_0 \subseteq X_1X_2Y_1Y_2$,
\item
$Y_0 \subseteq X_0Y_1$ and
$Y_0 \subseteq X_0Y_2$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\noindent
Indeed, the characterization is even tighter than what this statement
suggests: whenever $\gamma < 1/2$, it can be shown that entailment
from two premises holds only if it holds from one or zero premises.
This was also proved in~\cite{Balcazar}, thus fully covering all cases
of entailment with two premises and all confidence parameters
$\gamma$. Note, finally, that all conditions stated in the theorem are
easy to check by an algorithm running in time $O(n)$, where $n$ is the
number of attributes, if the sets are given as bit vectors, say.
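As a sketch of the $O(n)$ check just mentioned (the function name is ours), the conditions of Theorem~\ref{th:case2} can be evaluated directly with set operations; on the example~\eqref{eqn:exampleoftwo} all seven inclusions hold:

```python
def entails_two(X1, Y1, X2, Y2, X0, Y0):
    """Condition 2 of Theorem th:case2 (valid for gamma >= 1/2)."""
    def from_one(Xi, Yi):  # entailment from a single premise
        return Xi <= X0 and (X0 | Y0) <= (Xi | Yi)
    seven = (X1 <= X2 | Y2 and X2 <= X1 | Y1        # (i)
             and X1 <= X0 and X2 <= X0              # (ii)
             and X0 <= X1 | X2 | Y1 | Y2            # (iii)
             and Y0 <= X0 | Y1 and Y0 <= X0 | Y2)   # (iv)
    return Y0 <= X0 or from_one(X1, Y1) or from_one(X2, Y2) or seven

# A -> BC, A -> BD |= ACD -> B holds at any gamma in [1/2, 1):
print(entails_two({"A"}, {"B","C"}, {"A"}, {"B","D"},
                  {"A","C","D"}, {"B"}))              # True
# Transitivity keeps failing: A -> B, B -> C do not entail A -> C.
print(entails_two({"A"}, {"B"}, {"B"}, {"C"}, {"A"}, {"C"}))  # False
```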
The proof of Theorem~\ref{th:case2} in \cite{Balcazar} is rather long
and somewhat involved, although it uses only elementary Boolean
algebraic manipulation. For instance, several different
counterexamples to the entailment are built ad hoc depending on which
of the seven set-inclusion conditions fail. Its intuition-building
value is pretty limited, and a generalization to the case of
more than two premises remained elusive. A somewhat subtle point
about Theorem~\ref{th:case2} is that the seven inclusion conditions
alone do not characterize proper entailment (even if $\gamma \geq
1/2$, that is): they are only necessary conditions for that. But when
these necessary conditions for proper entailment are disjuncted with
the necessary and sufficient conditions for improper entailment, what
results is an \emph{if and only if} characterization of
entailment. That is why the theorem is stated as it is, with the two
escape clauses at the beginning of part \emph{2}. Our main result will
have a similar flavour, but with fewer cases to consider.
Before we move on to larger numbers of premises, one more comment is
in order. Among the seven set-inclusion conditions in the statement of
Theorem~\ref{th:case2}, those in the first item $X_1 \subseteq X_2Y_2$
and $X_2 \subseteq X_1Y_1$ are by far the least intuitive. Discovering
the right generalization of this turned out to be the key to getting
our results. This is discussed in Sections~\ref{sec:nice}
and~\ref{sec:morenice}. Before that, however, we need to discuss a
characterization of entailment in terms of linear programming duality.
Interestingly, LP will end up disappearing altogether from the
statement that generalizes Theorem~\ref{th:case2}; its use will merely
be a (useful) technical detour.
\subsection{Entailment in terms of LP duality}
The goal in this section is to
characterize the valid entailments
as in Expression~\eqref{eqn:entailment},
\begin{equation*}
X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0\to Y_0,
\end{equation*}
where each $X_i \rightarrow Y_i$ is a partial implication on the set
of attributes $[n]$.
The characterization can be seen as a variant,
stated in the standard form of linear programming and
tailored to our setting, of Proposition~4
in \cite{ParisSimmonds}, where it applies to
deduction rules of probabilistic consequence
relations in general propositional logics.
Our linear programming formulation makes it
easy to check a number of simple properties of
the solutions of the dual linear program at play,
which are necessary for our application (Lemma~\ref{prop:chaos}).
Before we state the characterization, we want to
give some intuition for what to expect. At the same time we introduce
some notation and terminology.
Following standard usage in full implications (see e.g.~\cite{AFP}),
we say that a transaction $Z \subseteq [n]$ covers $X\to Y$ if
$X\subseteq Z$, and that it violates it if $X\subseteq Z$ but
$Y\not\subseteq Z$. If $Z$ covers $X \to Y$ without violating it,
that is, $XY \subseteq Z$, we
say that $Z$ witnesses $X \to Y$.
For each partial implication $X \rightarrow Y$ and each transaction
$Z$ we define a weight $w_Z(X \rightarrow Y)$ that, intuitively,
measures the extent to which $Z$ witnesses $X \rightarrow
Y$. Moreover, since we are aiming to capture confidence threshold $\gamma$
we assign the weight proportionally:
$$
w_Z(X \to Y) = \left\{
\begin{array}{ll}
1-\gamma & \text{ if } Z \text{ witnesses } X \to Y, \\
-\gamma & \text{ if } Z \text{ violates } X \to Y, \\
0 & \text{ if } Z \text{ does not cover } X \to Y.
\end{array}
\right.
$$
With these weights in hand we give a quantitative interpretation to
the entailment in Expression~\eqref{eqn:entailment}.
First note that the weights are defined in such a way that, as long as
$\gamma > 0$, a transaction $Z$ satisfies the implication $X \to Y$
interpreted classically if and only if $w_Z(X \to Y) \geq 0$. With
this in mind the entailment in Expression~\eqref{eqn:entailment} interpreted
classically would read as follows: for all $Z$, whenever all weights
on the left are non-negative, the weight on the right is also
non-negative. Of course, a sufficient condition for this to hold would
be that the weights on the right are bounded below by some
non-negative linear combination of the weights on the left, uniformly
over $Z$. What the characterization below says is that this sufficient
condition for classical entailment is indeed necessary and sufficient
for entailment at confidence threshold $\gamma$, if the weights are chosen
proportionally to $\gamma$ as above. Formally:
\begin{theorem}
\label{th:mainLP}
Let $\gamma$ be a confidence parameter in $[0,1]$ and let $X_0 \to
Y_0, \ldots,X_k \to Y_k$ be a set of partial implications. The
following are equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\ldots, X_k \to Y_k \models_\gamma X_0\to
Y_0$
\item There is a vector $\lambda = (\lambda_1,\ldots,\lambda_k)$ of
real non-negative
components such that for all $Z \subseteq [n]$
\begin{equation}
\sum_{i=1}^k \lambda_i \cdot w_Z(X_i \rightarrow Y_i) \leq w_Z(X_0
\rightarrow Y_0)
\label{eqn:inequalities}
\end{equation}
\end{enumerate}
\end{theorem}
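Condition 2 of Theorem~\ref{th:mainLP} can be certified by exhibiting an explicit $\lambda$. For the example entailment~\eqref{eqn:exampleoftwo} at $\gamma = 1/2$, the vector $\lambda = (1/2, 1/2)$ works; the sketch below (our own) checks inequality~\eqref{eqn:inequalities} by brute force over all $2^n$ transactions:

```python
from fractions import Fraction
from itertools import combinations

gamma = Fraction(1, 2)
attrs = ["A", "B", "C", "D"]

def w(Z, X, Y):
    # Weight of transaction Z for the partial implication X -> Y.
    if not X <= Z:
        return Fraction(0)              # Z does not cover X -> Y
    return 1 - gamma if (X | Y) <= Z else -gamma

premises = [({"A"}, {"B", "C"}), ({"A"}, {"B", "D"})]
conclusion = ({"A", "C", "D"}, {"B"})
lam = [Fraction(1, 2), Fraction(1, 2)]  # candidate multipliers

subsets = [set(Z) for r in range(len(attrs) + 1)
           for Z in combinations(attrs, r)]
ok = all(sum(l * w(Z, X, Y) for l, (X, Y) in zip(lam, premises))
         <= w(Z, *conclusion)
         for Z in subsets)
print(ok)  # True: certifies A -> BC, A -> BD |= ACD -> B at gamma = 1/2
```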
\noindent Towards the proof of Theorem~\ref{th:mainLP}, let us state a useful
lemma. It gives an alternative understanding of the weights $w_Z(X
\rightarrow Y)$ to the one given above:
\begin{lemma}
\label{lm:satisfies}
Let $\gamma$ be a confidence parameter in $[0,1]$, let $X \rightarrow
Y$ be a partial implication, let $\mathcal{D}$ be a transaction
multiset, and for each $Z \subseteq [n]$ let $x_Z$ be the multiplicity
of $Z$ in $\mathcal{D}$, that is, the number of times that $Z$ appears
(as a complete transaction) in $\mathcal{D}$. Then, $\mathcal{D}
\models_{\gamma} X \rightarrow Y$ if and only if $\sum_{Z \subseteq
[n]} w_Z(X \rightarrow Y) \cdot x_Z \geq 0$.
\end{lemma}
\begin{proof}
Let $\mathcal{U}$ denote the set of transactions in $\mathcal{D}$
that cover $X \to Y$, let $\mathcal{V}$ denote those that violate $X
\to Y$, and $\mathcal{W}$ those that witness $X \to Y$. Observe
that $\mathcal{U} = \mathcal{V} \cup \mathcal{W}$ and that this
union is a partition. By definition, $\mathcal{D} \models_\gamma X
\rightarrow Y$ means that either $\sum_{Z \in \mathcal{U}} x_Z = 0$,
or else $\left({\sum_{Z \in \mathcal{W}} x_Z}\right)/\left({\sum_{Z
\in \mathcal{U}} x_Z}\right) \geq \gamma$. Recalling that
$\mathcal{V} \cup \mathcal{W} = \mathcal{U}$ is a partition, this is
equivalent to $\sum_{Z \in \mathcal{W}} x_Z \geq \gamma \cdot
\left({\sum_{Z \in \mathcal{W}} x_Z + \sum_{Z \in \mathcal{V}}
x_Z}\right)$. Rearranging we get $\sum_{Z \in \mathcal{W}}
(1-\gamma) \cdot x_Z - \sum_{Z \in \mathcal{V}} \gamma \cdot x_Z
\geq 0$, from which the result follows by recalling that $w_Z(X \to
Y) = 1-\gamma$ for each $Z \in \mathcal{W}$ and $w_Z(X \to Y) =
-\gamma$ for each $Z \in \mathcal{V}$, and that $w_Z(X \to Y) = 0$
for every other $Z$.
\end{proof}
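A quick numeric sanity check of Lemma~\ref{lm:satisfies} on a toy data-set (our own), with $\gamma = 2/3$ and the implication $A \to B$:

```python
from fractions import Fraction

gamma = Fraction(2, 3)
X, Y = {"A"}, {"B"}

def w(Z):
    # Weight of transaction Z for X -> Y, as defined in the text.
    if not X <= Z:
        return Fraction(0)
    return 1 - gamma if (X | Y) <= Z else -gamma

# Multiplicities x_Z: two copies of {A,B} and one copy of {A}.
D = {frozenset({"A", "B"}): 2, frozenset({"A"}): 1}

weighted_sum = sum(w(Z) * m for Z, m in D.items())
confidence = Fraction(2, 3)  # C_D[XY] / C_D[X] = 2/3, by direct counting
print(weighted_sum >= 0, confidence >= gamma)  # both criteria agree
```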
This lemma is parallel to the first part of the
proof of Proposition~4 in \cite{ParisSimmonds}.
With this lemma in hand we can prove Theorem~\ref{th:mainLP}.
We resort to duality here, while the version in
\cite{ParisSimmonds} uses instead the closely related
Farkas' Lemma.
\begin{proof}[Proof of Theorem~\ref{th:mainLP}]
The statement of Lemma~\ref{lm:satisfies} leads to a natural linear
program: for every $Z$ let $x_Z$ be a non-negative real variable,
impose on these variables the inequalities from
Lemma~\ref{lm:satisfies} for $X_1 \to Y_1$ through $X_k\to Y_k$, and
check if the corresponding inequality for $X_0 \to Y_0$ can be
falsified by minimizing its left-hand side:
\begin{tabbing}
\indent\indent\indent\indent\indent\indent \= $P$: \= min \= $\sum_{Z \subseteq [n]} w_Z(X_0 \to Y_0) \cdot x_Z$\=\\
\> \> s.t. \> $\sum_{Z \subseteq [n]} w_Z(X_i \to Y_i) \cdot x_Z \geq 0^{\strut}$
\;\; \= all $i\in[k]$, \\
\> \> \> $x_Z \geq 0^{\strut}$ \> all $Z$.
\end{tabbing}
The dual $D$ of $P$ has one non-negative variable $y_i$ for every
$i \in [k]$ and one inequality constraint for each non-negative
variable~$x_Z$. Since the objective function of $D$ would just be
the trivial constant function $0$ we write it directly as a linear
programming feasibility problem:
\begin{tabbing}
\indent\indent\indent\indent\indent\indent \= $D$: \= $\sum_{i \in [k]} w_Z(X_i \to Y_i) \cdot y_i \leq
w_Z(X_0 \to Y_0)$ \; \= all $Z$, \\
\> \> $y_1,\ldots,y_k \geq 0^{\strut}$
\end{tabbing}
\noindent Note that this is really the characterization statement in
the theorem that we are trying to prove, with $y_i$ in place of
$\lambda_i$. Thus, the theorem will be proved if we show that the
following are equivalent:
\begin{enumerate}
\item[(1)] $X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0 \to
Y_0$,
\item[(2)] the primal $P$ is feasible and bounded below,
\item[(3)] the dual $D$ is feasible.
\end{enumerate}

(1) $\Rightarrow$ (2). We prove the contrapositive. Assume that $P$
is unbounded below; it is certainly feasible since the all-zero vector
satisfies all constraints. Let $x$ be a feasible solution with
$\sum_{Z \subseteq [n]} w_Z(X_0 \to Y_0) \cdot x_Z < 0$. Since the
rationals are dense in the reals and linear maps are continuous, we
may assume that $x$ has rational components with a
positive common denominator $N$, while preserving feasibility and a
negative value for the objective function. Then $N \cdot x$ is still a
feasible solution and its components are natural numbers. Let
$\mathcal{D}$ be the transaction multiset that has $N \cdot x_Z$
copies of $Z$ for every $Z \subseteq [n]$. By feasibility
we have $\sum_{Z \subseteq [n]} w_Z(X_i \to Y_i) \cdot N \cdot x_Z
\geq 0$ and therefore $\mathcal{D} \models_{\gamma} X_i \rightarrow
Y_i$ for every $i \in [k]$ by Lemma~\ref{lm:satisfies}. On the other
hand $\sum_{Z \subseteq [n]} w_Z(X_0 \to Y_0) \cdot N \cdot x_Z < 0$
from which it follows that $\mathcal{D} \not\models_{\gamma} X_0
\rightarrow Y_0$, again by Lemma~\ref{lm:satisfies}.

(2) $\Rightarrow$ (3). This is a direct consequence of the duality
theorem for linear programming: if $P$ is feasible and bounded
below, $D$ is feasible; see the preliminaries and the references
there.

(3) $\Rightarrow$ (1). Assume $D$ is feasible and let $y$ be a
feasible solution. Let $\mathcal{D}$ be a transaction multiset such
that $\mathcal{D} \models_{\gamma} X_i \rightarrow Y_i$ for every $i
\in [k]$. For every $Z \subseteq [n]$, let $x_Z$ be the number of
times that $Z$ appears (alone, as a complete transaction)
in $\mathcal{D}$. By dual feasibility of $y$ and non-negativity of the $x_Z$
we get
\begin{align*}
\sum_{Z \subseteq [n]} w_Z(X_0 \to Y_0) \cdot x_Z
\geq \sum_{Z
\subseteq [n]} \Big({\sum_{i \in [k]} w_Z(X_i \to Y_i) \cdot y_i}
\Big) \cdot x_Z.
\end{align*}
Distributing, exchanging the order of summation, and refactoring,
the right-hand side reads
\begin{align*}
\sum_{i \in [k]} y_i \cdot \Big({\sum_{Z \subseteq [n]} w_Z(X_i \to
Y_i) \cdot x_Z}\Big).
\end{align*}
Note that this is non-negative since the $y_i$ are non-negative and
$\sum_{Z \subseteq [n]} w_Z(X_i \to Y_i) \cdot x_Z \geq 0$ for every
$i \in [k]$ by the assumption on $\mathcal{D}$ and
Lemma~\ref{lm:satisfies}. This proves that $\sum_{Z \subseteq [n]}
w_Z(X_0 \to Y_0) \cdot x_Z \geq 0$, from which $\mathcal{D}
\models_\gamma X_0 \to Y_0$ follows by one more call to
Lemma~\ref{lm:satisfies}.
\end{proof}
\subsection{Properties of the LP characterization}
Whenever an entailment as in Expression~\eqref{eqn:entailment} holds properly,
the characterization in Theorem~\ref{th:mainLP} gives a good deal of
information about the inclusion relationships among the sets involved,
and about the values that the $\lambda_i$ can take. This section
collects these consequences. Note that, from now on, the confidence
parameter $\gamma$ ranges over the open interval $(0,1)$ instead of $[0,1]$.
\begin{lemma}
\label{prop:chaos}
Let $\gamma$ be a confidence parameter in $(0,1)$ and let $X_0 \to
Y_0,\ldots,X_k \to Y_k$ be a set of partial implications with $k \geq
1$.
Assume that the entailment $X_1 \to Y_1, \ldots, X_k \to
Y_k \models_\gamma X_0 \to Y_0$ holds properly. In particular, $Y_0
\not\subseteq X_0$. Let $\lambda = (\lambda_1,\ldots,\lambda_k)$ denote any vector as
promised to exist by Theorem~\ref{th:mainLP} for this entailment.
The following hold:
\begin{enumerate}
\item
$\lambda_i > 0$ for every $i \in [k]$.
\item
$X_0Y_0 \subseteq X_1Y_1 \cdots X_kY_k$.
\item
$\sum_{i \in [k]} \lambda_i \leq 1$.
\item
$X_i \subseteq X_0$ for every $i \in [k]$.
\item
$X_iY_i \not\subseteq X_0$ for every $i \in [k]$.
\item
$\sum_{i \in [k]} \lambda_i = 1$.
\item $Y_0 \subseteq X_0Y_i$ for every $i \in [k]$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove the items in the order stated, as some of them are proved
jointly and others depend on previous ones. In what follows, for every
$Z$, define:
\begin{enumerate}
\item[] $U_Z = \{ i \in [k] : Z \text{ covers } X_i \to
Y_i \}$,
\item[] $V_Z = \{ i \in [k] : Z \text{ violates } X_i \to Y_i \}$,
\item[] $W_Z = \{ i \in [k] : Z \text{ witnesses } X_i \to Y_i \}$.
\end{enumerate}
Note that $U_Z = V_Z \cup W_Z$ and that this union is a partition.

1. For every $i \in [k]$, if $\lambda_i = 0$, then the inequalities in
Expression~\eqref{eqn:inequalities} reduce to the same inequalities for the
entailment without the $i$-th premise, and the remaining $\lambda_j$ would
still be a solution. Then, by Theorem~\ref{th:mainLP} itself the
entailment would not be proper, as premise $i$ could be removed
without affecting its validity.

2. Consider the inequality in Expression~\eqref{eqn:inequalities} for $Z = X_1Y_1
\cdots X_kY_k$. Obviously $Z$ witnesses every $X_i \to Y_i$, so $W_Z =
[k]$. Assume for contradiction that $X_0Y_0 \not\subseteq X_1Y_1
\cdots X_kY_k$. Then the inequality reads either $-\gamma \geq
(1-\gamma) \cdot \sum_{i \in [k]} \lambda_i$ or $ 0 \geq (1-\gamma)
\cdot \sum_{i \in [k]} \lambda_i$, and both cases are impossible since
the right-hand side is strictly positive by the previous item and the fact
that $\gamma < 1$. Therefore $X_0Y_0 \subseteq X_1Y_1 \cdots X_kY_k$.

3. Considering still the same inequality, we know now that it reads
$1-\gamma \geq (1-\gamma) \cdot \sum_{i \in [k]} \lambda_i$. From
this we conclude that $\sum_{i \in [k]} \lambda_i \leq 1$ since
$\gamma < 1$.

4, 5 and 6. Now consider the inequality in Expression~\eqref{eqn:inequalities}
for $Z = X_0$. As the entailment is proper we have $Y_0 \not\subseteq
X_0 = Z$ and therefore $Z$ violates $X_0 \to Y_0$. So the inequality
reads $-\gamma \geq (1-\gamma) \cdot \sum_{i \in W_Z} \lambda_i -
\gamma \cdot \sum_{i \in V_Z} \lambda_i$. As $\lambda_i \geq 0$ we get
$-\gamma \geq -\gamma \cdot \sum_{i \in V_Z} \lambda_i$ and therefore
$\sum_{i \in V_Z} \lambda_i \geq 1$ since $\gamma > 0$. But also $\sum_{i
\in [k]} \lambda_i \leq 1$ from which it follows that $V_Z = [k]$ since
each $\lambda_i$ is strictly positive. Thus $Z$ violates every $X_i \to
Y_i$, so $X_i \subseteq Z = X_0$ and $X_iY_i \not\subseteq Z = X_0$
for every $i$. Also $\sum_{i \in [k]} \lambda_i = 1$ follows.

7. For every $i \in [k]$, consider the inequality
in Expression~\eqref{eqn:inequalities} for $Z = X_0Y_i$. We proved in item~4 that
$X_i \subseteq X_0$. It follows that $X_iY_i \subseteq X_0Y_i = Z$ and
thus $i \in W_Z$. Now assume for contradiction that $Y_0 \not\subseteq
Z$. Then $Z$ violates $X_0 \to Y_0$ and the inequality reads $-\gamma
\geq (1-\gamma) \cdot \sum_{j \in W_Z} \lambda_j - \gamma \cdot \sum_{j \in
V_Z} \lambda_j$. Since $i \in W_Z$ and $\lambda_j \geq 0$ for every $j \in
[k]$, the right-hand side of this inequality is at least $(1-\gamma)
\cdot \lambda_i - \gamma \cdot \sum_{j \in [k]\setminus\{i\}} \lambda_j = \lambda_i -
\gamma \cdot \sum_{j \in [k]} \lambda_j$. But this is strictly bigger than
$-\gamma$ since $\lambda_i > 0$ by item~1 and $\sum_{j \in [k]} \lambda_j \leq
1$ by item~3. This contradiction proves that the assumption $Y_0
\not\subseteq Z$ was wrong. Thus $Y_0 \subseteq Z = X_0Y_i$.
\end{proof}
\section{Low thresholds}
As it turns out, if the confidence parameter $\gamma$ is too low, then
there cannot be any entailment as in Expression~\eqref{eqn:entailment} that does
not already follow from one of its premises. In such a case the
characterization follows from known ones. This is what the next
theorem states:
\begin{theorem} \label{th:lowgamma}
Let $\gamma$ be a confidence parameter in $(0,1)$
and let $X_0 \to Y_0,\ldots,X_k \to Y_k$ be a set of partial implications
with $k \geq 1$. If $\gamma < 1/k$, then the following are equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\ldots, X_k \to Y_k \models_\gamma X_0 \to Y_0$,
\item $X_i \to Y_i \models_\gamma X_0 \to Y_0$ for some $i \in [k]$,
\item either $Y_0 \subseteq X_0$, or $X_i \subseteq X_0$ and $X_0Y_0
\subseteq X_iY_i$ for some $i \in [k]$.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence between \emph{2.}~and \emph{3.}~follows from the
characterization of entailments with one premise. We prove the
equivalence between \emph{1.}~and~\emph{2.}, and for that we just need
to argue the implication \emph{1.}~to \emph{2.}~since the other one is
obvious. Assume \emph{1.}~and let $L \subseteq [k]$ be minimal under
set inclusion so that $\{ X_i \to Y_i : i \in L \} \models_\gamma X_0 \to
Y_0$. If $|L| \leq 1$ we already have what we want. Assuming $|L|
\geq 2$ we prove $\gamma \geq 1/k$; this will prove the theorem.
Let $\lambda = (\lambda_i : i \in L)$ be a solution to the inequalities
in Expression~\eqref{eqn:inequalities} for $\{ X_i \to Y_i : i \in L \}
\models_\gamma X_0 \to Y_0$ as per Theorem~\ref{th:mainLP}. By the
minimality of $L$, the entailment $\{ X_i \to Y_i : i \in L \}
\models_\gamma X_0 \to Y_0$ is proper. As $\gamma$ is in the interval $(0,1)$
and $|L| \geq 1$ (indeed $\geq 2$), Lemma~\ref{prop:chaos} applies to
$\{ X_i \to Y_i : i \in L \} \models_\gamma X_0 \to Y_0$ and says that
$X_i \subseteq X_0$ for every $i \in L$, by part~4. Consequently, by
the fact that $|L| \geq 2$, the minimality of $L$, and the
characterization of entailment with at most one premise
(Theorem~\ref{th:case1}), we have
$X_0Y_0 \not\subseteq X_iY_i$ for every $i \in L$. Now, for fixed $i
\in L$, let us look at the inequality in Expression~\eqref{eqn:inequalities} for
$Z = X_iY_i$. The above says that $Z$ does not witness $X_0 \to Y_0$
so $w_Z(X_0 \to Y_0) \leq 0$. Of course $Z$ witnesses $X_i \to Y_i$,
so $w_Z(X_i \to Y_i) = 1-\gamma$. Any other weight is at least $-\gamma$.
Therefore, the inequality implies the following: $0 \geq \lambda_i \cdot
(1-\gamma) -\gamma\cdot \sum_{j \in L\setminus\{i\}} \lambda_j = \lambda_i - \gamma \cdot
\sum_{j \in L} \lambda_j$. By Lemma~\ref{prop:chaos}, part~3, we have
$\sum_{j \in L} \lambda_j \leq 1$. We conclude that $\lambda_i \leq \gamma$, and
this holds for every $i \in L$. Adding over $i \in L$ we get $\sum_{i
\in L} \lambda_i \leq \gamma \cdot |L|$, and the left-hand side is~$1$ by
Lemma~\ref{prop:chaos}, part~6. Thus $\gamma \geq 1/|L| \geq 1/k$ and the
theorem is proved. \end{proof}
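Condition \emph{3.}~is purely syntactic, so for low confidence thresholds the entailment question reduces to checking a few set inclusions. A minimal sketch of that check (hypothetical helper name; sets of attributes are modelled as Python frozensets):

```python
def entails_low_gamma(premises, X0, Y0):
    """Condition 3 of the theorem: decides the entailment whenever gamma < 1/k.

    premises is a list of (X, Y) pairs of frozensets of attributes."""
    if Y0 <= X0:
        return True  # trivial conclusion
    # Otherwise some single premise must do the job on its own:
    # X_i contained in X_0 and X_0 Y_0 contained in X_i Y_i.
    return any(X <= X0 and X0 | Y0 <= X | Y for X, Y in premises)
```

For instance, with the single premise $A \to BC$, the conclusion $AB \to C$ is entailed, while $C \to D$ is not.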
\section{High thresholds}
The goal of this section is to characterize entailments from $k$
partial implications when the confidence parameter $\gamma$ is
large enough, and our proofs will show that $(k-1)/k$ is
enough. Ideally, the characterization should make it easy to decide
whether
an entailment holds, or at least easier than solving the linear
program given by Theorem~\ref{th:mainLP}. We come quite close to
that. Before we get into the characterization, let us first discuss
the key new concept on which it rests.
\subsection{Enforcing homogeneity} \label{sec:nice}
We say that a set of partial implications $X_1 \to Y_1,\ldots,X_k \to
Y_k$ enforces homogeneity if for every $Z$ the following holds:
\begin{tabbing}
\indent\indent\indent
\= \underline{if} \= for all $i \in [k]$ either $X_i \not\subseteq Z$ \underline{or} $X_iY_i \subseteq Z$ holds, \\
\> \underline{then} \= either $X_i \not\subseteq Z$ holds for all $i \in [k]$ \\
\> \> \underline{or} $X_iY_i \subseteq Z$ holds for all $i \in [k]$.
\end{tabbing}
In words, enforcing homogeneity means that every $Z$ that does not
violate any $X_i \to Y_i$ either witnesses all of them or covers none
of them. Note that this definition does not depend on any
confidence parameter. For economy of words,
sometimes we refer to a set of partial implications that enforces
homogeneity as being \emph{nice}.
Note also that the empty set of partial
implications vacuously enforces homogeneity; in fact,
sets with fewer than two
elements are trivially nice.
Homogeneity sounds like a very strong requirement. However, as the
following lemma shows, it is at the heart of proper entailments.
\begin{lemma}
\label{lem:nicetynew}
Let $X_1 \to Y_1,\ldots,X_k \to Y_k$ be a set of partial implications
with $k \geq 1$. If there exists a partial implication $X_0 \to Y_0$
and a confidence parameter $\gamma$ in the interval $(0,1)$ for which
the entailment $X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0 \to
Y_0$ holds properly, then $X_1 \to Y_1,\ldots,X_k \to Y_k$ enforces
homogeneity.
\end{lemma}
\begin{proof}
Fix $X_0 \to Y_0$ and $\gamma$ as in the statement of the lemma. We
must show that if $Z$ does not violate $X_i \to Y_i$ for any $i \in
[k]$, then either $Z$ witnesses all of them, or $Z$ does not cover
any of them. Fix $Z$ that does not violate $X_i \to Y_i$ for any $i
\in [k]$. In particular, for every $i \in [k]$, either $Z$ does not
cover $X_i \to Y_i$, or $Z$ witnesses $X_i \to Y_i$. Thus $w_Z(X_i
\to Y_i) \geq 0$ for every $i \in [k]$. If $Z$ does not cover $X_j
\to Y_j$ for any $j \in [k]$ we are done. Assume then that $Z$
covers $X_j \to Y_j$ for some $j \in [k]$. Since it does not violate
it, it witnesses it, which means that $w_Z(X_j \to Y_j) = 1-\gamma$.
Now let us take a solution $\lambda = (\lambda_1,\ldots,\lambda_k)$ as
promised by Theorem~\ref{th:mainLP}, and let us consider the
inequality in Expression~\eqref{eqn:inequalities} for our fixed $Z$. This
inequality reads $w_Z(X_0 \to Y_0) \geq \sum_{i \in [k]} \lambda_i
\cdot w_Z(X_i \to Y_i)$. Since we proved that $w_Z(X_i \to Y_i) \geq
0$ for every $i \in [k]$, the right-hand side is at least $\lambda_j
\cdot w_Z(X_j \to Y_j)$, which is $\lambda_j \cdot (1-\gamma)$, for
the $j$ fixed above. Now, by
Lemma~\ref{prop:chaos}.1 we have $\lambda_j > 0$ because the
entailment is proper. Putting all this together we get $w_Z(X_0 \to
Y_0) > 0$, so $Z$ witnesses $X_0 \to Y_0$. Thus $X_0Y_0 \subseteq
Z$. But we also know that $X_i \subseteq X_0$ for every $i \in [k]$
by Lemma~\ref{prop:chaos}.4. Thus $X_i \subseteq Z$ for every $i
\in [k]$. Since $Z$ does not violate $X_i \to Y_i$ for any $i \in
[k]$, it must then be that $Z$ witnesses $X_i \to Y_i$ for every $i
\in [k]$, which is precisely what we were trying to prove.
\end{proof}
The next lemma in this section characterizes \emph{nicety}. For a
partial implication $X \rightarrow Y$, let $X \Rightarrow Y$ denote
its classical counterpart. Naturally, we write $Z \models X
\Rightarrow Y$ if either $X \not\subseteq Z$ or $XY \subseteq Z$,
i.e. if $Z$ satisfies the implication classically. Also, in the
context of classical implications, we use $\models$ to denote
classical entailment.
\begin{lemma}
\label{lm:antecsnicenew}
Let $X_1 \to Y_1,\ldots,X_k \to Y_k$ be a set of partial implications
and let $U = X_1Y_1 \cdots X_kY_k$. Then, the following are
equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\ldots,X_k \to Y_k$ enforces homogeneity,
\item $X_1 \Rightarrow Y_1,\ldots,X_k \Rightarrow Y_k \models
X_i \Rightarrow U$, all $i \in [k]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume $X_1\to Y_1,\ldots,X_k \to Y_k$ enforces homogeneity. Let
$Z\models X_i\Rightarrow Y_i$ for all $i\in [k]$. Then, by
homogeneity, either $X_i\not\subseteq Z$ for all $i\in [k]$, in which
case $Z\models X_i\Rightarrow U$ holds vacuously for all $i\in [k]$,
or $X_iY_i\subseteq Z$ for all $i\in [k]$, so that $U\subseteq Z$
and $Z\models X_i\Rightarrow U$ for all $i\in [k]$ as
well. Therefore, $X_1\Rightarrow Y_1,\ldots,X_k\Rightarrow Y_k$
entail every $X_i\Rightarrow U$.
Conversely, assume that $X_1 \Rightarrow Y_1,\ldots,X_k \Rightarrow
Y_k$ entail every $X_i\Rightarrow U$ and let $Z\models
X_i\Rightarrow Y_i$ for all $i\in [k]$; hence $Z\models
X_i\Rightarrow U$ for all $i\in [k]$. Then either $U\subseteq Z$
and we are done, or else $U \not\subseteq Z$, in which case each
implication $X_i \Rightarrow U$ can only be satisfied by falsifying
its premise, so that $X_i\not\subseteq Z$ for all $i\in [k]$.
Therefore, $X_1 \to Y_1,\ldots,X_k \to Y_k$ enforces homogeneity.
\end{proof}
This characterization is quite useful. Consider, for example, the set
of three partial implications $B \to ACH$, $C \to AD$, $D \to AB$ on
the attributes $A,B,C,D,H$. By the lemma this set enforces
homogeneity, but each of its two-element subsets fails to do so.
Note also that condition~\emph{2.} in the
lemma can be decided efficiently by testing the unsatisfiability of
all the propositional Horn formulas of the form $(X_1 \Rightarrow Y_1)
\wedge \cdots \wedge (X_k \Rightarrow Y_k) \wedge X_j \wedge \neg A$
as $j$ ranges over $[k]$ and $A$ ranges over the attributes in $U$.
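Since the definition of homogeneity quantifies over all $Z$, it can also be checked by brute force when the attribute set is small. A sketch of such a check (hypothetical helper name), which verifies the three-implication example above:

```python
from itertools import combinations

def enforces_homogeneity(imps, attrs):
    """Brute-force test of the definition; imps is a list of (X, Y) frozenset pairs."""
    for r in range(len(attrs) + 1):
        for Zt in combinations(sorted(attrs), r):
            Z = frozenset(Zt)
            # Only the Z that violate no implication are constrained.
            if any(X <= Z and not X | Y <= Z for X, Y in imps):
                continue
            witnesses_all = all(X | Y <= Z for X, Y in imps)
            covers_none = all(not X <= Z for X, Y in imps)
            if not (witnesses_all or covers_none):
                return False
    return True
```

On the set $B \to ACH$, $C \to AD$, $D \to AB$ this returns \texttt{True}, and on each of its two-element subsets it returns \texttt{False}, matching the discussion above.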
\subsection{Main result for high threshold}
We are ready to state and prove the characterization theorem for
$\gamma \geq (k-1)/k$.
\begin{theorem}
\label{th:mainHGnew}
Let $\gamma$ be a confidence parameter in $(0,1)$ and let $X_0 \to
Y_0,\ldots,X_k \to Y_k$ be a set of partial implications with $k \geq
1$. If $\gamma \geq (k-1)/k$, then the following are equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0 \to Y_0$,
\item there is a set $L \subseteq [k]$ such that $\{
X_i \to Y_i : i \in L \} \models_\gamma X_0 \to Y_0$ holds properly,
\item either $Y_0 \subseteq X_0$, or there is a non-empty $L
\subseteq [k]$ such that the following conditions hold:
\begin{enumerate}
\item[(a)] $\{ X_i \to Y_i : i \in L \}$ enforces homogeneity,
\item[(b)] $\bigcup_{i \in L} X_i \subseteq X_0 \subseteq \bigcup_{i \in L}
X_iY_i$,
\item[(c)] $Y_0 \subseteq X_0 \cup \bigcap_{i \in L} Y_i$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
That \emph{1.}~implies \emph{2.}~is clear: the family of all sets
$L\subseteq [k]$ for which the entailment $\{ X_i \rightarrow Y_i :
i \in L \} \models_\gamma X_0 \rightarrow Y_0$ holds is non-empty,
as \emph{1.}~says that~$[k]$ belongs to it. Since the family is
finite, it has a minimal element~$L$, and by minimality the
entailment for this $L$ holds properly.
From \emph{2.}~to~\emph{3.}, the index set $L$ will be the same in
both statements, unless $L = \emptyset$, in which case every
transaction multiset satisfies $X_0 \to Y_0$, whence $Y_0 \subseteq
X_0$ and we are done. Assume then that $L$ is
not empty. Part~\emph{(a)} we get automatically from
Lemma~\ref{lem:nicetynew} since $\{X_i \to Y_i : i \in L \}$
properly entails $X_0 \to Y_0$ at $\gamma$, which is in the interval
$(0,1)$. Now we prove \emph{(b)}. By Theorem~\ref{th:mainLP}, let
$\lambda = (\lambda_i : i \in L)$ be a solution to the inequalities
in Expression~\eqref{eqn:inequalities} for the entailment $\{X_i \to Y_i : i
\in L\} \models_\gamma X_0 \to Y_0$. From the fact that this
entailment is proper and the assumptions that $|L| \geq 1$ and
$\gamma \in (0,1)$, we are allowed to call Lemma~\ref{prop:chaos}.
The first inclusion in \emph{(b)} follows from that lemma, part~4.
The second inclusion in \emph{(b)} also follows from that lemma,
part~2. Finally, for~\emph{(c)}, part~7 of that lemma gives $Y_0
\subseteq X_0Y_i = X_0 \cup Y_i$ for every $i \in L$, and hence $Y_0
\subseteq \bigcap_{i \in L} (X_0 \cup Y_i) = X_0 \cup \bigcap_{i \in
L} Y_i$ by distributivity.

For the implication from~\emph{3.}~to~\emph{1.}~we proceed as
follows. If $Y_0 \subseteq X_0$ there is nothing to prove since then
the entailment is trivial. Assume then that $L$ is non-empty and
satisfies \emph{(a)}, \emph{(b)}, and \emph{(c)}. By
Theorem~\ref{th:mainLP} it suffices to show that the inequalities
in Expression~\eqref{eqn:inequalities} for the entailment $\{X_i \to Y_i : i
\in L \} \models_\gamma X_0 \to Y_0$ have a solution $\lambda = (\lambda_i : i
\in L)$ with non-negative components.
Let $\ell = |L|$ and set $\lambda_i = 1/\ell$ for $i\in L$. Recall that
$L$ is not empty so $\ell \geq 1$ and this is well-defined. For
fixed $Z$, we prove that the inequality in Expression~\eqref{eqn:inequalities}
for this $Z$ is satisfied by these $\lambda_i$. In the following, let $X
= \bigcup_{i\in L} X_i$ and $Y = \bigcap_{i\in L} Y_i$. We
distinguish cases according to whether $X\subseteq Z$.

First assume that $X \not\subseteq Z$. Then, by the first inclusion in
\emph{(b)}, $X_0 \not\subseteq Z$ so $Z$ does not cover $X_0 \to Y_0$
and $w_Z(X_0 \to Y_0) = 0$. Also, there exists $j \in L$ such that
$X_j \not\subseteq Z$.
If $X_iY_i \not\subseteq Z$ for every $i\in L$, then $Z$ does not
witness any $X_i \to Y_i$, so $w_Z(X_i \to Y_i) \leq 0$ for every $i
\in L$. Whence $\sum_{i\in L} \lambda_i \cdot w_Z(X_i \to Y_i)$ is
non-positive and then bounded by $w_Z(X_0 \to Y_0) = 0$ as required.
Hence, suppose now that there exists $i \in L$ such that $X_iY_i
\subseteq Z$. We also have a $j \in L$ such that $X_j \not\subseteq
Z$. Thus $Z$ witnesses $X_i \to Y_i$ and fails to cover $X_j \to
Y_j$, and both $i$ and $j$ are in $L$. As $\{ X_i \to Y_i : i \in L
\}$ enforces homogeneity, this means that $Z$ violates \hbox{$X_h \to Y_h$}
for some $h \in L$. For that $h$ we have $w_Z(X_h \to Y_h) =
-\gamma$. The rest of weights are at most $1-\gamma$ and therefore
$\sum_{i \in L} \lambda_i \cdot w_Z(X_i \to Y_i)$ is bounded above by
\begin{align*}
-\frac{1}{\ell} \cdot \gamma + \frac{\ell-1}{\ell} \cdot (1-\gamma) = \frac{\ell-1}{\ell} - \gamma.
\end{align*}
Since $\ell \leq k$, this is at most $(k-1)/k - \gamma$. In turn, this is
non-positive and then bounded by $w_Z(X_0 \to Y_0) = 0$ by the
assumption that $\gamma \geq (k-1)/k$. This proves that the inequalities
corresponding to these $Z$'s are satisfied.

Assume now instead $X\subseteq Z$. In this case $Z$ covers $X_i \to
Y_i$ for every $i \in L$. Thus we split $L$ into two sets, $L = V \cup
W$, where $V$ is the set of indices $i \in L$ such that $Z$ violates
$X_i \to Y_i$, and $W$ is the set of indices $i \in L$ such that $Z$
witnesses $X_i \to Y_i$. Of course $w_Z(X_i \to Y_i) = -\gamma$ for
every $i \in V$ and $w_Z(X_i \to Y_i) = 1-\gamma$ for every $i \in
W$. We consider three subcases.

1. If $W = \emptyset$, then every $X_i \to Y_i$ with $i \in L$ is
violated and then, using that the $\lambda_i$'s add up to 1,
$\sum_{i \in L} \lambda_i \cdot w_Z(X_i \to Y_i) = -\gamma
\cdot \sum_{i \in L} \lambda_i = -\gamma \leq w_Z(X_0 \to Y_0)$; i.e. the
inequality holds.

2. If $W = L$, then every $X_i \to Y_i$ with $i \in L$ is witnessed.
Using \emph{(b)} we get $X_0 \subseteq \bigcup_{i\in L} X_iY_i
\subseteq Z$, and the non-emptiness of $L$ applied to \emph{(c)}
ensures the existence of some $i \in L$ for which $Y_0\subseteq X_0
\cup Y \subseteq X_0 \cup Y_i\subseteq Z$.
Thus $X_0 \to Y_0$ is also witnessed and $\sum_{i\in L} \lambda_i \cdot
w_Z(X_i \to Y_i) = (1-\gamma) \cdot \sum_{i \in L} \lambda_i = 1-\gamma = w_Z(X_0
\to Y_0)$; i.e.~the inequality holds.

3. We consider now the remaining case, where $W\neq\emptyset$ and $W\neq
L$. The fact that $W \ne \emptyset$ ensures that there is some $i \in
L$ such that $Y_i \subseteq Z$. Condition \emph{(c)}
then ensures that $Y_0 \subseteq X_0 \cup Y \subseteq
X_0 \cup Y_i$ for this $i$. Altogether $X_0 \to Y_0$ is either
witnessed or uncovered according to whether $X_0 \subseteq
Z$. In both cases $w_Z(X_0 \to Y_0) \geq 0$. To
complete the proof, let us split $\sum_{i\in L} \lambda_i \cdot w_Z(X_i \to Y_i)$
as follows:
$$
\frac{1}{\ell}\cdot (1-\gamma)\cdot |W| - \frac{1}{\ell} \cdot \gamma\cdot
(\ell-|W|).
$$
The fact that $W \neq L$ implies $|W| \leq \ell-1$. Therefore this is
at most
\begin{align*}
\frac{1}{\ell}\cdot(|W| - \gamma \cdot \ell) \leq \frac{\ell-1}{\ell} - \gamma
\leq
\frac{k-1}{k} - \gamma \leq 0 \leq w_Z(X_0 \to Y_0).
\end{align*}
In the middle inequalities we used the fact that $\ell \leq k$ and the
assumption that $\gamma \geq (k-1)/k$. This proves that the inequality
holds in this case as well.
This closes the cycle of implications and the theorem is proved.
\end{proof}
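Putting the pieces together, condition \emph{3.}~of the theorem can be decided by enumerating the non-empty subsets $L$ of $[k]$. A brute-force sketch (hypothetical helper names; practical only for small $k$ and small attribute sets, where the Horn-based test of Lemma~\ref{lm:antecsnicenew} would scale better):

```python
from itertools import chain, combinations

def nice(imps, attrs):
    """Brute-force homogeneity test over all Z, following the definition."""
    subsets = [frozenset(c) for r in range(len(attrs) + 1)
               for c in combinations(sorted(attrs), r)]
    for Z in subsets:
        if any(X <= Z and not X | Y <= Z for X, Y in imps):
            continue  # Z violates some implication, hence is unconstrained
        if not (all(X | Y <= Z for X, Y in imps)
                or all(not X <= Z for X, Y in imps)):
            return False
    return True

def entails_high_gamma(premises, X0, Y0, attrs):
    """Condition 3 of the theorem; decides the entailment whenever gamma >= (k-1)/k."""
    if Y0 <= X0:
        return True
    k = len(premises)
    for L in chain.from_iterable(combinations(range(k), r) for r in range(1, k + 1)):
        sub = [premises[i] for i in L]
        union_lhs = frozenset().union(*(X for X, _ in sub))
        union_both = frozenset().union(*(X | Y for X, Y in sub))
        inter_rhs = frozenset.intersection(*(Y for _, Y in sub))
        if (nice(sub, attrs)
                and union_lhs <= X0 <= union_both
                and Y0 <= X0 | inter_rhs):
            return True
    return False
```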
\subsection{Other properties of nicety} \label{sec:morenice}
Enforcing homogeneity turned out to play a key role in the main
result about the case of high confidence thresholds. In this section
we collect a few additional observations about it. The first one is
quite simple: sets of fewer than two partial implications are
trivially nice. Trivial as this is, it does imply that every set of
partial implications has some nice subset.
The case $k = 2$ is a bit more interesting. Nicety corresponds exactly
to the mysterious conditions in Theorem~\ref{th:case2}; cf.~the
discussion in Section~\ref{sec:uptotwopremises}.
\begin{lemma}
A set of two partial implications $X_1 \to Y_1, X_2 \to Y_2$ enforces
homogeneity if and only if both $X_1\subseteq X_2Y_2$ and
$X_2\subseteq X_1Y_1$ hold.
\end{lemma}
\begin{proof}
Assume $X_1\not\subseteq X_2Y_2$. Then $Z = X_2Y_2$ satisfies both
$Z\models X_1\Rightarrow Y_1$ and $Z\models X_2\Rightarrow Y_2$, but
not homogeneously: $Z$ witnesses $X_2 \to Y_2$ yet does not cover
$X_1 \to Y_1$. The same holds if $X_2\not\subseteq X_1Y_1$
by symmetry. Conversely, if both inclusions hold, consider any $Z$
such that $Z\models X_1\Rightarrow Y_1$ and $Z\models X_2\Rightarrow
Y_2$. If $X_1\not\subseteq Z$, then $X_2Y_2\not\subseteq Z$ either,
hence $X_2\not\subseteq Z$ is the only way to satisfy the second
implication; by symmetry, we obtain $X_1\not\subseteq Z$ if and only
if $X_2\not\subseteq Z$. Thus homogeneity holds.
\end{proof}
Finally, a recurrent situation concerns sets of partial implications
with a common left-hand side. The next lemma says that every such set
is nice.
\begin{lemma}
Every set of partial implications of the form $X \to Y_1,\ldots,X \to
Y_k$ enforces homogeneity.
\end{lemma}
\begin{proof}
This is a direct application of Lemma~\ref{lm:antecsnicenew}: here
all the left-hand sides equal $X$, so $U = XY_1 \cdots Y_k$, and the
classical implications $X \Rightarrow Y_1,\ldots,X \Rightarrow Y_k$
together obviously entail $X \Rightarrow U$.
\end{proof}
\section{Intervening thresholds}
The rest of the values of $\gamma$ require ad hoc consideration in
terms of the actual partial implications involved. We start by
defining what will end up being the \emph{critical} confidence
threshold for a given entailment.
\subsection{Critical threshold}
Let $\Sigma = \{ X_1 \to Y_1,\ldots,X_k \to Y_k \}$ be a set of
partial implications with $k \geq 1$ and all its attributes in $[n]$,
and let $X \subseteq [n]$. Define:
\begin{equation}
\gamma^* = \gamma^*(\Sigma, X) := \inf_{\lambda} \max_Z
\frac{\sum_{i \in W_Z} \lambda_i}{\sum_{i \in V_Z \cup W_Z} \lambda_i}
\label{eqn:gammastar}
\end{equation}
where
\begin{enumerate}
\item $Z$ ranges over all subsets of $[n]$ with $X \not\subseteq Z$,
\item $V_Z = \{ i \in [k] : Z \text{ violates } X_i \to Y_i \}$,
\item $W_Z = \{ i \in [k] : Z \text{ witnesses } X_i \to Y_i \}$,
\item $\lambda$ ranges over vectors $(\lambda_1,\ldots,\lambda_k)$ of
non-negative reals such that $\sum_{i \in [k]} \lambda_i = 1$,
\end{enumerate}
and, by convention any occurrence of $0/0$ in the definition of
$\gamma^*$ is taken as $0$, and a vacuous maximum is taken as
$0$. Note that this last case occurs only if $X = \emptyset$ since
otherwise there is always the possibility of taking $Z = \emptyset$.
Note also that since all $\lambda_i$ are non-negative, the only way the
denominator can be zero is by making the numerator also zero. It
should be pointed out that the convention about $0/0$ is \emph{not} an
attempt to repair a discontinuity; in general, the discontinuities of
the rational functions inside the max are not repairable. A final
comment on the definition is that we required $k \geq 1$. This ensures
that the $\inf$ is not vacuous, which in turn implies $0 \leq \gamma^*
\leq 1$: the lower bound is obvious, and for the upper bound just take
$\lambda_i = 1/k$ for every $i\in[k]$, which is well-defined when $k
\geq 1$.
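The infimum in Expression~\eqref{eqn:gammastar} can be approximated numerically by restricting $\lambda$ to a grid on the simplex. A rough sketch (hypothetical helper names; for $k = 1$ the grid is the single point $\lambda = (1)$ and the computation is exact, while for $k \geq 2$ the grid only yields an upper bound on $\gamma^*$):

```python
from fractions import Fraction
from itertools import combinations

def compositions(n, k):
    """All k-tuples of non-negative integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def gamma_star_grid(imps, X, attrs, denom=12):
    """Grid approximation of gamma*(Sigma, X); exact when k = 1."""
    k = len(imps)
    Zs = [frozenset(c) for r in range(len(attrs) + 1)
          for c in combinations(sorted(attrs), r)]
    Zs = [Z for Z in Zs if not X <= Z]            # only Z with X not a subset of Z
    best = Fraction(1)
    for comp in compositions(denom, k):
        lam = [Fraction(c, denom) for c in comp]
        worst = Fraction(0)                       # a vacuous max is 0, by convention
        for Z in Zs:
            wit = sum(l for l, (Xi, Yi) in zip(lam, imps) if Xi | Yi <= Z)
            cov = sum(l for l, (Xi, Yi) in zip(lam, imps) if Xi <= Z)
            ratio = Fraction(0) if cov == 0 else wit / cov   # 0/0 taken as 0
            worst = max(worst, ratio)
        best = min(best, worst)
    return best
```

For example, for $\Sigma = \{A \to B\}$ and $X = \{C\}$ the value is $1$ (the set $\{A,B\}$ witnesses the premise without containing $X$), while for $X = \{A,B\}$ it is $0$.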
Observe that $\gamma^*$ is defined for a set of partial implications
and a single set $X$ of attributes. Typically $X$ will be the
left-hand side of another partial implication $X_0 \to Y_0$, but
$\gamma^*(\Sigma,X_0)$ is explicitly defined not to depend on
$Y_0$. For later reference let us also point out that, with the
notation $V_Z$ and $W_Z$ from above, the inequalities
in Expression~\eqref{eqn:inequalities} for an entailment $X_1 \to Y_1,\ldots,X_k
\to Y_k \models_\gamma X_0 \to Y_0$ can be written as $w_Z(X_0 \to Y_0)
\geq (1-\gamma) \cdot \sum_{i \in W_Z} \lambda_i - \gamma \cdot \sum_{i \in V_Z}
\lambda_i$. It is not the first time we use this sort of notation.
\subsection{Characterization for all thresholds}
The main result of this section is a characterization theorem in the
style of Theorem~\ref{th:mainHGnew} that captures all possible
confidence parameters.
\begin{theorem}
\label{th:everygamma}
Let $\gamma$ be a confidence parameter in $(0,1)$ and let $X_0 \to
Y_0, \ldots,X_k \to Y_k$ be a set of partial implications with $k \geq
1$. The following are equivalent:
\begin{enumerate}
\item $X_1 \to Y_1,\ldots,X_k \to Y_k \models_\gamma X_0 \to Y_0$,
\item there is a set $L \subseteq [k]$ such that $\{
X_i \to Y_i : i \in L \} \models_\gamma X_0 \to Y_0$ holds properly,
\item either $Y_0 \subseteq X_0$, or there is a non-empty $L \subseteq
[k]$ such that the following conditions hold:
\begin{enumerate}
\item
$\{ X_i \rightarrow Y_i : i \in L \}$ enforces homogeneity,
\item
$\bigcup_{i \in L} X_i \subseteq X_0 \subseteq \bigcup_{i \in L} X_iY_i$,
\item
$Y_0 \subseteq X_0 \cup \bigcap_{i \in L} Y_i$,
\item
$\gamma \geq \gamma^*(\{ X_i \rightarrow Y_i : i \in L \},X_0)$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
That \emph{1.}~implies \emph{2.}~is clear, as in
Theorem~\ref{th:mainHGnew}. From~\emph{2.}~to~\emph{3.}, we may
assume that $L$ is non-empty as in Theorem~\ref{th:mainHGnew}. Let $\lambda
= (\lambda_1,\ldots,\lambda_k)$ be a vector of non-negative reals satisfying
the inequalities in Expression~\eqref{eqn:inequalities} as per
Theorem~\ref{th:mainLP}. Then properties~\emph{(a)}, \emph{(b)},
and~\emph{(c)} just follow from Lemma~\ref{prop:chaos} in the same way
as in Theorem~\ref{th:mainLP}. It remains to argue \emph{(d)}. To see
this first note that for every $Z$ such that $X_0 \not\subseteq Z$ we
have $w_Z(X_0 \to Y_0) = 0$ and therefore the inequality
in Expression~\eqref{eqn:inequalities} for this $Z$ reads as $0 \geq (1-\gamma) \cdot
\sum_{i \in W_Z} \lambda_i - \gamma \cdot \sum_{i \in V_Z} \lambda_i$. Rearranging we get
$\gamma \geq \left({\sum_{i \in W_Z} \lambda_i}\right)/\left({\sum_{i \in
V_Z \cup W_Z} \lambda_i}\right)$, where $0/0$ is
interpreted as $0$. In particular, the maximum of the right-hand side
over all $Z$ such that $X_0 \not\subseteq Z$ is bounded by $\gamma$, and
thus $\gamma^*$ is also bounded by $\gamma$. Note that this also covers
the case of empty $X_0$, since in that case the max in $\gamma^*$ is
vacuous and hence, by our convention, equal to $0$.
Now we prove that \emph{3.}~implies \emph{1.} Assuming \emph{(a)}
through \emph{(d)}, it is enough to
find a solution to the inequalities in Expression~\eqref{eqn:inequalities} for
the entailment $\{ X_i \to Y_i : i \in L \} \models_\gamma X_0 \to
Y_0$. What we show is that for every positive real $\epsilon > 0$
there is a vector $\lambda = (\lambda_i : i \in L)$ with non-negative real
components such that the following inequality holds uniformly for
every $Z \subseteq [n]$:
\begin{equation}
\sum_{i \in L} \lambda_i \cdot w_Z(X_i \to Y_i) \leq w_Z(X_0 \to Y_0) +
\epsilon.
\label{eqn:ineqeps}
\end{equation}
By basic real analysis this will be enough (it is worth pointing out
that a more direct \emph{continuity} argument to replace $\inf$ by
$\min$ would not work here; as stated earlier, the discontinuities of
the rational functions at $0/0$ are, in general, not
repairable). Fix then a
positive real $\epsilon > 0$ and let $\lambda = (\lambda_i : i \in L)$ be such
that the max in the definition of $\gamma^*$ is at most $\gamma^* +
\epsilon$. For fixed $Z$, we prove Expression~\eqref{eqn:ineqeps} by cases:
1. First assume that $X_0Y_0 \subseteq Z$. Then, $Z$ witnesses $X_0
\to Y_0$ and $w_Z(X_0 \to Y_0) = 1-\gamma$. The left-hand side
in Expression~\eqref{eqn:ineqeps} can be written as $(1-\gamma)\cdot\sum_{i \in W_Z}
\lambda_i - \gamma \cdot \sum_{i \in V_Z} \lambda_i$. Using $\lambda_i \geq 0$ and
$\sum_{i \in L} \lambda_i = 1$ this is at most $(1-\gamma) \cdot \sum_{i \in L}
\lambda_i = (1-\gamma) = w_Z(X_0 \to Y_0)$, which in turn is at most the
right-hand side in Expression~\eqref{eqn:ineqeps}; i.e.~the inequality holds.
2. From now on, we assume that $X_0Y_0\not\subseteq Z$. For this case
assume additionally that $X_0\subseteq Z$. In particular $Y_0
\not\subseteq Z$ and $Z$ violates $X_0 \to Y_0$, so $w_Z(X_0 \to Y_0)
= -\gamma$. By \emph{(b)} we have $X_i \subseteq X_0$, whereas, by
\emph{(c)} we know that $Y_0 \subseteq X_0Y_i$ for every $i \in
L$. Since $X_0 \subseteq Z$ and $Y_0 \not\subseteq Z$, this means that
$X_i \subseteq Z$ but $Y_i \not\subseteq Z$ for every $i \in L$. It
follows that $Z$ violates $X_i \to Y_i$ and $\hbox{$w_Z(X_i \to Y_i)$} = -\gamma$
for every $i \in L$. Using $\sum_{i \in L} \lambda_i = 1$, the left-hand
side in Expression~\eqref{eqn:ineqeps} is $-\gamma \cdot \sum_{i \in L} \lambda_i = -\gamma =
w_Z(X_0 \to Y_0)$, which is at most the right-hand side
in Expression~\eqref{eqn:ineqeps}; i.e.~the inequality holds.
3. Given the previous cases, we can assume now $X_0\not\subseteq Z$,
so $Z$ does not cover $X_0 \to Y_0$ and $w_Z(X_0 \to Y_0) = 0$. The
choice of $(\lambda_i : i \in L)$ implies that the ratio inside the
max in the definition of $\gamma^*$ is at most $\gamma^* + \epsilon$
for our $Z$; since we are in the case $X_0 \not\subseteq Z$, the ratio
for our $Z$ is in the max. By \emph{(d)} it is also at most $\gamma +
\epsilon$. It follows that $(\gamma + \epsilon) \cdot \sum_{i \in V_Z
\cup W_Z} \lambda_i \geq \sum_{i \in W_Z} \lambda_i$ by non-negativity of the
$\lambda_i$. Rearranging we get $(1-\gamma) \cdot \sum_{i \in W_Z} \lambda_i -
\gamma \cdot \sum_{i \in V_Z} \lambda_i \leq \epsilon \cdot \sum_{i \in V_Z
\cup W_Z} \lambda_i$. Since $\lambda_i \geq 0$ and $\sum_{i \in L} \lambda_i \leq
1$, the right-hand side is at most $\epsilon$, which is precisely
$w_Z(X_0 \to Y_0) + \epsilon$ since $Z$ does not cover $X_0 \to Y_0$
and $w_Z(X_0 \to Y_0) = 0$. This is the right-hand side
in Expression~\eqref{eqn:ineqeps}; i.e.~the inequality holds.
This closes the cycle of implications and the proof.
\end{proof}
\subsection{An interesting example}
In view of the characterization theorems obtained so far, one may
wonder if the critical $\gamma$ of any entailment among partial
implications is of the form $(k-1)/k$. This was certainly the case for
$k = 1$ and $k = 2$, and Theorems~\ref{th:mainHGnew}
and~\ref{th:everygamma} might suggest that this is indeed the
case. In this section we refute this for $k = 3$ in a strong way: we
compute $\gamma^*$ for a specific entailment for $k = 3$ to find out
that it is the unique real solution of the equation
\begin{equation}
1-\gamma + (1-\gamma)^2/\gamma + (1-\gamma)^3/\gamma^2 = 1.
\label{eqn:equation}
\end{equation}
Numerically \cite{WolframAlpha}, the unique real solution is
$$
\gamma_{c} \approx 0.56984\ldots.
$$
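For readers who want to double-check this numerical value, here is a minimal Python bisection sketch (ours, not part of the formal development; it relies only on the fact, noted again below, that the left-hand side of Expression~\eqref{eqn:equation} is strictly decreasing on $(0,1)$):

```python
# Left-hand side of the equation 1-g + (1-g)^2/g + (1-g)^3/g^2 = 1.
def lhs(g):
    return (1 - g) + (1 - g) ** 2 / g + (1 - g) ** 3 / g ** 2

# lhs is strictly decreasing on (0,1), so bisection finds the unique root.
def bisect_root(lo=1e-9, hi=1 - 1e-9, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lhs(mid) > 1:
            lo = mid  # still above 1: the root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

gamma_c = bisect_root()
print(round(gamma_c, 5))  # 0.56984
```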
Consider the following 5-attribute entailment for a generic confidence
parameter $\gamma$:
$$
B \rightarrow ACH,\; C \rightarrow AD,\; D \rightarrow AB
\;\models_\gamma\; BCDH \rightarrow A.
$$
Let us compute its $\gamma^*(\Sigma,X)$ where $\Sigma$ is the
left-hand side, and $X = BCDH$. In other words, we want to determine a
triple $\lambda = (\lambda_1,\lambda_2,\lambda_3)$ that minimizes
$$
\max_Z \frac{\sum_{i \in W_Z} \lambda_i}{\sum_{i \in V_Z \cup W_Z}
\lambda_i}
$$
as $Z$ ranges over the sets that do not include $X = BCDH$, and
subject to the constraints that $\lambda_1,\lambda_2,\lambda_3 \geq 0$
and $\lambda_1 + \lambda_2 + \lambda_3 = 1$. There are $2^5 = 32$
possible $Z$'s out of which two ($ABCDH$ and $BCDH$) contain $X$ and
therefore do not contribute to the maximum. Some others give value $0$
to the ratio and therefore do not contribute to the maximum
either. Note that if either $|Z|\leq 2$, or $|Z|=3$ and $A \not\in Z$,
then $W_Z = \emptyset$, so the numerator is $0$ and hence the ratio is
also $0$ (recall the convention that $0/0$ is $0$). Thus, the only
sets $Z$ that can contribute non-trivially to the maximum are those of
cardinality $4$ or $3$ that contain the attribute $A$. There are four
$Z$ of the first type ($ABCD$, $ABCH$, $ABDH$ and $ACDH$) and six $Z$
of the second type ($ABC$, $ABD$, $ABH$, $ACD$, $ACH$ and $ADH$). The
corresponding ratios are
\begin{align*}
\frac{\lambda_2 + \lambda_3}{\lambda_1 + \lambda_2 + \lambda_3},
\frac{\lambda_1}{\lambda_1 + \lambda_2},
\frac{\lambda_3}{\lambda_1 + \lambda_3},
\frac{\lambda_2}{\lambda_2 + \lambda_3},
\frac{0}{\lambda_1 + \lambda_2},
\frac{\lambda_3}{\lambda_1 + \lambda_3},
\frac{0}{\lambda_1},
\frac{\lambda_2}{\lambda_2 + \lambda_3},
\frac{0}{\lambda_2},
\frac{0}{\lambda_3}.
\end{align*}
Those with $0$ numerator cannot contribute to the maximum so, removing
those as well as duplicates, we are left with
$$
\frac{\lambda_2 + \lambda_3}{\lambda_1 + \lambda_2 + \lambda_3},
\frac{\lambda_1}{\lambda_1 + \lambda_2},
\frac{\lambda_3}{\lambda_1 + \lambda_3},
\frac{\lambda_2}{\lambda_2 + \lambda_3}.
$$
Since all $\lambda_i$ are non-negative, the first dominates the third and
we are left with three ratios:
\begin{equation}
\frac{\lambda_2 + \lambda_3}{\lambda_1 + \lambda_2 + \lambda_3},
\frac{\lambda_1}{\lambda_1 + \lambda_2},
\frac{\lambda_2}{\lambda_2 + \lambda_3}. \label{eqn:terms}
\end{equation}
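Since the case analysis above is finite, it can also be checked mechanically. The following Python sketch (ours; witnessing and violation are encoded as in the text, namely $i \in W_Z$ iff $X_iY_i \subseteq Z$ and $i \in V_Z$ iff $X_i \subseteq Z$ but $Y_i \not\subseteq Z$, with $0/0$ read as $0$) confirms that the maximum over all admissible $Z$ agrees with the maximum of the three ratios in (\ref{eqn:terms}):

```python
from itertools import combinations

ATTRS = "ABCDH"
PREMISES = [("B", "ACH"), ("C", "AD"), ("D", "AB")]  # X_i -> Y_i
X0 = set("BCDH")

def ratio(Z, lam):
    # The ratio inside the max in the definition of gamma^*;
    # 0/0 is interpreted as 0, as per the convention in the text.
    W = [i for i, (X, Y) in enumerate(PREMISES) if set(X) | set(Y) <= Z]
    V = [i for i, (X, Y) in enumerate(PREMISES)
         if set(X) <= Z and not set(Y) <= Z]
    num = sum(lam[i] for i in W)
    den = sum(lam[i] for i in V) + num
    return 0.0 if den == 0 else num / den

def max_over_Z(lam):
    # Maximize over all Z not containing X0 = BCDH.
    subsets = [set(c) for r in range(6) for c in combinations(ATTRS, r)]
    return max(ratio(Z, lam) for Z in subsets if not X0 <= Z)

def three_terms(lam):
    # The three surviving ratios from Expression (eqn:terms).
    l1, l2, l3 = lam
    return max((l2 + l3) / (l1 + l2 + l3), l1 / (l1 + l2), l2 / (l2 + l3))

lam = (0.5, 0.3, 0.2)  # any positive weights summing to one
print(abs(max_over_Z(lam) - three_terms(lam)) < 1e-12)  # True
```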
We claim that a $\lambda_{c}$ that satisfies the
constraints and minimizes the maximum of the three terms in
(\ref{eqn:terms}) is
$$
\begin{array}{rcl}
\lambda_{c,1} & = & 1-\gamma_c \\
\lambda_{c,2} & = & (1-\gamma_c)^2 / \gamma_c \\
\lambda_{c,3} & = & (1-\gamma_c)^3 / \gamma_c^2
\end{array}
$$
where $\gamma_c$ is the unique real solution of the equation
in Expression~\eqref{eqn:equation}.
Clearly this choice of $\lambda_c$ satisfies the
constraints of non-negativity, and they add up to one precisely
because their sum is the left-hand side in Expression~\eqref{eqn:equation}. By
plugging in, note also that this $\lambda_c$ makes all three terms in
(\ref{eqn:terms}) equal to $\gamma_c$; that is,
\begin{equation}
\frac{\lambda_{c,2} + \lambda_{c,3}}{\lambda_{c,1} + \lambda_{c,2} +
\lambda_{c,3}} =
\frac{\lambda_{c,1}}{\lambda_{c,1} + \lambda_{c,2}} =
\frac{\lambda_{c,2}}{\lambda_{c,2} + \lambda_{c,3}} =
\gamma_c. \label{eqn:maximum}
\end{equation}
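Plugging in the approximate root $\gamma_c \approx 0.5698403$ of Expression~\eqref{eqn:equation}, both claims can be confirmed numerically (a sketch of ours):

```python
gamma_c = 0.5698403  # approximate unique real root of the equation above

l1 = 1 - gamma_c
l2 = (1 - gamma_c) ** 2 / gamma_c
l3 = (1 - gamma_c) ** 3 / gamma_c ** 2

# The components of lambda_c add up to one: this is exactly the
# statement that gamma_c solves the equation...
print(round(l1 + l2 + l3, 6))  # 1.0
# ...and all three ratios in the max equal gamma_c.
ratios = [(l2 + l3) / (l1 + l2 + l3), l1 / (l1 + l2), l2 / (l2 + l3)]
print([round(r, 5) for r in ratios])  # [0.56984, 0.56984, 0.56984]
```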
For later reference, let us note that the left-hand side of
(\ref{eqn:equation}) is a strictly decreasing function of $\gamma$ in
the interval $(0,1)$ (e.g. differentiate it, or just plot it) and
therefore
\begin{equation}
1-\gamma_0 + (1-\gamma_0)^2/\gamma_0 + (1-\gamma_0)^3/\gamma_0^2 > 1
\label{eqn:inequation}
\end{equation}
whenever $0 < \gamma_0 < \gamma_c$.
In order to see that $\lambda_c$ minimizes the maximum of the three
terms in (\ref{eqn:terms}) suppose for contradiction that $\lambda$
satisfies the constraints and achieves a smaller maximum, say $0 <
\gamma_0 < \gamma_c$. Since $\gamma_0$ is the maximum of the three
terms in (\ref{eqn:terms}) we have
$$
\begin{array}{rcl}
\gamma_0 & \geq & (\lambda_2 + \lambda_3)/(\lambda_1 + \lambda_2 + \lambda_3) \\
\gamma_0 & \geq & \lambda_1 / (\lambda_1 + \lambda_2) \\
\gamma_0 & \geq & \lambda_2 / (\lambda_2 + \lambda_3).
\end{array}
$$
Using $\lambda_1,\lambda_2,\lambda_3 \geq 0$ and $\lambda_1 + \lambda_2 + \lambda_3 =
1$, and rearranging, we get
$$
\begin{array}{rcl}
\lambda_1 & \geq & 1 - \gamma_0 \\
\lambda_2 & \geq & \lambda_1 \cdot (1-\gamma_0)/\gamma_0 \geq (1-\gamma_0)^2/\gamma_0 \\
\lambda_3 & \geq & \lambda_2 \cdot (1-\gamma_0)/\gamma_0 \geq (1-\gamma_0)^3/\gamma_0^2.
\end{array}
$$
Adding all three inequalities we get
$$
\lambda_1 + \lambda_2 + \lambda_3 \geq 1-\gamma_0 +
(1-\gamma_0)^2/\gamma_0 + (1-\gamma_0)^3/\gamma_0^2.
$$
But this is a contradiction: the left-hand side is $1$ since
$\lambda$ satisfies the constraints, and the right-hand side is
strictly bigger than $1$ by (\ref{eqn:inequation}). This proves the
claim.
Finally, this example also shows that for $\gamma$ strictly between
$1/k$ and $(k-1)/k$, the vector solution to the inequalities
in Expression~\eqref{eqn:inequalities} could be very non-uniform. In this example
with $\gamma = \gamma_c$, the solution is
$\lambda_c \approx (0.43016, 0.32472,
0.24512)$. In contrast, for $\gamma \geq (k-1)/k$, the proof of
Theorem~\ref{th:mainHGnew} shows that it is always possible to take
$\lambda_i = 1/|L|$ for $i \in L$ and $\lambda_i = 0$ for $i\in[k]\setminus L$.
In this case, the vector $(\lambda_1,\lambda_2,\lambda_3) =
(1/3,1/3,1/3)$ works for $\gamma \geq 2/3$, but fails otherwise. To
see that it fails when $\gamma < 2/3$, take the inequality for $Z =
ABCD$ in Expression~\eqref{eqn:inequalities}.
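This failure can be checked in a couple of lines; the sketch below (ours, with $w_Z$ encoded as in the text: $1-\gamma$ on witnesses, $-\gamma$ on violations, and $0$ when $Z$ does not cover the implication) tests the single inequality for $Z = ABCD$:

```python
def w(X, Y, Z, gamma):
    # w_Z(X -> Y): 1-gamma if Z witnesses the implication (XY inside Z),
    # -gamma if Z violates it, and 0 if Z does not cover it.
    if set(X) | set(Y) <= set(Z):
        return 1 - gamma
    if set(X) <= set(Z):
        return -gamma
    return 0.0

PREMISES = [("B", "ACH"), ("C", "AD"), ("D", "AB")]

def ineq_holds(lam, gamma, Z="ABCD"):
    # The inequality from Expression (eqn:inequalities) for this Z.
    lhs = sum(l * w(X, Y, Z, gamma) for l, (X, Y) in zip(lam, PREMISES))
    return lhs <= w("BCDH", "A", Z, gamma) + 1e-12

uniform = (1 / 3, 1 / 3, 1 / 3)
print(ineq_holds(uniform, 2 / 3))  # True:  at gamma = 2/3 the inequality holds
print(ineq_holds(uniform, 0.6))    # False: it already fails at Z = ABCD
```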
By the way, it is easy to check that conditions \emph{(a)}, \emph{(b)}
and \emph{(c)} hold for this example, thus Theorem~\ref{th:everygamma}
says that $\gamma_c \approx 0.56984$ is the smallest confidence at
which the entailment holds.
\section{Closing remarks} \label{sec:closing}
Our study gives a useful handle on entailments among partial or
probabilistic implications. The very last comment of the previous
section is a good illustration of its power. However, there are a few
questions that arose and were not fully answered by our work.
For the forthcoming discussion, let us take $\gamma = (k-1)/k$ for
concreteness. The linear programming characterization in
Theorem~\ref{th:mainLP} gives an algorithm to decide if entailment
holds that is polynomial in $k$, the number of premises, but
exponential in $n$, the number of attributes. This is due to the
dimensions of the matrix that defines the dual LP: this is a $2^n
\times k$ matrix of rational numbers in the order of $1/k$ (for our
fixed $\gamma = (k-1)/k$). On the other hand, the characterization
theorem in Theorem~\ref{th:mainHGnew} reverses the situation: there
the algorithm is polynomial in $n$ but exponential in $k$. In order to
see this, first note that condition \emph{(a)} can be solved by
running $O(nk)$ Horn satisfiability tests of size $O(nk)$ each, as
discussed at the end of Section~\ref{sec:nice}. Second, conditions
\emph{(b)} and \emph{(c)} are really straightforward to check if the
sets are given as bit-vectors, say. So far we spent time polynomial in
both $n$ and $k$ in checking the conditions of the
characterization. The exponential in $k$ blow-up comes, however, from
the need to \emph{pass} to a subset $L \subseteq [k]$, as potentially
there are $2^k$ many of those sets to check. It does show, however,
that the general problem in the case of $\gamma \geq (k-1)/k$ is in
NP. This does not seem to follow from the linear programming
characterization by itself, let alone the definition of
entailment. But is it NP-hard? Or is there an algorithm that is
polynomial in both $k$ and $n$? One comment worth making is that an
efficient \emph{separation oracle} for the exponentially many
constraints in the LP of Theorem~\ref{th:mainLP} might well exist,
from which a polynomial-time algorithm would follow from the ellipsoid
method.
It is tempting to think that the search over subsets of $[k]$ can be
avoided when we start with a proper entailment. And indeed, this is
correct. However, we do not know if this gives a characterization of
proper entailment. In other words, we do not know if conditions
\emph{(a)}, \emph{(b)} and \emph{(c)}, by themselves, guarantee proper
entailment. The proof of the direction~\emph{3.} to~\emph{1.} in
Theorem~\ref{th:mainHGnew} does not seem to give this, and we suspect
that it does not. If they did, we would get an algorithm to check for
proper entailment that is polynomial in both $n$ and $k$.
From a wider and less theoretical perspective, it would be very
interesting to find real-life situations in problems of data analysis,
say, in which partial implications abound, but many are redundant. In
such situations, our characterization and algorithmic results could
perhaps be useful for detecting and removing such redundancies, thus
producing outputs of better quality for the final user. This was one
of the original motivations for the work in \cite{Balcazar}, and our
continuation here.
\bibliographystyle{IEEEtran}
\section{Introduction}
\hspace*{1em}Let $\mathcal X$ be a smooth algebraic orbifold (Def.\ref{def al orb} and Remark \ref{remark alg orb}) over an algebraically closed field $k$. We consider the moduli functor $\mathcal M$ of modified slope (Def.\ref{def mslope}) semistable torsion free sheaves on $\mathcal X$. Following \cite{hl}, \cite{lan}, \cite{sp}, we first define the following functor:
\[
\widehat{\mathcal M}: ({\rm Sch}/k)^o\longrightarrow (\rm Sets)
\]
as follows. Let $T$ be a scheme of finite type over $k$ and let $\widehat{\mathcal M}(T)$ be the set of isomorphism classes of $T$-flat families of torsion free semistable sheaves on $\mathcal X$. If $f:T^\prime\rightarrow T$ is a morphism of schemes, let $\widehat{\mathcal M}(f)$ be the map obtained by pulling back sheaves via the morphism $f_{\mathcal X}=\text{id}_{\mathcal X}\times f$, i.e. \[
\widehat{\mathcal M}(T)\longrightarrow\widehat{\mathcal M}(T^\prime),\quad [E]\longmapsto [f^*_{\mathcal X}E].
\]
Then, the moduli functor $\mathcal M$ is defined to be the quotient functor of $\widehat{\mathcal M}$ by the equivalence relation $\sim$:
\[
E\sim E^\prime,\quad\text{for $E,E^\prime\in\widehat{\mathcal M}(T)$ if and only if there is a line bundle $L$ on $T$ such that $E^\prime=p_2^*L\otimes E$},
\]
where $p_2:\mathcal X\times T\rightarrow T$ is the projection onto $T$. In general, the moduli functor $\mathcal M$ is not representable. In fact, if $\mathcal X$ is a projective scheme and there is a properly semistable sheaf on $\mathcal X$, then the moduli functor $\mathcal M$ cannot be represented (Lemma 4.1.2 in \cite{hl}). In the case that $\mathcal M$ is representable, Nironi has shown that the corresponding moduli scheme is proper over $k$ (Theorem 6.22 in \cite{fn}). However, by Grothendieck's valuative criteria, we can also consider the separatedness and properness of $\mathcal M$ directly. Indeed,
Langton \cite{lan} showed that the moduli functor of slope semistable torsion free sheaves on smooth projective varieties over $k$ is separated and proper. Maruyama \cite{mm}, and Mehta and Ramanathan \cite{mr}, then generalized Langton's results to Gieseker stability. In recent years, many problems concerning the moduli functor of semistable sheaves on algebraic orbifolds have attracted attention, but a similar result for algebraic orbifolds has been missing. For the researchers' convenience, we give a proof that the moduli functor of slope semistable torsion free sheaves on an algebraic orbifold is separated and proper. For the case of Gieseker stability, a similar result can be obtained following the lines of Maruyama \cite{mm} and of Mehta and Ramanathan \cite{mr}, thanks to the key Lemma \ref{lemm ext 1} on algebraic orbifolds (which corresponds to Proposition 6 in \cite{lan}). In the next paragraph, we give a precise description of the problem.\\
Assume that $R$ is a discrete valuation ring over $k$ with maximal ideal $(\pi)$ and quotient field $K$. Consider the following cartesian diagram:
\begin{equation*}
\xymatrix{
\mathcal X_K \ar@{^(->}[r]^i \ar[d] & \mathcal X_R \ar[d] & \mathcal X_k \ar[d] \ar@{_(->}[l]_{j}\\
\text{Spec}(K) \ar@{^(->}[r] & \text{Spec}(R) & \text{Spec}(k) \ar@{_(->}[l] }
\end{equation*}
where $\mathcal X_R=\mathcal X\times \text{Spec}(R)$, $\mathcal X_K=\mathcal X\times \text{Spec}(K)$ and
$\mathcal X_k=\mathcal X\times\text{Spec}(k)=\mathcal X$.
Consequently, we have:
\begin{enumerate}
\item $\mathcal M$ is separated if and only if whenever two families $E_R$, $E_R^\prime$ of torsion free semistable sheaves over $\text{Spec}(R)$ agree on the generic fiber $\mathcal X_K$, they agree on $\mathcal X_R$;
\item $\mathcal M$ is proper if and only if every torsion free semistable sheaf $E_K$ on $\mathcal X_K$ can be extended, uniquely up to isomorphism, to a flat family of torsion free sheaves on $\mathcal X_R$.
\end{enumerate}
We now state our main result:
\begin{theorem}
\ref{thm main 1}
Assume that $E_K$ is a torsion free sheaf on $\mathcal X_K$. Then
\begin{enumerate}
\item If $E_1$ and $E_2$ are two coherent subsheaves of $i_*E_K$ on $\mathcal X_R$ such that $i^*E_1=i^*E_2=E_K$ and $j^*E_1$, $j^*E_2$ are semistable torsion free sheaves on $\mathcal X_k$, at least one of which is stable, then there is an integer $p$ such that $E_1=\pi^p E_2$.
\item If $E_K$ is semistable, then there exists a coherent subsheaf $E\subseteq i_*E_K$ such that $i^*E=E_K$ and $j^*E$ is torsion free and semistable on $\mathcal X_k$.
\end{enumerate}
\end{theorem}
As an application of Theorem \ref{thm main 1}, in a forthcoming paper \cite{HJ} we use it to show that the Hitchin map on the moduli space of Higgs bundles on Deligne-Mumford curves is proper.
\section{Torsion free sheaves on algebraic orbifolds}\label{sect tor sh}
Throughout this paper, we work over a fixed algebraically closed field $k$, and all the morphisms, schemes, algebraic spaces and stacks in this paper are of finite type. In the following, we recall some basic facts about algebraic orbifolds and torsion free sheaves on them. For a detailed discussion, please refer to \cite{aov}, \cite{dm}, \cite{ak}, \cite{fn} and \cite{vistoli}.
\begin{definition}[Deligne-Mumford Tame stacks]
Let $\mathcal X$ be a Deligne-Mumford stack with coarse moduli space $p:\mathcal X\rightarrow X$. Then, $\mathcal X$ is tame if the pushforward functor $p_*:\text{QCoh}(\mathcal X)\rightarrow \text{QCoh}(X)$ is exact, where $\text{QCoh}(-)$ is the category of quasicoherent sheaves.
\end{definition}
\begin{definition}[Algebraic Orbifolds]\label{def al orb}
Let $\mathcal{X}$ be a Deligne-Mumford tame stack over $k$, which is isomorphic to a separated global quotient $\big[Z/G\big]$, where $Z$ is an algebraic space over $k$ and $G$ is a subgroup scheme (a locally closed subscheme which is a subgroup) of some $\text{GL}_{N,k}$. If the generic stabilizer of $\mathcal X$ is trivial, then $\mathcal X$ is called an algebraic orbifold over $k$.
\end{definition}
\begin{definition}\label{def irr and int}
A Deligne-Mumford stack $\mathcal X$ is called irreducible if it is not the union of two proper closed subsets, where the closed sets in $\mathcal X$ mean reduced closed substacks of $\mathcal X$. It is called integral if it is both irreducible and reduced.
\end{definition}
\begin{remark}\label{remk irreducible}
A Deligne-Mumford stack $\mathcal X$ is irreducible if and only if its coarse moduli space is irreducible. In fact, there is a bijection between the closed subsets of $\mathcal X$ and the closed subsets of $X$, as pointed out by Conrad in \cite{bc}.
\end{remark}
Nironi \cite{fn} introduces the notion of a projective (quasi-projective) Deligne-Mumford stack:
\begin{definition}[Projective (quasi-projective) Deligne-Mumford stack]
Let $\mathcal X$ be a Deligne-Mumford stack over a field $k$. We say $\mathcal X$ is projective (quasi-projective) over $k$ if it is a tame separated global quotient with projective (quasi-projective) coarse moduli scheme.
\end{definition}
\begin{remark}\label{remark alg orb}
In this paper, we only consider $\mathcal{X}$ to be an irreducible algebraic orbifold whose coarse moduli space is a projective scheme over $k$, i.e. $\mathcal X$ is projective.
\end{remark}
\begin{example}[Weighted projective lines]
The weighted projective lines $\mathbb{P}(n,m)$ are algebraic orbifolds when $m$ and $n$ are coprime.
\end{example}
For more examples, the reader can consult \cite{ak}.
As pointed out by Nironi, for a stack there are no very ample invertible sheaves unless it is an algebraic space. However, under certain hypotheses, there exist locally free sheaves, called generating sheaves, which behave like very ample sheaves.
\begin{definition}[Generating sheaf]
Let $\mathcal{X}$ be a Deligne-Mumford tame stack and let $\pi:\mathcal{X}\rightarrow X$ be the coarse moduli space of $\mathcal{X}$. A locally free sheaf $\mathcal E$ on $\mathcal X$ is said to be a generating sheaf if for any
quasi-coherent sheaf $F$, the following map
\[
\pi^{\ast}(\pi_{\ast}({\mathcal E}^\vee\otimes F))\otimes\mathcal E\longrightarrow F
\]
is surjective.
\end{definition}
Olsson and Starr proved the existence of generating sheaves and also that they are stable under arbitrary base change on the coarse moduli space.
\begin{proposition}[\cite{os}]
\begin{enumerate}
\item Let $\mathcal X$ be a separated Deligne-Mumford tame stack which is a global quotient over $k$, then there is a locally free sheaf $\mathcal E$ over $\mathcal X$ which is a generating sheaf for $\mathcal X$.
\item Let $\pi:\mathcal X\rightarrow X$ be the moduli space of $\mathcal X$, let $f:X^\prime\rightarrow X$ be a morphism of algebraic spaces over $k$, and form the following cartesian diagram:
\[
\xymatrix{
\mathcal X^\prime \ar[d]_{p} \ar[r] & \mathcal X \ar[d]^{\pi} \\
X^\prime \ar[r]^{f} & X }
\]
Then $p^*\mathcal E$ is a generating sheaf for $\mathcal X^\prime$.
\end{enumerate}
\end{proposition}
For a Deligne-Mumford stack $\mathcal X$ with projective coarse moduli scheme over a field of characteristic zero, the existence of a generating sheaf is equivalent to $\mathcal X$ being a global quotient stack.
\begin{proposition}[\cite{ak}]
For a Deligne-Mumford stack $\mathcal X$ over $k$ with $\operatorname{char} k=0$, the following are equivalent.
\begin{enumerate}
\item $\mathcal X$ has a projective coarse moduli space and is a quotient stack.
\item $\mathcal X$ has a projective coarse moduli space and possesses a generating sheaf.
\item $\mathcal X$ can be embedded into a smooth Deligne-Mumford stack with projective
coarse moduli space.
\end{enumerate}
\end{proposition}
As in the case of schemes, the support of a coherent sheaf on a Deligne-Mumford stack can be defined in the following way.
\begin{definition}[Support of Coherent sheaf]\label{def sp ch}
Let $\mathcal X$ be a Deligne-Mumford stack over $k$ and let $F$ be a coherent sheaf on $\mathcal X$. The support
${\rm supp}(F)$ of $F$ is the closed substack defined by the sheaf of ideals
\[
\xymatrix@C=0.5cm{
0 \ar[r] & \mathcal I_{F} \ar[r] & \mathcal{O}_{\mathcal X} \ar[r] & {\mathscr{H}om}_{\mathcal{O}_{\mathcal X}}(F,F)}.
\]
\end{definition}
\begin{definition}[Torsion free sheaf]\label{def tfs}
Let $\mathcal X$ be a projective Deligne-Mumford stack over $k$. A coherent sheaf $F$ is said to
be a torsion free sheaf if for every nonzero subsheaf $G\subseteq F$, the dimension of $\text{supp}(G)$
is $\text{dim}\mathcal X$.
\end{definition}
The torsion freeness of a coherent sheaf on a Deligne-Mumford stack is equivalent to that of its restriction to an \'etale covering (Remark 3.3 in \cite{fn}).
\begin{lemma}[\cite{fn}]
With the same hypothesis as above, $F$ is a torsion free sheaf if and only if there is an \'etale covering
$f:U\rightarrow\mathcal X$ such that the restriction of $F$ to $U$ is torsion free.
\end{lemma}
\begin{proposition}\label{pro rk}
Assume that $\mathcal X$ is an integral projective Deligne-Mumford stack over $k$ and $F$ is a coherent sheaf on
$\mathcal X$. Then, there exists a nonempty open substack $\mathcal X^o$ such that the restriction $F|_{\mathcal X^o}$ of $F$ to $\mathcal X^o$ is locally free.
\end{proposition}
\begin{proof}
Take an \'etale covering $f:U\rightarrow\mathcal X$ such that $U$ is of finite type over $k$. We have the following cartesian diagram:
\[
\xymatrix{
U\underset{\mathcal X}{\times}U \ar[d]_{\quad pr_1} \ar[r]^{\quad pr_2}
& U \ar[d]^{f} \\
U \ar[r]_{f} & \mathcal X }
\]
Denote $U\underset{\mathcal X}{\times}U$ by $R$. Then $R\underset{t}{\overset{s}{\rightrightarrows}}U$
is an algebraic groupoid, where $s=pr_1$ and $t=pr_2$.
Denote $f^*F$ by $F^\prime$. By the $2$-commutativity of the above diagram, there is an isomorphism
\[
\phi: s^{*}F^\prime\longrightarrow t^{*}F^\prime.
\]
Because $U$ is reduced, there exists a unique maximal nonempty open subset $U^\prime\subset U$
such that $F^\prime|_{U^\prime}$ is locally free. By the flatness of the morphism $s$, $s^{-1}(U^\prime)$ is the unique maximal open subset on which $s^*F^\prime$ is locally free. Similarly, $t^{-1}(U^\prime)$ is the unique maximal open subset on which the restriction of $t^*F^\prime$ is locally free. Thus, $s^{-1}(U^\prime)=t^{-1}(U^\prime)$, i.e. $U^\prime\subset U$ descends to an open substack $\mathcal X^o$ of $\mathcal X$ such that $F|_{\mathcal X^o}$ is locally free.
\end{proof}
\begin{definition}[Rank of coherent sheaf]\label{def rk}
Under the hypothesis of Proposition \ref{pro rk}, we can define the rank ${\rm rk}(F)$ of $F$ to
be the rank of $F|_{\mathcal X^o}$.
\end{definition}
In order to define a notion of Gieseker stability on projective Deligne-Mumford stacks, Nironi introduces the modified Hilbert polynomial in \cite{fn}. First of all, we recall the notion of a polarization on Deligne-Mumford stacks.
\begin{definition}[Polarization]\label{def polar}
For a projective Deligne-Mumford stack $\mathcal{X}$, a polarization of $\mathcal X$ is a pair $(\mathcal{E},\mathcal{O}_{X}(1))$, where $\mathcal E$ is a generating sheaf and $\mathcal{O}_X(1)$ is a very ample invertible sheaf on $X$.
\end{definition}
\begin{definition}[Modified Hilbert Polynomial]\label{def m hilbert p}
Fix a polarization $(\mathcal E,\mathcal{O}_{X}(1))$ on a projective Deligne-Mumford stack $\mathcal X$. For a coherent sheaf $F$ on $\mathcal X$, the modified Hilbert polynomial $P_F$ of $F$ is defined by
\[
P_F(m)=\chi(\pi_\ast(F\otimes{\mathcal E}^\vee)\otimes\mathcal{O}_{X}(m)), \]
where $\chi(\pi_\ast(F\otimes{\mathcal E}^\vee)\otimes\mathcal{O}_{X}(m))$ is
the Euler characteristic of $\pi_\ast(F\otimes{\mathcal E}^\vee)\otimes\mathcal{O}_{X}(m)$.
\end{definition}
\begin{remark}\label{remk hp}
In general, the modified Hilbert polynomial
\[
P_{F}(m)=\underset{i=0}{\overset{d}{\sum}}\frac{a_i(F)}{i!}\cdot m^i,
\]
where $d$ is the dimension of $F$ and the $a_i(F)$ are integers. In the special case that $F$ is a torsion free sheaf (Def.\ref{def tfs}) on a projective algebraic orbifold $\mathcal X$ of dimension $n$, the coefficient $a_n(F)$ of the leading term is $\text{rk}(F)\,\text{rk}(\mathcal E)\,\text{deg}(\mathcal O_{X}(1))$, by the Grothendieck-Riemann-Roch formula in \cite{bt}.
\end{remark}
\begin{definition}[Modified Slope]\label{def mslope}
Let $\mathcal X$ be an integral projective Deligne-Mumford stack over $k$ and let $F$ be a coherent sheaf of dimension $d$ on $\mathcal X$. The modified Hilbert polynomial of $F$ is
$P_{F}(m)=\underset{i=0}{\overset{d}{\sum}}\frac{a_i(F)}{i!}\cdot m^i$.
The modified slope $\mu(F)$ of $F$ is
\[
\mu(F)=\frac{a_{d-1}(F)}{a_{d}(F)}.
\]
\end{definition}
Using the modified slope, we can introduce the notions of semistable (stable) torsion free sheaves.
\begin{definition}[Stability]
A torsion free sheaf $E$ is said to be semistable (resp. stable) if for all coherent subsheaves $F\subset E$ with $0<{\rm rk}(F)<{\rm rk}(E)$, we have
\[\mu(F)\leq\mu(E)\quad({\rm resp}.\quad\mu(F)<\mu(E)). \]
If $E$ is not semistable, we say $E$ is unstable.
\end{definition}
\begin{definition}[Subbundle of torsion free sheaf]\label{def subbd}
Suppose $E$ is a coherent subsheaf of torsion free sheaf $F$. If the quotient sheaf $F/E$ is also a torsion free sheaf, we say $E$ is a subbundle of $F$.
\end{definition}
Indeed, for every coherent subsheaf of a torsion free sheaf, there is a unique minimal subbundle containing it. We have the following proposition.
\begin{proposition}\label{prop subbd}
Let $\mathcal X$ be an integral projective Deligne-Mumford stack over $k$ and let $F$ be a torsion free sheaf on $\mathcal X$. For a coherent subsheaf $G$ of $F$, there is a unique subbundle $G^\prime\subset F$, such that
\begin{enumerate}
\item $G\subset G^\prime$ and $rk(G^\prime)=rk(G)$;
\item ${F}/{G^\prime}$ is torsion free.
\end{enumerate}
\end{proposition}
\begin{proof}
We have the following two exact sequences:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & G\ar[rr] && F \ar[rr]^{j} && F/G \ar[r] & 0 }, \]
\[
\xymatrix@C=0.5cm{
0 \ar[r] & T(F/G) \ar[rr]&& F/G \ar[rr] && Q \ar[r] & 0 }, \]
where $T(F/G)$ is the maximal torsion subsheaf of $F/G$.
Set $G^\prime = j^{-1}(T(F/G))$; then $F/G^\prime = Q$, and since $T(F/G)$ is a torsion sheaf, ${\rm rk}(G^\prime)={\rm rk}(G)$.
We check the uniqueness of $G^\prime$. Suppose there are two such subbundles $G_1$ and $G_2$.
Then ${\rm rk}(G_1\cap G_2)={\rm rk}(G)$. Also, there are two exact sequences
\[
\xymatrix@C=0.5cm{
0 \ar[r] &G_1\cap G_2 \ar[rr]&& G_1 \ar[rr] && (G_1+G_2)/G_2\ar[r] & 0},
\]
\[
\xymatrix@C=0.5cm{
0 \ar[r] &G_1\cap G_2 \ar[rr]&& G_2 \ar[rr] && (G_1+G_2)/G_1\ar[r] & 0}.
\]
Since $(G_1+G_2)/G_2$ is a subsheaf of the torsion free sheaf $F/G_2$, it is either zero or of positive rank. In the latter case, the first exact sequence would give
\[
{\rm rk}(G_1\cap G_2)<{\rm rk}(G_1),
\]
which is impossible. Hence $G_1+G_2\subseteq G_2$, and by the same argument applied to the second sequence, $G_1+G_2\subseteq G_1$. So, $G_1=G_2$.
\end{proof}
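A minimal example of the saturation $G^\prime$ (our illustration, with the stack taken to be an honest curve):

```latex
% Hedged example: \mathbb{P}^1 viewed as an orbifold with trivial
% stabilizers; p is a closed point.
\begin{remark}
Let $F=\mathcal O_{\mathbb P^1}$ and let
$G=\mathcal O_{\mathbb P^1}(-p)\subset F$ be the ideal sheaf of a closed
point $p$. Then $F/G\cong k(p)$ is torsion, so $T(F/G)=F/G$ and
$G^\prime=j^{-1}(T(F/G))=F$. Indeed,
$\mathrm{rk}(G^\prime)=\mathrm{rk}(G)=1$ and $F/G^\prime=0$ is (trivially)
torsion free, as required by Proposition \ref{prop subbd}.
\end{remark}
```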
\begin{definition}[Join of sheaves]
Suppose $F_1$ and $F_2$ are two coherent subsheaves of a torsion free sheaf $F$ on an integral projective Deligne-Mumford stack $\mathcal X$ over $k$. The unique subbundle $F_1\vee F_2$ of $F$ from Proposition \ref{prop subbd}
containing $F_1+F_2$ is called the join of $F_1$ and $F_2$.
\end{definition}
\begin{proposition}\label{pro inq}
If $F_1$ and $F_2$ are two subbundles of a torsion free sheaf $E$ on an integral projective Deligne-Mumford stack $\mathcal X$ over $k$, then
\[
a_{n-1}(F_1\vee F_2)+a_{n-1}(F_1\cap F_2)\geq a_{n-1}(F_1)+ a_{n-1}(F_2).
\]
\end{proposition}
\begin{proof}
By the two exact sequences
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_1\cap F_2 \ar[rr]&& F_1 \ar[rr] && ( F_1+F_2)/F_2\ar[r] & 0}
\]
and
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_1\cap F_2 \ar[rr]&& F_2 \ar[rr] && (F_1+F_2)/F_1\ar[r] & 0}, \]
we have
\[
P_{F_1\cap F_2}+P_{(F_1+F_2)/F_1}= P_{F_2},\quad
P_{F_1\cap F_2}+P_{(F_1+F_2)/F_2}= P_{F_1}.
\]
So,
\[
a_{n-1}(F_1\cap F_2)+a_{n-1}((F_1+F_2)/F_2)= a_{n-1}(F_1)
\]
and
\[
a_{n-1}(F_1\cap F_2)+a_{n-1}((F_1+F_2)/F_1)= a_{n-1}(F_2).
\]
Also, there is an exact sequence
\[
\xymatrix@C=0.5cm{
0 \ar[r]&(F_1+F_2)/F_1\ar[rr]&&(F_1\vee F_2)/F_1 \ar[rr]&&
(F_1\vee F_2)/(F_1+F_2) \ar[r] & 0 }.
\]
Hence, we have
\[
P_{(F_1+F_2)/F_1}+P_{(F_1\vee F_2)/(F_1+F_2)}=
P_{(F_1\vee F_2)/F_1}.
\]
Therefore,
\[
a_{n-1}((F_1+F_2)/F_1)+a_{n-1}((F_1\vee F_2)/(F_1+F_2))=
a_{n-1}((F_1\vee F_2)/F_1).
\]
And also, $a_{n-1}((F_1\vee F_2)/(F_1+F_2))\geq 0$,
because $(F_1\vee F_2)/(F_1+F_2)$ is a torsion sheaf.
So,
\[
a_{n-1}(F_1\cap F_2)+a_{n-1}((F_1\vee F_2)/F_1)\geq a_{n-1}(F_2).
\]
By the exact sequence
\[
\xymatrix@C=0.5cm{
0 \ar[r] &F_1 \ar[rr]&& F_1\vee F_2 \ar[rr] &&(F_1\vee F_2)/ F_1 \ar[r] & 0 },
\]
we have
\[
a_{n-1}(F_1)+ a_{n-1}((F_1\vee F_2)/F_1)= a_{n-1}(F_1\vee F_2). \]
Then,
\[
a_{n-1}(F_1\vee F_2)+a_{n-1}(F_1\cap F_2)\geq a_{n-1}(F_1)+ a_{n-1}(F_2).
\]
\end{proof}
Following \cite{lan}, we introduce the $\beta$-invariant.
\begin{definition}
Let $E$ be a fixed torsion free sheaf on an integral projective Deligne-Mumford stack $\mathcal X$ over $k$. For every torsion free sheaf $F$ on $\mathcal X$, we define
the $\beta$-invariant by
\[
\beta(F)=a_n(E)a_{n-1}(F)-a_{n-1}(E)a_n(F). \]
\end{definition}
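The $\beta$-invariant compares modified slopes with the denominators cleared. The following one-line computation is our own gloss; it assumes $E$ and $F$ both have dimension $n$, so that $a_n(E),a_n(F)>0$ by Remark \ref{remk hp}:

```latex
% Hedged computation: assumes E and F are torsion free of full dimension n,
% so the leading coefficients a_n(E), a_n(F) are positive.
\[
\beta(F)=a_n(E)a_{n-1}(F)-a_{n-1}(E)a_n(F)
        =a_n(E)\,a_n(F)\big(\mu(F)-\mu(E)\big),
\]
so $\beta(F)\leq 0$ if and only if $\mu(F)\leq\mu(E)$, and $\beta(E)=0$.
```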
\begin{remark}
By Proposition \ref{prop subbd}, to verify semistability it suffices to test subbundles: if every proper subbundle $F\subset E$ satisfies $\beta(F)\leq 0$, then $E$ is semistable. \end{remark}
\begin{proposition}\label{prop pro beta}
Let $\mathcal X$ be an integral projective Deligne-Mumford stack over $k$.
\begin{enumerate}
\item If $F_1$ and $F_2$ are two subbundles of $E$ on $\mathcal X$, then
\[
\beta(F_1)+\beta(F_2)\leq \beta(F_1\vee F_2) + \beta(F_1\cap F_2).\]
\item If $\xymatrix@C=0.5cm{ 0 \ar[r] & F \ar[r] & G \ar[r]& K \ar[r] & 0 }$ is an exact sequence
of torsion free sheaves on $\mathcal X$, then
\[
\beta(F)+\beta(K)=\beta(G). \]
\end{enumerate}
\end{proposition}
\begin{proof}
For the first statement, we have
\begin{align*}
\beta(F_1)+\beta(F_2)&=a_n(E)a_{n-1}(F_1)-a_{n-1}(E)a_n(F_1)+
a_n(E)a_{n-1}(F_2)-a_{n-1}(E)a_n(F_2)\\
&=a_n(E)\big(a_{n-1}(F_1)+a_{n-1}(F_2)\big)-a_{n-1}(E)\big(a_n(F_1)+a_n(F_2)\big)\\
&\leq a_n(E)\big(a_{n-1}(F_1\vee F_2)+a_{n-1}(F_1\cap F_2)\big)-a_{n-1}(E)\big(a_n(F_1)+a_n(F_2)\big),
\end{align*}
where the inequality follows from Proposition \ref{pro inq} and $a_n(E)>0$.
By the exact sequence $\xymatrix@C=0.5cm{0 \ar[r] & F_1\cap F_2 \ar[rr]&& F_1\oplus F_2
\ar[rr] && F_1+F_2\ar[r] & 0 }$, we have
\[
P_{F_1\oplus F_2}=P_{F_1\cap F_2} + P_{F_1+F_2}.
\]
So,
\[ a_{n}(F_1+F_2)+a_{n}(F_1\cap F_2)= a_{n}(F_1)+ a_{n}(F_2).
\]
Also, $a_n(F_1+F_2)=a_n(F_1\vee F_2)$, since $(F_1\vee F_2)/(F_1+F_2)$ is a torsion sheaf. Then,
\[
\beta(F_1)+\beta(F_2)\leq \beta(F_1\vee F_2) + \beta(F_1\cap F_2).
\]
The second statement is obvious.
\end{proof}
Following \cite{lan}, we consider the set $\Gamma(E)$ of proper subbundles of $E$ with the following property:
\[
\Gamma(E)=\{F: \text{$F$ is a proper subbundle of $E$ such that $\beta(G)<\beta(F)$ for every proper subsheaf $G\subsetneq F$}\}.
\]
\begin{remark}
The set $\Gamma(E)$ is nonempty; in fact, the zero sheaf is in $\Gamma(E)$. In addition, if $E$ is semistable, then the zero sheaf is the only element of $\Gamma(E)$.
\end{remark}
\begin{proposition}\label{prop beta bd}
Let $F$ be a maximal element of $\Gamma(E)$. For every subbundle $G\supseteq F$, we have
$\beta(G)\leq\beta(F)$.
\end{proposition}
\begin{proof}
Suppose $\beta(G)>\beta(F)$ for some subbundle $G\supseteq F$. Let $H\subseteq G$ be a minimal subbundle such that
$\beta(H)>\beta(F)$ and $F\subseteq H$. Let $I$ be a proper subbundle of $H$. If $F\subseteq I$, then $\beta(I)\leq\beta(F)<\beta(H)$, by the minimality of $H$. If
$F\nsubseteq I$, then $F\cap I\subsetneq F$, so $F\in\Gamma(E)$ gives
\[
\beta(I\vee F)-\beta(I)\geq \beta(F)-\beta(F\cap I)>0. \]
Moreover, $\beta(H)\geq\beta(I\vee F)$: either $I\vee F=H$, or $I\vee F$ is a proper subbundle of $H$ containing $F$, in which case $\beta(I\vee F)\leq\beta(F)<\beta(H)$ by the minimality of $H$. In both cases, $\beta(H)>\beta(I)$.
Therefore, $H\in\Gamma(E)$ and $H\supsetneq F$, contradicting the maximality of $F$.
\end{proof}
\begin{corollary}\label{cor beta}
There is a unique maximal element $F\in\Gamma(E)$. Also, for every subbundle $B\subseteq E$, $\beta(B)\leq\beta(F)$, with equality only if $B\supseteq F$.
\end{corollary}
\begin{proof}
If there are two maximal elements $F_1$ and $F_2$ in $\Gamma(E)$, then by Proposition \ref{prop pro beta},
\[
\beta(F_1\vee F_2)-\beta(F_2)\geq\beta(F_1)-\beta(F_1\cap F_2).
\]
Since $\beta(F_1\vee F_2)\leq\beta(F_1)$ by Proposition \ref{prop beta bd}, we get $\beta(F_1\cap F_2)\geq\beta(F_2)$. As $F_2\in\Gamma(E)$, this forces $F_1\cap F_2=F_2$, i.e., $F_2\subseteq F_1$; by symmetry, $F_1\subseteq F_2$. Hence, $F_2=F_1$.\\
By $\beta(F\vee B)-\beta(B)\geq\beta(F)-\beta(F\cap B)\geq 0$ and $\beta(F)\geq\beta(F\vee B)$,
we get
\[
\beta(F)\geq\beta(B)\text{ with equality only if $B\supseteq F$}.
\]
\end{proof}
\begin{remark}
In the above corollary, $\beta(F)$ is the maximum value of the $\beta$-invariant over all subbundles of $E$. Also,
$\mathrm{Hom}_{\mathcal O_{\mathcal X}}(F,E/F)=0$.
\end{remark}
At the end of this section, we show that semistability of torsion free sheaves is preserved under extension of the base field $k$ (here $k$ is not necessarily algebraically closed).
\begin{proposition}\label{pro ex ss}
Let $k^\prime$ be an extension field of $k$. We have the following cartesian diagram:
\[
\xymatrix{
\mathcal X\times{\rm{Spec}}(k^\prime) \ar[d]_{p_2} \ar[r]^{\qquad p_1} &\mathcal X \ar[d] \\
{\rm{Spec}}(k^\prime) \ar[r] & {\rm{Spec}}(k)}
\]
Assume that the field $k$ is infinite when $k^\prime/k$ is not algebraic. Then $E^\prime=p_1^*E$ is semistable if and only if $E$ is semistable.
\end{proposition}
\begin{proof}
This can be proved as Proposition 3 in \cite{lan}; a proof can also be found in \cite{fn}.
\end{proof}
\section{The Main Results}
From now on, $\mathcal X$ is an $n$-dimensional smooth projective algebraic orbifold over $k$ with a fixed polarization $(\mathcal E,\mathcal O_{\mathcal X}(1))$. Let $R\supseteq k$ be a discrete valuation ring
with maximal ideal $m=(\pi)$, and let $K$ be the quotient field of $R$. Consider the following cartesian diagram:
\begin{equation*}
\xymatrix{
\mathcal X_K \ar@{^(->}[r]^i \ar[d] & \mathcal X_R \ar[d] & \mathcal X_k \ar[d] \ar@{_(->}[l]_{j}\\
\text{Spec}(K) \ar@{^(->}[r] & \text{Spec}(R) & \text{Spec}(k) \ar@{_(->}[l] }
\end{equation*}
where $\mathcal X_R=\mathcal X\times \text{Spec}(R)$, $\mathcal X_K=\mathcal X\times \text{Spec}(K)$ and
$\mathcal X_k=\mathcal X\times\text{Spec}(k)=\mathcal X$. $i:\mathcal X_K\rightarrow\mathcal X_R$ is the natural open immersion and $j:\mathcal X_k\rightarrow\mathcal X_R$ is the natural closed immersion. Our goal is to prove the following result:
\begin{theorem}\label{thm main 1}
Assume that $E_K$ is a torsion free sheaf on $\mathcal X_K$. Then
\begin{enumerate}
\item If $E_1$ and $E_2$ are two coherent subsheaves of $i_*E_K$ on $\mathcal X_R$ such that $i^*E_1=i^*E_2=E_K$ and $j^*E_1$, $j^*E_2$ are semistable torsion free sheaves on $\mathcal X_k$, at least one of which is stable, then there is an integer $p$ such that $E_1=\pi^p E_2$.
\item If $E_K$ is semistable, then there exists a coherent subsheaf $E\subseteq {i_*}E_K$ such that $i^*E=E_K$ and $j^*E$ is torsion free and semistable on $\mathcal X_k$.
\end{enumerate}
\end{theorem}
We first state a lemma, which corresponds to Proposition 5 in \cite{lan}.
\begin{lemma}\label{lemm shp}
If $E_1$ and $E_2$ are two torsion free sheaves on $\mathcal X_{R}$ such that $i^*E_1=i^*E_2$, then the modified Hilbert polynomials $P_{j^*E_1}(m)=P_{j^*E_2}(m)$. In particular, $a_{n-1}(j^*E_1)=a_{n-1}(j^*E_2)$.
\end{lemma}
\begin{proof}
Since the field $k$ is algebraically closed and $R$ is a regular local ring, $\mathcal X_R$ is integral and smooth over $\text{Spec}(R)$. Then, a torsion free sheaf on $\mathcal X_R$ is flat over $\text{Spec}(R)$, since torsion free modules over valuation rings are flat. By Lemma 3.16 in \cite{fn}, $P_{j^*E_1}(m)=P_{j^*E_2}(m)$.
\end{proof}
$\mathcal X$ has an open dense substack $\mathcal X^o$ which is an irreducible smooth variety over $k$. Let $\gamma:\mathcal X^o\rightarrow \mathcal X$ be the corresponding open immersion. Then $\mathcal X_K^o=\mathcal X^o\times\text{Spec}(K)$ and
$\mathcal X_k^o=\mathcal X^o\times\text{Spec}(k)$ are also irreducible and smooth. Let $\Xi$ be the generic point of $\mathcal X_K^o$
and let $\xi$ be the generic point of $\mathcal X_k^o$. We have the following cartesian diagram:
\[
\xymatrix{
\mathcal X_K^o \ar[d] \ar[r] & \mathcal X_R^o \ar[d] & \mathcal X_k^o \ar[l] \ar[d]\\
\mathcal X_K \ar[r] & \mathcal X_R & \mathcal X_k \ar[l] }
\]
Let $E_K$ be a torsion free sheaf of rank $r$ on $\mathcal X_K$. Denote the stalks of $\mathcal O_{\mathcal X^o_R}$ at $\Xi$ and $\xi$ by $\mathcal O_{\Xi}$ and $\mathcal O_{\xi}$, respectively. Since $\mathcal X^o_K$ is an integral scheme, the stalk $(E_K)_{\Xi}$ of $E_K|_{\mathcal X^o_K}$ at $\Xi$ is a free $\mathcal O_{\Xi}$-module.
\begin{lemma}\label{lemm ext 1}
Suppose $M\subset(E_K)_{\Xi}$ is a free rank $r$ $\mathcal O_\xi$-submodule of $(E_K)_{\Xi}$. Then there exists a unique torsion free sheaf $E\subseteq i_*E_K$ on $\mathcal X_R$ such that $i^*E=E_K$, $E_\xi=M$, and $j^*E$ is a torsion free sheaf on $\mathcal X_k$.
\end{lemma}
\begin{proof}
As above, $\mathcal O_\xi$ is the stalk of $\mathcal O_{\mathcal X_R^o}$ at the generic point $\xi$ of $\mathcal X_k^o$ in $\mathcal X_R^o$. Then, there is a natural morphism $\beta_1:\rm Spec (\mathcal O_\xi)\rightarrow \mathcal X_R^o$. Besides, $\Xi$ is the generic point of $\mathcal X_R^o$ and $\mathcal X_R^o$ is integral. So, there are two natural morphisms $\alpha:\Xi\rightarrow \rm{Spec}(\mathcal O_\xi)$ and $\beta_2:\Xi\rightarrow\mathcal X_K^o$. Let $i^o:\mathcal X_K^o\rightarrow \mathcal X_R^o$ be the open immersion obtained through base change from the open immersion ${\rm Spec}(K)\hookrightarrow {\rm Spec}(R)$. These morphisms form the first square in the following diagram:
\begin{equation*}\label{diag 1}
\begin{split}
\xymatrix{
\text{Spec}(\mathcal O_{\xi})\ar[r]^{\quad\beta_1} & \mathcal X_R^o \ar[r]^{\gamma_R} & \mathcal X_R \\
\Xi \ar[r]^{\beta_2} \ar[u]_{\alpha} & \mathcal X_K^o \ar[u]_{i^o} \ar[r]^{\gamma_K} &
\mathcal X_K \ar[u]_{i} }
\end{split}\tag{\rm{A}}
\end{equation*}
The second square is cartesian. Denote the torsion free sheaf on $\rm Spec(\mathcal O_\xi)$ corresponding to the module $M$ by $\mathcal M$. Similarly, let $\mathcal N$ be the free sheaf on $\Xi$ corresponding to $(E_K)_{\Xi}$.
\begin{center}\label{claim 1}
\textbf{Claim}: $E=i_*E_K\cap(\gamma_R\circ\beta_1)_*\mathcal M$ satisfies the conditions in the conclusion of Lemma \ref{lemm ext 1}.
\end{center}
\textbf{First step}: we explain how the intersection of $i_*{E_K}$ and $(\gamma_R\circ\beta_1)_*\mathcal M$ is formed inside $(\gamma_R\circ\beta_1\circ\alpha)_*\mathcal N$.\\
By the inclusion $M\subseteq(E_K)_{\Xi}$, we have the inclusion:
\begin{equation*}\label{inclu a}
{(\gamma_R\circ\beta_1)}_*\mathcal M \subseteq (\gamma_R\circ\beta_1)_*(\alpha_*\mathcal N).\tag{a}
\end{equation*}
In addition, since $\Xi$ is the generic point of $\mathcal X_K^o$, there is another inclusion:
\begin{equation*}\label{inclu b}
(i\circ{\gamma_K})_*({\gamma_K}^*E_K)\subseteq (i\circ\gamma_K)_*({\beta_2}_*\mathcal N).\tag{b}
\end{equation*}
By the diagram (\ref{diag 1}), we get ${(i\circ\gamma_K\circ\beta_2)}_*\mathcal N ={{(\gamma_R\circ\beta_1\circ\alpha)}_*}\mathcal N$. In the following, we show that the morphism:
\begin{equation}\label{equ sec 1}
E_K\longrightarrow {\gamma_K}_*({\gamma_K}^*E_K)
\end{equation}
obtained by adjunction is injective. Indeed, the coarse moduli space of $\mathcal X_K$ is $X_K=X\times {\rm Spec}(K)$, and $X_K$ is irreducible. By Remark \ref{remk irreducible}, $\mathcal X_K$ is irreducible. Also, $\mathcal X_K$ is reduced. Then, $\mathcal X_K$ is integral. For every \'etale morphism $f:U\rightarrow\mathcal X$ from an irreducible smooth variety $U$ over $k$ to $\mathcal X$, we have the cartesian diagram:
\begin{equation*}\label{diag 2}
\begin{split}
\xymatrix{
\mathcal X^o_K \ar[r]^{\gamma_K} & \mathcal X_K \\
U^o_K \ar[u]^{f_K^o} \ar[r]^{\gamma^\prime_K} & U_K \ar[u]^{f_K}}
\end{split}\tag{\rm{B}}
\end{equation*}
where $U_K=U\times{\rm Spec}(K)$. Pulling back the homomorphism (\ref{equ sec 1}) to $U_K$, we get
\begin{equation}\label{equ sec 2}
f_K^*E_K\longrightarrow f_K^*{\gamma_K}_*({\gamma_K}^*E_K).
\end{equation}
By the stacky version of the flat base change theorem (Corollary \rm{A}.2.2 in \cite{sb} and \rm{A}.3.4 in \cite{sb1}), we have
\begin{equation}\label{equ sec 3}
f_K^*{\gamma_K}_*({\gamma_K}^*E_K)={\gamma_K^\prime}_*{f^o_K}^*({\gamma_K}^*E_K).
\end{equation}
On the other hand, ${\gamma_K^\prime}^*f_K^*E_K={f^o_K}^*{\gamma_K}^*E_K$. Then, the homomorphism (\ref{equ sec 2}) is
\begin{equation}\label{equ sec 4}
f_K^*E_K\longrightarrow{\gamma_K^\prime}_*{\gamma_K^\prime}^*f_K^*E_K .
\end{equation}
Because $U$ is integral and $f_K^*E_K$ is torsion free, the homomorphism (\ref{equ sec 4}) is injective. Thus, the homomorphism (\ref{equ sec 1}) is injective. So, $i_*{E_K}\longrightarrow i_*{\gamma_K}_*{\gamma_K}^*E_K$ is injective.
Hence, by (\ref{inclu a}), (\ref{inclu b}) and the diagram (\ref{diag 1}), we have the following two injections into the same sheaf:
\begin{equation}
\xymatrix{
& & 0 \ar[d] \\
& &{(\gamma_R\circ\beta_1)}_*\mathcal M \ar[d] \\
0 \ar[r] & i_*E_K \ar[r] & {{(\gamma_R\circ\beta_1\circ\alpha)}_*}\mathcal N}
\end{equation}
Thus, $E =i_*E_K\cap{(\gamma_R\circ\beta_1)}_*\mathcal M$ is a quasicoherent sheaf on $\mathcal X_R$. This completes the first step.\\
\textbf{Second step}: We check that the sheaf $E$ defined above is a torsion free coherent sheaf. It suffices to check this locally in the \'etale topology.
Suppose $\theta:{\rm{Spec}}(A)\rightarrow \mathcal X$ is an \'etale morphism and ${\rm{Spec}}(A)$ is a smooth irreducible variety over $k$. We have the cartesian diagram
\[
\xymatrix{
\mathcal X^o \ar[r] & \mathcal X \\
V \ar[r]\ar[u]^{\phi} & \text{Spec}(A)\ar[u]^{\theta} }
\]
Since $\phi$ is an \'etale morphism of finite type between irreducible smooth varieties, $\phi$ is a generically finite dominant map, i.e., $\phi^{-1}(\xi)$ is a finite set. By Exercise 3.7 on page 91 of \cite{Ha},
there is an open dense subset $i_W:W\rightarrow\mathcal X^o$ such that the morphism
$\phi^{\prime}:\phi^{-1}(W)\rightarrow W$ is finite and \[
\xymatrix{
W \ar[r]^{i_W} & \mathcal X^o \ar[r] & \mathcal X \\
W^{\prime}=\phi^{-1}(W) \ar[r] \ar[u]^{\phi^\prime}& V \ar[r] \ar[u]^{\phi} & \text{Spec}(A)\ar[u]^{\theta} }
\]
is cartesian. Recall that $W^{\prime}=\phi^{-1}(W)$. By base change, we have
\begin{equation}\label{diag 3}
\begin{split}
\xymatrix{
\Xi\ar[r]^{\alpha \qquad}& \text{Spec}(\mathcal O_{\xi}) \ar[r]^{\beta_3} &W\times\text{Spec}(R)\ar[r]^{\qquad i_{W,R}}&\mathcal X_R^o\ar[r]^{\gamma_1}&\mathcal X_R\\
\Xi^\prime \ar[u]^{\phi_{\Xi}} \ar[r]^{\alpha^\prime\qquad}&\text{Spec}(\mathcal O_{V_R,\xi^\prime})\ar[r]^{\beta_3^\prime}
\ar[u]^{\phi_{\xi}} & W^{\prime}\times\text{Spec}(R) \ar[r]^{i_{W^\prime,R}} \ar[u]^{\phi^{\prime}_R} &V\times\text{Spec}(R)\ar[r]^{\gamma_1^\prime}\ar[u]^{\phi_R} & \text{Spec}(A\otimes_kR)\ar[u]^{\theta_R}
}
\end{split}\tag{C}
\end{equation}
where $\Xi^{\prime}$ is the generic point of $V_R=V\times{\rm Spec}(R)$ and $\xi^\prime$ is the generic point of the closed subscheme $W^{\prime}\times{\rm Spec}(k)\hookrightarrow W^{\prime}\times{\rm Spec}(R)$.
The first square and the second square are cartesian. Indeed, we may assume $W=\text{Spec}(B)$ and $W^{\prime}=\text{Spec}(C)$. Then ${\phi^\prime}^\sharp:B\rightarrow C$ is an injective finite map. $\xi$ and $\xi^{\prime}$ are the prime ideals $B\otimes_{k}(\pi)$ and $C\otimes_{k}(\pi)$ respectively. Denote the quotient fields of $B$ and $C$ by $K_B$ and $K_C$ respectively. Since the field $k$ is algebraically closed, it follows that $\mathcal O_{\xi}= K_B\otimes_{k}R$ and $\mathcal O_{V_R,\xi^\prime}=K_C\otimes_{k}R$. Then
\[
\mathcal O_{\xi}\otimes_{B\otimes_{k}R}(C\otimes_{k}R)=
(K_B\otimes_kR)\otimes_{B\otimes_kR}(C\otimes_kR)=
(K_B\otimes_{B}(B\otimes_{k}R))\otimes_{B\otimes_{k}R}(C\otimes_{k}R)=
\]
\[
K_B\otimes_{B}(C\otimes_{k}R)=(K_B\otimes_{B}C)\otimes_{k}R=
K_C\otimes_{k}R=\mathcal O_{V_R,\xi^\prime},
\]
where $K_C=K_B\otimes_{B}C$ (as $C$ is integral over $B$). Thus, the second square is cartesian, so the morphism $\phi_\xi$ is finite, and then the first square is also cartesian. We will use the stacky version of the flat base change formula and the cartesian diagram
\begin{equation*}\label{diag 4}
\begin{split}
\xymatrix{
\Xi \ar[r] &W\times\text{Spec}(K) \ar[r]^{\qquad i_{U,K}} &\mathcal X^o_K \ar[r] &\mathcal X_K \ar[r]^{i} & \mathcal X_R \\
\Xi^{\prime} \ar[r] \ar[u]^{\phi_{\Xi}}&W^\prime\times\text{Spec}(K) \ar[r] \ar[u]^{\phi^\prime_K}
&V\times\text{Spec}(K)\ar[r] \ar[u]^{\phi_K}&
\text{Spec}(A\otimes_{k}K)\ar[r]^{i^\prime} \ar[u]^{\theta_K} &
\text{Spec}(A\otimes_{k}R)\ar[u]^{\theta_R}
}
\end{split}\tag{D}
\end{equation*}
By the last square in diagram (\ref{diag 4}), we have the equation:
\begin{equation}\label{equ sec 5}
\theta_R^*E=\theta_R^*\big(i_*E_K\cap{(\gamma_1\circ\beta_1)}_*\mathcal M\big).
\end{equation}
From the last three squares in diagram (\ref{diag 3}), we get the equation:
\begin{equation}\label{equ sec 6}
\theta_R^*i_*E_K\cap\theta_R^*\big({(\gamma_1\circ\beta_1)}_*\mathcal M\big)=
i_*^\prime\theta_K^*E_K\cap {(\gamma_1^\prime\circ i_{W^\prime,R}\circ\beta_3^\prime)}_*\phi_{\xi}^*\mathcal M.
\end{equation}
Let $\mathcal M^\prime=\phi_{\xi}^*\mathcal M$, $E^\prime=\theta_K^*E_K$ and $\mathcal N^\prime=\phi_{\Xi}^{*}\mathcal N$. Then,
\begin{enumerate}
\item $E^\prime$ is a torsion free sheaf of rank $r$ and $E^\prime|_{\Xi^\prime}=\mathcal N^\prime$;
\item ${\alpha^{\prime}}^*\mathcal M^\prime=\mathcal N^\prime$;
\item $\mathcal M^\prime$ and $\mathcal N^\prime$ are free sheaves of rank $r$.
\end{enumerate}
Therefore, we are reduced to the case $\mathcal X={\rm{Spec}}(A)$, where ${\rm{Spec}}(A)$ is a smooth irreducible affine variety over $k$. This case is exactly Proposition 6 of \cite{lan}.
\end{proof}
\begin{remark}\label{rmk of ext}
Assume that $M_1$ and $M_2$ are two free rank $r$ $\mathcal O_{\xi}$ submodules of $(E_K)_{\Xi}$. Denote the corresponding coherent sheaves in Lemma \ref{lemm ext 1} by $E_1$ and $E_2$, respectively. If $M_1\subseteq M_2$, by the proof of Lemma \ref{lemm ext 1}, we have $E_1\subseteq E_2$.
\end{remark}
In the following, we prove the first part of Theorem \ref{thm main 1}, following \cite{lan}.
\begin{proof}[\textbf{The Proof of the first part in Theorem \ref{thm main 1}}]
Suppose $E_1$ and $E_2$ are two coherent subsheaves of $i_*E_K$ such that $i^*E_1=i^*E_2=E_K$ and $j^*E_1$, $j^*E_2$ are torsion free semistable sheaves on $\mathcal X_k$, at least one of which is stable. Since $\mathcal O_{\xi}$ is a principal ideal domain, $E_{1,\xi}$ and $E_{2,\xi}$ are free $\mathcal O_{\xi}$-modules of rank $r$. Also,
$E_{1,\xi}\otimes_{\mathcal O_{\xi}}\mathcal O_{\Xi}=E_{2,\xi}\otimes_{\mathcal O_{\xi}}\mathcal O_{\Xi}=(E_K)_{\Xi}$. By the elementary divisor theorem (Theorem 7.8 in \cite{sl}), there is a basis $\{e_1,\ldots,e_r\}$ of $E_{1,\xi}$ over $\mathcal O_{\xi}$ such that $\{\pi^{q_1}e_1,\ldots,\pi^{q_r}e_r\}$ is a basis of $E_{2,\xi}$. Since we are trying to prove that $E_1=\pi^pE_2$ for some $p$, we may multiply $E_{2,\xi}$ by $\pi^m$ for some integer $m$, so that all the $q_i$ are nonnegative and at least one $q_i=0$. If all the $q_i=0$, we are done; hence we may assume that some $q_i$ is positive. By $E_{2,\xi}\subseteq E_{1,\xi}$ and Remark \ref{rmk of ext}, we have $E_2\subseteq E_1$. This inclusion induces a homomorphism $\alpha:j^*E_2\rightarrow j^*E_1$ on $\mathcal X_k$. Also, $\text{rk}(j^*E_1)=\text{rk}(j^*E_2)$ and
$a_{n-1}(j^*E_1)=a_{n-1}(j^*E_2)$, by Lemma \ref{lemm shp}. Hence, $j^*E_1$ and $j^*E_2$ have the same modified slope. By construction, $\alpha$ is neither zero nor an isomorphism in codimension one; but a nonzero homomorphism between semistable sheaves of the same modified slope, at least one of which is stable, must be an isomorphism in codimension one. This contradiction shows that all the $q_i$ are zero after the normalization above, and therefore $E_1=\pi^pE_2$, for some integer $p$.
\end{proof}
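A rank $2$ toy picture of the last step (our illustration, at the level of $\mathcal O_\xi$-modules):

```latex
% Hedged illustration with elementary divisors q_1 = 0, q_2 = 1.
Take $E_{1,\xi}=\mathcal O_\xi e_1\oplus\mathcal O_\xi e_2$ and
$E_{2,\xi}=\mathcal O_\xi e_1\oplus\mathcal O_\xi\,\pi e_2\subseteq E_{1,\xi}$.
Reducing modulo $\pi$, the induced map
\[
E_{2,\xi}/\pi E_{2,\xi}\longrightarrow E_{1,\xi}/\pi E_{1,\xi},\qquad
e_1\mapsto e_1,\quad \pi e_2\mapsto 0,
\]
is nonzero but has nontrivial kernel, which is exactly the shape of the
homomorphism $\alpha$ ruled out in the proof above.
```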
We now state a lemma about torsion free modules over a discrete valuation ring.
\begin{lemma}\label{lemm tor dvr}
Suppose $M$ is a finitely generated torsion free module over a discrete valuation ring. Then $M$ is a free module of finite rank.
\end{lemma}
By analogy with \cite{lan}, we introduce the Bruhat-Tits complex of $E_K$.
Let $\mathfrak M$ be the set of all free rank $r$ $\mathcal O_\xi$-submodules of $(E_K)_\Xi$. For every $M\in\mathfrak M$, there is a unique torsion free sheaf $E_R$ on $\mathcal X_R$ which is the extension of $E_K$, by Lemma \ref{lemm ext 1}. An equivalence relation $\sim$ on $\mathfrak M$ is defined by
\begin{center}\label{claim 1 }
For $M,M^\prime\in\mathfrak M$, $M\sim M^\prime$ if and only if $M=\pi^pM^\prime$, for some $p\in\mathbb Z$. \qquad(E)
\end{center}
Let $\mathfrak Q$ be the set of equivalence classes in $\mathfrak M$. Every equivalence class in $\mathfrak Q$ defines an extension of $E_K$ to a coherent sheaf on $\mathcal X_R$, up to isomorphism. We now define the structure of a simplicial complex on $\mathfrak Q$, which we will call the Bruhat-Tits complex; its dimension is at most $r-1$. Two equivalence classes $[M]$ and $[M^\prime]$ in $\mathfrak Q$ are said to be adjacent if $M$ has a direct decomposition $M=N\oplus P$ such that $M^\prime=N+\pi M$. Since $\mathcal O_{\xi}$ is a discrete valuation ring, $M$ has a basis $\{e_1,e_2,\ldots,e_r\}$ over $\mathcal O_\xi$ such that $\{e_1,\ldots,e_s\}$ and $\{e_{s+1},\ldots,e_r\}$ are bases of $N$ and $P$, respectively, by Lemma \ref{lemm tor dvr}. So, $\{e_1,\ldots,e_s,\pi e_{s+1},\ldots,\pi e_r\}$ is a basis of $M^\prime$ over $\mathcal O_\xi$. Then, $[M]$ is adjacent to $[M^\prime]$ if and only if there is a basis $\{e_1,\ldots,e_r\}$ of $M$ such that $\{e_1,\ldots,e_s,\pi e_{s+1},\ldots,\pi e_r\}$ is a basis of $M^\prime$. Given a chain $0\subset N_1\subset N_2\subset\cdots\subset N_i\subset M$ of submodules such that each $N_k$ is a direct factor of $M$, set $M_k=N_k+\pi M$; then the $i+1$ mutually adjacent vertices $[M],[M_1],\ldots,[M_i]$ are said to form an $i$-simplex in $\mathfrak Q$. In other words, the vertices $[M],[M_1],\ldots,[M_i]$ form an $i$-simplex in $\mathfrak Q$ if there is a basis $\{e_1,e_2,\ldots,e_r\}$ of $M$ such that $N_k=(e_1,\ldots,e_{s_k})$ and $M_k=(e_1,\ldots,e_{s_k}, \pi e_{s_k+1},\ldots, \pi e_r)$, for $1\leq k\leq i$. From the above argument, it is clear that proving part 2 of Theorem \ref{thm main 1} is equivalent to finding a vertex $[E_\xi]$ of $\mathfrak Q$ such that the reduction $E_k$ of the corresponding extension $E_R$ is semistable. Start with any vertex $[E_\xi]$ in $\mathfrak Q$.
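For orientation, here is the rank $r=2$ picture of adjacency (our illustration, not taken from \cite{lan}):

```latex
% Hedged illustration of adjacency in the Bruhat-Tits complex for r = 2.
Let $M=\mathcal O_\xi e_1\oplus\mathcal O_\xi e_2$. Taking
$N=\mathcal O_\xi e_1$ and $P=\mathcal O_\xi e_2$, the module
\[
M^\prime=N+\pi M=\mathcal O_\xi e_1\oplus\mathcal O_\xi\,\pi e_2
\]
gives a vertex $[M^\prime]$ adjacent to $[M]$. The degenerate choices $N=M$
and $N=0$ give $M^\prime=M$ and $M^\prime=\pi M\sim M$, respectively, so
neither produces a new vertex.
```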
We have the following proposition, which is the orbifold version of Proposition 7 in \cite{lan}.
\begin{proposition}\label{prop b t c}
Assume that $[E_\xi]$ is a vertex in $\mathfrak Q$ and $E_k$ is the corresponding sheaf on $\mathcal X_k$. Then, there is a
natural one-to-one correspondence between edges in $\mathfrak Q$ at $[E_\xi]$ and proper subbundles of $E_k$.
Furthermore, if $F\subset E_k$ is the subbundle corresponding to the edge $[E_\xi]-[E_\xi^\prime]$, and if $Q^\prime\subset E_k^\prime$ is the subbundle corresponding to the edge $[E_\xi^\prime]-[E_\xi]$ at $[E_\xi^\prime]$, then there are a homomorphism $E_k\rightarrow E_k^\prime$ with kernel $F$ and image $Q^\prime$, and a homomorphism
$E_k^\prime\rightarrow E_k$ with kernel $Q^\prime$ and image $F$.
\end{proposition}
\begin{proof}
Firstly, let $E_\xi=(e_1,\ldots,e_r)$ be a representative of the given vertex $[E_\xi]$ and let $E_\xi^\prime=(e_1,\ldots,e_s,\pi e_{s+1},\ldots, \pi e_r)$ be a representative of an adjacent vertex. By Remark \ref{rmk of ext}, we have a natural inclusion of the corresponding extension $E_R^\prime$ into $E_R$. Let $\widehat{E_\xi}$ and $\widehat{E^\prime_\xi}$ be the coherent sheaves on ${\rm Spec}(\mathcal O_\xi)$ defined by $E_\xi$ and $E_\xi^\prime$, respectively. In the proof of Lemma \ref{lemm ext 1}, we showed that $E_R=i_*E_K\cap(\gamma_R\circ\beta_1)_*\widehat{E_\xi}$ and $E_R^\prime=i_*E_K\cap(\gamma_R\circ\beta_1)_*\widehat{E_\xi^\prime}$. Let $Q_\xi$ be the cokernel of the inclusion $E_\xi^\prime\hookrightarrow E_\xi$ and let $\xymatrix@C=0.5cm{
0 \ar[r] & \widehat{E_\xi^\prime}\ar[r] & \widehat{E_\xi} \ar[r] & \widehat{Q_\xi} \ar[r] & 0 }$ be the associated exact sequence of coherent sheaves on ${\rm Spec}(\mathcal O_\xi)$. By the cartesian diagram (\ref{diag 3}), the following sequence:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & (\gamma_R\circ\beta_1)_*\widehat{E_\xi^\prime}\ar[rr] && (\gamma_R\circ\beta_1)_*\widehat{E_\xi}\ar[rr] && (\gamma_R\circ\beta_1)_*\widehat{Q_\xi}\ar[r] & 0 }
\]
is exact. Thus, the cokernel $Q$ of $E_R^\prime\hookrightarrow E_R$ admits an injection $Q\hookrightarrow (\gamma_R\circ\beta_1)_*\widehat{Q_\xi}$. So, $Q$ is a coherent $\mathcal O_{\mathcal X_k}$-module. Restricting to $\mathcal X_k$, we get the right exact sequence:
\[
\xymatrix@C=0.5cm{
E_k^\prime \ar[rr] && E_k \ar[rr] && Q \ar[r] & 0 }.
\]
In addition, $Q$ is torsion free on $\mathcal X_k$. Indeed, as in the proof of Lemma \ref{lemm ext 1}, we only need to check this when $\mathcal X$ is an affine irreducible smooth variety. Assume that $\mathcal X={\rm Spec}(A)$. Then, $(\gamma_R\circ\beta_1)_*\widehat{Q_\xi}$ is isomorphic to the quasicoherent sheaf associated to the direct sum of $(r-s)$ copies of $K_A$, where $K_A$ is the quotient field of $A$. Thus, the image $F=\text{Im}(E_k^\prime\rightarrow E_k)$ is a subbundle of $E_k$, with an exact sequence:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F \ar[rr] && E_k \ar[rr] && Q \ar[r] & 0 }.
\]
Now, we have constructed a subbundle $F$ of $E_k$ from an edge at $[E_\xi]$.\\
\hspace*{2em} Conversely, if $F$ is a subbundle of $E_k$ and $Q=E_k/F$, then we have an exact sequence of torsion free sheaves:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F \ar[rr] && E_k \ar[rr] && Q \ar[r] & 0 }.
\]
On the other hand, there is a natural surjective homomorphism $E_R\rightarrow E_k$. Composing this morphism with
the surjection $E_k\rightarrow Q$ in the above exact sequence, we get a surjective homomorphism $E_R\rightarrow Q$ of coherent sheaves; denoting its kernel by $E_R^\prime$, we obtain an exact sequence:
\begin{equation}\label{exact seq 1}
\xymatrix@C=0.5cm{
0 \ar[r] & E^\prime_R \ar[rr] && E_R \ar[rr] && Q \ar[r] & 0 }.
\end{equation}
We have to show that the above two procedures are inverse to each other. In fact, by the exact sequence (\ref{exact seq 1}), we have:
\[
\xymatrix{
0 \ar[r] & ({E^\prime_R})_\xi \ar[rr] && ({E_R})_\xi \ar[dr] \ar[rr] && Q_\xi \ar[r] & 0 \\
& && &({E_k})_\xi \ar[ur]&
}
\]
Suppose that $(E_k)_\xi$ is generated by $\{\overline e_1,\ldots,\overline e_r\}$ and $F_\xi$ is
generated by $\{\overline e_1,\ldots,\overline e_s\}$. Also, $\{\overline e_1,\ldots,\overline e_r\}$ lifts to a basis
$\{e_1,\ldots,e_r\}$ of $(E_R)_\xi$. Then, $({E^\prime_R})_\xi$ is generated by $\{e_1,\ldots,e_s, \pi e_{s+1},\ldots, \pi e_r\}$.
Hence, $({E^\prime_R})_\xi$ represents a vertex of $\mathfrak Q$, which is adjacent to $[E_\xi]$. Pulling back the exact sequence
\[
\xymatrix@C=0.5cm{
0 \ar[r] & E^\prime_R \ar[rr] && E_R \ar[rr] && Q \ar[r] & 0 }
\]
to $\mathcal X_k$, we get
\[
\xymatrix@C=0.5cm{
0 \ar[r] & Q^\prime\ar[r]& E^\prime_k \ar[r] & E_k \ar[r] & Q \ar[r] & 0 },
\]
where $Q^\prime=\text{Tor}_1^{\mathcal O_{\mathcal X_R}}(Q,\mathcal O_{\mathcal X_k})$. Tensoring the exact sequence $\xymatrix@C=0.5cm{0 \ar[r] &
\mathcal O_{\mathcal X_R} \ar[r]^{\pi} &\mathcal O_{\mathcal X_R}\ar[r] & \mathcal O_{\mathcal X_k} \ar[r] & 0 }$ with $Q$, we have
\[
\xymatrix@C=0.5cm{
0 \ar[r] & Q^\prime\ar[r]& Q \ar[r]^{\pi} & Q \ar[r]^{\text{id}} & Q \ar[r] & 0 },
\]
whence $Q^\prime\cong Q$, since $\pi$ acts as zero on the $\mathcal O_{\mathcal X_k}$-module $Q$. Thus, we get two exact sequences
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F \ar[rr] && E_k \ar[rr] && Q \ar[r] & 0 };
\]
\[
\xymatrix@C=0.5cm{
0 \ar[r] & Q \ar[rr] && E_k^\prime \ar[rr] && F \ar[r] & 0 }.
\]
Since $Q$ and $F$ are torsion free sheaves, $E_k^\prime$ is torsion free.
Hence, $E_R^\prime$ is the extension of $E_K$ to $\mathcal X_R$,
corresponding to the vertex $[(E_R^\prime)_\xi]$ of $\mathfrak Q$.
On the other hand, we have the following exact sequence:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & \pi E_R \ar[rr] && E_R^\prime \ar[rr] && F\ar[r] & 0 }.
\]
Also, $E_R\overset{\pi}{\rightarrow}\pi E_R$ is an isomorphism. We get a natural homomorphism $E_R\overset{\pi}{\rightarrow}\pi E_R\hookrightarrow E_R^\prime$. Pulling back to $\mathcal X_k$, we get a homomorphism $E_k\rightarrow E_k^\prime$ whose image $Q^\prime$ is the subbundle corresponding to the edge $[E_\xi^\prime]-[E_\xi]$ at the vertex $[E_\xi^\prime]$.
\end{proof}
The unique maximal subbundle $F\in\Gamma(E)$ from Corollary \ref{cor beta} will be called the $\beta$-subbundle of $E$.
Now assume that we are given a vertex $[E_\xi]$ of $\mathfrak Q$ such that the corresponding $E_k$ on
$\mathcal X_k$ is unstable. Let $B\subset E_k$ be the $\beta$-subbundle of $E_k$. Then, $\beta(B)>0$. By Proposition \ref{prop b t c}, there is an edge in $\mathfrak Q$ at $[E_\xi]$
corresponding to $B$. Let $[E_\xi^{(1)}]$ be the vertex in $\mathfrak Q$ determined by this edge. Let $F_1\subseteq E_k^{(1)}$ be the image
of the canonical homomorphism $E_k\rightarrow E_k^{(1)}$ (equivalently, the kernel of the homomorphism $E_k^{(1)}\rightarrow E_k$).
\begin{lemma}\label{lemm bb l}
If $G\subset E_k^{(1)}$ is a subbundle of $E_k^{(1)}$, then $\beta(G)\leq\beta(B)$, with equality
possible only if $G\vee F_1=E_k^{(1)}$.
\end{lemma}
\begin{proof}
By the argument of Proposition \ref{prop b t c}, there are two exact sequences:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & B \ar[rr] && E_k \ar[rr] && F_1 \ar[r] & 0 };
\]
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_1 \ar[rr] && E_k^{(1)} \ar[rr] && B \ar[r] & 0 }.
\]
If $G\subseteq F_1$, then there is a subbundle $W\subseteq E_k$,
such that
\[
\xymatrix@C=0.5cm{
0 \ar[r] & B \ar[rr] && W \ar[rr] && G \ar[r] & 0 }.
\]
Thus, $\beta(G)=\beta(W)-\beta(B)\leq 0$ (Proposition \ref{prop pro beta} and Proposition \ref{prop beta bd}). If $F_1\subset G$, then there is a subbundle $W^\prime\subseteq B$, such that
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_1 \ar[rr] && G \ar[rr] && W^\prime \ar[r] & 0 }.
\]
So, $\beta(G)=\beta(F_1)+\beta(W^\prime)=\beta(W^\prime)-\beta(B)\leq 0$
(using $\beta(B)+\beta(F_1)=\beta(E_k)=0$ and Proposition \ref{prop pro beta}). In the remaining case, we have
$\beta(G)\leq\beta(G\vee F_1)+\beta(G\cap F_1)-\beta(F_1)\leq\beta(B)$, with equality
possible only if $G\vee F_1=E_k^{(1)}$.
\end{proof}
Following \cite{lan}, we are now going to define a path $\mathcal P$ in $\mathfrak Q$
which starts at a given vertex $[E_\xi]$ such that the corresponding $E_k$ is unstable.
Let the succeeding vertex be the vertex determined by the edge corresponding to the $\beta$-subbundle $B$ of $E_k$.
If $\mathcal P$ reaches a vertex $[E_\xi^{(m)}]$ such that the corresponding bundle $E_k^{(m)}$ is semistable, then
the process stops. If the path $\mathcal P$ never reaches a vertex corresponding
to a semistable reduction, then the process continues indefinitely. In the following, we will show that the second alternative is impossible.\\
\hspace*{2em}Denote the $\beta$-subbundle of $E_k^{(m)}$ by $B^{(m)}$ and let $\beta_m=\beta(B^{(m)})$.
By Lemma \ref{lemm bb l}, $\beta_{m+1}\leq\beta_{m}$, and we must have $\beta_m>0$ unless $E_k^{(m)}$ is semistable. Thus, if the path $\mathcal P$ continues indefinitely, we have $\beta_m=\beta_{m+1}=\cdots$ for sufficiently large $m$. Also, by Lemma \ref{lemm bb l}, for sufficiently large $m$, $B^{(m)}\vee F^{(m)}=E_k^{(m)}$, where $F^{(m)}=\text{Im}(E_k^{(m-1)}\rightarrow E_k^{(m)})$
(equivalently, $F^{(m)}=\text{Ker}(E_k^{(m)}\rightarrow E_k^{(m-1)})$). So, $\text{rank}(B^{(m)})+\text{rank}(F^{(m)})\geq r$. On the other hand,
$\text{rank}(B^{(m-1)})+\text{rank}(F^{(m)})=r$. Therefore, $\text{rank}(B^{(m)})\geq\text{rank}(B^{(m-1)})$, for sufficiently large $m$.
Since $\text{rank}(B^{(m)})\leq r$, we must have $\text{rank}(B^{(m)})=\text{rank}(B^{(m+1)})=\cdots$, for sufficiently large $m$.
Thus, $\text{rank}(B^{(m)})+\text{rank}(F^{(m)})= r$, $B^{(m)}\cap F^{(m)}=0$. Consequently, the canonical homomorphism
$E_k^{(m)}\rightarrow E_k^{(m-1)}$ induces an injection $B^{(m)}\hookrightarrow B^{(m-1)}$. Also, the canonical homomorphism
$E_k^{(m-1)}\rightarrow E_k^{(m)}$ induces an injection $F^{(m-1)}\hookrightarrow F^{(m)}$. Since $\beta(B^{(m)})$ and $\text{rank}(B^{(m)})$ are both constant, it follows that $\beta(F^{(m)})=\beta(F^{(m+1)})=\cdots$, for $m$ sufficiently large.
\begin{lemma}
Let $R$ be a complete discrete valuation ring and $\mathcal P$ be an infinite path in $\mathfrak Q$, with vertices
$[E_\xi]$, $[E_\xi^{(1)}]$, $[E_\xi^{(2)}]$ $\cdots$. Let $F^{(m)}=\text{Im}(E_k^{(m+1)}\rightarrow E_k^{(m)})$. Assume that
$\text{rank}(F)=\text{rank}(F^{(1)})=\text{rank}(F^{(2)})=\text{rank}(F^{(3)})=\cdots=r$, that the canonical homomorphism $E^{(m+1)}\rightarrow E^{(m)}$ induces an injection $F^{(m+1)}\hookrightarrow F^{(m)}$ for each $m$, and that $a_{n-1}(F)=a_{n-1}(F^{(1)})=a_{n-1}(F^{(2)})=\cdots$. Then $\beta(F)\leq 0$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemm ext 1}, there is a sequence of extensions of $E_K$ to $\mathcal X_R$, i.e.,
\[
\cdots\subset E^{(m)}\subset\cdots\subset E^{(1)}\subset E.
\]
Restricting the above inclusions to the special fiber $\mathcal X_k$, we get homomorphisms
\[
\cdots\rightarrow E^{(m)}_k\rightarrow\cdots\rightarrow E^{(1)}_k\rightarrow E_k
\]
and $F^{(m)}=\text{Im}(E_k^{(m+1)}\rightarrow E_k^{(m)})$, for each $m\geq 0$. Let $Q^{(m+1)}=\text{Ker}(E^{(m+1)}\rightarrow E^{(m)})$, for $m\geq 0$. Then the hypothesis that $F^{(m+1)}\hookrightarrow F^{(m)}$ is injective implies that $Q^{(m)}\cap F^{(m)}=(0)$. Let $\cdots\leftarrow E^{(m)}_k\leftarrow\cdots\leftarrow E^{(1)}_k\leftarrow E_k$ be the reverse homomorphisms.
By Proposition \ref{prop b t c}, $Q^{(m)}=\text{Im}(E^{(m-1)}\rightarrow E^{(m)})$ and $F^{(m)}=\text{Ker}(E^{(m)}\rightarrow E^{(m+1)})$. Since $F^{(m)}\cap Q^{(m)}=(0)$, the induced map $Q^{(m)}\rightarrow Q^{(m+1)}$ is injective. By the exact sequence $\xymatrix@C=0.5cm{0 \ar[r] & F^{(m-1)} \ar[r] & E^{(m)} \ar[r] & Q^{(m)}\ar[r] & 0 }$, we have $a_{n-1}(F^{(m-1)})+a_{n-1}(Q^{(m)})=a_{n-1}(E^{(m)})=a_{n-1}(E_K)$, for $m\geq 1$. Since $a_{n-1}(F)=a_{n-1}(F^{(1)})=a_{n-1}(F^{(2)})=\cdots$, we have $a_{n-1}(Q^{(1)})=a_{n-1}(Q^{(2)})=a_{n-1}(Q^{(3)})=\cdots$.
Hence, the injections $Q^{(m)}\hookrightarrow Q^{(m+1)}$ are isomorphisms in codimension one. Since the $Q^{(m)**}$ are reflexive sheaves, they are determined by their restrictions to the codimension-one open substack. Thus, we have isomorphisms
\[
Q^{(1)**}\rightarrow Q^{(2)**}\rightarrow Q^{(3)**}\rightarrow\cdots.
\]
So there is a sequence of inclusions:
\[
Q^{(1)}\hookrightarrow Q^{(2)}\hookrightarrow Q^{(3)}\hookrightarrow\cdots\hookrightarrow Q^{(1)**}.
\]
On the other hand, $Q^{(1)**}$ is a coherent sheaf on $\mathcal X_k$, so the inclusions
\[
Q^{(m)}\hookrightarrow Q^{(m+1)}\hookrightarrow Q^{(m+2)}\hookrightarrow\cdots
\]
are isomorphisms, for sufficiently large $m$. Thus, we may assume without loss of generality that
\[
Q^{(1)}\hookrightarrow Q^{(2)}\hookrightarrow Q^{(3)}\hookrightarrow\cdots
\]
are isomorphisms. Also, we may assume that there is a subbundle $Q\subset E_k$ such that $Q\hookrightarrow Q^{(1)}$ is an isomorphism. Therefore, the exact sequence $\xymatrix@C=0.5cm{0 \ar[r] & F^{(m)} \ar[r] & E^{(m)}_k \ar[r] & Q^{(m+1)}\ar[r] & 0 }$ splits for each $m\geq 0$, i.e., $E^{(m)}_k=F^{(m)}\oplus Q^{(m+1)}$. So, the exact sequence $\xymatrix@C=0.5cm{ 0 \ar[r] & Q^{(m+1)} \ar[r] & E_k^{(m+1)} \ar[r] & F^{(m)} \ar[r] & 0 }$ yields that $F^{(m+1)}\hookrightarrow F^{(m)}$ is an isomorphism, for each $m\geq 0$.\\
\hspace*{1em} Consider the completion $\hat{\mathcal X}_R$ of $\mathcal X_R$ with respect to the special fiber $\mathcal X_k$.
Then $\hat{\mathcal X}_R=\underset{\underset{m}{\longleftarrow}}{\lim}\mathcal X_m$, where $\mathcal X_m=\mathcal X_R\times\text{Spec}(R/(\pi^m))$ is a closed subscheme of $\mathcal X_R$, for each $m\geq 0$. For a coherent sheaf $G$ on $\mathcal X_R$, we denote the restriction of $G$ to $\mathcal X_m$ by $G_m$. Following \cite{lan}, we will construct a coherent subsheaf $\hat{F}$ of $\hat{E}=\underset{\underset{m}{\longleftarrow}}{\lim}E_m$ on $\hat{\mathcal X}_R$. For each $m$, we construct a coherent subsheaf $F_m$ of $E_m$ as follows: pulling back the inclusion $E^{(m)}\rightarrow E$ to $\mathcal X_m$, we get a homomorphism $E_m^{(m)}\rightarrow E_m$, and we let $F_m$ be the image of this homomorphism. Let $j_{m,m^\prime}$ be the closed immersion $\mathcal X_{m^\prime}\hookrightarrow \mathcal X_m$, for $m^\prime\leq m$. Pulling back the homomorphism $E_m^{(m)}\twoheadrightarrow F_m\hookrightarrow E_m$ to $\mathcal X_{m^\prime}$, we get homomorphisms $E_{m^\prime}^{(m)}\twoheadrightarrow j^*_{m,m^\prime}F_m\rightarrow E_{m^\prime}$, which fit into a commutative diagram:
\[
\xymatrix{
E_{m^\prime}^{(m)} \ar[d] \ar[r] & E_{m^\prime}^{(m^\prime)}\ar[d] \\
j^*_{m,m^\prime}F_m \ar[r] & E_{m^\prime} }
\]
So, there is a natural homomorphism $j^*_{m,m^\prime}F_m\rightarrow F_{m^\prime}$. One can show that this homomorphism is an isomorphism, following the proof of Lemma 2 in \cite{lan} step by step. Thus, we get an inverse system of sheaves $\{F_m\}$ whose inverse limit is a coherent subsheaf of $\hat E$ on $\hat{\mathcal X}_R$.
By Grothendieck's existence theorem for tame stacks (Appendix A of \cite{av}), there exists a coherent subsheaf $F_R$ of $E$ such that $\hat{F}_R=\underset{\underset{m}{\longleftarrow}}{\lim}F_m$. Also, $j^*F_R=F$. Therefore, $a_{n-1}(F)=a_{n-1}(F_K)$, where $F_K=i^*F_R$. Since $E_K$ is semistable, we have $\beta(F)=\beta(F_K)\leq 0$.
\end{proof}
\begin{proof}[\textbf{Proof of the second part of Theorem \ref{thm main 1}}]
When $R$ is a complete discrete valuation ring, the proof of Theorem \ref{thm main 1} is already complete.
As in \cite{lan}, the general case can be reduced to this case by considering the completion $\hat{R}$ of $R$. There is the following commutative diagram:
\[
\xymatrix{
\mathcal X_k \ar[d]_{id} \ar[r]^{\hat{j}} & \mathcal X_{\hat{R}} \ar[d]_{p} & \mathcal X_{\hat{K}}\ar[l]_{\hat{i}} \ar[d]^{p^\prime} \\
\mathcal X_k \ar[r]^{j} & \mathcal X_R & \mathcal X_K \ar[l]_{i} }
\]
Suppose that $E_K$ is a torsion free semistable sheaf on $\mathcal X_K$. Then the pullback ${p^\prime}^*E_K$ is a torsion free semistable sheaf on $\mathcal X_{\hat{K}}$ (Proposition \ref{pro ex ss}), where $\hat{K}$ is the quotient field of $\hat{R}$. Denote the Bruhat-Tits complexes corresponding to $E_K$ and ${p^\prime}^*E_K$ by $\mathfrak Q_{1}$ and $\mathfrak Q_{2}$ respectively.
For a vertex $[E_\xi]$ in the Bruhat-Tits complex $\mathfrak Q_1$, $[E_{\hat{R},\xi}]$ is the corresponding vertex in the Bruhat-Tits complex $\mathfrak Q_2$, where $E_{\hat{R},\xi}=E_\xi\underset{R}{\otimes}\hat{R}$. If $E$ is the torsion free sheaf on $\mathcal X_R$ corresponding to $[E_\xi]$, then $p^*E$ is the torsion free sheaf on $\mathcal X_{\hat{R}}$ corresponding to $[E_{\hat{R},\xi}]$. When $E_k$ is unstable, denote the $\beta$-subbundle of $E_k$ by $F_k$. By Proposition \ref{prop b t c}, the edge $[E_\xi]-[E_{\xi}^\prime]$ in $\mathfrak Q_1$ corresponding to $F_k$ is constructed as follows:
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_k \ar[rr] && E_k \ar[rr] && Q_k \ar[r] & 0 }
\]
\[
\xymatrix@C=0.5cm{
0 \ar[r] & E^\prime \ar[rr] && E \ar[rr] && Q_k \ar[r] & 0 }.
\]
The edge $[p^*E_\xi]-[p^*E_\xi^\prime]$ in $\mathfrak Q_2$ corresponding to $F_k$ is then given by
\[
\xymatrix@C=0.5cm{
0 \ar[r] & F_k \ar[rr] && E_k \ar[rr] && Q_k \ar[r] & 0 }
\]
\[
\xymatrix@C=0.5cm{
0 \ar[r] & p^*E^\prime \ar[rr] && p^*E \ar[rr] && Q_k \ar[r] & 0 }.
\]
In $\mathfrak Q_2$, there is a finite path leading to a vertex whose corresponding torsion free sheaf on $\mathcal X_k$ is semistable, and hence the same holds in $\mathfrak Q_1$.
\end{proof}
\section*{Acknowledgement}
The author would like to thank Professor Jianxun Hu for his encouragement and help during the completion of this paper. The author also thanks Professor Yunfeng Jiang for carefully reading this paper and offering valuable advice.
The estimation of various types of moments of the Riemann zeta-function has been intensely studied for the better part of a century. The zeta-function is given by
\[
\zeta(s)=\sum_{n=1}^{\infty}n^{-s},
\]
where $s=\sigma+it$ denotes a complex variable with real part $\sigma$ and imaginary part $t$. This definition is valid for $\sigma>1$, but $\zeta(s)$ may be continued analytically to the rest of the complex plane except for a simple pole at $s=1$. The specific moments we examine here are defined as
\[
J_k(T):=\frac{1}{N(T)}\sum_{0<\im(\rho)\le T}|\zeta'(\rho)|^{2k}
\]
for $k$ a positive real number. The sum here is over nontrivial zeros $\rho$ of the zeta-function, i.e. those zeros with positive real part. The normalizing factor $N(T)$ is the number of $\rho$ over which we are summing, so $J_k(T)$ is the $2k$-th moment of $|\zeta'(s)|$ on the discrete probability space
\[
\{\rho:\zeta(\rho)=0,\re(\rho)>0\text{ and }0<\im(\rho)\le T\}
\]
equipped with the uniform measure. Consequently $J_k(T)$ is commonly called a discrete moment in the literature. The more information we have regarding $J_k(T)$, the more we can say about the distribution of values of $|\zeta'(\rho)|$.
These discrete moments were first studied by Gonek~\cite{Go84}, who conditionally established the asymptotic formula
\[
J_1(T)\sim \tfrac{1}{12}(\log T)^3
\]
with an explicit error term. Gonek's proof relies inherently on the validity of the Riemann hypothesis (RH), the statement that all nontrivial zeros $\rho$ have real part $\re(\rho)=\frac12$. No asymptotic formulas are known for other values of $k$, even assuming RH, and Gonek's estimate has yet to be proved unconditionally. Gonek~\cite{Go89} and Hejhal~\cite{He} independently conjectured
\be\label{GoHe conjecture}
J_k(T)\asymp(\log T)^{k(k+2)}
\ee
for any real number $k$. This agrees with Gonek's estimate for $J_1(T)$. Ng~\cite{Ng} proved $J_2(T)\asymp (\log T)^8$, so the conjecture also holds for $k=2$.
This conjecture has since been strengthened. Using a random matrix model for $\zeta'(\rho)$, Hughes, Keating, and O'Connell~\cite{HuKeOc} suggested the asymptotic formula
\[
J_k(T)\sim C_k(\log T)^{k(k+2)}.
\]
The constants $C_k$ in their conjecture are explicit, given by
\[
C_k=\frac{G(k+2)^2}{G(2k+3)}\prod_{p\text{ prime}}\left(1-\frac{1}{p}\right)^{k^2}\sum_{m=0}^{\infty}\left(\frac{\Gamma(m+k)}{m!\,\Gamma(k)}\right)^2p^{-m},
\]
where $G(x)$ is the Barnes $G$-function. Furthermore, they provided a heuristic explanation which suggests that the conjectured asymptotic formula of Gonek and Hejhal should fail for $k\le-\frac32$. Hughes, Keating, and O'Connell inserted the product over primes here in an \emph{ad hoc} manner, namely from a heuristic estimate for the case $k=-1/2$. Bui, Gonek, and Milinovich~\cite{BuGoMi} used a hybrid Euler-Hadamard product model for $\zeta'(\rho)$ to suggest precisely where this product over primes comes from, essentially merging ideas from number theory and random matrix theory in the same way as Gonek, Hughes, and Keating~\cite{GoHuKe} did for moments of $\zeta(\frac12+it)$.
These conjectures remain open, but work has been done toward the implied upper and lower bounds conditionally on RH. Milinovich and Ng~\cite{MiNg} obtained the expected lower bound
\be\label{MiNg}
J_k(T)\gg_k (\log T)^{k(k+2)}
\ee
for any natural number $k$. In the other direction, Milinovich~\cite{Mi} showed that
\be\label{Mi}
J_k(T)\ll_{k,\varepsilon} (\log T)^{k(k+2)+\varepsilon}
\ee
for any $\varepsilon>0$. The purpose of this paper is to remove the $\varepsilon$ in the exponent here and prove the following.
\begin{theorem}\label{Thm 1}
Assume RH. Let $k>0$. Then $$J_k(T)\ll_k(\log T)^{k(k+2)}$$
as $T\to \infty$.
\end{theorem}
\noindent Together with \eqref{MiNg}, this shows that
\[
J_k(T)\asymp_k (\log T)^{k(k+2)}
\]
for $k$ a natural number. This proves (on RH) the conjecture of Gonek and Hejhal for $k$ a positive natural number. In fact, the method of Milinovich and Ng used to prove \eqref{MiNg} should be able to cover the case of real $k>0$ using the work of Radziwi\l\l\ and Soundararajan in \cite{RaSo} and \cite{RaSo2}, assuming RH. This would establish the conjecture of Gonek and Hejhal for all real, positive $k$. We remark that the implied constant in Theorem~\ref{Thm 1} grows like $ e^{e^{Ak}}$ for some $A>0$ as $k$ gets large. For comparison, the conjecture of Hughes, Keating, and O'Connell suggests an implied constant $\approx e^{-k^2\log k}$ is permissible.
In the last section of the paper, we shall also indicate how to prove the following result.
\begin{theorem}\label{Thm 2}
Assume RH. Let $k>0$. Let $\alpha$ be a complex number with $|\alpha|\le (\log T)^{-1}$. Then
\[
\frac{1}{N(T)}\sum_{0<\im(\rho)\le T}|\zeta(\rho+\alpha)|^{2k}\ll_k (\log T)^{k^2}
\]
as $T\to \infty$.
\end{theorem}
\noindent This shifted moment of the zeta-function was considered by Milinovich~\cite{Mi} in his proof of \eqref{Mi}, and our Theorem~\ref{Thm 2} is an improvement of Theorem 1.2 in \cite{Mi}. As a consequence of our result, we deduce the following.
\begin{corollary}
Assume RH. Let $k\ge \frac12$ and let $\nu$ be a positive integer. Then
\[
\frac{1}{N(T)}\sum_{0<\im(\rho)\le T}|\zeta^{(\nu)}(\rho)|^{2k}\ll_{k,\nu} (\log T)^{k(k+2\nu)}
\]
as $T\to \infty$.
\end{corollary}
\begin{proof}
This follows from the direct analogue of Lemma 8.1 in \cite{MiNg2}; we provide the details here for the sake of completeness. By Cauchy's integral formula, we have
\be\label{eq:cauchy}
\sum_{0<\im(\rho)\le T}|\zeta^{(\nu)}(\rho)|^{2k}=\Big(\frac{\nu!}{2\pi}\Big)^{2k}\sum_{0<\im(\rho)\le T}\bigg|\int_C\frac{\zeta(\rho+\alpha)}{\alpha^{\nu+1}}d\alpha\bigg|^{2k},
\ee
where $C$ is the positively-oriented circle of radius $(\log T)^{-1}$ centered at the origin. Since
\[
\bigg|\int_C\frac{\zeta(\rho+\alpha)}{\alpha^{\nu+1}}d\alpha\bigg|\le (\log T)^{\nu+1}\int_C|\zeta(\rho+\alpha)|\,|d\alpha|,
\]
it follows from \eqref{eq:cauchy} that
\[
\sum_{0<\im(\rho)\le T}|\zeta^{(\nu)}(\rho)|^{2k}\le \Big(\frac{\nu!}{2\pi}\Big)^{2k}(\log T)^{2k(\nu+1)}\sum_{0<\im(\rho)\le T}\bigg(\int_C|\zeta(\rho+\alpha)|\,|d\alpha|\bigg)^{2k}.
\]
If $k>\frac12$, then H\"older's inequality implies that
\[
\bigg(\int_C|\zeta(\rho+\alpha)|\,|d\alpha|\bigg)^{2k}\le \bigg(\int_C|d\alpha|\bigg)^{2k-1}\bigg(\int_C|\zeta(\rho+\alpha)|^{2k}|d\alpha|\bigg);
\]
when $k=\frac12$, this same bound trivially holds. The first integral on the right-hand side here is precisely $2\pi (\log T)^{-1}$, and so we see that
\[
\sum_{0<\im(\rho)\le T}|\zeta^{(\nu)}(\rho)|^{2k}\le \frac{(\nu !)^{2k}}{2\pi}(\log T)^{2k\nu+1}\int_C\bigg(\sum_{0<\im(\rho)\le T}|\zeta(\rho+\alpha)|^{2k}\bigg)|d\alpha|.
\]
The result follows by dividing both sides of this last inequality by $N(T)$, applying Theorem~\ref{Thm 2} to the sum on the right-hand side, and finally integrating over $\alpha$.
\end{proof}
Lastly, we remark on some connections between these discrete moments and simple zeros of $\zeta(s)$. If $N^*(T)$ counts the number of simple zeros with $0<\im(\rho)\le T$, then, since $\zeta'(\rho)=0$ at any multiple zero, the Cauchy--Schwarz inequality implies that
\[
N^*(T)\ge N(T)\frac{J_k(T)^2}{J_{2k}(T)}.
\]
Montgomery's~\cite{Mo} pair correlation conjecture implies that almost all zeros are simple, and it is generally expected that this is true of all zeros. However, $J_k(T)$ grows too quickly as $T\to \infty$ to obtain even a positive proportion of simple zeros via the inequality above. In order to minimize the loss from Cauchy-Schwarz, Conrey, Ghosh and Gonek~\cite{CoGhGo} used a mollified version of $J_1(T)$ to show that at least $19/27$ of the nontrivial zeros are simple, assuming the generalized Riemann Hypothesis\footnote{The statement of their result actually assumes RH and the generalized Lindel\"of Hypothesis. However, it appears that there is a problem with their proof under these assumptions. This may be resolved by assuming the generalized RH instead. We thank Professors Gonek and Milinovich for bringing this to our attention.} for Dirichlet $L$-functions; Bui and Heath-Brown subsequently proved the same result assuming only RH. It may be of future interest to estimate mollified versions of $J_k(T)$ for $k>1$, though it seems unlikely that this will lead to significant improvements on the proportion of simple zeros.
Another connection between $J_k(T)$ and simple zeros is as follows. The Mertens function $M(x)$ is defined as
\[
M(x):=\sum_{n\le x}\mu(n),
\]
where $\mu$ is the M\"obius function. It is well known that RH is equivalent to the estimate $M(x)\ll x^{1/2+\varepsilon}$ for $\varepsilon>0$. In unpublished work, Gonek proved that RH and the conjectured upper bound $J_{-1}(T)\ll T$ from \eqref{GoHe conjecture} above imply that $M(x)\ll x^{1/2}(\log x)^{3/2}$, which was later shown by Ng~\cite{Ng} as well. Note that the bound $J_{-1}(T)\ll T$ automatically assumes that all zeros are simple, as otherwise $J_{-1}(T)=\infty$ for all sufficiently large $T$. In fact, under these hypotheses, Ng shows that $e^{-y/2}M(e^y)$ has a limiting distribution, with $0\le y\le Y$, as $Y\to \infty$. This, along with some additional assumptions which include an upper bound on $J_{-1/2}(T)$, leads Ng to re-establish the unpublished conjecture of Gonek that
\[
\underline{\overline{\lim}}_{x\to \infty}\frac{M(x)}{\sqrt{x}(\log\log\log x)^{5/4}}=\pm B
\]
for some positive constant $B$. Thus the study of $J_k(T)$ for $k$ small and negative may lead to further insight into the distribution and behavior of $M(x)$.
\section{The idea behind the proof of Theorem~\ref{Thm 1}}
Let
\[
I_{k}(T)=\frac{1}{T}\int_0^T |\zeta(1/2+it)|^{2k} dt.
\]
In 2009 Soundararajan~\cite{So} showed that, on RH,
$I_{k}(T)\ll T(\log T)^{k^2+\epsilon}$ for every $\epsilon >0$. A few years later, Harper~\cite{Ha} devised a method to prove, again on RH, that $I_{k}(T)\ll T(\log T)^{k^2}$, which is the actual conjectured size; moreover, it is the same size (in the $T$ aspect) as the unconditional lower bound proved by Radziwi{\l\l} and Soundararajan~\cite{RaSo}. Our proof of Theorem~\ref{Thm 1} is based on Harper's method, and improves upon Milinovich's upper bound \eqref{Mi} in the same way that Harper's improves upon Soundararajan's. We note that our implied constant is of the same form as that of Harper.
Harper's method relies on two ingredients. The first is an upper bound for $\log|\zeta(\frac12+it)|$ in terms of a Dirichlet polynomial. The second is an estimate for integrals of the form
\be\label{harperintegral1}
\int_T^{2T}\cos(t\log p_1)\cdots \cos(t\log p_m)dt
\ee
for (not necessarily distinct) prime numbers $p_1,\ldots, p_m$. This follows easily from the basic orthogonality estimate
\be\label{harperintegral2}
\int_T^{2T}e^{irt}dt=T\delta_0(r)+O(r^{-1}),
\ee
where $\delta_0$ is a Dirac mass at $0$.
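For instance, writing each cosine as $\cos\theta=\frac12(e^{i\theta}+e^{-i\theta})$ and expanding the product, the integral \eqref{harperintegral1} equals
\[
\frac{1}{2^m}\sum_{\epsilon_1,\ldots,\epsilon_m\in\{\pm1\}}\int_T^{2T}\exp\Big(it\log\big(p_1^{\epsilon_1}\cdots p_m^{\epsilon_m}\big)\Big)dt,
\]
so by \eqref{harperintegral2} each sign pattern with $p_1^{\epsilon_1}\cdots p_m^{\epsilon_m}=1$ contributes a main term $T/2^m$, while the remaining patterns contribute only the $O(r^{-1})$ error terms.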
Henceforth we write $\gamma$ in place of $\im(\rho)$, so that $\rho=\frac12+i\gamma$ assuming RH.
To estimate our discrete moments $J_k(T)$, our first ingredient is an upper bound for $\log |\zeta'(\rho)|$; our second, analogous to \eqref{harperintegral1}, is an estimate for sums of the form
\[
\sum_{0<\gamma\le T}\cos(\gamma\log p_1)\cdots \cos(\gamma\log p_m).
\]
The discrete analogue of \eqref{harperintegral2} is given by Gonek's~\cite{Go93} uniform version of Landau's formula. On RH, this says roughly that
\[
\sum_{0<\gamma\le T}e^{ir\gamma}=N(T)\delta_0(r)-Tf(r)+\text{ small error},
\]
where $f$ is a certain nonnegative function. Note that, in comparison with \eqref{harperintegral2}, this has a secondary term. The final contribution to $J_k(T)$ from this secondary term is possibly of the same order as that of the first term, namely $N(T)(\log T)^{k(k+2)}$. However, the secondary term contribution is not positive, so we may ignore it and still obtain an upper bound.
The expectation that $J_k(T)\approx N(T)(\log T)^{k(k+2)}$ can be explained in a few different ways; here we discuss two heuristics. On average (in the sense of mean-square), $|\zeta'(\frac12+it)|$ is roughly $\log T$ larger than $|\zeta(\frac12+it)|$ in the interval $[0,T]$. Ford and Zaharescu~\cite{FoZa} showed that the $\gamma$ are equidistributed (mod $1$), so it seems reasonable to expect that the sum over $\gamma$ is approximated well by the corresponding integral, i.e.
\begin{align*}
J_k(T) &\approx \frac{1}{T}\int_0^T|\zeta'(\tfrac12+it)|^{2k}dt\\
&\approx (\log T)^{2k}I_k(T).
\end{align*}
A second heuristic relies on the expected Gaussian behavior of $\log|\zeta'(\rho)|$. Assuming RH and that the zeros of $\zeta(s)$ do not cluster together often in a particular sense, Hejhal~\cite{He} proved a central limit theorem for $\log|\zeta'(\rho)|$. Roughly speaking, he showed that the values of
$\log |\zeta'(\rho)|$, $0<\gamma\le T$, tend to be distributed like those of a Gaussian with mean $\log\log T$ and variance $\frac12 \log\log T$ as $T$ gets large, i.e. (see Theorem 4 of \cite{He})
\[
\lim_{T\to \infty}\frac{1}{N(T)}\#\left\{0<\gamma\le T:\frac{\log|\zeta'(\rho)|-\log\log T}{\sqrt{\frac12 \log\log T}}\in(a,b)\right\}=\frac{1}{\sqrt{2\pi}}\int_a^be^{-x^2/2}dx.
\]
If we assume that this central limit behavior holds uniformly in $T$, then this suggests that
\begin{align*}
J_k(T)&=\frac{1}{N(T)}\sum_{0<\gamma\le T}e^{2k\log |\zeta'(\rho)|}\\
&\approx \frac{1}{\sqrt{\pi\log\log T}}\int_{-\infty}^{\infty}e^{2kv}e^{-\frac{(v-\log\log T)^2}{\log\log T}}dv.
\end{align*}
After centering the integrand via the substitution $v\mapsto v+\log\log T$, this becomes
\[
\frac{(\log T)^{2k}}{\sqrt{\pi\log\log T}}\int_{-\infty}^{\infty}e^{2kv-\frac{v^2}{\log\log T}}dv.
\]
Completing the square then leads us to conclude that
\[
J_k(T)\approx (\log T)^{2k}(\log T)^{k^2},
\]
which matches the expected size. This should be compared with the estimates for $I_k(T)$ mentioned above, as Selberg's~\cite{Se,Se1} central limit theorem says that $\log |\zeta(\frac12+it)|$ tends to be distributed like a Gaussian with mean $0$ and variance $\frac12\log\log T$. Both heuristics suggest that the factor of $(\log T)^{2k}$ for discrete moments can be attributed to this difference in mean when compared with $I_k(T)$.
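For completeness, the completing-the-square step above reads
\[
2kv-\frac{v^2}{\log\log T}=k^2\log\log T-\frac{(v-k\log\log T)^2}{\log\log T},
\]
so the last integral equals $e^{k^2\log\log T}\sqrt{\pi\log\log T}=(\log T)^{k^2}\sqrt{\pi\log\log T}$, and the prefactor $1/\sqrt{\pi\log\log T}$ cancels exactly.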
\section{An upper bound for $\log|\zeta'(\rho)|$}
\noindent Throughout the paper, we denote prime numbers with the letters $p$ or minor variations such as $\tilde{p}$. When we write $p^h$, it is to be understood that $h$ is a natural number. The von Mangoldt function $\Lambda(n)$ is defined as
\[
\Lambda(n)=
\begin{cases}
\log p &\hbox{if }n=p^h,\\
0 &\hbox{otherwise}.
\end{cases}
\]
We extend the von Mangoldt function to the rest of $\R$ by taking $\Lambda(x)=0$ if $x$ is not a natural number; this will be useful in Lemma~\ref{Lem-Landau} below.
We also define a slight variant of $\Lambda(n)$.
Let $\cL =\log T$. We
set
\[
\Lambda_{\cL}(n)=
\begin{cases}
\Lambda(n)&\hbox{if }n=p\; \text{or}\; p^2 \;\text{and} \; n\le \cL,\\
0&\hbox{otherwise}.
\end{cases}
\]
We now prove the following upper bound for $\log|\zeta'(\rho)|$. This result is similar to upper bounds for $\log|\zeta(\frac12+it)|$ due to Soundararajan~\cite{So} and Harper~\cite{Ha}, and our proof is a modification of their arguments.
\begin{proposition}\label{Prop 1}
Assume RH. Let $T$ be large and let $2\le x\le T^2$. Set $\sigma_x=\frac12+\frac{1}{\log x}$. If $\rho=\frac12+i\gamma$ is a zero of $\zeta(s)$ with $T< \gamma\le 2T$, then
\[
\log|\zeta'(\rho)|\le \re \sum_{n\le x}\frac{\Lambda_{\cL}(n)}{n^{\sigma_x+i\gamma}\log n}\frac{\log (x/n)}{\log x}+\log\log T+\frac{\log T}{\log x}+O(1).
\]
\end{proposition}
\begin{proof}
The inequality is true if $\zeta'(\rho)=0$, since the left-hand side is $-\infty$ and the right-hand side is finite. Thus we may assume that $\rho$ is a simple zero. We begin with the estimate
\be\label{logderivative}
-\re\frac{\zeta'}{\zeta}(\sigma+it)=\tfrac12\log T-\sum_{\tilde{\rho}=\frac12+i\tilde{\gamma}}\frac{\sigma-1/2}{(\sigma-1/2)^2+(t-\tilde{\gamma})^2}+O(1).
\ee
This follows from the Hadamard product formula for $\zeta(s)$ along with Stirling's approximation (see (4) in \cite{So}), and it is valid for $T\le t\le 2T$ as long as $t$ is not the ordinate of a zero of the zeta-function. Integrating $\sigma$ from $\frac12$ to $\sigma_x$ in \eqref{logderivative}, we have
\begin{multline}\label{prop1}
\log|\zeta(\tfrac12+it)|-\log|\zeta(\sigma_x+it)|\\
=(\sigma_x-\tfrac12)\big(\tfrac12\log T+O(1)\big)
-\tfrac12\sum_{\tilde{\rho}=\frac12+i\tilde{\gamma}}\log \frac{(\sigma_x-\tfrac12)^2+(t-\tilde{\gamma})^2}{(t-\tilde{\gamma})^2}.
\end{multline}
Isolating the term corresponding to $\rho$ from the sum over zeros and subtracting $\log |t-\gamma|$ from both sides of \eqref{prop1}, we find that
\begin{multline*}
\log\bigg|\frac{\zeta(\tfrac12+it)}{t-\gamma}\bigg|-\log|\zeta(\sigma_x+it)|\\
=(\sigma_x-\tfrac12)\big(\tfrac12\log T+O(1)\big)-\log|(\sigma_x-\tfrac12)+i(t-\gamma)|-\tfrac12\sum_{\tilde{\rho}\neq \rho}\log\frac{(\sigma_x-\tfrac12)^2+(t-\tilde{\gamma})^2}{(t-\tilde{\gamma})^2}.
\end{multline*}
Since $\rho$ is a simple zero, we may take the limit as $t\to \gamma$ to obtain
\begin{multline}\label{prop2}
\log|\zeta'(\rho)|-\log|\zeta(\sigma_x+i\gamma)|\\
=(\sigma_x-\tfrac12)\big(\tfrac12\log T+O(1)\big)
-\log\big|\sigma_x-\tfrac12\big|-\tfrac12\sum_{\tilde{\rho}\neq \rho}\log \frac{(\sigma_x-\tfrac12)^2+(\gamma-\tilde{\gamma})^2}{(\gamma-\tilde{\gamma})^2}.
\end{multline}
Now define
\be\label{F}
\tilde{F}_x(\rho)=\sum_{\tilde{\rho}\neq \rho}\frac{\sigma_x-\tfrac12}{(\sigma_x-\tfrac12)^2+(\gamma-\tilde{\gamma})^2}.
\ee
Observe that this sum is positive as $\sigma_x=\tfrac12+\frac{1}{\log x}$. Since $\log(1+u)\geq u/(1+u)$ for $u>0$, it follows from \eqref{prop2} that
\be\label{prop3}
\log|\zeta'(\rho)|\le \log |\zeta(\sigma_x+i\gamma)|+\log\log x-\tfrac12(\sigma_x-\tfrac12)\tilde{F}_x(\rho)+\frac12\frac{\log T}{\log x}+O(1).
\ee
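To spell out the step from \eqref{prop2} to \eqref{prop3}: applying $\log(1+u)\ge u/(1+u)$ with $u=(\sigma_x-\frac12)^2/(\gamma-\tilde{\gamma})^2$ to each term of the sum in \eqref{prop2} gives
\[
\sum_{\tilde{\rho}\neq \rho}\log \frac{(\sigma_x-\frac12)^2+(\gamma-\tilde{\gamma})^2}{(\gamma-\tilde{\gamma})^2}\geq \sum_{\tilde{\rho}\neq \rho}\frac{(\sigma_x-\frac12)^2}{(\sigma_x-\frac12)^2+(\gamma-\tilde{\gamma})^2}=(\sigma_x-\tfrac12)\tilde{F}_x(\rho),
\]
while $-\log\big|\sigma_x-\tfrac12\big|=\log\log x$.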
Now we recall Lemma 1 of \cite{So}, which says that
\[
\frac{\zeta'}{\zeta}(s)\log x=-\sum_{n\le x}\frac{\Lambda(n)}{n^s}\log(x/n)-\bigg(\frac{\zeta'}{\zeta}(s)\bigg)'+\frac{x^{1-s}}{(1-s)^2}-\sum_{\tilde{\rho}}\frac{x^{\tilde{\rho}-s}}{(\tilde{\rho}-s)^2}-\sum_{k=1}^{\infty}\frac{x^{-2k-s}}{(2k+s)^2}
\]
for $x\ge 2$ and any $s$ not coinciding with $1$ or a zero of $\zeta(s)$.
The third term on the right-hand side and the last sum here are both $\ll x^{1-\sigma}/T^2$. Thus, after dividing by $\log x$, integrating $\sigma$ from $\infty$ to $\sigma_x$ and taking real parts of the resulting expressions, it follows that
\be\label{log}
\log |\zeta(s_x)|=\re\sum_{n\le x}\frac{\Lambda(n)}{n^{s_x}\log n}\frac{\log (x/n)}{\log x}-\frac{1}{\log x}\re \frac{\zeta'}{\zeta}(s_x)+\frac{1}{\log x}\re\sum_{\tilde{\rho}}\int_{\sigma_x}^{\infty}\frac{x^{\tilde{\rho}-s}}{(\tilde{\rho}-s)^2}d\sigma+O(1),
\ee
where $s_x=\sigma_x+i\gamma$. Recalling the definition of $\tilde{F}_x(\rho)$ from \eqref{F} above, we estimate the sum over $\tilde{\rho}\neq \rho$ in \eqref{log} as
\begin{align*}
\bigg|\re\sum_{\tilde{\rho}\neq \rho}\int_{\sigma_x}^{\infty}\frac{x^{\tilde{\rho}-s}}{(\tilde{\rho}-s)^2}d\sigma\bigg| &\le \frac{1}{\log x}\sum_{\tilde{\rho}\neq \rho}\frac{x^{\frac12-\sigma_x}}{|(\sigma_x-\frac12)+i(\gamma-\tilde{\gamma})|^2}\\
&=\frac{x^{\frac12-\sigma_x}}{(\sigma_x-\frac12)\log x}\tilde{F}_x(\rho).
\end{align*}
Also, we may use \eqref{logderivative} to see that
\[
-\re\frac{\zeta'}{\zeta}(s_x)=\tfrac12\log T-\tilde{F}_x(\rho)-\frac{1}{\sigma_x-\frac12}+O(1).
\]
Applying both of these estimates to the right-hand side of \eqref{log}, we obtain
\begin{multline}\label{prop4}
\log|\zeta(\sigma_x+i\gamma)|\le \re \sum_{n\le x}\frac{\Lambda(n)}{n^{\sigma_x+i\gamma}\log n}\frac{\log(x/n)}{\log x}-\frac{\tilde{F}_x(\rho)}{\log x}+\frac{1}{\log x}\int_{\sigma_x}^{\infty}\frac{x^{\frac12-\sigma}}{(\sigma-\frac12)^2}d\sigma\\
-\frac{1}{(\sigma_x-\tfrac12)\log x}+\frac{x^{\frac12-\sigma_x}\tilde{F}_x(\rho)}{(\sigma_x-\frac12)\log^2 x}+\frac{\log T}{2\log x}+O(1).
\end{multline}
After a change of variables, the integral in \eqref{prop4} may be expressed as
\[
\log x\int_1^{\infty}\frac{e^{-u}}{u^2}du.
\]
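Explicitly, this follows from the substitution $u=(\sigma-\frac12)\log x$, under which $x^{\frac12-\sigma}=e^{-u}$, $d\sigma=du/\log x$, and the lower limit $\sigma=\sigma_x$ corresponds to $u=1$:
\[
\int_{\sigma_x}^{\infty}\frac{x^{\frac12-\sigma}}{(\sigma-\frac12)^2}\,d\sigma=\int_{1}^{\infty}\frac{e^{-u}\log^2 x}{u^2}\cdot\frac{du}{\log x}=\log x\int_1^{\infty}\frac{e^{-u}}{u^2}\,du.
\]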
Hence the last term on the first line in \eqref{prop4} is a constant. Using \eqref{prop4} to estimate $\log|\zeta(s_x)|$ in \eqref{prop3} and recalling that $\sigma_x=\frac12+\frac{1}{\log x}$ and $x\le T^2$, we obtain
\[
\log|\zeta'(\rho)|\le \re \sum_{n\le x}\frac{\Lambda(n)}{n^{\sigma_x+i\gamma}\log n}\frac{\log(x/n)}{\log x}+\frac{\tilde{F}_x(\rho)}{\log x}(e^{-1}-\tfrac32)+\log\log T + \frac{\log T}{\log x}+O(1).
\]
Since $\tilde{F}_x(\rho)>0$ and $e^{-1}-\frac32<0$, we may omit the second term on the right-hand side here and still have an upper bound for the left-hand side. That is, we have
\be\label{prop5}
\log|\zeta'(\rho)|\le \re\sum_{n\le x}\frac{\Lambda(n)}{n^{\sigma_x+i\gamma}\log n}\frac{\log (x/n)}{\log x}+\log\log T+\frac{\log T}{\log x}+O(1).
\ee
The sum in \eqref{prop5} is supported on prime powers, and the prime powers $n=p^m$ with $m\geq 3$ contribute $O(1)$. Furthermore, as noted by Harper~\cite{Ha}, the sum over $n=p^2$ for $\log T<p\le \sqrt{x}$ is also bounded. Consequently, we conclude that
\[
\log|\zeta'(\rho)|\le \re\sum_{n\le x}\frac{\Lambda_{\cL}(n)}{n^{\sigma_x+i\gamma}\log n}\frac{\log (x/n)}{\log x}+\log\log T+\frac{\log T}{\log x}+O(1).
\]
This completes the proof of Proposition~\ref{Prop 1}.
\end{proof}
\section{Notation and Setup}
\noindent Let $N(T,2T)$ denote the number of $\rho=\frac12+i\gamma$ with $T<\gamma\le 2T$, i.e. $$N(T,2T)=N(2T)-N(T).$$
Our approach is to prove the following upper bound for discrete moments on dyadic intervals.
\begin{proposition}\label{Prop 2}
Assume RH. Let $k>0$. Then
\[\frac{1}{N(T,2T)}\sum_{T<\gamma\le 2T}|\zeta'(\rho)|^{2k}\ll_k(\log T)^{k(k+2)}\]
as $T\to \infty$.
\end{proposition}
\noindent Theorem~\ref{Thm 1} follows from Proposition~\ref{Prop 2}. To see this, first divide the interval $(0,T]$ into dyadic subintervals $(2^{-i}T,2^{1-i}T]$ for $i\geq 1$. Second, note that
\[N(2^{-i}T,2^{1-i}T)\asymp 2^{1-i}N(T);\]
this follows from the Riemann-von Mangoldt formula (see Ch. 15 of \cite{Da}), which says
\[N(T)=\tfrac{T}{2\pi}\log \tfrac{T}{2\pi e}+O(\log T).\]
Applying Proposition~\ref{Prop 2} to each subinterval and summing over $i$ yields the conclusion of Theorem~\ref{Thm 1}.
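Concretely, applying Proposition~\ref{Prop 2} with $2^{-i}T$ in place of $T$ on each subinterval gives
\[
\sum_{0<\gamma\le T}|\zeta'(\rho)|^{2k}=\sum_{i\ge 1}\,\sum_{2^{-i}T<\gamma\le 2^{1-i}T}|\zeta'(\rho)|^{2k}\ll_k\sum_{i\ge 1}2^{1-i}N(T)(\log T)^{k(k+2)}\ll_k N(T)(\log T)^{k(k+2)},
\]
since $\log(2^{-i}T)\le \log T$ for each $i\ge 1$ and the geometric series converges.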
In order to prove Proposition~\ref{Prop 2}, we begin by defining an increasing geometric sequence $\{\beta_i\}$ of real numbers by
\[
\beta_i=
\begin{cases}
\frac{20^{i-1}}{(\log\log T)^2}&\hbox{if }i\geq 1,\\
\hfil 0 &\hbox{if }i=0.
\end{cases}
\]
We will not need all $i\geq 0$, and we take the upper threshold of the index as
\[
\cI:=\max\{i:\beta_i\le e^{-1000k}\}.
\]
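Note that $\beta_{\cI}=20^{\cI-1}(\log\log T)^{-2}\le e^{-1000k}<20\,\beta_{\cI}$, so that
\[
\cI=\frac{2\log\log\log T}{\log 20}+O_k(1)\asymp \log\log\log T.
\]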
We split $(0,T^{\beta_{\cI}}]$ into disjoint subintervals $I_i=(T^{\beta_{i-1}},T^{\beta_i}]$ for $1\le i\le \cI$ and define $$w_j(n)=\frac{\Lambda_{\cL}(n)}{n^{1/(\beta_j\log T)}\log n}\frac{\log (T^{\beta_j}/n)}{\log T^{\beta_j}}$$
for $1\le j\le \cI$. Setting
\[
G_{i,j}(t)=\re\sum_{n\in I_i}\frac{w_j(n)}{\sqrt{n}}n^{-it}
\]
for $1\le i\le j\le \cI$, the conclusion of Proposition~\ref{Prop 1} can be written
\be\label{newinequality}
\log |\zeta'(\rho)|\le \sum_{i=1}^jG_{i,j}(\gamma)+\log\log T+\beta_j^{-1}+O(1).
\ee
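Here we have taken $x=T^{\beta_j}$ in Proposition~\ref{Prop 1}, so that
\[
\sigma_x=\tfrac12+\tfrac{1}{\beta_j\log T},\qquad \frac{\log T}{\log x}=\beta_j^{-1},
\]
and the sum over $n\le T^{\beta_j}$ splits into the pieces $G_{i,j}(\gamma)$ according to the partition of $(0,T^{\beta_j}]$ by the intervals $I_1,\ldots,I_j$.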
We also need a particular random model for $G_{i,j}(\gamma)$. Let $\{X_p\}$ be a sequence of independent random variables indexed by the primes, where each $X_p$ is uniformly distributed on the unit circle in the complex plane. If $n$ has prime factorization $n=p_1^{h_1}\cdots p_r^{h_r}$, then we define
\[
X_n=X_{p_1}^{h_1}\cdots X_{p_r}^{h_r}.
\]
Thus $X_n$ is a random completely multiplicative function. We then define the random model $G_{i,j}(X)$ as
\[
G_{i,j}(X)=\re \sum_{n\in I_i}\frac{w_j(n)}{\sqrt{n}}X_n.
\]
Next we sort the $\gamma$ in the interval $(T,2T]$ into subsets based on the size of $G_{i,j}(\gamma)$.
First let
\be\label{defT}
\cT=\{T<\gamma\le 2T:|G_{i,\cI}(\gamma)|\le \beta_i^{-3/4}\text{ for } 1\le i\le \cI\}.
\ee
This can be thought of as the \emph{best} set of $\gamma$, those for which $\exp 2k\re G_{i,\cI}(\gamma)$ can be approximated well by a short truncation of its Maclaurin series for every $1\le i\le \cI$ (see Lemma~\ref{Lem-Taylor} below). Similarly we define
\begin{multline}\label{defS(j)}
S(j)=\{T<\gamma\le 2T:|G_{i,\ell}(\gamma)|\le \beta_i^{-3/4}\text{ for }1\le i\le j\text{ and }i\le \ell\le \cI,\\
\text{ but }|G_{j+1,\ell}(\gamma)|>\beta_{j+1}^{-3/4}\text{ for some }j+1\le \ell\le \cI\}
\end{multline}
for $1\le j\le \cI-1$. The remaining subset $S(0)$ is
\be\label{defS(0)}
S(0)=\{T<\gamma\le 2T:|G_{1,\ell}(\gamma)|>\beta_1^{-3/4}\text{ for some }1\le \ell \le \cI\}.
\ee
In a certain sense, the sets $S(j)$ ($1\le j<\cI$) are not as \emph{good} as $\cT$, but they are not as \emph{bad} as $S(0)$. This is evident in the fact that Lemma~\ref{Lem-Taylor} below does not say anything about $S(0)$. However, we will see in \S 6.3 that the contribution of $\gamma\in S(0)$ in Proposition~\ref{Prop 2} is negligible.
\section{Some lemmas}
\noindent The main ingredient in our proof is a uniform version of Landau's formula~\cite{La}. This was originally proved by Gonek~\cite{Go93} and has been studied in further detail by many others (e.g. \cite{FoSoZa, FoZa, Fu}). The version we use here is essentially the one found in \cite{Ra1}. We recall that we take $\Lambda(x)=0$ if $x$ is not an integer.
\begin{lemma}\label{Lem-Landau}
Assume RH. Let $T$ be large. Suppose $a$ and $b$ are positive integers with $a>b$. Then
\[
\sum_{T<\gamma\le 2T}(a/b)^{i\gamma}=-\frac{T}{2\pi}\frac{\Lambda(a/b)}{\sqrt{a/b}}+O\big(\sqrt{ab}(\log T)^2\big).
\]
\end{lemma}
\noindent If $a<b$, then we take the complex conjugate of the left-hand side above and apply the lemma to $b/a$. This yields a main term of
\[
-\frac{T}{2\pi}\frac{\Lambda(b/a)}{\sqrt{b/a}}
\]
on the right-hand side. The next lemma is an easy consequence of Taylor's theorem.
\begin{lemma}\label{Lem-Taylor}
Let $k>0$ and suppose $\gamma \in \cT$. Then
\[
\exp \bigg(2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma)\bigg)\ll \prod_{i=1}^{\cI}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2
\]
as $T\to \infty$. If, instead, $\gamma\in S(j)$ for some $1\le j\le \cI-1$, then
\[
\exp \bigg(2k\sum_{i=1}^jG_{i,j}(\gamma)\bigg)\ll \prod_{i=1}^j\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,j}(\gamma))^n}{n!}\right)^2.
\]
The implied constants are independent of $k$.
\end{lemma}
\begin{proof}
We prove the first statement, as the second follows from a similar proof. Recall from \eqref{defT} that $\gamma\in \cT$ means $|G_{i,\cI}(\gamma)|\le \beta_i^{-3/4}$ for all $i\le \cI$. First suppose $e^2k\beta_{i_0}^{-3/4}<1$ for some $i_0\le \cI$. Then $[e^2k\beta_i^{-3/4}]=0$ for all $i\ge i_0$. In this case, we use the trivial estimate
\[
\exp\left(2k\sum_{i=i_0}^{\cI}G_{i,\cI}(\gamma)\right)\le \exp\left(2k\sum_{i=i_0}^{\cI}\beta_i^{-3/4}\right).
\]
The sum on the right-hand side of this inequality is at most
\[
\frac{\beta_{i_0}^{-3/4}}{1-20^{-3/4}}\le \frac{1}{2k},
\]
as we have assumed $e^2k\beta_{i_0}^{-3/4}<1$. Hence
\be\label{smallk}
\exp\left(2k\sum_{i=i_0}^{\cI}G_{i,\cI}(\gamma)\right)\le e\prod_{i=i_0}^{\cI}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2,
\ee
since the sums on the right-hand side are identically $1$. If we may take $i_0=1$, then we are done. Thus it suffices to assume $e^2k\beta_i^{-3/4}\ge 1$ for $i<i_0$. By Taylor's theorem with explicit remainder, we have
\be\label{taylor1}
e^x\left(1-\frac{e^{|x|}|x|^{N+1}}{(N+1)!}\right)\le \sum_{n=0}^N\frac{x^n}{n!}
\ee
for $x\in \R$ and any natural number $N$.
We take $x=kG_{i,\cI}(\gamma)$ and $N=[e^2k\beta_i^{-3/4}]$. Using the inequality $n!\geq (n/e)^n$, it can be shown that
\[
\frac{e^{|x|}|x|^{N+1}}{([e^2k\beta_i^{-3/4}]+1)!}\le e^{-k\beta_i^{-3/4}}
\]
for any $i\le \cI$. Using this in \eqref{taylor1}, we find that
\[
e^{kG_{i,\cI}(\gamma)}\Big(1-e^{-k\beta_i^{-3/4}}\Big)\le \sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}.
\]
After squaring both sides of this inequality for all $i<i_0$, it follows that
\be\label{taylor2}
\exp \left(2k\sum_{i=1}^{i_0-1}G_{i,\cI}(\gamma)\right)\prod_{i=1}^{i_0-1}\Big(1-e^{-k\beta_i^{-3/4}}\Big)^2\le \prod_{i=1}^{i_0-1}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2.
\ee
The product on the left-hand side of \eqref{taylor2} is
\be\label{taylor3}
\ge \exp\left(-2\sum_{i=1}^{i_0-1}\beta_i^{3/4}/k\right);
\ee
this follows from the inequality $1-e^{-u}\ge e^{-1/u}$ for $u>0$.
Since $e^2k\beta_{i_0-1}^{-3/4}\ge 1$, the sum here is
\[
\sum_{i=1}^{i_0-1}\beta_i^{3/4}\le \frac{\beta_{i_0}^{3/4}}{20^{3/4}-1}\le 2\beta_{i_0-1}^{3/4}\le 2e^2k.
\]
This bound with \eqref{taylor2} and \eqref{taylor3} implies
\be\label{taylor4}
\exp\left(2k\sum_{i=1}^{i_0-1}G_{i,\cI}(\gamma)\right)\le e^{4e^2}\prod_{i=1}^{i_0-1}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2.
\ee
Combining this inequality with \eqref{smallk}, we conclude
\[
\exp\left(2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma)\right)\ll \prod_{i=1}^{\cI}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2
\]
with implied constant $e^{4e^2+1}$. Lastly, suppose there is no such $i_0$, i.e. $e^2k\beta_{\cI}^{-3/4}\ge 1$. Then the argument used to derive \eqref{taylor4} may be applied to all $i\le \cI$.
\end{proof}
The following lemma gives an upper bound on mixed discrete moments of the $G_{i,j}(\gamma)$ in terms of corresponding mixed moments of the random models $G_{i,j}(X)$.
\begin{lemma}\label{Lem-mixed moments}
Assume RH. Let $k>0$ and let $j$ be a natural number with $j\le \cI$. Let $\hat{\ell}=(\ell_1,\ldots,\ell_j)$ be a $j$-tuple in $\Z_{\geq 0}^j$ whose components satisfy $\ell_i\le 2e^2k\beta_i^{-3/4}$ for $1\le i\le j$. Then
\[
\sum_{T<\gamma\le 2T}\prod_{i=1}^jG_{i,j}^{\ell_i}(\gamma)\le N(T,2T)\,\E \left[\prod_{i=1}^jG_{i,j}(X)^{\ell_i}\right]+O\big(T^{e/25}(\log T)^2\big).
\]
\end{lemma}
\begin{proof}
We begin with the identity $\re(z)=\frac{1}{2}(z+\overline{z})$ and write
\[
G_{i,j}(\gamma)=\tfrac{1}{2}\sum_{n\in I_i}\frac{w_j(n)}{\sqrt{n}}(n^{-i\gamma}+n^{i\gamma}).
\]
Thus we can expand the $\ell_i$-th power of $G_{i,j}(\gamma)$ as
\[
2^{-\ell_i}\sum_{n_{i,1},\ldots,n_{i,\ell_i}\in I_i}\frac{w_j(n_{i,1})\cdots w_j(n_{i,\ell_i})} {\sqrt{n_{i,1}\cdots n_{i,\ell_i}}}\prod_{l=1}^{\ell_i}\Big(n_{i,l}^{-i\gamma}+n_{i,l}^{i\gamma}\Big),
\]
where $n_{i,l}$ denotes the $l$-th entry of the $\ell_i$-tuple $(n_{i,1},\ldots,n_{i,\ell_i})\in \N^{\ell_i}$. Multiplying all such expressions for $i\le j$ together and summing over $\gamma$, we see that
\begin{multline}\label{maingammasum}
\sum_{T<\gamma\le 2T}\prod_{i=1}^jG_{i,j}^{\ell_i}(\gamma)\\
=2^{-(\ell_1+\cdots+\ell_j)}\sum_{T<\gamma\le 2T}\sum_{\hat{n}_1\in (I_1\cap \N)^{\ell_1}}\cdots \sum_{\hat{n}_j\in (I_j\cap \N)^{\ell_j}}\prod_{i=1}^j\prod_{l=1}^{\ell_i}\frac{w_j(n_{i,l})}{\sqrt{n_{i,l}}}\Big(n_{i,l}^{-i\gamma}+n_{i,l}^{i\gamma}\Big).
\end{multline}
Moving the sum over $\gamma$ inside, we ultimately need to consider sums of the form
\be\label{gamma}
\sum_{T<\gamma\le 2T}\prod_{i=1}^j\prod_{l=1}^{\ell_i}\Big(n_{i,l}^{-i\gamma}+n_{i,l}^{i\gamma}\Big).
\ee
Let $\hat{e}$ denote a $(\ell_1+\cdots +\ell_j)$-tuple in $\{-1,1\}^{\ell_1}\times \cdots \times \{-1,1\}^{\ell_j}=\{-1,1\}^{L_j}$,
where $L_j=\ell_1+\cdots +\ell_j$. Then we may expand the double product in \eqref{gamma} as
\be\label{gammasum}
\sum_{\hat{e}\in \{-1,1\}^{L_j}}\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{ie_{i,l}\gamma},
\ee
where $e_{i,l}$ is the $l$-th entry ($1\le l\le \ell_i$) of the $i$-th piece $\{-1,1\}^{\ell_i}$ of $\hat{e}\in \{-1,1\}^{L_j}$. Alternatively, $e_{i,l}$ is the $(\ell_1+\cdots+\ell_{i-1}+l)$-th entry of the full $L_j$-tuple. If
\[
\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}=1,
\]
then summing \eqref{gammasum} over $\gamma$ simply yields $N(T,2T)$, the number of $\gamma$ in $(T,2T]$. For all other terms we may apply Lemma~\ref{Lem-Landau}. Hence \eqref{gamma} is
\begin{multline}\label{postlandau}
N(T,2T)\,\Big(\sum_{\substack{\hat{e}\in \{-1,1\}^{L_j}\\ n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}=1}}1\Big)-\frac{T}{\pi}\sum_{\substack{\hat{e}\in \{-1,1\}^{L_j}\\ n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}>1}}\frac{\Lambda(n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}})}{\sqrt{n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}}}\\
+O\Big(2^{L_j}\sqrt{n_{1,1}\cdots n_{j,\ell_j}}(\log T)^2\Big).
\end{multline}
Here we have grouped together conjugate pairs in the second sum. The term involving the second sum in \eqref{postlandau} is non-positive due to the factor $-T/\pi$. Hence we may omit the whole term to obtain an upper bound. Taking this upper bound and using it in \eqref{maingammasum}, we obtain
\begin{multline}\label{postlandausum}
\sum_{T<\gamma\le 2T}\prod_{i=1}^jG_{i,j}^{\ell_i}(\gamma)\\
\le N(T,2T)\sum_{\substack{\hat{n}_1\in (I_1\cap\N)^{\ell_1}\\ n_{1,l}\in I_1\\ \text{for }1\le l\le \ell_1}}\cdots \sum_{\substack{\hat{n}_j\in (I_j\cap\N)^{\ell_j}\\ n_{j,l}\in I_j\\ \text{for }1\le l\le \ell_j}}\bigg(\prod_{i=1}^j\prod_{l=1}^{\ell_i}\frac{w_j(n_{i,l})}{\sqrt{n_{i,l}}}\bigg)\Big(2^{-L_j}\sum_{\substack{\hat{e}\in \{-1,1\}^{L_j}\\ n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}=1}}1\Big)\\
+O\Big((\log T)^2 \sum_{\substack{\hat{n}_1\in (I_1\cap\N)^{\ell_1}\\ n_{1,l}\in I_1\\ \text{for }1\le l\le \ell_1}}\cdots \sum_{\substack{\hat{n}_j\in (I_j\cap\N)^{\ell_j}\\ n_{j,l}\in I_j\\ \text{for }1\le l\le \ell_j}}1\Big).
\end{multline}
The error term here is
\[
\ll (\log T)^2\prod_{i=1}^jT^{\beta_i\ell_i}\ll T^{10e^2k e^{-250k}}(\log T)^2\le T^{e/25}(\log T)^2.
\]
To handle the main term, we detect the condition $n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}=1$ with the expectation
\[
\E\Big[X_{n_{1,1}}^{e_{1,1}}\cdots X_{n_{j,\ell_j}}^{e_{j,\ell_j}}\Big]=
\begin{cases}
1 &\hbox{if }n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}=1,\\
0 &\hbox{otherwise}.
\end{cases}
\]
Then the leading term in \eqref{postlandausum} may be written
\[
N(T,2T)\sum_{\substack{\hat{n}_1\in (I_1\cap\N)^{\ell_1}\\ n_{1,l}\in I_1\\ \text{for }1\le l\le \ell_1}}\cdots \sum_{\substack{\hat{n}_j\in (I_j\cap\N)^{\ell_j}\\ n_{j,l}\in I_j\\ \text{for }1\le l\le \ell_j}}\bigg(\prod_{i=1}^j\prod_{l=1}^{\ell_i}\frac{w_j(n_{i,l})}{\sqrt{n_{i,l}}}\bigg)\Big(2^{-L_j}\sum_{\hat{e}\in \{-1,1\}^{L_j}}\E\Big[X_{n_{1,1}}^{e_{1,1}}\cdots X_{n_{j,\ell_j}}^{e_{j,\ell_j}}\Big]\Big).
\]
The innermost sum here is
\[
\sum_{\hat{e}\in\{-1,1\}^{L_j}}\E\left[\prod_{i=1}^j\prod_{l=1}^{\ell_i}X_{n_{i,l}}^{e_{i,l}}\right]=\E\left[\prod_{i=1}^j\prod_{l=1}^{\ell_i}\big(X_{n_{i,l}}^{-1}+X_{n_{i,l}}\big)\right].
\]
Moving the expectation outside, we see that our leading term in \eqref{postlandausum} is
\[
N(T,2T)\,\E\bigg[2^{-L_j}\sum_{\hat{n}_1\in (I_1\cap \N)^{\ell_1}}\cdots \sum_{\hat{n}_j\in (I_j\cap \N)^{\ell_j}}\prod_{i=1}^j\prod_{l=1}^{\ell_i}\frac{w_j(n_{i,l})}{\sqrt{n_{i,l}}}\big(X_{n_{i,l}}^{-1}+X_{n_{i,l}}\big)\bigg].
\]
Now we reverse our steps leading up to \eqref{maingammasum} with $n_{i,l}^{-i\gamma}$ replaced with $X_{n_{i,l}}$. This completes the proof.
\end{proof}
We are now prepared to prove an upper bound for the average of $\exp (2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma))$ over $\gamma\in \cT$. By Proposition~\ref{Prop 1}, this is approximately the corresponding average of $|\zeta'(\rho)|^{2k}$ over $\cT$.
\begin{lemma}\label{Lem-T}
Assume RH. Let $k>0$. Then
\[
\sum_{\gamma\in \cT}\exp\left(2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma)\right)
\ll N(T,2T)\,\E \left[\exp\left(2k\sum_{i=1}^{\cI}G_{i,\cI}(X)\right)\right]+e^{2k}T^{e/5}(\log T)^2
\]
as $T\to \infty$.
\end{lemma}
\begin{proof}
By Lemma~\ref{Lem-Taylor} we have
\be\label{presquare}
\sum_{\gamma\in \cT}\exp\left(2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma)\right)\ll\sum_{\gamma\in \cT}\prod_{i=1}^{\cI}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,\cI}(\gamma))^n}{n!}\right)^2.
\ee
All of the terms here are squared and, hence, nonnegative. Consequently, we may extend the sum to all $T<\gamma\le 2T$ and still have an upper bound. Hence, after expanding the square, we see that the right-hand side of \eqref{presquare} is bounded from above by
\[
\sum_{T<\gamma\le 2T}\prod_{i=1}^{\cI}\left(\sum_{m,n=0}^{[e^2k\beta_i^{-3/4}]}\frac{k^{m+n}G_{i,\cI}(\gamma)^{m+n}}{(m!)(n!)}\right).
\]
We expand the product and move the sum over $\gamma$ inside to get
\be\label{postsquare}
\sum_{m_1,n_1=0}^{[e^2k\beta_1^{-3/4}]}\cdots \sum_{m_{\cI},n_{\cI}=0}^{[e^2k\beta_{\cI}^{-3/4}]}\frac{k^{m_1+n_1+\cdots+m_{\cI}+n_{\cI}}}{(m_1!)(n_1!)\cdots (m_{\cI}!)(n_{\cI}!)}\sum_{T<\gamma\le 2T}\prod_{i=1}^{\cI}G_{i,\cI}(\gamma)^{m_i+n_i}.
\ee
By Lemma~\ref{Lem-mixed moments}, the inner-most sum here is
\[
\sum_{T<\gamma\le 2T}\prod_{i=1}^{\cI}G_{i,\cI}(\gamma)^{m_i+n_i}\le N(T,2T)\,\E \left[ \prod_{i=1}^{\cI}G_{i,\cI}(X)^{m_i+n_i}\right]+O(T^{e/25}(\log T)^2).
\]
Therefore \eqref{postsquare} is
\begin{multline}\label{postexp}
\le N(T,2T)\,\E \left[\sum_{m_1,n_1=0}^{[e^2k\beta_1^{-3/4}]}\cdots \sum_{m_{\cI},n_{\cI}=0}^{[e^2k\beta_{\cI}^{-3/4}]}\prod_{i=1}^{\cI}\frac{k^{m_i+n_i}}{(m_i!)(n_i!)}G_{i,\cI}(X)^{m_i+n_i}\right]\\
+O\left(T^{e/25}(\log T)^2 \sum_{m_1,n_1=0}^{[e^2k\beta_1^{-3/4}]}\cdots \sum_{m_{\cI},n_{\cI}=0}^{[e^2k\beta_{\cI}^{-3/4}]}\prod_{i=1}^{\cI}\frac{k^{m_i+n_i}}{(m_i!)(n_i!)}\right).
\end{multline}
The $O$-term may be refactored as
\be\label{postexpoterm}
T^{e/25}(\log T)^2\prod_{i=1}^{\cI}\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{k^n}{n!}\right)^2\le e^{2k}T^{e/25}(\log T)^2.
\ee
For the main term in \eqref{postexp}, note that $\prod_{i=1}^{\cI}\E[G_{i,\cI}(X)^{m_i+n_i}]$ is nonnegative. To see this, recall from \eqref{postlandausum} that it may be expressed as a sum of nonnegative terms. Therefore we may extend the sums to all $m_1,n_1,\ldots,m_{\cI},n_{\cI}\geq 0$ to get an upper bound.
Hence our main term in \eqref{postexp} is
\[
\le N(T,2T)\,\E\left[\sum_{m_1,n_1=0}^{\infty}\cdots \sum_{m_{\cI},n_{\cI}=0}^{\infty}\prod_{i=1}^{\cI} \frac{k^{m_i+n_i}}{(m_i!)(n_i!)}G_{i,\cI}(X)^{m_i+n_i}\right].
\]
This may be refactored as
\[
N(T,2T)\,\E\left[\prod_{i=1}^{\cI}\Big(\sum_{n=0}^{\infty}\frac{k^n}{n!}G_{i,\cI}(X)^n\Big)^2\right]=N(T,2T)\,\E \left[\exp \left(2k\sum_{i=1}^{\cI}G_{i,\cI}(X)\right)\right].
\]
Combining this with \eqref{postexpoterm} and \eqref{postexp}, we obtain the claimed upper bound.
\end{proof}
For the average over $S(j)$, we have to be more careful than we were for $\cT$ in Lemma~\ref{Lem-T}. This is because there are $\cI \asymp \log\log\log T$ subsets $S(j)$. We will exploit the fact that $\gamma \in S(j)$ implies $|G_{j+1,\ell}(\gamma)|\ge \beta_{j+1}^{-3/4}$ for some $j+1\le \ell\le \cI$.
\begin{lemma}\label{Lem-S(j)}
Assume RH. Let $k>0$. For $1\le j\le \cI-1$, we have
\begin{multline*}
\sum_{\gamma\in S(j)}\exp \left(2k\sum_{i=1}^jG_{i,j}(\gamma)\right)
\ll e^{-1/21\beta_{j+1}\log (1/\beta_{j+1})}N(T,2T)\E\left[\exp\left(2k\sum_{i=1}^jG_{i,j}(X)\right)\right]\\
+T^{(e+5)/25}(\log T)^2
\end{multline*}
as $T\to \infty$. We also have
\[
\#S(0)\ll N(T,2T)e^{-(\log\log T)^2/10}+e^{2k}T^{(e+5)/25}(\log T)^2.
\]
\end{lemma}
\begin{proof}
As in the previous proof, we apply Lemma~\ref{Lem-Taylor} to see that
\be\label{jpresquare}
\sum_{\gamma\in S(j)}\exp\left(2k\sum_{i=1}^jG_{i,j}(\gamma)\right)\ll \sum_{\gamma\in S(j)}\prod_{i=1}^j\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,j}(\gamma))^n}{n!}\right)^2.
\ee
This is valid for $0\le j< \cI$ if we take the empty sum to be $0$. By \eqref{defS(j)} and \eqref{defS(0)}, $\gamma\in S(j)$ implies $1\le \beta_{j+1}^{3/4}|G_{j+1,\ell}(\gamma)|$ for some $j+1\le \ell\le \cI$. Hence \eqref{jpresquare} is
\be\label{jexploit}
\le \sum_{\ell=j+1}^{\cI} \sum_{\gamma \in S(j)}\prod_{i=1}^j\left(\sum_{n=0}^{[e^2k\beta_i^{-3/4}]}\frac{(kG_{i,j}(\gamma))^n}{n!}\right)^2\Big(\beta_{j+1}^{3/4}G_{j+1,\ell}(\gamma)\Big)^{2[1/10\beta_{j+1}]}.
\ee
Because of the squared terms, we may extend the sum over $\gamma\in S(j)$ to $T<\gamma\le 2T$. Expand the squares and product as we did in the proof of Lemma~\ref{Lem-T}. Thus \eqref{jexploit} is bounded from above by
\begin{multline*}
\sum_{\ell=j+1}^{\cI}(\beta_{j+1}^{3/4})^{2[1/10\beta_{j+1}]}\sum_{m_1,n_1=0}^{[e^2k\beta_1^{-3/4}]}\cdots \sum_{m_j,n_j=0}^{[e^2k\beta_j^{-3/4}]}\frac{k^{m_1+n_1+\cdots+m_j+n_j}}{(m_1!)(n_1!)\cdots (m_j!)(n_j!)}\\
\times\sum_{T<\gamma\le 2T}G_{j+1,\ell}(\gamma)^{2[1/10\beta_{j+1}]}\prod_{i=1}^jG_{i,j}(\gamma)^{m_i+n_i}.
\end{multline*}
We can get an upper bound for this new expression by carefully following the proof of Lemma~\ref{Lem-T} for each $\ell$. Namely, it is
\begin{multline}\label{jpostexp}
\ll (\beta_{j+1}^{3/4})^{2[1/10\beta_{j+1}]}\sum_{\ell=j+1}^{\cI}N(T,2T)\E \left[G_{j+1,\ell}(X)^{2[1/10\beta_{j+1}]}\exp\left(2k\sum_{i=1}^jG_{i,j}(X)\right)\right]\\
+T^{e/25+1/5}(\log T)^2.
\end{multline}
There is no $e^{2k}$ in the $O$-term in this case, since
\[
\big(\beta_{j+1}^{3/4}\big)^{2[1/10\beta_{j+1}]}\le e^{-750k/20}\le e^{-2k}.
\]
Consider the expectation in \eqref{jpostexp}. If $T$ is large, then we certainly have $T^{\beta_1}>\log T$. Likewise we also have $p^2\le T^{\beta_1}$ if $p<\log T$. By the definition of $\Lambda_{\cL}$, it follows that $G_{j+1,\ell}(X)$ and $G_{i,j}(X)$ are independent for $i\le j$ if $j\ge 1$. Thus the expectation in \eqref{jpostexp} is
\be\label{independent}
\E\Big[G_{j+1,\ell}(X)^{2[1/10\beta_{j+1}]}\Big]\cdot \E\left[\exp\left(2k\sum_{i=1}^jG_{i,j}(X)\right)\right].
\ee
This still holds for $j=0$, as the second expectation here is precisely $1$. We estimate the first expectation in \eqref{independent} for $j\ge 0$ as follows.
For $j\ge 1$, this first expectation is
\[
\frac{(2[1/10\beta_{j+1}])!}{2^{2[1/10\beta_{j+1}]}([1/10\beta_{j+1}])!}\bigg(\sum_{p\in I_{j+1}}\frac{w_{\ell}(p)^2}{p}\bigg)^{[1/10\beta_{j+1}]}\ll \bigg(\frac{1}{10e\beta_{j+1}}\sum_{T^{\beta_j}<p\le T^{\beta_{j+1}}}\frac{1}{p}\bigg)^{[1/10\beta_{j+1}]}
\]
by Stirling's approximation.
The sum of reciprocal primes here is $\le 5$ for large $T$.
It follows that
\be\label{expbound}
\big(\beta_{j+1}^{3/4}\big)^{2[1/10\beta_{j+1}]}\E\Big[G_{j+1,\ell}(X)^{2[1/10\beta_{j+1}]}\Big]\ll e^{-\frac{1}{2}[1/10\beta_{j+1}]\log(1/\beta_{j+1})}
\ee
for sufficiently large $T$ and any $\ell\ge j+1\ge 2$. Now, there are $\cI-j$ terms in the sum over $\ell$ in \eqref{jpostexp}. Observe that $\beta_{\cI}\le 1$ implies
\[
\cI-j=\frac{\log (\beta_{\cI}/\beta_j)}{\log 20}\ll \log (1/\beta_{j+1}).
\]
This bound along with \eqref{expbound} implies that \eqref{jpostexp} is
\[
\ll e^{-1/21\beta_{j+1}\log (1/\beta_{j+1})}N(T,2T)\,\E\left[\exp\left(2k\sum_{i=1}^jG_{i,j}(X)\right)\right]+T^{(e+5)/25}(\log T)^2
\]
for $j\ge 1$. This proves the first claim. For $j=0$, the expectation
\[
\E\Big[G_{j+1,\ell}(X)^{2[1/10\beta_{j+1}]}\Big]
\]
from \eqref{jexploit} is slightly more complicated than the $j\ge 1$ case. This is because $G_{1,\ell}(X)$ includes some nonzero terms corresponding to squared primes, and clearly $X_p$ and $X_{p^2}=X_p^2$ are not independent.
Since $w_{\ell}(n)\le 1$, the sum over squared primes in $G_{1,\ell}(X)$ is at most
\[
\sum_{p\le \log T}\frac{w_{\ell}(p^2)}{p}\le 2\log\log\log T
\]
for large $T$. Thus
\[
G_{1,\ell}(X)^{2[1/10\beta_1]}
\ll 2^{2[1/10\beta_1]}\bigg(\re \sum_{p\in I_1}\frac{w_{\ell}(p)}{\sqrt{p}}X_p\bigg)^{2[1/10\beta_1]}+2^{4[1/10\beta_1]}(\log\log\log T)^{2[1/10\beta_1]}.
\]
Insert this bound in \eqref{jpostexp}. This yields the upper bound
\[
\ll N(T,2T)\bigg\{\bigg(\frac{4\beta_1^{1/2}}{10e}\sum_{p\in I_1}\frac{w_{\ell}(p)}{p}\bigg)^{[1/10\beta_1]}+(4\beta_1^{3/4}\log\log\log T)^{2[1/10\beta_1]}\bigg\}
+T^{(e+5)/25}(\log T)^2.
\]
Since $4\log\log\log T\le (\log\log T)^{1/2}$ for large $T$, the main term here is
\[
\ll N(T,2T)\Big(e^{-[1/10\beta_1]}+e^{-2[1/10\beta_1]\log\log\log T}\Big)\ll N(T,2T)\,e^{-(\log\log T)^2/10}
\]
as claimed.
\end{proof}
\section{Proof of Proposition~\ref{Prop 2}}
\noindent Observe that
\[
\sum_{T<\gamma\le 2T}|\zeta'(\rho)|^{2k}=\sum_{\gamma\in \cT}|\zeta'(\rho)|^{2k}+\sum_{j=1}^{\cI-1}\sum_{\gamma\in S(j)}|\zeta'(\rho)|^{2k}+\sum_{\gamma\in S(0)}|\zeta'(\rho)|^{2k}.
\]
It suffices to estimate each of these pieces individually.
\subsection{The sum over $\cT$}
Using the inequality \eqref{newinequality} with $j=\cI$, we have
\[
\sum_{\gamma\in \cT}|\zeta'(\rho)|^{2k}\ll_k(\log T)^{2k}\sum_{\gamma\in \cT}\exp\left(2k\sum_{i=1}^{\cI}G_{i,\cI}(\gamma)\right);
\]
we have included the factor $e^{2k/\beta_{\cI}}$ in the implied constant, since $\beta_{\cI}\approx e^{-1000k}$.
By Lemma~\ref{Lem-T}, the right-hand side is
\be\label{cT}
\ll_k N(T,2T)\, (\log T)^{2k}\E\left[\exp 2k\sum_{i=1}^{\cI}G_{i,\cI}(X)\right]+T^{(e+5)/25}(\log T)^{2k+2}.
\ee
For large $T$, no two intervals $I_1,\ldots,I_{\cI}$ contain powers of the same prime. That is, we may use independence of the random variables $X_p$ to write the expectation in \eqref{cT} as
\be\label{cTexp}
\prod_{i=1}^{\cI}\E\big[\exp 2kG_{i,\cI}(X)\big].
\ee
For $i\ge 2$, we recall that
\[
G_{i,\cI}(X)=\sum_{p\in I_i}\frac{w_{\cI}(p)}{\sqrt{p}}X_p.
\]
By a standard calculation, we have
\[
\E\big[\exp\big(2kG_{i,\cI}(X)\big)\big]=\prod_{p\in I_i}I_0\bigg(2k\frac{w_{\cI}(p)}{\sqrt{p}}\bigg),
\]
where $I_0(z)=\sum_{n=0}^{\infty}\frac{(z/2)^{2n}}{(n!)^2}$ is the modified Bessel function of the first kind.
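Indeed, writing $X_p=e^{i\theta}$ with $\theta$ uniformly distributed on $[0,2\pi)$, for any real $a$ we have
\[
\E\big[e^{2a\re X_p}\big]=\frac{1}{2\pi}\int_0^{2\pi}e^{2a\cos\theta}\,d\theta=I_0(2a),
\]
and the factors corresponding to distinct primes are independent; here we take $a=kw_{\cI}(p)/\sqrt{p}$.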
Now consider the $i=1$ term in \eqref{cTexp}. For any prime $p>\log T$, the calculation is exactly as it was above for $i\ge 2$. For $p\le \log T$, we have both $X_p$ and $X_p^2$ appearing in the expression for $G_{1,\cI}(X)$. For these, we must consider
\[
\E\left[\exp \left(2k\re\frac{w_{\cI}(p)}{\sqrt{p}}X_p+2k\re \frac{w_{\cI}(p^2)}{p}X_p^2\right)\right].
\]
Note that we may replace $\re (X_p^2)$ with $2(\re X_p)^2-1$; this is a consequence of the double angle formula for cosine. Hence the above may be written
\be\label{exp1}
\E \left[\exp \left(2k\frac{w_{\cI}(p)}{\sqrt{p}}\re X_p\right)\cdot\exp \left(4k\frac{w_{\cI}(p^2)}{p}(\re X_p)^2\right)\right]\exp\left(-2k\frac{w_{\cI}(p^2)}{p}\right).
\ee
The expectation here is
\begin{align*}
&= \sum_{m=0}^{\infty}\frac{(2k)^m}{m!}\left(\frac{w_{\cI}(p)}{\sqrt{p}}\right)^m\sum_{n=0}^{\infty}\frac{(4k)^n}{n!}\left(\frac{w_{\cI}(p^2)}{p}\right)^n\E[(\re X_p)^{m+2n}]\\
&= \sum_{h=0}^{\infty}\frac{k^{2h}}{(2h)!}\left(\frac{w_{\cI}(p)}{\sqrt{p}}\right)^{2h}\sum_{n=0}^{\infty}\frac{k^n}{n!}\left(\frac{w_{\cI}(p^2)}{p}\right)^n\binom{2(h+n)}{h+n}.
\end{align*}
This follows from direct calculation of the moments
\[
\E[(\re X_p)^{m+2n}]=
\begin{cases}
\binom{2(h+n)}{h+n}2^{-2(h+n)}&\hbox{if } m=2h,\\
0 &\hbox{if } m \text{ is odd}.
\end{cases}
\]
Note that $w_{\cI}(p)\le 1$ and $w_{\cI}(p^2)\le \frac12$. Isolating the first three terms (those for which $h+n\le 1$) of the double sum and using the elementary inequality $\binom{2(h+n)}{h+n}\le 2^{2(h+n)}$ for the rest of the terms, we see that the expectation in \eqref{exp1} is
\[
1+\frac{k^2w_{\cI}(p)^2}{p}+\frac{2kw_{\cI}(p^2)}{p}+O\bigg(\frac{e^{3k}}{p^2}\bigg)=I_0\bigg(2k\frac{w_{\cI}(p)}{\sqrt{p}}\bigg)\exp\bigg(2k\frac{w_{\cI}(p^2)}{p}\bigg)\big(1+O_k(1/p^2)\big).
\]
It follows that \eqref{exp1} is
\[
\le I_0\Big(2k\frac{w_{\cI}(p)}{\sqrt{p}}\Big)\big(1+O_k(1/p^2)\big),
\]
and so
\[
\E\bigg[\exp\bigg(2k\re\sum_{n\le T^{\beta_{\cI}}}\frac{w_{\cI}(n)}{\sqrt{n}}X_n\bigg)\bigg]\le \prod_{p\le T^{\beta_{\cI}}}I_0\left(2k\frac{w_{\cI}(p)}{\sqrt{p}}\right)\cdot \prod_{\tilde{p}\le \log T}\big(1+O_k(1/\tilde{p}^2)\big).
\]
Note that, as $T\to \infty$, the product over $\tilde{p}$ converges to some constant depending only on $k$. Since $I_0(2x)\le e^{x^2}$, $w_{\cI}(p)\le 1$ and $w_{\cI}(p^2)\le \frac{1}{2}$, we conclude that this expectation is
\[
\ll_k\exp\bigg(k^2\sum_{p\le T^{\beta_{\cI}}}\frac{1}{p}\bigg)\ll_k (\log T)^{k^2}.
\]
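In the last step we used Mertens' theorem in the form
\[
\sum_{p\le T^{\beta_{\cI}}}\frac{1}{p}=\log\log T^{\beta_{\cI}}+O(1)=\log\log T+O_k(1),
\]
since $\log \beta_{\cI}=O_k(1)$.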
This last bound along with \eqref{cTexp} implies
\[
\sum_{\gamma\in\cT}|\zeta'(\rho)|^{2k}\ll_k N(T,2T)\,(\log T)^{k(k+2)}+T^{(e+5)/25}(\log T)^{2k+2}.
\]
\subsection{The sums over $S(j)$}
Consider $1\le j\le \cI-1$. We proceed as we did for the sum over $\cT$, but we use Lemma~\ref{Lem-S(j)} instead of Lemma~\ref{Lem-T}. By \eqref{newinequality} we have
\be\label{sj1}
\sum_{\gamma\in S(j)}|\zeta'(\rho)|^{2k}\ll_k e^{2k\beta_j^{-1}}(\log T)^{2k}\sum_{\gamma\in S(j)}\exp\left(2k\sum_{i=1}^jG_{i,j}(\gamma)\right).
\ee
Lemma~\ref{Lem-S(j)} implies that the right-hand side is
\begin{multline*}
\ll_k e^{2k\beta_j^{-1}}e^{-1/21\beta_{j+1}\log(1/\beta_{j+1})}N(T,2T)\, (\log T)^{2k}\E\left[\exp\left(2k\sum_{i=1}^jG_{i,j}(X)\right)\right]\\
+e^{2k\beta_j^{-1}}T^{(e+5)/25}(\log T)^{2k+2}.
\end{multline*}
The expectation here is estimated like it was in the case $j=\cI$. We also recall that $\beta_{j+1}=20\beta_j$. It follows that the right-hand side of \eqref{sj1} is
\[
\ll_k e^{2k\beta_j^{-1}}e^{-1/420\beta_j\log(1/\beta_{j+1})}N(T,2T)(\log T)^{k^2}+e^{2k\beta_j^{-1}}T^{(e+5)/25}(\log T)^{2k+2}.
\]
Note that $$2k-\frac{1}{420}\log(1/\beta_{j+1})\le-\frac{8k}{21},$$
since $\beta_{j+1}\le e^{-1000k}$ for $j\le \cI-1$. Hence we see that
\be\label{sj2}
\sum_{\gamma \in S(j)}|\zeta'(\rho)|^{2k}\ll_k e^{-8k/21\beta_j}N(T,2T)(\log T)^{k^2}+e^{2k\beta_j^{-1}}T^{(e+5)/25}(\log T)^{2k+2}.
\ee
Observe that
\[
\sum_{j=1}^{\cI-1}e^{-8k/21\beta_j}\ll_k 1.
\]
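This is because $\beta_j\le \beta_{\cI-1}\le e^{-1000k}/20$ for every $1\le j\le \cI-1$, so each summand is at most $e^{-(8k/21)\cdot 20e^{1000k}}$, and there are only $\cI-1\ll \log\log\log T$ terms in the sum.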
Thus summing \eqref{sj2} over $1\le j\le \cI-1$ yields
\[
\sum_{j=1}^{\cI-1}\sum_{\gamma\in S(j)}|\zeta'(\rho)|^{2k}
\ll_k N(T,2T)(\log T)^{k^2}+T^{(e+5)/25}(\log T)^{2k+2}.
\]
\subsection{The sum over $S(0)$}
By H\"older's inequality, we have
\[
\sum_{\gamma\in S(0)}|\zeta'(\rho)|^{2k}\le \Big(\sum_{\gamma\in S(0)}1\Big)^{1/q}\Big(\sum_{T<\gamma\le 2T}|\zeta'(\rho)|^{2[2k+1]}\Big)^{1/p},
\]
where $p=[2k+1]/k$, $q$ is the conjugate exponent of $p$ (so that $\frac1q=1-\frac1p$), and $[x]$ is the greatest integer less than or equal to $x$.
We estimate the first sum on the right-hand side with the second part of Lemma~\ref{Lem-S(j)}. For the second sum we use \eqref{Mi} with
$\varepsilon=1$. Note that $p\ge 2$ and $q\le 2$. It follows that
\begin{align*}
\sum_{\gamma\in S(0)}|\zeta'(\rho)|^{2k}&\ll_k N(T,2T)\,e^{-(\log\log T)^2/10q}(\log T)^{([2k+1]^2+1)/p}\\
&\le N(T,2T)\,\exp\big(-(\log\log T)^2/20+2([k+2]^2+1)\log\log T\big),
\end{align*}
which is $\ll N(T,2T)$ as $T\to \infty$. Combining this with the estimates for the sums over $\cT$ and the other $S(j)$ completes the proof.
\section{Sketch of the proof of Theorem~\ref{Thm 2}}
\noindent Here we describe how to modify the proof of Theorem~\ref{Thm 1} in order to prove Theorem~\ref{Thm 2}. Similar to Proposition~\ref{Prop 2}, we consider
\[
\sum_{T<\gamma\le 2T}|\zeta(\rho+\alpha)|^{2k}
\]
for a complex number $\alpha$ with $|\alpha|\le (\log T)^{-1}$. By the functional equation for the zeta-function, it suffices to assume $\re(\alpha)\ge 0$ (see the proof of Theorem 1.2 in \cite{Mi}). The approach is largely the same as it was for moments of $|\zeta'(\rho)|$. We modify the weight $w_j(n)$ introduced in Section 4 by defining
\[
w_j(n;\alpha)=\frac{w_j(n)}{n^{\re(\alpha)}}.
\]
This leads us to define
\[
G_{i,j}(t;\alpha)=\re\sum_{n\in I_i}\frac{w_j(n;\alpha)}{\sqrt{n}}n^{-i(t+\im(\alpha))}
\]
and similarly
\[
G_{i,j}(X;\alpha)=\re\sum_{n\in I_i}\frac{w_j(n;\alpha)}{\sqrt{n}}X_n.
\]
We use the inequality
\[
\log |\zeta(\rho+\alpha)|\le \sum_{i=1}^jG_{i,j}(\gamma;\alpha)+\beta_j^{-1}+O(1),
\]
which is essentially Harper's~\cite{Ha} Proposition~\ref{Prop 1}. The proof of the upper bound
\[
\frac{1}{N(T,2T)}\sum_{T<\gamma\le 2T}|\zeta(\rho+\alpha)|^{2k}\ll_k(\log T)^{k^2}
\]
claimed at the end of \S 2 relies on the obvious analogue of Lemma~\ref{Lem-mixed moments}, where $G_{i,j}(\gamma;\alpha)$ takes the place of $G_{i,j}(\gamma)$. The key difference, in comparison with \eqref{gamma} and \eqref{gammasum}, is that we must consider sums of the form
\be\label{alphasum}
\sum_{T<\gamma\le 2T}\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{ie_{i,l}(\gamma+\im(\alpha))}.
\ee
The diagonal terms are still those for which
\[
\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}=1,
\]
so we obtain the same main term as in the proof of Lemma \ref{Lem-mixed moments}. Again, we use Lemma~\ref{Lem-Landau} to handle the off-diagonal terms, i.e., those with
\[
\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}\neq1;
\]
as in the proof of Lemma~\ref{Lem-mixed moments}, we can assume this double product is $>1$ by combining terms which are complex conjugates. Separating the $n_{i,l}^{ie_{i,l}\im (\alpha)}$ factors from the $n_{i,l}^{ie_{i,l}\gamma}$, we see that \eqref{alphasum} is
\be\label{alphalandau}
=-\frac{T}{2\pi}\frac{\Lambda(n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}})}{\sqrt{n_{1,1}^{e_{1,1}}\cdots n_{j,\ell_j}^{e_{j,\ell_j}}}}\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{ie_{i,l}\im(\alpha)}+O\Big(\sqrt{n_{1,1}\cdots n_{j,\ell_j}}(\log T)^2\Big).
\ee
Take real parts like in \eqref{postlandau}. The sign of the leading term here depends on the sign of
\be\label{cos}
\re \prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{ie_{i,l}\im(\alpha)}=\cos \bigg(\im(\alpha) \log \prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}\bigg).
\ee
Recall that $n_{i,l}\le T^{\beta_i}$ and $\ell_i\le 2e^2k\beta_i^{-3/4}$. This implies
\[
\prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}\le T^{10e^2ke^{-250k}}.
\]
Note that $10e^2ke^{-250k}$ is at most $e/25$. Since $|\im(\alpha)|\le (\log T)^{-1}$, we see that
\[
\bigg|\im(\alpha) \log \prod_{i=1}^j\prod_{l=1}^{\ell_i}n_{i,l}^{e_{i,l}}\bigg|< \frac{\pi}{2}.
\]
It follows that \eqref{cos} is positive, and consequently the leading term in \eqref{alphalandau} is negative. Thus we may ignore the leading term and still obtain an upper bound. The rest of the proof proceeds as before.
\begin{acknowledgements}\label{ackref}
The work herein comprised part of the author's Ph.D. thesis at the University of Rochester. The author is grateful to Prof. Steven Gonek for inspiring work on the problem and would also like to thank the referees for their helpful comments and suggestions.
The author also extends their sincere gratitude to the Leverhulme Trust (RPG-2017-320) for postdoctoral fellowship support through the research project grant ``Moments of $L$-functions in Function Fields and Random Matrix Theory''.
\end{acknowledgements}
\section{INTRODUCTION}
Radio-loud active galactic nuclei (AGNs) that are characterised by extreme
variability in their radio cores, high and variable polarization,
superluminal jet speeds and compact radio emission, are collectively
referred to as `blazars' \citep{AngelStockman80}. Blazars comprise flat-spectrum
radio-loud quasars and BL~Lac objects. Their extreme characteristics have commonly
been understood to be a consequence of relativistic beaming effects due
to a fast jet aligned close to our line of sight \citep{BlandfordKonigl79}.
The radio-loud unified scheme postulates that quasars and BL~Lacs are the
beamed end-on counterparts of the Fanaroff-Riley \citep[FR,][]{FanaroffRiley74}
type II and type I radio galaxies, respectively \citep{UrryPadovani95}.
The relatively lower radio luminosity FRI radio galaxies are referred to as
``edge-darkened'' since their brightest lobe emission lies closer to the
radio cores and fades further out, while the higher radio luminosity
FRII radio galaxies are ``edge-brightened'' because of the presence of bright
and compact hot spots where the kpc-scale jets terminate. This definition
turns out to be useful for FRI-FRII classification when hot spots are not
clearly delineated ($cf.$ \S3.2).
The Fanaroff-Riley dichotomy has been proposed to arise due to
differences in one or more of the following properties: host galaxy environment
\citep{PrestagePeacock88} and jet-medium interaction \citep{Bicknell95},
jet composition \citep{Reynolds96}, black hole mass \citep{Ghisellini01}, black
hole spin \citep{Meier99}, accretion rate and/or mode \citep{Baum95,Ghisellini01}.
Many recent findings are, however, posing challenges for the standard
unified scheme. These include the discovery of FRI quasars \citep{Heywood07},
FRII BL~Lacs \citep{Landt06}, and ``hybrid'' radio morphology sources with an
FRI jet on one side of the core and an FRII jet on the other \citep{Gopal-Krishna00}.
One of the primary cited differences between quasars and BL~Lacs
has been the presence of strong, broad emission lines in the quasar spectra
and their apparent absence in BL~Lacs. \citet{Stickel91} and \citet{Stocke91}
have defined the distinction between BL~Lacs and quasars at an emission line
equivalent width of 5~\AA. However, this distinction has been questioned
\citep[e.g.,][]{ScarpaFalomo97,Urry99,Landt04}, and it is known that some
BL~Lacs have a broad-line region
\citep[e.g.,][]{Miller78,Stickel91,Vermeulen95,Corbett96}.
In this paper, we examine the unified scheme and FR dichotomy in the
MOJAVE\footnote{Monitoring Of Jets in Active galactic nuclei with VLBA
Experiments. http://www.physics.purdue.edu/MOJAVE/} sample of blazars.
MOJAVE is a long-term program to monitor radio brightness and
polarization variations in the jets associated with active galaxies visible
in the northern sky on parsec-scales with Very Long Baseline Interferometry
(VLBI) \citep{Lister09}.
The {MOJAVE} sample consists of 135 sources satisfying the
following criteria: (1) J2000.0 declination $>-20\degr$;
(2) galactic latitude $|b|>2.5\degr$;
(3) VLBA 2~cm correlated flux density exceeding 1.5 Jy (2 Jy for declination
south of $0\degr$) at any epoch between 1994.0 and 2004.0.
Of the 135 AGNs, 101 are classified as quasars, 22 as BL~Lac objects,
and eight as radio galaxies of mostly the FRII-type. Four radio sources are
yet to be identified on the basis of their emission line spectra and have no
redshift information. Overall, eight sources (four BL Lacs and four
unidentified) have unknown or unreliable redshifts.
Based on the synchrotron peak in the spectral energy distributions (SEDs),
BL Lacs have been divided into low, high, and intermediate energy
peaked classes \citep[LBLs, HBLs, IBLs,][]{PadovaniGiommi95,Laurent99}.
All but three MOJAVE BL~Lacs are classified as LBLs
\citep[see][NED]{Nieppola06}.
The three BL~Lacs, $viz.,$ 0422+004, 1538+149 and 1807+698, are classified as IBLs.
There are no HBLs in the MOJAVE sample.
The compact flux density selection criteria of the MOJAVE sample bias it
heavily toward highly beamed blazars with high Lorentz factors
and small viewing angles.
While relativistic boosting effects are likely to dominate the source
characteristics, the extensive multi-epoch and multi-wavelength data
available for the MOJAVE sample on both parsec- and kiloparsec-scales
provide us with unique constraints to test the unified scheme.
Throughout the paper, we adopt the cosmology in which
$H_0$=71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$=0.27 and $\Omega_{\Lambda}$=0.73.
The spectral index $\alpha$ is defined such that the flux density
$S_\nu$ at frequency $\nu$ follows $S_\nu\propto\nu^{-\alpha}$.
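For readers wishing to reproduce the luminosities quoted below, the adopted cosmology can be turned into a short numerical sketch (the integration scheme and function name are ours, not part of the paper):

```python
import math

# Adopted cosmology (flat Lambda-CDM): H0 = 71 km/s/Mpc, Om = 0.27, OL = 0.73.
H0, OM, OL = 71.0, 0.27, 0.73
C_KMS = 299792.458          # speed of light, km/s
D_H = C_KMS / H0            # Hubble distance, Mpc

def luminosity_distance(z, steps=10000):
    """Luminosity distance in Mpc for a flat universe:
    D_L = (1+z) * D_H * integral_0^z dz' / E(z'),
    with E(z) = sqrt(OM*(1+z)^3 + OL)  (trapezoidal rule)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zp = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        integral += w * dz / math.sqrt(OM * (1.0 + zp)**3 + OL)
    return (1.0 + z) * D_H * integral

# Low redshift recovers the Hubble law, D_L ~ cz/H0 (~42 Mpc at z = 0.01).
print(round(luminosity_distance(0.01), 1))
print(round(luminosity_distance(1.0)))
```

With these parameters, a source at $z=1$ lies at a luminosity distance of roughly 6.6 Gpc.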
\section{DATA REDUCTION AND ANALYSIS}
We observed seven MOJAVE quasars at 1.46 GHz with the Very Large Array
\citep[VLA,][]{Napier83} in the A-array configuration (typical synthesized beam
$\sim1.5\arcsec$) on June 30, 2007 (Program ID:AC874).
Sixty MOJAVE sources were previously observed by us with the VLA A-array
and presented in \citet{Cooper07}. Data for the remaining sources were
reduced directly using archival VLA A-array data, or obtained
through published papers. The data reduction was carried out
following standard calibration and reduction
procedures in the Astronomical Image Processing System (AIPS).
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure1a.ps}
\includegraphics[width=9cm]{figure1b.ps}}
\caption{\small 1.4~GHz VLA images of the quasars 0730+504 (Left)
and 0805$-$077 (Right).
The contours are in percentage of the peak surface brightness and increase
in steps of 2. The lowest contour levels and peak surface brightness are
(Left) $\pm$0.042, 672 mJy~beam$^{-1}$ and
(Right) $\pm$ 0.010, 1.43 Jy~beam$^{-1}$.}
\label{fig:0733}
\end{figure}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure2a.ps}
\includegraphics[width=9cm]{figure2b.ps}}
\caption{\small 1.4~GHz VLA images of 1036+054 (Left) and 1045$-$188 (Right).
The contours are in percentage of the peak surface brightness and increase
in steps of 2. The lowest contour levels and peak surface brightness are
(Left) $\pm$0.021, 892 mJy~beam$^{-1}$ and
(Right) $\pm$ 0.042, 724 mJy~beam$^{-1}$.}
\label{fig:1038}
\end{figure}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure3a.ps}
\includegraphics[width=9cm]{figure3b.ps}}
\caption{\small 1.4~GHz VLA images of 1213$-$172 (Left) and 1219+044 (Right).
The contours are in percentage of the peak surface brightness and increase
in steps of 2. The lowest contour levels and peak surface brightness are
(Left) $\pm$0.021, 1.66 Jy~beam$^{-1}$ and
(Right) $\pm$ 0.042, 549 mJy~beam$^{-1}$.}
\label{fig:1215}
\end{figure}
Our new observations included some new expanded VLA (EVLA) antennas and the
observations were made in a spectral line (4-channel pseudo-continuum) mode,
which resulted
in a total effective bandwidth of ($2\times21.875=$) 43.75~MHz. Four $\approx9$
minute scans of each source were interspersed with 1 minute scans of a
suitable phase calibrator. 0713+438 was used as the bandpass
calibrator for the experiment.
After the initial amplitude and phase calibration, AIPS tasks CALIB and
IMAGR were used iteratively to self-calibrate \citep{Schwab80}
and image the sources.
The resultant {\it rms} noise in the maps was typically of the
order of 0.06~mJy~beam$^{-1}$.
The quasar 1124$-$186 was observed with an incorrect source position, which
resulted in significant beam smearing. We instead obtained archival VLA data
for this source.
The new radio images of the six sources are presented in Figures
\ref{fig:0733}, \ref{fig:1038}, and \ref{fig:1215}.
Since most of the MOJAVE sources have highly compact radio structures,
most of the archival data comprised snapshot observations
where the sample sources were observed as phase calibrators.
By choosing archival datasets with exposure times $\gtrsim$10 minutes, we were
able to obtain maps with typical {\it rms} noise levels of
$\sim0.15$~mJy~beam$^{-1}$.
We present previously unpublished images of 21 blazars in Appendix A.
A compilation of the basic parameters for each source is given in
Table~\ref{tabsample}. The integrated flux densities for the cores were
obtained in AIPS using the Gaussian-fitting task JMFIT, while the total
flux densities were obtained by putting a box around the source, using the
AIPS verbs TVWINDOW and IMSTAT.
The extended radio flux density was obtained by subtracting the core flux
density from the total radio flux density.
All the maps were created with uniform weighting using a ROBUST parameter
of 0 in the AIPS task IMAGR. We also obtained core and lobe flux densities
from maps with different weighting schemes (by using ROBUST parameters
$-$5 and +5, respectively) for a fraction of the sample. We found
that the integrated core flux density estimates differed typically
by less than 1\%, between different weighting schemes.
The extended flux density values differed typically by $<2\%$, with
a large majority of sources showing less than a 5\% difference.
Only in a handful of sources, where there seemed to be an unresolved
compact component close to the core (e.g., 1038+064, 1417+385),
was the difference between different weighting schemes significant
(10\%$-$20\%). However, following our radio galaxy study \citep{Kharb08a},
we have found that nearly 15\%$-$20\% of the extended flux density may be
lost in A-array observations, compared to combined-array observations.
Therefore, we conclude that the lack of short spacings in the A-array
observations is far more detrimental to the determination of accurate
extended flux values than (not) adopting different weighting schemes on the
A-array data to obtain core and extended flux densities.
The weighting scheme approach does suggest that
the errors in the extended flux density values
are typically of the order of 2\%$-$5\%, but could be of the order
of 10\%$-$20\% for sources where a compact component close to the core
is not clearly resolved.
\section{RESULTS}
\subsection{Extended Radio Power}
The 1.4~GHz extended radio luminosities for the MOJAVE sources are
plotted against redshift in Figure~\ref{fig:z}.
Sources with no discernible extended
emission are represented as upper limits.
The solid lines indicate the FRI$-$FRII divide (extrapolated from 178~MHz
to 1.4 GHz assuming a spectral index, $\alpha= 0.8$), following
\citet{LedlowOwen96} and \citet{Landt06}. The right hand panel of
Figure~\ref{fig:z} demonstrates the close relation between the radio core
and extended luminosity (Table~\ref{tabcorrel}).
Using the partial correlation regression analysis routine in
IDL (P$\_$CORRELATE), we found that the linear correlation
between log$L_{core}$ and log$L_{ext}$ is strong even with the effects of
luminosity distance (log$D_{L}$) removed (partial correlation coefficient,
$r_{XY.Z}$, = 0.303, $t$ statistic = 3.41, two-tailed probability that the variables
are not correlated{\footnote{Calculated using the VassarStats statistical
computation website,
http://faculty.vassar.edu/lowry/VassarStats.html}}, $p$ = 0.0009).
We note that the 18 BL Lacs (ones with redshift information) alone fail to show
a correlation between log$L_{core}$ and log$L_{ext}$, when the effects of
luminosity distance are removed. The implication of this finding
is discussed ahead in \S3.3.
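The partial correlation statistics quoted above follow the standard first-order formula; a minimal sketch, assuming a hypothetical sample size $n$ (not stated explicitly in the text), is:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r_XY.Z: the correlation of X and Y
    with the linear effect of Z removed."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def t_statistic(r, n):
    """t statistic for a first-order partial correlation with n samples
    (n - 3 degrees of freedom)."""
    return r * math.sqrt((n - 3) / (1 - r**2))

# Symmetric example: equal pairwise correlations of 0.5 give r_XY.Z = 1/3.
print(round(partial_corr(0.5, 0.5, 0.5), 4))

# With r_XY.Z = 0.303 and an assumed sample of n = 118 sources,
# the t statistic comes out near the quoted value of 3.41.
print(round(t_statistic(0.303, 118), 2))
```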
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure4a.eps}
\includegraphics[width=9cm]{figure4b.eps}}
\caption{\small (Left) 1.4~GHz extended luminosity versus redshift.
(Right) 1.4~GHz core versus extended luminosity.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies. Core-only sources are represented as upper limits.}
\label{fig:z}
\end{figure}
The salient features of the extended radio emission in the MOJAVE blazars
are:
\begin{enumerate}
\item[(i)] Six of the 22 BL Lacs ($\sim27\%$) have FRII radio powers
(log$L_{ext}^{1.4} > 26$). Four of these
($viz.,$ 0235+164, 0716+714, 0808+019, 2131$-$021) also have hot spots
like quasars.
\item[(ii)] Five BL Lacs ($\sim23\%$) fall in the FRI/II power range
($24.5 <$ log$L_{ext}^{1.4} < 26$). Two of these ($viz.,$ 0823+033, 1803+784)
also appear to have hot spots.
\item[(iii)] Seven BL Lacs have extended luminosity
log$L_{ext}^{1.4} < 24.5$ and could be regarded as the true beamed counterparts
of FRI radio galaxies.
However, three of these ($viz.,$ 0754+100, 0851+202, 1807+698) also appear to exhibit
hot~spot-like features. Hot spots may also be present in all four BL~Lacs with
no redshift information. Overall, nearly 60\% of the MOJAVE BL~Lacs
appear to have hot spots.
The hot spots in some BL~Lacs are however not as
bright or compact as those observed in quasars.
\item[(iv)] Excluding the upper limits, 22 quasars ($\sim22\%$) fall in the FRI/II power range.
\item[(v)] 10 sources ($\sim7\%$, 9 quasars, 1 BL Lac) do not show any
extended emission $-$ these are discussed in \S3.4.
\end{enumerate}
Based on the extended emission at 1.4~GHz, we can conclude that
a substantial fraction of the MOJAVE BL Lacs have both radio powers and
radio morphologies like FRIIs or quasars. A substantial fraction of the
MOJAVE quasars lie in the intermediate (FRI/II) luminosity range.
These results are consistent with a number of previous radio studies.
Using a large sample of radio core-dominated AGNs, \citet{Murphy93} observed that
many high redshift ($z>0.5$) BL~Lacs lie above the FRI/FRII luminosity
division. Using a sample of 17 BL~Lacs, mostly belonging to the 1-Jy sample,
\citet{Kollgaard92} suggested that about 12\% $-$ 30\% of BL Lacs
in a radio flux-limited sample may be bona fide FRIIs.
Based on extended radio emission and strong emission lines at one or more
epochs, \citet{Rector01} concluded that many radio-selected BL~Lacs
belonging to the 1 Jy sample \citep{Stickel91} cannot be beamed FRIs, but
are more likely to be beamed FRIIs.
\citet{Cara08} found similar intrinsic parent luminosity functions for
the MOJAVE sample (consistent with FRIIs) irrespective of whether the
BL~Lacs were included or not.
\subsection{Radio Morphology}
We observe from Figures \ref{fig:0733}--\ref{fig:1215} that
all but two sources show two-sided radio structures. The variety of radio
structures observed here are reasonably representative of the entire sample
\citep[e.g., see][]{Cooper07}, as well as those
previously observed in other quasar surveys
\citep[e.g.,][]{Gower84,Antonucci85,PearsonReadhead88,Murphy93}.
The quasars often show two-sided radio structures with hot spots on
one or both sides of the core. However, one-sided morphologies with
no discernable compact or diffuse emission on the other side of the core, are
also common.
While determining if the blazars had an FRII type radio morphology,
we relied initially on the presence of one or more compact hot
spots at the leading edge of the radio lobes. When no clear hot spots
were visible on either side of the cores, we resorted to the traditional
``edge-brightened'' definition for FRII sources, $i.e.,$ when the brightest
radio emission was furthest from the core, we classified the source as an FRII.
The remaining sources with no clear hot spots, and the brightest extended emission
closest to the cores, were classified as FRI types. Even then, it was sometimes
difficult to consign the sources to FRI or FRII classes,
and we have listed more than one morphology for some sources in
Table~\ref{tabsample}.
Many of the MOJAVE sources show distinctly curved kiloparsec-scale jets.
Straight jets are a rarity in the sample. Many
MOJAVE quasars exhibit large parsec-to-kiloparsec jet misalignments.
While any intrinsic curvature in the jets is likely to be highly exaggerated
by projection effects in these low-viewing angle blazars, many sources seem
intrinsically distorted.
We discuss in \S4.1 how the MOJAVE selection criteria might be preferentially
picking up bent jets.
Apart from highly curved jets, many sources show distinct hot spot-like
features both closer to the core and at the jet termination points, similar to
the wide angle tail (WAT) radio galaxies \citep[e.g.,][]{O'Donoghue90}.
Four MOJAVE blazars have not yet been identified as quasars or BL~Lacs
(Table~\ref{tabsample}). A new VLA image of one of the unidentified sources
($viz.,$ 1213$-$172) is presented in Figure~\ref{fig:1215}. Another unidentified
source (0648$-$165) has a similar core-dominant radio morphology \citep{Cooper07}.
A third source (2021+317) has a distinct core-halo morphology resembling a
BL Lac object (see Appendix A), while the fourth (0446+112) has a straight jet with
a possible hot spot at the end \citep{Cooper07}.
\subsection{Parsec-scale Jet Speeds and Extended Luminosity}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure5a.eps}
\includegraphics[width=9cm]{figure5b.eps}}
\caption{\small (Left) 1.4~GHz core luminosity versus the apparent jet speed.
The aspect curve assumes $\gamma=52$, $L_{int}=5\times10^{24}$ and $p\approx2$.
(Right) 1.4 GHz extended luminosity versus the apparent jet speed.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies. Core-only sources are represented as upper limits.}
\label{fig:beta}
\end{figure}
The MOJAVE and 2~cm survey programs have provided apparent parsec-scale
jet speeds ($\beta_{app}$) for nearly the entire MOJAVE sample, by
monitoring structural changes over timescales of approximately a decade.
These have been previously tabulated by \citet{Kellermann04}, and most
recently by \citet{Lister09a}. For our analysis below, we use a
$\beta_{app}$ value that represents the fastest moving feature in
each source.
The parsec-scale apparent jet speeds and kiloparsec-scale radio core luminosity
(Fig.~\ref{fig:beta}) are related by standard beaming relations. The aspect
curve in Figure~\ref{fig:beta} assumes: $L=L_{int}\times\delta^p$,
$\beta_{app}=\beta$~sin~$\theta/(1-\beta$~cos~$\theta)$, where the
Doppler factor, $\delta=1/(\gamma~(1-\beta$~cos~$\theta))$
and $\beta=v/c$. The best-fit values for the curve are
$\gamma=52$, $L_{int}=5\times10^{24}$ and $p\approx2$.
The relation between parsec-scale apparent jet speeds and parsec-scale
core luminosity for the MOJAVE sample is discussed in greater detail by
\citet{Cohen07}, and \citet{Lister09a}.
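The aspect curve can be reproduced directly from these relations. The sketch below uses the quoted best-fit values; the scan over viewing angles is our illustration, not the fitting procedure actually used:

```python
import math

def beaming(gamma, theta, l_int=5e24, p=2.0):
    """Apparent speed and Doppler-boosted luminosity for a jet with
    Lorentz factor gamma viewed at angle theta (radians):
      beta_app = beta sin(theta) / (1 - beta cos(theta))
      delta    = 1 / (gamma (1 - beta cos(theta)))
      L        = L_int * delta**p
    """
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    doppler = 1.0 / (gamma * (1.0 - beta * math.cos(theta)))
    beta_app = beta * math.sin(theta) / (1.0 - beta * math.cos(theta))
    return beta_app, l_int * doppler**p

gamma = 52.0
# Scan viewing angles from 0.01 to 20 deg; the maximum apparent speed is
# sqrt(gamma^2 - 1), reached where cos(theta) = beta (theta ~ 1/gamma).
best = max(beaming(gamma, math.radians(a / 100.0))[0] for a in range(1, 2000))
print(round(best, 1), round(math.sqrt(gamma**2 - 1), 1))
```

At $\theta=0$ the Doppler factor approaches $2\gamma$, which is what drives the steep core-luminosity boost along the curve.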
We find that the parsec-scale apparent jet speeds are correlated significantly
with the extended radio luminosity (Fig.~\ref{fig:beta}, Table~\ref{tabcorrel}).
Partial regression analysis shows that the linear correlation between
log$L_{ext}$ and $\beta_{app}$ with the effects of luminosity distance removed,
is still highly significant ($r_{XY.Z}$ = 0.273, $t$ statistic = 3.05,
two-tailed probability, $p$ = 0.0028).
This implies that more radio powerful sources have faster radio jets.
Since extended radio luminosity is correlated with the core
luminosity (Fig.~\ref{fig:z}, Table~\ref{tabcorrel}), and the core luminosity
is related to the apparent jet speed, we tested the correlation with a
partial correlation test between extended radio luminosity and apparent jet speeds,
after removing the effects of radio core luminosity.
This too yielded a statistically significant
correlation between extended radio luminosity and parsec-scale apparent jet
speeds ($r_{XY.Z}$ = 0.246, $t$ statistic = 2.72, $p$ = 0.0075).
The significant implication of this result is that faster jets are
launched in AGNs with larger kiloparsec-scale lobe luminosities. It would
therefore appear that the fate of the AGN is decided at its conception.
This result undermines the role of the kiloparsec environment on the radio
power of a given source. It also indicates that the 1.4~GHz extended emission
is indeed related to jet kinetic power, contrary to some
suggestions in the literature \citep[e.g.,][]{Birzan04}. Given that there is a
large overlap in radio powers between quasars and BL~Lac objects, it can be
concluded that most quasars have faster jets than most BL~Lac objects.
The second noteworthy inference that can be drawn from Figure~\ref{fig:beta}
is that there is a continuous distribution of parsec-scale jet speeds going
from BL~Lacs to quasars. This supports the idea that the distinction
between the BL~Lac and quasar population, at an equivalent width of
5~\AA, is essentially an arbitrary one at least in terms of radio jet
properties \citep[see also][]{ScarpaFalomo97}.
However, we note that in spite of the two blazar classes displaying
a smooth continuation in properties (as is evident in Fig.~\ref{fig:beta} and
other plots), they sometimes reveal differences in the correlation test results
when considered separately. In Table~\ref{tabcorrel} we have listed
the correlation test results separately for quasars and BL Lacs, if
they differed from results for the combined blazar population.
BL~Lacs considered alone sometimes failed to exhibit the correlations
observed in quasars (also see \S3.1).
Apart from the possible effects of small number statistics, a
simple interpretation of this finding could be
that unlike the quasars, the BL~Lacs do not
constitute a homogeneous population. As has been
previously suggested in the literature \citep[e.g.,][]{Owen96,Cara08,Landt08},
the BL Lacs
might constitute the beamed population of both FRI and FRII radio galaxies.
We note that \citet{Giroletti04} have demonstrated that HBLs (which are
absent in MOJAVE) conform to the standard unification scheme with respect to FRI
radio galaxies. This again supports the proposed inhomogeneity within the
BL Lac class.
Preliminary Monte Carlo simulations of a population of AGN similar to that
of MOJAVE also show a correlation between apparent jet speed and
extended luminosity (Cooper et al., in preparation).
The simulations are based on the luminosity function of \citet{PadovaniUrry92},
and assume that the extended luminosity is unbeamed and proportional
to the intrinsic unbeamed parsec-scale luminosity. We are currently
incorporating the luminosity function derived from the MOJAVE sample by
\citet{Cara08} into the simulations.
\subsection{Parsec-to-kiloparsec Jet Misalignment}
Apparent misalignment in the jet direction from parsec to kiloparsec scales
is commonly observed in blazars \citep[e.g.,][]{PearsonReadhead88}.
This misalignment could either be due to actual large bends in the jets,
or due to small bends that are amplified by projection.
We present the parsec- to kiloparsec-scale jet misalignment for the MOJAVE sources
in Figure~\ref{fig:Delta}.
The procedure for estimating the parsec-scale jet position angles (PAs)
is described in \citet{Lister09}. Kiloparsec-scale jet position angles were
determined for the FRII sources using the brightest hot spots, especially when
no jet was clearly visible,
under the assumption that the brightest hot spot indicates the approaching
jet direction due to Doppler boosting.
The AIPS procedure TVMAXFIT was used to obtain accurate peak pixel positions
of the core and hot spot, which were then used in the AIPS verb IMDIST
to obtain the jet position angle.
For the FRI-type sources we used TVMAXFIT for the core pixel position, and
the AIPS verb CURVALUE to get the pixel position roughly towards the center
and end of the broad jet/lobe. Finally IMDIST was used to obtain the jet
position angle.
Note that unlike regular radio galaxies, FRI blazars are typically one-sided,
which makes the approaching jet direction easy to identify.
Due to the wide jet/lobes in FRIs, the kiloparsec-scale jet
position angle could be uncertain by 10$\degr$ to 15$\degr$.
When a jet feature appeared only partially resolved ($viz.,$ 1038+064,
1417+385), a two Gaussian component model was used in JMFIT to obtain pixel
positions of the core and jet feature.
We note that our approach ($viz.,$ of using the brightest hot spot to
determine the approaching jet direction, when no jet was clearly visible)
differs from that adopted by some
other authors \citep[e.g.,][]{Xu94} who assume that, when no jet is visible, the
hot spot on the side of the parsec-scale jet, represents the kiloparsec-scale
jet direction. Our approach has led us to identify many
more sources ($\sim30\%$) with
misalignment angles greater than 90$\degr$.
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure6.eps}}
\caption{\small Misalignment angle between the parsec-scale and the
kiloparsec-scale jet for the MOJAVE sources.
There are signatures of a weak $90\degr$ bump.}
\label{fig:Delta}
\end{figure}
For an otherwise straight jet having a single bend,
the apparent bending angle ($\eta$) is related to the intrinsic
bending angle ($\psi$) through the relation,
$\cot~\eta=\frac{\cot~\psi~\sin~\theta - \cos~\theta~\cos~\phi}{\sin~\phi}$,
where $\theta$ is the angle to line of sight, and $\phi$ is the azimuth of
the bend \citep{Appl96}.
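As an illustration of how strongly projection can exaggerate a bend at blazar-like viewing angles, the relation can be evaluated numerically (the function and example angles are ours):

```python
import math

def apparent_bend(psi_deg, theta_deg, phi_deg):
    """Apparent (projected) bending angle eta for an intrinsic bend psi,
    viewing angle theta, and bend azimuth phi:
      cot(eta) = (cot(psi) sin(theta) - cos(theta) cos(phi)) / sin(phi)
    Valid for 0 < phi < 180 deg (sin(phi) > 0); returns eta in degrees."""
    psi, theta, phi = (math.radians(x) for x in (psi_deg, theta_deg, phi_deg))
    cot_eta = (math.cos(psi) / math.sin(psi) * math.sin(theta)
               - math.cos(theta) * math.cos(phi))
    # eta lies in (0, 180) deg; atan2 handles the sign of cot(eta).
    return math.degrees(math.atan2(math.sin(phi), cot_eta))

# Jet in the plane of the sky (theta = 90 deg, phi = 90 deg):
# no amplification, eta = psi.
print(round(apparent_bend(30.0, 90.0, 90.0), 1))  # 30.0

# Blazar-like geometry: a 5 deg intrinsic bend viewed at theta = 5 deg
# projects to a bend of roughly 45 deg.
print(round(apparent_bend(5.0, 5.0, 90.0), 1))
```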
The fact that we see a majority of sources having close to zero apparent
misalignment implies one of the following scenarios:
(i) the intrinsic misalignment angle is small;
(ii) both the intrinsic misalignment and jet inclination angles are large;
(iii) the azimuth of the bend is close to 180$\degr$, $i.e.,$ the plane of the
bend is perpendicular to the sky plane. Since scenario (ii) is unlikely for
these blazars, possibilities (i) and (iii) are more favorable.
In \S4, we explain how the MOJAVE selection criteria could make it
biased towards bent jets.
\citet{PearsonReadhead88} and \citet{ConwayMurphy93} reported an unexpected
secondary peak at $\Delta$PA = $90\degr$ in the misalignment angle
distribution. The signatures of a weak $90\degr$ bump appear to be
present in Figure~\ref{fig:Delta}.
\citet{ConwayMurphy93} concluded that while the largely
aligned population could be explained by straight parsec-scale jets and small
intrinsic bends between parsec- and kiloparsec-scales, the secondary peak sources
must have gently curving (low-pitch) helical parsec-scale jets.
This helical distortion could arise due to Kelvin-Helmholtz instabilities
or precession of the jet ejection axes.
We reach a similar conclusion on the jet misalignments, following a
different approach in \S4.3.
\section{DISCUSSION}
\subsection{Selection Effects in MOJAVE}
The MOJAVE survey appears to select many sources with intermediate radio powers
and radio morphology (e.g., high radio power BL~Lacs, some with hot spots, and
low radio power quasars). This could be a result of the MOJAVE selection criteria,
which are based on the relativistically-boosted, high radio frequency (15~GHz)
parsec-scale flux densities.
This would make it more likely to encompass a much larger range in
intrinsic radio powers, compared to other samples selected on the basis of
total kiloparsec-scale flux-densities (e.g., the 3CR sample).
The intrinsic radio luminosity function derived for the
MOJAVE sample by \citet{Cara08} also supports this view.
Furthermore, as the MOJAVE survey picks up bright radio cores, most of the
sources could either be fast jets pointed towards us, or curved/bent
jets that have at least some portion of the jet aligned directly into
our line of sight, at at least one epoch \citep[e.g.,][]{Alberdi93}.
The MOJAVE survey could therefore be biased towards bent jets.
Note that in the wide angle tail quasars, the hot spots closer to the cores could
be produced at the base of the plumes \citep[e.g.,][]{HardcastleSakelliou04}, or
indicate a sharp bend in the jet \citep[e.g.,][]{Alberdi93,Jetha06}.
We discuss the possibility of beamed WAT quasars in the MOJAVE sample further
in \S4.3.
\subsection{Orientation Effects in the Sample}
Since the MOJAVE sample consists almost entirely of blazars, the sources are
expected to have jets aligned close to our line of sight. Our
examination of orientation effects in the sample, however, using two different
statistical orientation indicators, $R_c$ and $R_v$, reveals
that a range of orientations must in fact be present in order to
account for several observed trends. $R_c$ and $R_v$ also reveal, to some extent,
dissimilar trends. We finally reach the conclusion that $R_v$ is a better
indicator of orientation for this blazar sample.
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure7a.eps}
\includegraphics[width=9cm]{figure7b.eps}}
\caption{\small The orientation indicators, $R_c$ (Left) and $R_v$ (Right)
versus the 1.4 GHz core luminosity.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies. Core-only sources are represented as upper limits.}
\label{fig:RcCore}
\end{figure}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure8a.eps}
\includegraphics[width=9cm]{figure8b.eps}}
\caption{\small The orientation indicators, $R_c$ (Left) and $R_v$ (Right) versus
the 1.4 GHz extended luminosity.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies. Core-only sources are represented as upper limits.}
\label{fig:RcExt}
\end{figure}
The ratio of the beamed radio core flux density ($S_{core}$) to the unbeamed
extended radio flux density ($S_{ext}$), $viz.,$ the radio core prominence parameter
($R_c$), has routinely been used as a statistical
indicator of Doppler beaming and thereby orientation
\citep{OrrBrowne82,KapahiSaikia82,Kharb04}. The $k$-corrected $R_c$
($=\frac{S_{core}}{S_{ext}}(1+z)^{\alpha_{core} - \alpha_{ext}}$, with
$\alpha_{core}=0$, $\alpha_{ext}=0.8$) is plotted against the
radio core luminosity in
Figure~\ref{fig:RcCore}. As expected, $R_c$ is correlated with the radio
core luminosity (Table~\ref{tabcorrel}). $R_c$ however, shows a
significant anti-correlation with respect to the extended radio luminosity
(Fig.~\ref{fig:RcExt}).
$R_c$ does not show a correlation with jet misalignment (Table~\ref{tabcorrel}).
In fact the quasars considered alone showed
a weak anti-correlation between $R_c$ and jet misalignment (implying
that the more core-dominant sources have smaller jet misalignments).
These results are unexpected, because
in the former case, the extended radio luminosity is expected to be
largely unbeamed, while in the latter, the jet misalignment is affected by
projection and therefore orientation
\citep[in the sense that the more core-dominant sources
show larger jet misalignments, e.g.,][]{KapahiSaikia82}.
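As a minimal sketch, the $k$-corrected core prominence reduces to a one-liner; the flux densities below are placeholders, not measurements from the sample:

```python
# k-corrected radio core prominence (alpha_core = 0, alpha_ext = 0.8):
#   R_c = (S_core / S_ext) * (1 + z)^(alpha_core - alpha_ext)
def core_prominence(s_core, s_ext, z, a_core=0.0, a_ext=0.8):
    return (s_core / s_ext) * (1.0 + z) ** (a_core - a_ext)

# Illustrative source: 1 Jy core and 0.1 Jy of extended emission at z = 1.
# The k-correction lowers the raw ratio of 10 to about 5.7.
print(round(core_prominence(1.0, 0.1, 1.0), 2))
```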
As an aside, \citet{Brotherton96} have noted that the anti-correlation
between log$R_c -$ log$L_{ext}$ may be suggesting that core-dominated quasars are
intrinsically fainter than their lobe-dominated counterparts. This would
be consistent with the idea that the MOJAVE sources span a large range in
intrinsic radio powers ($cf.$ \S4.1).
We also examined the relationship between $R_c$ and apparent jet speed.
While both $R_c$ and $\beta_{app}$ depend on the Lorentz factor and orientation,
many of the viewing angles may be inside the critical angle for maximum
superluminal speed, which would spoil any correlation. However, the quasars
considered alone do show a significant anti-correlation, contrary to
expectations. But as we see below, the alternate orientation indicator, $R_v$,
does show the expected behavior.
\citet{WillsBrotherton95} defined $R_v$ as the ratio of the radio core
luminosity to the $k$-corrected absolute V-band magnitude ($M_{abs}$):
$\log R_v = \log \frac{L_{core}}{L_{opt}} = (\log L_{core} + M_{abs}/2.5) - 13.7$,
where $M_{abs}=M_V - k$, and the $k$-correction is
$k=-2.5~\log~(1+z)^{1-\alpha_{opt}}$, with the optical spectral index
$\alpha_{opt} = 0.5$. $R_v$ is suggested to be
a better orientation indicator than $R_c$ since the optical luminosity is
likely to be a better measure of intrinsic jet power
\citep[e.g.,][]{Maraschi08,Ghisellini09} than extended radio
luminosity. This is due to the fact that the optical continuum luminosity is
correlated with the emission-line luminosity over four orders of magnitude
\citep{YeeOke78}, and the emission-line luminosity is
tightly correlated with the total jet kinetic power \citep{RawlingsSaunders91}.
The extended radio luminosity, on the other hand, is suggested to be
affected by interaction with the environment on kiloparsec-scales.
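The $R_v$ definition above can be combined into a short sketch (the function and argument names are ours; the constants follow the \citet{WillsBrotherton95} definition quoted in the text):

```python
import math

def log_Rv(log_L_core, m_V_abs, z, alpha_opt=0.5):
    """Wills & Brotherton (1995) orientation indicator:
    log R_v = (log L_core + M_abs/2.5) - 13.7, where the k-corrected
    absolute V magnitude is M_abs = M_V - k, with
    k = -2.5 log10 (1+z)^(1 - alpha_opt).
    """
    k = -2.5 * math.log10((1.0 + z) ** (1.0 - alpha_opt))
    m_abs = m_V_abs - k
    return (log_L_core + m_abs / 2.5) - 13.7
```

At $z=0$ the $k$-correction vanishes and log$R_v$ depends only on the core luminosity and absolute magnitude.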
Figure~\ref{fig:RcCore} suggests that $R_v$ is indeed a better indicator of
orientation as the correlation with radio core luminosity gains in prominence
(see Table~\ref{tabcorrel}). $R_v$ shows a weak positive correlation with
the extended radio luminosity (Fig.~\ref{fig:RcExt}).
Partial regression analysis, however, shows that the linear correlation
between log$R_v$ and log$L_{ext}$ is no longer significant when the effects of
log$L_{core}$ are removed ($r_{XY.Z}$ = 0.078, $t$ statistic = 0.84,
$p$ = 0.4013). This implies that the extended radio emission
is largely unbeamed.
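Partial correlations of the kind quoted here ($r_{XY.Z}$ with an associated $t$ statistic) can be reproduced from the pairwise Pearson coefficients; a minimal sketch, assuming the standard first-order partial correlation with $n-3$ degrees of freedom (the exact recipe used in the analysis is not spelled out in the text):

```python
import math

def partial_corr(r_xy, r_xz, r_yz, n):
    """First-order partial correlation r_XY.Z of X and Y controlling for Z,
    and the associated t statistic on n - 3 degrees of freedom."""
    r = (r_xy - r_xz * r_yz) / math.sqrt((1.0 - r_xz ** 2) * (1.0 - r_yz ** 2))
    t = r * math.sqrt((n - 3) / (1.0 - r ** 2))
    return r, t
```

When $Z$ is uncorrelated with both variables ($r_{XZ}=r_{YZ}=0$), the partial correlation reduces to the ordinary Pearson $r_{XY}$.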
$R_v$ is correlated with the
parsec-to-kiloparsec jet misalignment, suggesting that orientation does play
a role in the observed jet misalignments, at least for the quasars. The lack
of such a correlation for the BL~Lacs could again be interpreted as a
consequence of their comprising not only beamed FRIs (which could have
intrinsically distorted jets) but beamed FRIIs as well. We discuss this point
again in \S4.5. Finally, there appears to be no correlation
between $R_v$ and $\beta_{app}$, as is expected if many blazar viewing angles
are inside the critical angle for maximum superluminal speeds, which would
then result in smaller apparent speeds.
This result is again consistent with $R_v$
being a better indicator of orientation than $R_c$.
Here we would like to note that the correlation between apparent jet
speed and absolute optical magnitude does not seem to be as significant as the
one observed between apparent jet speed and extended radio luminosity (Table 2).
This could in principle undermine the suggestion that the optical luminosity is
a better indicator of jet kinetic power than extended radio luminosity.
However, since the optical luminosity is more likely than the extended radio
luminosity to be affected by strong variability, the lack of a strong
correlation can perhaps be accounted for by these non-contemporaneous radio
and optical observations. Nevertheless, this is an important result to bear
in mind, one that requires further testing.
\subsection{Environmental Effects}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure9a.eps}
\includegraphics[width=9cm]{figure9b.eps}}
\caption{\small (Left) 1.4~GHz extended radio luminosity versus the absolute
optical magnitude. (Right) The environment indicator versus redshift. Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies. Core-only sources are represented as upper limits.}
\label{fig:opt}
\end{figure}
It has been suggested that apart from projection and relativistic beaming
effects, the complex radio structures in quasars could arise due to
interactions with the surrounding medium \citep{PearsonReadhead88,Barthel88}.
As is observed in Figure~\ref{fig:opt}, the extended radio luminosity
appears to be correlated with absolute optical luminosity.
However, a partial regression analysis shows that the linear correlation
becomes weak or non-existent, after the effects
of luminosity distance are removed (for quasars, $r_{XY.Z}$ = $-$0.169, $t$
statistic = $-$1.62, $p$ = 0.1088; for BL Lacs, $r_{XY.Z}$ = 0.079, $t$ statistic
= 0.31, $p$ = 0.7608). The lack of a strong correlation may again be attributable
to strong optical variability in these blazars.
If the extended radio luminosity is indeed affected by interaction with the
kiloparsec-scale environment, and the optical luminosity is closely related to
the AGN power, the ratio [log~$L_{ext}$]/$M_{abs}$ can serve as a probe for
environmental effects on kiloparsec-scales
\citep[as suggested by][]{WillsBrotherton95}.
Henceforth we will refer to this ratio as the ``environment indicator'' (we will
drop the ``log'' term in the ratio for convenience, in the following text).
A higher value of the ratio could indicate greater jet-medium interaction
(as, say, from an asymmetrically dense source environment).
Dense environments can decrease expansion losses in the source, but
increase radiative losses, making the sources brighter at low radio frequencies
\citep[e.g., in Cygnus A, see][]{Barthel96}.
Note that the optical magnitude in blazars is expected to be related to the
AGN (accretion disk or jet), rather than to starlight.
This might not be true for the nearby radio galaxies, where
starlight might indeed be a significant contributor to the optical magnitude.
The $L_{ext}/M_{abs}$ ratio might therefore not be a good environment proxy
for the radio galaxies. We have thus excluded radio galaxies
from the correlation tests related to the environment proxy indicator.
We plot the environment proxy ratio with respect to redshift
in the right hand panel of Figure~\ref{fig:opt} and
find a strong positive correlation.
The ``alignment effect'' between the radio source axis and
the emission line region in high redshift radio {\it sources} demonstrates
that local galactic asymmetries (on scales of several kiloparsecs)
increase with redshift \citep{McCarthy93,Best99}.
This result, therefore, is consistent with the suggestion of the
environmental dependence of the extended luminosity.
We note that the BL Lacs considered alone fail to show a strong
correlation (Table~\ref{tabcorrel}), which could be a consequence of an
inhomogeneous parent population, as discussed in \S3.3.
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure10.eps}}
\caption{\small Apparent parsec-scale jet speed versus the environment indicator.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies.}
\label{fig:envbeta}
\end{figure}
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure11a.eps}
\includegraphics[width=9cm]{figure11b.eps}}
\caption{\small (Left) Jet misalignment versus the environment indicator.
(Right) Misalignment angle versus redshift.
Open and filled
circles denote quasars and BL~Lacs, respectively, while open squares denote
radio galaxies.}
\label{fig:DeltaBeta}
\end{figure}
We find that while the parsec-scale jet speeds appear to be tightly
correlated with the extended radio luminosity (Fig.~\ref{fig:beta}),
and weakly correlated with the optical luminosity, they seem
to not show any correlation with the kpc-scale environment indicator
(Fig.~\ref{fig:envbeta}). This could imply that the
parsec-scale jet speeds are not driven by the kiloparsec-scale environment, but
rather by factors intrinsic to the AGN, or the AGN's local parsec-scale
environment. For instance, the jet speeds could be related to the
black hole spins \citep[e.g.,][]{Meier99}.
However, like other results pertaining to optical luminosity, this
lack of a correlation needs further testing with contemporaneous optical and
radio data.
We find that the parsec-to-kiloparsec jet misalignment is not correlated
with redshift (Fig.~\ref{fig:DeltaBeta}). This goes against the idea
that the bending of the jets is influenced by interaction with the medium,
an effect that should become more prominent with increasing redshift. This result
is consistent with the suggestions of \citet{Hintzen83} and \citet{Appl96}.
We note that \citet{Hintzen83} used the
kiloparsec-scale jet-to-counterjet misalignment in their study. \citet{Kharb08a}
found a similar lack of correlation between jet-to-counterjet misalignment
and redshift for an FRII radio galaxy sample.
Thus, the jet misalignment could also be related
to factors closely associated with the AGN itself. For instance, the presence
of binary black holes or kicks imparted to the black hole via a black hole merger,
could switch the ejection direction on a short timescale
\citep[e.g.,][]{Begelman80,Merritt02}. Black hole re-alignment due to a
warped accretion disk could also give rise to jet precession \citep{Pringle97}.
We note that a few sources have morphologies resembling fast, precessing
jets (``banana'' shaped) as modelled by \citet{Gower82}.
Alternatively, the jet misalignment could be influenced by the AGN's local
parsec-scale environment.
The jet misalignment is inversely correlated with the kpc-scale environment
indicator (see left hand panel of Fig.~\ref{fig:DeltaBeta}). The inverse relation
suggests that jets with smaller misalignments
($i.e.,$ straighter jets) show signatures of greater environmental effects
on their lobe emission.
This could either be suggestive of a uniformly dense confining medium on
different spatial scales around the source, or, of projection effects in
the sample, which could boost the optical emission and decrease the
$L_{ext}/M_{abs}$ ratio in sources with larger jet misalignments.
If the latter is true, then this is an important caveat $-$ the environment
indicator could also be affected by source orientation in blazars. However,
the orientation-dependence alone cannot satisfactorily explain the
correlation between the environment indicator and redshift (Fig.~\ref{fig:opt}).
Many MOJAVE blazars exhibit radio structures resembling those of {\it wide angle
tail} radio galaxies. Observations have indicated that WAT quasars could be members
of rich galaxy clusters \citep[e.g.,][]{Hintzen83,Harris83,Blanton00}.
Using the NASA/IPAC extragalactic database (NED) we found that nearly $7\%$ of the
MOJAVE sources have a galaxy cluster $\sim5\arcmin$ away, while
$14\%$ have a galaxy cluster $\sim10\arcmin-15\arcmin$ away.
While this fraction appears to be small, we must keep in mind that most of the
blazars are at high redshifts, while the cluster information is highly
incomplete even at redshifts $z>0.1$ \citep[e.g.,][]{Ebeling98,Bohringer04}.
Thus the possibility remains that a large number of MOJAVE blazars inhabit
galaxy clusters. This suggestion has interesting ramifications, as
discussed below. Furthermore, this could help us understand some of the
results discussed in this section.
\subsection{Relation Between Radio Power and Emission-line Luminosity}
\citet{RawlingsSaunders91} demonstrated a tight correlation between emission-line
luminosity and total jet kinetic power in
a sample of radio-loud AGNs. This correlation was tighter than the one
observed between emission-line luminosity and radio luminosity. \citet{Landt06}
discovered a large number of ``blue quasars'' with strong emission lines,
but relatively low radio powers in the Deep X-ray Radio Blazar Survey (DXRBS) and
the RGB samples, and suggested that this went against the close relationship
between emission-line and jet power. These and other observations
\citep[e.g.,][]{Xu99} could be suggesting that the mechanism for the production of
the optical-UV photons that ionise the emission-line clouds is decoupled
from the mechanism that produces powerful radio jets.
Many MOJAVE blazars also seem to not follow the relation between
emission-line and radio luminosity; that is, some quasars and BL~Lacs
have relatively low and high radio powers, respectively, and the suggestion
of decoupled mechanisms might hold true. However, an alternate
hypothesis by which their emission-line luminosity could still be correlated with
their intrinsic jet kinetic power, could come about if the outlying BL~Lac
objects
were present in dense confining environments, while the outlying
quasars were old,
field sources. Confinement would increase the synchrotron radiative losses due to the
amplification of the magnetic field strength, making the sources brighter
\citep[see][]{Barthel96}, while expansion losses and gradual radiative
losses could
presumably reduce the low frequency radio emission in the old, field quasars.
It has been suggested that a large fraction ($\sim$50\%) of the high redshift
radio sources could be GPS sources \citep[e.g.,][]{ODea91}, which are
likely to be present in confined environments. One of the highest
redshift MOJAVE sources, 0742+103 ($z=2.624$), is a GPS quasar.
There are several other MOJAVE sources that have peaked spectra, but
do not show the low variability or the double-lobed parsec-scale structures that are
characteristic of the GPS class.
These could be ``masquerading GPS" sources, whose overall spectrum happens to
be dominated by an unusually bright feature in the jet, located downstream
from the core.
\subsection{Blazar Division Based on Radio Power}
In order to gain more insight into the primary factors that drive the FR
dichotomy, in this section we disregard the emission-line division
(i.e., quasars, BL~Lacs),
and divide sources into FRI, FRI/II and FRII classes based solely on
their 1.4~GHz extended powers (following Fig.~\ref{fig:z}).
Figure~\ref{fig:Env} plots the environment indicator, $L_{ext}/M_{abs}$,
with respect to redshift, using the new classification. A quick comparison with
Fig.~\ref{fig:opt} reveals
that many of the low redshift quasars fall into the intermediate
FRI/II class. The low redshifts of these sources discount the possibility
of a $(1+z)^4$ surface brightness dimming effect in them,
which could potentially reduce their extended luminosity, making them fall
in the intermediate luminosity class. This finding is consistent with the
discovery of ``blue quasars'' by \citet{Landt06},
which have radio powers and spectral energy distributions (SEDs)
similar to the high energy peaked BL Lacs.
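For context, the $(1+z)^4$ factor invoked above is the standard cosmological surface-brightness dimming; a one-line sketch (function name ours):

```python
def sb_dimming(z):
    """Cosmological surface-brightness dimming factor:
    observed bolometric surface brightness = intrinsic / (1+z)^4."""
    return (1.0 + z) ** 4
```

At the low redshifts of these sources ($z\lesssim0.1$) the factor is close to unity, which is why the effect can be discounted.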
\begin{figure}[ht]
\centerline{
\includegraphics[width=9cm]{figure12.eps}}
\caption{\small The environment indicator versus redshift, with sources
divided on the basis of extended radio luminosity into FRI (=1), FRII (=2),
and FRI/II (=filled circles).}
\label{fig:Env}
\end{figure}
Most beamed FRIs lie at redshifts below 0.5 in Fig.~\ref{fig:Env}.
This could be a limitation of the flux limited sample.
Interestingly, the FRI/II sources lie at all redshift ranges,
and throughout the $L_{ext}/M_{abs}$ distribution. This suggests that
these ``intermediate'' luminosity sources are not produced as a result of different
environmental conditions compared to, say, other FRIIs.
Similarly, sources divided on the basis of the presence or absence of
hot spots show no discernible difference in the $L_{ext}/M_{abs}$ vs. $z$ plane.
It is important to note that environmental radio boosting, as
mentioned in \S4.4 \citep[also][]{Barthel96} could be instrumental in
blurring the Fanaroff-Riley dividing line in the MOJAVE blazars.
Lastly, the orientation indicator, $R_v$, seems to be
correlated with misalignment only for FRIIs, but not for FRIs or FRI/IIs
(Table~\ref{tabcorrel}).
This implies that projection is playing a significant role in the jet
misalignment of FRII sources, but the jets are intrinsically more
distorted in FRIs and FRI/IIs. A corollary to this is that the differences between
FRIs and FRIIs could be related to factors affecting the black hole spin direction,
since we have previously concluded that jet misalignments appear to be
largely unrelated to kiloparsec-scale environmental effects.
\subsection{Extremely Misaligned Sources}
Six MOJAVE sources ($viz.,$ 0224+671, 0814+425, 1219+044, 1510$-$089,
1828+487, 2145+067) have jet misalignments greater than 160$\degr$.
Such extremely misaligned sources could be bent jets viewed nearly head-on, as
suggested for the gamma-ray blazar PKS 1510$-$089 \citep{Homan02}.
An alternate scenario could be that some of these misaligned sources are ``hybrid''
morphology sources \citep[e.g.,][]{Gopal-Krishna00}.
Three of the six sources with large misalignments ($viz.,$ 0224+671, 1219+044,
1510$-$089) indeed seem to possess radio morphologies that could be classified
as ``hybrid''.
The large misalignment then makes sense: the VLBI jet (used to determine the
parsec-scale jet position angle) could be on the FRI side, while the
sole hot spot (used to determine the kiloparsec-scale jet position angle)
could be on the FRII side.
One interesting source that shows extreme misalignment ($\sim135\degr$) is M87
(1228+126). M87 appears to exhibit a weak hot spot on the counterjet
side \citep[e.g.,][]{Owen90,Sparks92}. Since the brightest emission on the
counterjet side is at the extremity of the radio lobe (like in FRIIs), M87 could be
classified as a hybrid radio morphology source.
Its radio luminosity ($L_{178}=1\times10^{25}$ W~Hz$^{-1}$~sr$^{-1}$) also places it close
to the FRI/FRII division \citep{Biretta99}.
\citet{Kovalev07} have
observed a very short, weak radio counterjet on parsec-scales in M87, but
perhaps that is just a slower outer sheath that quickly terminates.
If indeed some of the extremely misaligned sources are ``hybrid'' sources, the
lack of a correlation between jet misalignment and environment appears to contradict
the suggestion that asymmetries in galactic environments, which increase with
redshift \citep[e.g.,][]{Best99}, give rise to hybrid morphology sources
\citep[e.g.,][]{Gopal-Krishna00,Miller09}.
Moreover, the fraction of such sources would be higher in the MOJAVE sample ($6\%-8\%$)
than that reported for the FIRST survey \citep[1\%,][]{Gawronski06},
suggesting that hybrid sources may not be as rare as previously supposed.
It is also interesting to note that not many MOJAVE ``hybrid'' morphology
sources fall in the FRI/II luminosity range, but rather have FRII luminosities.
In other words, the MOJAVE ``hybrid'' morphology sources are not necessarily
``hybrid'' in terms of radio power, but mostly fall into the FRII luminosity class.
Chandra X-ray observations of these sources could resolve some of these
apparent contradictions.
\subsection{Core-only Sources}
About 7\% of the MOJAVE sources do not show any extended radio emission in
our 1.4 GHz VLA A-array images.
However, the ratio of their 5~GHz radio flux density to their B-band
optical flux density is of the order of a few hundred, placing them firmly in
the radio-loud AGN class \citep[e.g.,][]{Kellermann89}.
The dynamic range achieved in the radio images of these sources varied from
about 6000:1
($e.g.,$ 0955+476) to 30,000:1 ($e.g.,$ 0202+149), being typically of the
order of 10,000:1. While some sources, like 1749+096, failed to reveal any
extended emission in a number of different datasets
\citep[see also][]{Rector01}, we did detect faint halo-like emission
around 1548+056, which was listed as a core-only source by \citet{Murphy93}.
Note that observations with the VLA A-array alone are likely to miss
very diffuse radio structure, due to the lack of short baselines.
We therefore believe that these core-only sources probably have associated
faint emission that requires more sensitive observations to detect.
The observed fraction of core-only sources in MOJAVE is smaller than
that reported for other quasar surveys \citep[$20\%$,][]{Gower84,Padrielli88}.
While the core-only sources could have been affected by the redshift surface
brightness dimming effect, our discussion in the previous section argues
against this conjecture.
Alternatively, these sources could just be normal quasars at very small angles to
the line of sight, so that the VLA resolution is insufficient to delineate
the various components. This appears to be the case in 1038+064,
and 1417+385, where a two Gaussian component model fits the ``core''
emission better than a single one.
The hypothesis that at least some of these quasars
could be beamed FRIs also cannot be ruled out \citep[e.g.,][]{Heywood07}.
\subsection{Optical Polarization Subclasses}
Low optical polarization radio quasars (LPRQs) consistently reveal
core optical fractional polarization, $m_{opt}<3\%$, while the high optical
polarization quasars (HPQs) routinely reveal highly polarized
optical cores with $m_{opt}\ge3\%$ \citep{AngelStockman80}. The classification
of the MOJAVE sources into these categories is listed in Table~\ref{tabsample},
and was obtained from \citet{Impey91} and \citet{Lister00}.
In all, there are 38 HPQs and 16 LPRQs in the MOJAVE sample.
We find no statistically significant differences in the core or
extended radio luminosity between LPRQs and HPQs. In \citet{Kharb08}
we had noted a difference in the integrated (single-dish) 1.4~GHz radio
luminosity between LPRQs and HPQs for redshifts less than one. We do not
find a difference in the total radio luminosity using the present higher resolution
VLA observations. Furthermore, we do not find a statistically significant
difference in jet misalignment between LPRQs and HPQs.
The parsec-scale apparent jet speeds of HPQs, however, are systematically
higher than those in LPRQs (Kolmogorov-Smirnov test probability that
the speeds in HPQs and LPRQs are similar is $p$=0.03).
Additional optical polarimetry information on
the sample is needed to further investigate these trends.
\section{SUMMARY AND CONCLUSIONS}
\begin{enumerate}
\item We describe the results from a study of the extended emission
at 1.4~GHz in the MOJAVE sample of 135 blazars. New VLA A-array images of
six MOJAVE quasars, and previously unpublished VLA A-array archival images of
21 blazars, are presented. The kiloparsec-scale structures
are varied, with many sources displaying peculiar, distorted morphologies.
Many blazar jets show parsec-to-kiloparsec-scale jet misalignments greater
than $90\degr$. These characteristics have been reported in other quasar
surveys as well.
The $90\degr$ bump in the jet misalignment distribution
(of which we observe a weak signature) has been suggested to be a result of
low-pitch helical parsec-scale jets in the literature.
\item A substantial number of MOJAVE quasars ($\sim22\%$) and
BL~Lacs ($\sim23\%$) display radio powers that are intermediate between FRIs
and FRIIs. Many BL~Lacs have extended luminosities ($\sim27\%$) and hot spots
($\sim60\%$) like quasars. The hypothesis that at least some of the
core-only quasars are in fact FRI quasars, cannot be discarded.
In terms of radio properties alone, it is difficult to draw a sharp dividing
line between BL~Lacs and quasars in the MOJAVE sample.
This could be a result of the MOJAVE selection effects which naturally pick
sources with a large range in intrinsic radio power (low power quasars and high
power BL Lacs), and preferentially bent jets.
While the quasars and BL Lacs display a smooth continuation in all their properties,
the correlation test results differ sometimes when they are considered separately.
This can be understood if the BL Lacs did not constitute a homogeneous population
like the quasars, but had both FRI and FRII radio galaxies as their parent population.
These findings challenge the simple radio-loud unified scheme, which links FRI
sources to BL~Lac objects and FRIIs to quasars.
\item We find that there is a significant correlation between extended radio
luminosity and apparent parsec-scale jet speeds. This implies that most
quasars have faster jets than most BL~Lac objects. The large overlap between many
properties of quasars and BL~Lacs (e.g., 1.4~GHz radio power, parsec-scale jet
speeds) however suggests that, at least in terms of radio jet properties,
the distinction between these two AGN classes, at an
emission-line equivalent width of 5~\AA, is essentially an arbitrary one.
\item These observations suggest that the mechanism for the production
of optical-UV photons that ionise the emission-line clouds is decoupled from the
mechanism that produces powerful radio jets \citep[e.g.,][]{Xu99,Landt06}. An
alternate hypothesis could be that,
BL~Lacs with high radio powers are present in dense confining
environments, which would increase the radiative losses in them and make them
brighter, while the radio quasars with low radio powers are old, field sources.
Environmental radio boosting \citep{Barthel96} could also explain why the
Fanaroff-Riley dividing line is blurred in the MOJAVE sample.
X-ray cluster studies could be used to test these ideas.
\item The ratio of the radio core luminosity to the $k$-corrected optical
luminosity ($R_v$) appears to be a better indicator of jet orientation than the
traditionally used radio core prominence parameter ($R_c$). This seems to be a
consequence of the environmental contribution to the extended radio luminosity,
used in $R_c$. Trends with $R_v$ reveal that even though the sample consists
largely of blazars, there is a reasonable range of orientations present in
the sample.
\item Trends with the ``environment'' proxy indicator, $L_{ext}/M_{abs}$,
reveal that jet speeds seem to not depend on the environment on
kiloparsec scales, while the jet misalignments seem to be inversely correlated.
The jet misalignments are also not correlated with redshift.
It appears that the parsec-to-kiloparsec jet misalignment, the parsec-scale
jet speeds and to an extent the extended emission (which is related to jet speed)
are controlled by factors intrinsic to the AGN. Black hole spins could dictate
the jet speeds, while the presence of binary black holes, or kicks imparted to
black holes via black hole mergers could influence the jet direction. This is
consistent with radio morphologies similar to precessing jet models of
\citet{Gower82} present in the MOJAVE sample, and the signature of the 90$\degr$
bump in the jet misalignment distribution, which is attributed to low-pitch helical
parsec-scale jets.
\item If some of the highly misaligned sources (jet misalignment $\sim180\degr$)
are ``hybrid'' FRI+II morphology sources, then
the fraction of such sources is higher in the MOJAVE survey ($6\%-8\%$), than that
reported for the FIRST survey \citep[1\%,][]{Gawronski06}. Furthermore, the
lack of a correlation between jet misalignment and the environmental
indicator would appear to contradict the suggestion that hybrid morphology is a
result of the jet interacting with an asymmetric environment.
X-ray observations of hybrid sources are needed to resolve these issues.
\item About $7\%$ of the MOJAVE quasars show no extended radio emission.
We expect more sensitive radio observations to detect faint emission in these
sources, as we have detected in the case of 1548+056, previously classified as
a core-only quasar. The hypothesis that at least some of these quasars
could be beamed FRIs cannot be ruled out.
\item While the extended radio power or jet misalignments do not show
statistically significant differences between the two quasar optical polarization
subclasses, $viz.,$ LPRQs
and HPQs, the parsec-scale apparent jet speeds of HPQs are systematically
higher than those in LPRQs.
\end{enumerate}
\begin{deluxetable}{llllccllllllllllllllllll}
\tabletypesize{\tiny}
\tablecaption{The MOJAVE sample}
\tablewidth{0pt}
\tablehead{
\colhead{Source}&\colhead{z}&\colhead{$V$}&\colhead{Type}&\colhead{Opt}&\colhead{$I_{peak}$}&\colhead{$I_{rms}$}&\colhead{Freq}&\colhead{$\beta_{app}$}&\colhead{$S_{core}$}&\colhead{$S_{ext}$}&\colhead{Radio}&\colhead{PcjetPA}&\colhead{KpcjetPA}&\colhead{Ref}\\
\colhead{name}&\colhead{}&\colhead{mag}&\colhead{}&\colhead{Pol}&\colhead{Jy~beam$^{-1}$}&\colhead{Jy~beam$^{-1}$}&\colhead{GHz}&\colhead{}&\colhead{Jy}&\colhead{mJy}&\colhead{Morph}&\colhead{Degree}&\colhead{Degree}&\colhead{}\\
\colhead{(1)}&\colhead{(2)}&\colhead{(3)}&\colhead{(4)}&\colhead{(5)}&\colhead{(6)}&\colhead{(7)}&\colhead{(8)}&\colhead{(9)}&\colhead{(10)}&\colhead{(11)}&\colhead{(12)}&\colhead{(13)}&\colhead{(14)}&\colhead{(15)}}
\startdata
0003$-$066 & 0.347 & 18.50 & B & Y &2.605 &1.6$\times10^{-4}$& 1.4000& 2.89 & 2.66 & 43.9 & 2 & 282 &15 & AL634$\ast$ \\
0007$+$106 & 0.0893 & 15.40 & G & ... &0.075 &4.0$\times10^{-5}$& 1.4000& 0.97 & 0.08 & 17.6 & 2hs & 284 &239 & AL634$\ast$ \\
0016$+$731 & 1.781 & 19.00 & Q & N &0.398 &4.2$\times10^{-4}$& 1.4000& 6.74 & 0.40 & 7.8 & 1hs & 132 &2 & AL634$\ast$ \\
0048$-$097 & .... & 17.44 & B & Y &0.562 &1.8$\times10^{-4}$& 1.4250& .... & 0.57 & 139.7 & 1hs+2, 2hs & 337 &190 & AB1141 \\
0059$+$581 & 0.644 & 17.30 & Q & ... &1.554 &1.1$\times10^{-4}$& 1.4000& 11.08 & 1.57 & 19.4 & ch+1 & 236 &.... & AL634$\ast$ \\
0106$+$013 & 2.099 & 18.39 & Q & Y &2.760 &1.4$\times10^{-4}$& 1.4000& 26.50 & 2.81 & 530.6 & 1hs, 2 & 238 &184 & AL634$\ast$ \\
0109$+$224 & 0.265 & 15.66 & B & Y &0.361 &7.8$\times10^{-5}$& 1.4000& .... & 0.36 & 3.9 & 1 & 82 &93 & AL634$\ast$ \\
0119$+$115 & 0.57 & 19.70 & Q & ... &1.188 &9.8$\times10^{-5}$& 1.4000& 17.09 & 1.24 & 113.7 & 1hs, ch & 6 &33 & AL634$\ast$ \\
0133$+$476 & 0.859 & 19.50 & Q & Y &1.872 &9.6$\times10^{-5}$& 1.4000& 12.98 & 1.88 & 8.7 & 1hs & 331 &340 & AL634$\ast$ \\
0202$+$149$^c$& 0.405 & 21.00 & Q & Y &3.808 &1.4$\times10^{-4}$& 1.4000& 6.41 & 3.86 & 1.1 & c & 319 &.... & AL634$\ast$ \\
0202$+$319 & 1.466 & 17.40 & Q & N &0.648 &8.4$\times10^{-5}$& 1.4000& 8.30 & 0.65 & 11.7 & 1hs & 357 &2 & AL634$\ast$ \\
0212$+$735 & 2.367 & 19.00 & Q & Y &2.458 &1.0$\times10^{-4}$& 1.4000& 7.63 & 2.47 & 1.7 & 1hs, 2hs & 122 &253 & AL634$\ast$ \\
0215$+$015 & 1.715 & 16.09 & B & Y &0.416 &6.9$\times10^{-5}$& 1.4000& 34.16 & 0.45 & 71.1 & 2hs & 104 &92 & AL634$\ast$ \\
0224$+$671 & 0.523 & 19.50 & Q & ... &1.476 &1.0$\times10^{-4}$& 1.4000& 11.63 & 1.48 & 149.2 & 2hs & 355 &184 & AL634$\ast$ \\
0234$+$285 & 1.213 & 19.30 & Q & Y &2.274 &1.0$\times10^{-4}$& 1.4000& 12.31 & 2.33 & 99.9 & 1hs & 349 &331 & AL634$\ast$ \\
0235$+$164 & 0.94 & 15.50 & B & Y &1.501 &5.8$\times10^{-5}$& 1.4000& .... & 1.51 & 25.5 & 1hs+2 & 325 &343 & AL634$\ast$ \\
0238$-$084 & 0.005 & 12.31 & G & ... &1.091 &6.0$\times10^{-5}$& 1.4000& 0.34 & 1.11 & 111.3 & 2 & 66 &275 & AL634$\ast$ \\
0300$+$470 & .... & 16.95 & B & Y &1.166 &5.7$\times10^{-5}$& 1.4000& .... & 1.18 & 60.1 & 1, ch & 148 &225 & AL634$\ast$ \\
0316$+$413 & 0.0176 & 12.48 & G & N &20.76 &1.0$\times10^{-3}$& 1.3121& 0.31 & 21.40& 1103.0 & 2hs & 178 &158 & BT024 \\
0333$+$321 & 1.263 & 17.50 & Q & N &2.994 &1.0$\times10^{-4}$& 1.4000& 12.78 & 3.02 & 71.8 & 1hs & 126 &149 & AL634$\ast$ \\
0336$-$019 & 0.852 & 18.41 & Q & Y &2.908 &1.5$\times10^{-4}$& 1.4000& 22.36 & 2.92 & 70.3 & 1hs & 67 &344 & AL634$\ast$ \\
0403$-$132 & 0.571 & 17.09 & Q & Y &4.043 &3.3$\times10^{-4}$& 1.4000& 19.69 & 4.33 & 9.1 & 1hs+2 & 155 &202 & AL634$\ast$ \\
0415$+$379 & 0.0491 & 18.05 & G & ... &0.542 &5.7$\times10^{-4}$& 1.4899& 5.86 & 0.57 & 2700.0 & 2hs & 71 &62 & AR102 \\
0420$-$014 & 0.914 & 17.00 & Q & Y &2.887 &1.0$\times10^{-4}$& 1.4000& 7.34 & 2.91 & 70.2 & 2hs, 1hs+2 & 183 &189 & AL634$\ast$ \\
0422$+$004 & .... & 16.98 & B & Y &1.083 &1.3$\times10^{-4}$& 1.4000& .... & 1.09 & 6.1 & 2hs & 357 &200 & AL634$\ast$ \\
0430$+$052 & 0.033 & 15.05 & G & ... &2.845 &1.7$\times10^{-4}$& 1.4649& 5.38 & 2.95 & 159.5 & 1, 2 & 252 &293 & AB379 \\
0446$+$112 & .... & 20.00 & U & ... &1.527 &1.2$\times10^{-4}$& 1.4000& .... & 1.56 & 15.4 & 1hs & 103 &207 & AL634$\ast$ \\
0458$-$020 & 2.286 & 18.06 & Q & Y &1.629 &3.8$\times10^{-4}$& 1.4899& 16.51 & 1.66 & 148.1 & 1hs, 2hs & 321 &230 & AN047 \\
0528$+$134 & 2.06 & 20.00 & Q & ... &2.204 &1.2$\times10^{-4}$& 1.4000& 19.14 & 2.24 & 60.1 & 2hs, 1hs+2 & 19 &261 & AL634$\ast$ \\
0529$+$075 & 1.254 & 19.00 & Q & ... &1.518 &5.7$\times10^{-5}$& 1.4000& 12.66 & 1.54 & 126.8 & 2hs, 1hs+2 & 353 &221 & AL634$\ast$ \\
0529$+$483 & 1.162 & 19.90 & Q & ... &0.641 &6.7$\times10^{-5}$& 1.4000& 19.78 & 0.65 & 21.5 & 1hs+2 & 32 &75 & AL634$\ast$ \\
0552$+$398$^c$& 2.363 & 18.30 & Q & ... &1.539 &1.0$\times10^{-4}$& 1.4000& 0.36 & 1.55 & 1.2 & c & 290 &.... & AL634$\ast$ \\
0605$-$085 & 0.872 & 17.60 & Q & ... &2.344 &2.8$\times10^{-4}$& 1.4250& 19.79 & 1.20 & 123.9 & 1hs & 109 &99 & AD298 \\
0607$-$157$^c$& 0.324 & 18.00 & Q & ... &3.011 &1.4$\times10^{-4}$& 1.4000& 3.94 & 3.02 & 1.1 & c & 52 &.... & AL634$\ast$ \\
0642$+$449$^c$& 3.396 & 18.49 & Q & ... &0.644 &7.5$\times10^{-5}$& 1.4000& 0.75 & 0.65 & 1.4 & c & 90 &.... & AL634$\ast$ \\
0648$-$165 & .... & .... & U & ... &2.085 &1.4$\times10^{-4}$& 1.4000& .... & 2.12 & 11.2 & c, 1 & 273 &272 & AL634$\ast$ \\
0716$+$714 & 0.31 & 15.50 & B & Y &0.645 &6.8$\times10^{-5}$& 1.4000& 10.06 & 0.69 & 376.4 & 2hs, 1hs+2 & 40 &300 & AQ006 \\
0727$-$115 & 1.591 & 20.30 & Q & ... &3.182 &9.4$\times10^{-5}$& 1.4250& .... & 3.22 & 2.4 & c, 1hs & 292 &.... & AC874$\dagger$\\
0730$+$504 & 0.72 & 19.30 & Q & ... &0.676 &6.0$\times10^{-5}$& 1.4250& 14.06 & 0.69 & 82.5 & 2hs & 210 &82 & AC874$\dagger$\\
0735$+$178 & .... & 16.22 & B & Y &1.92 &0.7$\times10^{-3}$& 1.4650& .... & 1.91 & 20.6 & 1 & 77 &165 & Murphy \\
0736$+$017 & 0.191 & 16.47 & Q & Y &2.327 &3.2$\times10^{-4}$& 1.4899& 14.44 & 2.34 & 40.9 & 2hs & 273 &297 & AA025 \\
0738$+$313 & 0.631 & 16.92 & Q & ... &2.16 &0.4$\times10^{-3}$& 1.4650& 10.75 & 2.16 & 65.0 & 2hs & 156 &166 & Murphy \\
0742$+$103 & 2.624 & 24.00 & Q & ... &3.600 &2.3$\times10^{-4}$& 1.4899& .... & 3.62 & 5.8 & 1, 2 & 354 &81 & AY012 \\
0748$+$126 & 0.889 & 18.70 & Q & ... &1.43 &0.5$\times10^{-3}$& 1.4650& 18.36 & 1.43 & 27.0 & 2hs & 110 &125 & Murphy \\
0754$+$100 & 0.266 & 15.00 & B & Y &2.07 &0.4$\times10^{-3}$& 1.4650& 14.40 & 2.07 & 6.7 & 2hs & 10 &293 & Murphy \\
0804$+$499 & 1.436 & 19.20 & Q & Y &0.64 &0.3$\times10^{-3}$& 1.4650& 1.82 & 0.64 & 5.3 & 1hs & 134 &.... & Murphy \\
0805$-$077 & 1.837 & 19.80 & Q & ... &1.439 &5.8$\times10^{-5}$& 1.4250& 50.60 & 1.58 & 59.8 & 1hs & 332 &278 & AC874$\dagger$\\
0808$+$019 & 1.148 & 17.20 & B & Y &0.449 &9.1$\times10^{-5}$& 1.4899& 12.99 & 0.46 & 18.2 & 2hs & 182 &200 & AA025 \\
0814$+$425 & 0.245 & 18.18 & B & Y &1.015 &3.1$\times10^{-4}$& 1.5159& 1.70 & 1.03 & 76.8 & 2hs & 136 &322 & AK460 \\
0823$+$033 & 0.506 & 16.80 & B & Y &1.33 &0.4$\times10^{-3}$& 1.4650& 17.80 & 1.32 & 4.1 & 1hs & 15 &.... & Murphy \\
0827$+$243 & 0.94 & 17.26 & Q & ... &0.743 &1.4$\times10^{-4}$& 1.5100& 21.97 & 0.76 & 62.7 & 1hs+2 & 135 &199 & AV150 \\
0829$+$046 & 0.174 & 16.40 & B & Y &0.788 &1.9$\times10^{-4}$& 1.4000& 10.10 & 0.80 & 150.8 & 2, 1hs+2 & 60 &151 & AG618 \\
0836$+$710 & 2.218 & 17.30 & Q & N &3.168 &1.7$\times10^{-4}$& 1.4000& 25.38 & 3.34 & 73.6 & 1 & 211 &205 & AL634$\ast$ \\
0838$+$133 & 0.681 & 18.15 & Q & ... &0.287 &8.0$\times10^{-5}$& 1.4250& 12.93 & 0.35 & 2216.6 & 2hs & 76 &94 & AH480 \\
0851$+$202 & 0.306 & 15.43 & B & Y &1.563 &1.4$\times10^{-4}$& 1.5315& 15.17 & 1.57 & 10.7 & 1hs, 1 & 260 &251 & AS764 \\
0906$+$015 & 1.024 & 17.31 & Q & Y &1.00 &0.4$\times10^{-3}$& 1.4650& 20.66 & 1.00 & 38.0 & 1hs & 46 &54 & Murphy \\
0917$+$624 & 1.446 & 19.50 & Q & ... &1.11 &0.4$\times10^{-3}$& 1.4650& 15.57 & 1.11 & 6.4 & 1hs+2 & 341 &234 & Murphy \\
0923$+$392 & 0.695 & 17.03 & Q & N &2.399 &3.1$\times10^{-4}$& 1.5524& 4.29 & 2.83 & 361.8 & 1hs+2 & 98 &83 & AB310 \\
0945$+$408 & 1.249 & 18.05 & Q & N &1.23 &0.9$\times10^{-3}$& 1.4650& 18.60 & 1.23 & 95.0 & 1hs & 114 &32 & Murphy \\
0955$+$476$^c$& 1.882 & 18.65 & Q & ... &0.606 &1.0$\times10^{-4}$& 1.4899& 2.48 & 0.62 & 1.1 & c & 124 &270 & AP188 \\
1036$+$054 & 0.473 & .... & Q & ... &0.916 &4.9$\times10^{-5}$& 1.4250& 6.14 & 0.94 & 57.0 & 1hs+2 & 350 &226 & AC874$\dagger$\\
1038$+$064 & 1.265 & 16.70 & Q & ... &1.465 &1.5$\times10^{-4}$& 1.6649& 11.86 & 1.49 & 11.0 & 1 & 146 &166 & GX007A \\
1045$-$188 & 0.595 & 18.20 & Q & ... &0.726 &6.6$\times10^{-5}$& 1.4250& 8.57 & 0.76 & 509.4 & 1hs+2 & 146 &125 & AC874$\dagger$\\
1055$+$018 & 0.89 & 18.28 & Q & Y &2.675 &1.5$\times10^{-4}$& 1.4250& 10.99 & 2.70 & 230.8 & 2hs & 300 &186 & AB631 \\
1124$-$186 & 1.048 & 18.65 & Q & ... &0.652 &1.9$\times10^{-4}$& 1.4250& .... & 0.66 & 12.3 & 1hs & 170 &134 & AD337 \\
1127$-$145 & 1.184 & 16.90 & Q & ... &4.460 &2.3$\times10^{-4}$& 1.4649& 14.17 & 4.58 & 59.3 & 1hs & 66 &44 & AB379 \\
1150$+$812 & 1.25 & 19.40 & Q & ... &1.874 &6.5$\times10^{-4}$& 1.4000& 7.08 & 1.89 & 89.2 & 1hs+2, 2 & 163 &257 & AL634$\ast$ \\
1156$+$295 & 0.729 & 14.41 & Q & Y &1.433 &2.4$\times10^{-4}$& 1.4899& 24.85 & 1.55 & 196.1 & 2, 1hs+2 & 14 &344 & AA025 \\
1213$-$172 & .... & 21.40 & U & ... &1.676 &1.0$\times10^{-4}$& 1.4250& .... & 1.85 & 119.3 & 1 & 117 &266 & AC874$\dagger$\\
1219$+$044 & 0.965 & 17.98 & Q & ... &0.548 &5.9$\times10^{-5}$& 1.4250& 2.34 & 0.60 & 155.5 & 1hs+2 & 172 &0 & AC874$\dagger$\\
1222$+$216 & 0.432 & 17.50 & Q & ... &0.930 &8.0$\times10^{-5}$& 1.4000& 21.02 & 1.10 & 956.4 & 2hs & 351 &76 & AL634$\ast$ \\
1226$+$023 & 0.158 & 12.85 & Q & N &34.26 &1.5$\times10^{-3}$& 1.3659& 13.44 & 34.89& 17671 & 1hs & 238 &222 & AE104 \\
1228$+$126 & 0.0044 & 12.86 & G & ... &3.558 &2.3$\times10^{-4}$& 1.6649& 0.032 & 3.88 & 115012 & 2 & 291 &67 & BC079 \\
1253$-$055 & 0.536 & 17.75 & Q & Y &10.17 &6.4$\times10^{-4}$& 1.6649& 20.58 & 10.56& 2095.0 & 2hs & 245 &201 & W088D5 \\
1308$+$326 & 0.996 & 15.24 & Q & Y &1.321 &1.5$\times10^{-4}$& 1.4000& 27.14 & 1.33 & 69.1 & 2hs, 1hs+2 & 284 &359 & AL634$\ast$ \\
1324$+$224 & 1.4 & 18.90 & Q & ... &1.128 &9.6$\times10^{-5}$& 1.4000& .... & 1.14 & 20.4 & 1hs & 343 &237 & AL634$\ast$ \\
1334$-$127 & 0.539 & 19.00 & Q & Y &1.992 &9.9$\times10^{-5}$& 1.4899& 10.26 & 2.07 & 151.0 & 1hs, 1hs+2 & 147 &106 & AD176 \\
1413$+$135 & 0.247 & 20.50 & B & ... &1.056 &1.1$\times10^{-4}$& 1.4899& 1.79 & 1.08 & 5.8 & 2 & 55 &256 & AC301 \\
1417$+$385 & 1.831 & 19.69 & Q & ... &0.506 &6.4$\times10^{-5}$& 1.4000& 15.44 & 0.52 & 2.5 & 1 & 164 &123 & AL634$\ast$ \\
1458$+$718 & 0.905 & 16.78 & Q & N &5.760 &8.7$\times10^{-4}$& 1.4000& 7.05 & 7.57 & 68.9 & c, 1 & 164 &11 & AL634$\ast$ \\
1502$+$106 & 1.839 & 18.56 & Q & Y &1.795 &7.9$\times10^{-5}$& 1.4000& 14.77 & 1.82 & 38.3 & 1hs & 116 &159 & AL634$\ast$ \\
1504$-$166 & 0.876 & 18.50 & Q & Y &2.354 &4.0$\times10^{-4}$& 1.4899& 4.31 & 2.39 & 11.4 & 1hs, 2hs & 165 &164 & AY012 \\
1510$-$089 & 0.36 & 16.54 & Q & Y &1.380 &1.0$\times10^{-4}$& 1.4899& 20.15 & 1.45 & 180.2 & 1hs, 2hs & 328 &163 & AY012 \\
1538$+$149 & 0.605 & 17.30 & B & Y &1.479 &6.0$\times10^{-5}$& 1.4000& 8.73 & 1.67 & 71.4 & ch, 1 & 323 &320 & AL634$\ast$ \\
1546$+$027 & 0.414 & 17.45 & Q & Y &1.15 &0.4$\times10^{-3}$& 1.4650& 12.07 & 1.15 & 18.8 & 1hs+2 & 170 &162 & Murphy \\
1548$+$056 & 1.422 & 19.50 & Q & Y &2.106 &1.6$\times10^{-4}$& 1.5524& 11.56 & 2.21 & 42.9 & ch & 8 &.... & AB310 \\
1606$+$106 & 1.226 & 18.70 & Q & ... &1.35 &1.3$\times10^{-3}$& 1.4650& 18.90 & 1.35 & 26.5 & 1hs & 332 &271 & Murphy \\
1611$+$343 & 1.397 & 18.11 & Q & N &2.83 &0.5$\times10^{-3}$& 1.4650& 14.09 & 2.83 & 20.6 & 2hs & 166 &194 & Murphy \\
1633$+$382 & 1.814 & 18.00 & Q & Y &2.17 &0.4$\times10^{-3}$& 1.4650& 29.46 & 2.17 & 32.0 & 1hs+2 & 284 &176 & Murphy \\
1637$+$574 & 0.751 & 16.90 & Q & N &0.996 &1.1$\times10^{-4}$& 1.5524& 10.61 & 1.01 & 71.2 & 1hs & 198 &282 & AB310 \\
1638$+$398 & 1.666 & 19.37 & Q & ... &1.147 &2.0$\times10^{-4}$& 1.4899& 12.27 & 1.17 & 27.6 & 2hs & 300 &150 & AR250 \\
1641$+$399 & 0.593 & 16.62 & Q & Y &7.153 &1.3$\times10^{-3}$& 1.5149& 19.27 & 7.95 & 1476.9 & 1hs+2, 2hs & 288 &327 & AS396 \\
1655$+$077 & 0.621 & 20.00 & Q & Y &1.199 &1.4$\times10^{-4}$& 1.5524& 14.44 & 1.27 & 199.1 & 1hs+2 & 314 &269 & AB310 \\
1726$+$455 & 0.717 & 18.10 & Q & ... &0.993 &1.3$\times10^{-4}$& 1.4899& 1.81 & 1.00 & 55.3 & 1hs+2 & 271 &328 & AK139 \\
1730$-$130 & 0.902 & 19.50 & Q & ... &6.101 &3.4$\times10^{-4}$& 1.4250& 35.69 & 6.13 & 517.8 & 2hs, 1hs+2 & 2 &273 & AL269 \\
1739$+$522 & 1.375 & 18.70 & Q & Y &1.571 &1.5$\times10^{-4}$& 1.4899& .... & 1.61 & 27.6 & 1hs+2 & 15 &262 & AC301 \\
1741$-$038$^c$& 1.054 & 20.40 & Q & Y &1.677 &1.5$\times10^{-4}$& 1.4250& .... & 1.70 & 3.5 & c & 201 &134 & AL269 \\
1749$+$096$^c$& 0.322 & 16.78 & B & Y &1.039 &1.6$\times10^{-4}$& 1.4899& 6.84 & 1.05 & 4.9 & c & 38 &.... & AC301 \\
1751$+$288 & 1.118 & 19.60 & Q & ... &0.265 &1.7$\times10^{-4}$& 1.4350& 3.07 & 0.27 & 8.1 & 2, 1 & 349 &40 & AS659 \\
1758$+$388 & 2.092 & 17.98 & Q & ... &0.327 &1.7$\times10^{-4}$& 1.5149& 2.38 & 0.33 & 3.2 & 1hs+2 & 266 &270 & AS396 \\
1800$+$440 & 0.663 & 17.90 & Q & ... &0.465 &2.1$\times10^{-4}$& 1.5149& 15.41 & 0.50 & 246.6 & 1hs+2 & 203 &239 & AS396 \\
1803$+$784 & 0.68 & 15.90 & B & Y &1.969 &1.0$\times10^{-4}$& 1.4000& 8.97 & 1.98 & 20.8 & 2hs & 266 &192 & AL634$\ast$ \\
1807$+$698 & 0.051 & 14.22 & B & Y &1.133 &1.1$\times10^{-4}$& 1.5149& 0.10 & 1.20 & 368.7 & 1hs & 261 &242 & AB700 \\
1823$+$568 & 0.664 & 19.30 & B & Y &0.873 &6.1$\times10^{-4}$& 1.4250& 20.85 & 0.95 & 137.4 & 2 & 201 &94 & AM672 \\
1828$+$487 & 0.692 & 16.81 & Q & N &5.140 &6.3$\times10^{-4}$& 1.5524& 13.65 & 8.21 & 5431.2 & 2hs & 310 &119 & AB310 \\
1849$+$670 & 0.657 & 16.90 & Q & ... &0.468 &1.1$\times10^{-4}$& 1.4000& 30.63 & 0.47 & 101.0 & 1hs+2 & 308 &209 & AL634$\ast$ \\
1928$+$738 & 0.302 & 16.06 & Q & N &3.186 &1.6$\times10^{-4}$& 1.4000& 8.43 & 3.22 & 356.5 & 1hs+2, 2hs & 164 &165 & AL634$\ast$ \\
1936$-$155 & 1.657 & 20.30 & Q & Y &1.064 &1.8$\times10^{-4}$& 1.4899& 2.59 & 1.08 & 10.5 & 1hs+2, 2 & 180 &214 & AD167 \\
1957$+$405 & 0.0561 & 15.10 & G & ... &33.56$^\ddagger$&6.5$\times10^{-3}$& 1.5245& 0.21& 0.65 & 1.4$\times10^6$ & 2hs & 282 &291 & AC166 \\
1958$-$179$^c$& 0.65 & 18.60 & Q & Y &1.789 &1.5$\times10^{-4}$& 1.4899& 1.89 & 1.82 & 9.4 & c & 207 &.... & AA072 \\
2005$+$403 & 1.736 & 19.00 & Q & ... &2.420 &5.1$\times10^{-4}$& 1.6657& 12.21 & 2.47 & 10.6 & 1hs+2 & 127 &137 & AK4010 \\
2008$-$159 & 1.18 & 18.30 & Q & ... &0.546 &1.0$\times10^{-4}$& 1.4899& 7.98 & 0.55 & 7.5 & 1hs & 7 &5 & AK240 \\
2021$+$317 & .... & .... & U & ... &2.968 &6.8$\times10^{-4}$& 1.6657& .... & 3.06 & 132.9 & ch, 1 & 201 &.... & AK4010 \\
2021$+$614 & 0.227 & 19.00 & G & N &2.647 &2.9$\times10^{-4}$& 1.4899& 0.41 & 2.67 & 1.3 & 2 & 33 &.... & AM178 \\
2037$+$511 & 1.686 & 21.00 & Q & ... &4.604 &6.4$\times10^{-4}$& 1.6657& 3.29 & 4.97 & 657.6 & 1hs & 214 &327 & AK4010 \\
2121$+$053 & 1.941 & 20.40 & Q & Y &1.08 &0.6$\times10^{-3}$& 1.4650& 13.28 & 1.08 & 4.8 & 1, 1hs & 271 &.... & Murphy \\
2128$-$123 & 0.501 & 16.11 & Q & N &1.393 &1.3$\times10^{-4}$& 1.4899& 6.94 & 1.42 & 39.8 & 1hs+2 & 209 &190 & AY012 \\
2131$-$021 & 1.285 & 19.00 & B & Y &1.320 &1.3$\times10^{-4}$& 1.5149& 20.03 & 1.37 & 151.9 & 2hs & 108 &137 & AB700 \\
2134$+$004 & 1.932 & 17.11 & Q & N &4.82 &0.4$\times10^{-3}$& 1.4650& 5.93 & 4.82 & 6.6 & 1hs & 266 &298 & Murphy \\
2136$+$141$^c$& 2.427 & 18.90 & Q & ... &1.131 &1.4$\times10^{-4}$& 1.5524& 5.43 & 1.14 & 0.8 & c & 205 &.... & AB310 \\
2145$+$067 & 0.99 & 16.47 & Q & N &2.840 &1.7$\times10^{-4}$& 1.4000& 2.49 & 2.87 & 27.7 & 1hs, 1 & 119 &313 & AL634$\ast$ \\
2155$-$152 & 0.672 & 18.30 & Q & Y &2.627 &1.5$\times10^{-4}$& 1.4000& 18.11 & 2.70 & 304.7 & 1hs+2 & 214 &195 & AL634$\ast$ \\
2200$+$420 & 0.0686 & 14.72 & B & Y &1.970 &7.0$\times10^{-5}$& 1.4000& 10.57 & 1.99 & 14.2 & 2, 1hs+2 & 182 &143 & AL634$\ast$ \\
2201$+$171 & 1.075 & 19.50 & Q & ... &0.820 &5.8$\times10^{-5}$& 1.4000& 2.54 & 0.87 & 74.6 & 1hs, 2hs & 45 &267 & AL634$\ast$ \\
2201$+$315 & 0.295 & 15.58 & Q & N &1.517 &1.3$\times10^{-4}$& 1.4000& 7.87 & 1.54 & 378.4 & 2hs & 219 &251 & AL634$\ast$ \\
2209$+$236$^c$& 1.125 & 20.66 & Q & ... &0.425 &4.9$\times10^{-5}$& 1.4000& 3.43 & 0.43 & 0.9 & c & 54 &.... & AL634$\ast$ \\
2216$-$038 & 0.901 & 16.38 & Q & ... &1.722 &1.1$\times10^{-4}$& 1.4000& 5.62 & 1.76 & 312.7 & 2hs & 190 &143 & AL634$\ast$ \\
2223$-$052 & 1.404 & 18.39 & Q & Y &6.769 &3.4$\times10^{-4}$& 1.4000& 17.33 & 7.13 & 91.6 & 1 & 99 &335 & AL634$\ast$ \\
2227$-$088 & 1.56 & 17.43 & Q & Y &0.920 &4.2$\times10^{-5}$& 1.4000& 8.14 & 0.93 & 8.4 & 1hs & 348 &314 & AL634$\ast$ \\
2230$+$114 & 1.037 & 17.33 & Q & Y &6.633 &3.7$\times10^{-4}$& 1.4000& 15.41 & 6.99 & 148.0 & 1 & 147 &144 & AL634$\ast$ \\
2243$-$123 & 0.632 & 16.45 & Q & Y &2.247 &8.4$\times10^{-5}$& 1.4000& 5.49 & 2.27 & 27.7 & 1hs & 29 &44 & AL634$\ast$ \\
2251$+$158 & 0.859 & 16.10 & Q & Y &13.88 &4.9$\times10^{-4}$& 1.4000& 14.19 & 14.09& 822.0 & 1hs & 297 &311 & AL634$\ast$ \\
2331$+$073 & 0.401 & 16.04 & Q & ... &0.602 &4.3$\times10^{-5}$& 1.4000& 4.47 & 0.61 & 38.4 & 1hs+2 & 225 &251 & AL634$\ast$ \\
2345$-$167 & 0.576 & 18.41 & Q & Y &1.921 &8.1$\times10^{-5}$& 1.4000& 13.45 & 1.99 & 142.7 & 1hs & 141 &227 & AL634$\ast$ \\
2351$+$456 & 1.986 & 20.60 & Q & ... &2.260 &6.3$\times10^{-5}$& 1.4000& 27.09 & 2.35 & 6.9 & c, 1hs & 278 &260 & AL634$\ast$ \\
\enddata
\tablecomments
{{\scriptsize Col.~1 \& 2: IAU B1950 names of the MOJAVE sources and their redshifts.
$c$ = sources do not show any discernable extended emission.
Col.~3: Apparent V-band magnitude obtained from the AGN catalog of
\citet{Veron06} or NED.
Col.~4: Blazar classification $-$ Q = quasar, B = BL~Lac object, G = radio
galaxy, U = Unidentified.
Col.~5: Highly optically polarized $-$ Y = Yes, N = No, .... = no data
available.
Col.~6 \& 7: Peak and $rms$ intensity in radio map, respectively.
$I_{rms}$ for the sources in \citet{Murphy93} are the CLEV values in their Table 2.
$\ddagger$ for Cygnus A, the peak intensity corresponds to the south-eastern
hot spot.
Col.~8: Central observing frequency in GHz.
Col.~9: Apparent parsec-scale speed of the fastest jet feature.
Col.~10: Integrated core flux density at $\sim$1.4 GHz obtained with JMFIT.
Col.~11: Extended flux density obtained by subtracting the core
from the total radio flux density.
Col.~12: Radio morphology $-$ c = core only, ch = core-halo, 1hs = 1 hot spot,
2hs = 2 hot spots, 1 = 1 sided, 2 = 2 sided, 1hs+2 = 2 sided structure with
hot spot on one side. Alternate morphologies are listed
separated by commas.
Col.~13: Parsec-scale jet position angle.
Col.~14: Kiloparsec-scale jet position angle.
Col.~15: References and VLA archive Project IDs for the radio data $-$
$\dagger$=New data presented in this paper.
$\ast$=Data presented in \citet{Cooper07}. Murphy = \citet{Murphy93}.}}
\label{tabsample}
\end{deluxetable}
\begin{deluxetable}{llllllll}
\tabletypesize{\scriptsize}
\tablecaption{Correlation Results}
\tablewidth{0pt}
\tablehead{
\colhead{Param 1} & \colhead{Param 2} & \colhead{Class}& \colhead{Spearman} &
\colhead{Spearman}& \colhead{Kendall $\tau$} & \colhead{Kendall $\tau$} &
\colhead{Correl ?}\\
\colhead{} & \colhead{} &\colhead{} & \colhead{Coeff} & \colhead{Prob} &
\colhead{Coeff}& \colhead{Prob}& \colhead{} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} &
\colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)}}
\startdata
$L_{core}$ & $L_{ext}$ &ALL& 0.61 & 1.6$\times10^{-13}$ &0.45 &$<1\times10^{-5}$ & YES \\
$M_{abs}$ & $L_{ext}$ &ALL&$-$0.39 & 9.8$\times10^{-6}$ &$-$0.27 &1.1$\times10^{-6}$ & YES \\
$L_{ext}$ & $\beta_{app}$ &ALL& 0.44 & 5.2$\times10^{-7}$ & 0.31 &5.3$\times10^{-7}$ & YES \\
& &Q&0.36&0.0004&0.24&0.0005& YES \\
& &B&0.32&0.193&0.24&0.148&NO \\
$M_{abs}$ & $\beta_{app}$ &ALL&$-$0.27 & 0.0025&$-$0.18&0.0031& YES\\
$R_c$ & $L_{core}$ &ALL&0.26 & 0.003 &0.19 & 0.002 & YES \\
& &Q& 0.39 & 8.5$\times10^{-5}$ & 0.27 &8.8$\times10^{-5}$ & YES \\
& &B&$-$0.02& 0.912 &$-$0.01 &0.909 & NO \\
$R_c$ & $L_{ext}$ &ALL&$-$0.49 & 1.5$\times10^{-8}$ &$-$0.35 &$<1\times10^{-5}$ & YES \\
$R_c$ & $\beta_{app}$ &ALL&$-$0.18&0.046&$-$0.12&0.044&YES?\\
& &Q&$-$0.28& 0.006 &$-$0.19 & 0.006 & YES \\
& &B&0.02 & 0.922 &0.01 & 0.939 & NO \\
$R_c$ & $\Delta$PA &ALL&$-$0.13& 0.146 &$-$0.09 & 0.142 & NO \\
& &Q&$-$0.18& 0.087 &$-$0.12 & 0.082 & YES? \\
& &B&$-$0.15& 0.579 &$-$0.10 & 0.589 & NO \\
$R_v$ & $L_{core}$ &ALL& 0.62 & 6.5$\times10^{-15}$ & 0.45 &$<1\times10^{-5}$ & YES \\
$R_v$ & $\beta_{app}$ &ALL& 0.11 & 0.200 & 0.07 &0.226 & NO \\
$R_v$ & $\Delta$PA &ALL&0.23&0.011&0.15&0.017&YES?\\
& &Q& 0.26 & 0.011 & 0.17 &0.015 & YES? \\
& &B& 0.38 & 0.143 & 0.23 &0.207 & NO \\
& &FRII&0.24&0.029&0.16&0.032&YES?\\
& &FRI/II&0.18&0.277&0.11&0.313&NO\\
& &FRI&0.12&0.658&0.07&0.701&NO\\
$\Delta$PA & $z$ &ALL& 0.06 & 0.514 & 0.04 & 0.498 & NO \\
$L_{ext}/M_{abs}$&$z$ &Q+B&0.21& 0.023&0.14&0.021&YES?\\
& &Q& 0.27 & 0.007 & 0.19 &0.007 & YES \\
& &B& 0.31 & 0.209 & 0.16 &0.343 & NO \\
$L_{ext}/M_{abs}$&$\beta_{app}$&Q+B&$-$0.02 & 0.810 &$-$0.01 &0.768 & NO \\
$L_{ext}/M_{abs}$&$\Delta$PA &Q+B&$-$0.34 & 0.0003 &$-$0.23 & 0.0005 & YES\\
\enddata
\tablecomments{
Cols.~1 \& 2: Parameters being examined for correlations. Results from a
partial regression analysis are cited in the main text.
The ten core-only sources have been excluded from correlations with
extended luminosity. Including the upper limits in extended luminosity as
detections does not alter any of the observed trends.
Col.~3: The correlation test results have been presented separately
if they were different for quasars and BL Lacs considered alone.
ALL = quasars + BL Lacs + radio galaxies, Q = only quasars,
B = only BL Lacs, Q+B = quasars + BL Lacs. FRII, FRI/II, and FRI are as defined
in \S4.5.
Cols.~4 \& 5: Spearman rank correlation coefficient and
chance probability. Cols.~6 \& 7: Kendall Tau correlation coefficient and
chance probability. Col.~8: Indicates if the parameters are significantly
correlated. `YES?' indicates a marginal correlation.
We note that at the 95\% confidence level, 1.7 spurious correlations could
arise from the $\approx$35 correlations that we tested.}
\label{tabcorrel}
\end{deluxetable}
\acknowledgments
We would like to thank the anonymous referee for a careful assessment
of the manuscript, which has led to significant improvement.
PK would like to thank John Peterson for useful discussions on galaxy clusters,
and Chris O'Dea for insightful suggestions.
The MOJAVE project is supported under National Science Foundation grant
0807860-AST and NASA-Fermi grant NNX08AV67G.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space Administration.
The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc.
\section{Introduction}
\label{section:intro}
Machine learning models inform an increasingly large number of critical decisions in diverse settings. They assist medical diagnosis~\citep{mckinney2020international}, guide policing~\citep{meijer2019predictive}, and power credit scoring systems~\citep{tsai2008using}. While they have demonstrated their value in many sectors, they are prone to unwanted biases, leading to discrimination against protected subgroups within the population. For example, recent studies have revealed biases in predictive policing and criminal sentencing systems~\citep{meijer2019predictive,Chouldechova17}. The blossoming body of research in algorithmic fairness aims to study and address this issue by introducing novel algorithms guaranteeing a certain level of non-discrimination in the predictions. Each such algorithm relies on a specific definition of fairness, which falls into one of two categories: individual fairness~\citep{Dwork2012,Zemel13} or group fairness~\citep{Calders10, Kamishima11, Hardt16}. The vast majority of the algorithmic group fairness literature has focused on the simplest case where there are only two groups. In this paper, we consider the more nuanced case of group fairness with respect to multiple groups.
The simplest setting is the {\em independent} case, with only one sensitive attribute which can take multiple values, e.g., race only. The presence of multiple sensitive attributes (e.g., race {\em and} gender simultaneously) leads to non-equivalent definitions of group fairness. On the one hand, fairness can be considered independently per sensitive attribute, leading to overlapping subgroups. For example, consider a model restricted to demographic parity between subgroups defined by ethnicity. Simultaneously, the model can be constrained to fulfill demographic parity between subgroups defined by gender. We term fairness in this situation \textit{independent group fairness}. On the other hand, one can consider all subgroups defined by intersections of sensitive attributes (e.g., ethnicity and gender), leading to \emph{intersectional group fairness}.
A given algorithm can be \textit{independently group fair}, e.g., when considering race and gender in isolation, but not \textit{intersectionally group fair}, e.g., when considering intersections of racial and gender groups. For example, \citet{Buolamwini18} showed that facial recognition software performed particularly poorly for black women.
This phenomenon, called \emph{fairness gerrymandering}, has been studied by~\citet{Kearns18}. Intersectional fairness is often considered ideal. However, it comes with major statistical and computational hurdles such as data scarcity at intersections of minority groups, and the potentially exponential number of subgroups. Indeed, current algorithms consist of either brute force enumeration or searching via a cost-sensitive classification problem, and intersectional groups are often empty with finite samples~\citep{Kearns18}.
On the other hand, independent group fairness still provides a broad measure of fairness and is much easier to enforce.
We seek to { {\em design unifying statistically consistent strategies for group fairness and to clarify the relationship between the existing definitions.}}
Our main results and algorithms apply to arbitrary overlapping group definitions.
Our contributions are summarized in the following.
\begin{itemize}
\item {\bf Probabilistic results.} We characterize the population optimal (also known as the Bayes-optimal) prediction procedure for multiclass classification, where all the metrics are general linear functions of the confusion matrix. We consider both overlapping (independent, gerrymandering) and non-overlapping (unrestricted, intersectional) group fairness.
\item {\bf Algorithms and statistical results.} Inspired by the population optimal, we propose simple plugin and weighted empirical risk minimization (ERM) approaches for algorithmically fair classification, and prove their consistency, i.e., the empirical estimator converges to the population optimal with sufficiently large samples. Our general approach recovers existing results for plugin and weighted ERM group-fair classifiers.
\item {\bf Comparisons.} We compare independent group fairness to the overlapping case. We show that
intersectional fairness implies overlapping group fairness under weak conditions. However, the converse is not true, i.e., overlapping fairness may not imply intersectional fairness. This result formalizes existing observations on the dangers of gerrymandering.
\item {\bf Evaluation.}
Empirical results are provided to highlight our theoretical claims.
\end{itemize}
Taken together, our results unify and advance the state of the art with respect to the probabilistic, statistical, and algorithmic understanding of group-fair classification. The generality of our approach gives significant flexibility to the algorithm designer when constructing algorithmically-fair learners.
\section{Problem Setup and Notation}
\label{section:framework}
Throughout the paper, we use uppercase bold letters to represent matrices and lowercase bold letters to represent vectors. Let $e_i$ denote the $i$th standard basis vector, whose $i$th entry is 1 and all other entries are 0, i.e., $e_i=(0,\cdots,1,\cdots,0)$. We denote by $\vec 1$ the all-ones vector, with dimension inferred from context. Given two matrices ${\mathbf{A}},{\mathbf{B}}$ of the same dimension, $\ip{{\mathbf{A}},{\mathbf{B}}} = \sum_{i,j} a_{ij}b_{ij}$ is the Frobenius inner product. For any quantity $q$, $\hat q$ denotes an empirical estimate. Due to limited space, proofs are presented in the appendix.
{\bf Group notation.}
We assume $M$ sensitive attributes, where attribute $m$ takes values in a set $\mathcal{A}_m$, $m \in [M]$. For example, $\mathcal{A}_1$ may correspond to race, $\mathcal{A}_2$ to gender, and so on. Combined, the sensitive group indicator is represented by an $M$-dimensional vector ${\mathbf{a}} \in \mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \cdots \times \mathcal{A}_M$. In other words, each instance is associated with $M$ subgroups simultaneously.
{\bf Probabilistic notation.}
Consider the multiclass classification problem where $\mathcal{Z}$ denotes the instance space and $\mathcal{Y} = \left[K\right]$ denotes the output space with $K$ classes. We assume the instances, outputs and groups are samples from a probability distribution $\mathbb{P}$ over the domain $\mathcal{Y}\times\mathcal{Z}\times\mathcal{A}$. A dataset is given by $n$ samples $(y^{(i)}, z^{(i)}, a^{(i)}) \overset{\text{i.i.d}}{\sim} \mathbb{P}, i\in[n]$.
To simplify notation, let $\mathcal{X} = \mathcal{Z} \times \mathcal{A}$, so ${\mathbf{x}} = ({\mathbf{z}},{\mathbf{a}})$.
Define the set of randomized classifiers $\mathcal{H}_r=\{\mathbf{h}: \mathcal{X} \times \mathcal{A} {\,\rightarrow\,} (\Delta^K) \}$, where $\Delta^q = \set{\mathbf{p}\in [0,1]^q: \sum_{i=1}^q p_i = 1}$ is the $q-1$ dimensional probability simplex. A classifier $\vect h$ is associated with the random variable $h\in [K]$ defined by $\P(h=k|{\mathbf{x}}) = h_k({\mathbf{x}})$. If $\vect h$ is deterministic, then we can write $\vect h({\mathbf{x}}) = e_{h({\mathbf{x}})}$.
{\em Confusion matrices.}
For any multiclass classifier, let $\vect{\eta}({\mathbf{x}}) \in \Delta^{K}$ denote the class probabilities for any given instance ${\mathbf{x}}$ and sensitive attribute ${\mathbf{a}}$, whose $k$th element is the conditional probability of the output belonging to class $k$, i.e., $\eta_k({\mathbf{x}}) = \P(Y = k \mid X = {\mathbf{x}})$. The population confusion matrix is ${\mat{C}}\in [0,1]^{K\times K}$, with elements defined for $k,\ell \in[K]$ as ${\mat{C}}_{k,\ell} = \P(Y=k, h=\ell)$, or equivalently,
\begin{align*}
{\mat{C}}_{k,\ell} = \int_{{\mathbf{x}}} \vect{\eta}_k({\mathbf{x}})h_\ell({\mathbf{x}})\,d\P({\mathbf{x}}).
\label{eq:defC}
\end{align*}
{\em Group-specific confusion matrices.} Let $\mathcal{G}$ represent a collection of potentially overlapping subsets of the instance space $\mathcal{X}$. We leave $\mathcal{G}$ generic for now, and will specify the cases relevant to fairness in the following. Given any group $g \in \mathcal{G}$, we can define the group-specific confusion matrix ${\mat{C}^g}\in [0,1]^{K\times K}$, with elements defined for $k,\ell\in[K]$, where
\begin{align*}
{\mat{C}}^g_{k,\ell} = \int_{{\mathbf{x}}} \vect{\eta}_k({\mathbf{x}})h_{\ell}({\mathbf{x}})\,d\P({\mathbf{x}}|{\mathbf{x}}\in g).
\end{align*}
We will abbreviate the event $\{{\mathbf{x}} \in g\}$ to simply $g$ when it is clear from context.
Let $\pi_{g} = \P(X\in g)$ be the probability of group $g$. It is clear that when the groups $\mathcal{G}$ form a partition, i.e., $a \cap b = \emptyset$ for all distinct $a, b \in \mathcal{G}$ and $\bigcup_{g\in\mathcal{G}} g = \mathcal{X}$, the population confusion may be recovered by a weighted average of group confusions, $\mat{C} = \sum_{g \in \mathcal{G}} \pi_{g} \mat{C}^{g}.$
Let $\omega_{k} = \P(Y=k) = \sum_{\ell} {\mat{C}}_{k,\ell} $ be the probability of label $k$, and $\omega_{k}^g = \P(Y=k | X\in g) = \sum_{\ell} {\mat{C}}^g_{k,\ell} $ be the probability of label $k$ given group $g$.
{\bf The sample confusion matrix}
is defined as $\mat{\sConf}[\vect{h}] = \frac{1}{n} \sum_{i=1}^n \mat{\sConf}^{(i)}[\vect{h}]$, where
$\mat{\sConf}^{(i)}[\vect{h}] \in [0, 1 ]^{K\times K}$, and $\sConf_{k,\ell}^{(i)}[\vect{h}] = \indicator{y_i=k}h_\ell({\mathbf{x}}_i)$. Here, $\indicator{\cdot}$ is the indicator function, so $\sum_{k=1}^{K}\sum_{\ell =1}^{K}\sConf_{k,\ell}^{(i)}[\vect{h}]=1$.
{\em The empirical group-specific confusion matrices} $\widehat{\mat C}^g$ are computed by conditioning on groups. In the empirical case, it is convenient to represent group memberships via indices alone, i.e., ${\mathbf{x}}_i \in g$ as $i \in g$.
We have $\mat{\sConf}^g[\vect{h}] = \frac{1}{|g|} \sum_{i\in g} \mat{\sConf}^{(i)}[\vect{h}]$.
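These empirical quantities are straightforward to compute directly from data. As an illustrative sketch (NumPy accompanying the text; the toy data and variable names are ours, not part of the formal development), the following computes $\wh{\mat C}$ and the group-specific $\wh{\mat C}^g$ for a deterministic classifier, and verifies the partition identity $\mat{C} = \sum_{g} \pi_{g} \mat{C}^{g}$ on non-overlapping groups:

```python
import numpy as np

def confusion(y, yhat, K):
    """Empirical confusion: C[k, l] = (1/n) * #{i : y_i = k, h(x_i) = l}."""
    C = np.zeros((K, K))
    for k, l in zip(y, yhat):
        C[k, l] += 1.0
    return C / len(y)

def group_confusions(y, yhat, masks, K):
    """Group-specific confusions, conditioning on boolean membership masks."""
    return {g: confusion(y[m], yhat[m], K) for g, m in masks.items()}

# toy data: K = 2 classes, one binary sensitive attribute a
y    = np.array([0, 0, 1, 1, 1, 0])
yhat = np.array([0, 1, 1, 1, 0, 0])   # deterministic predictions h(x_i)
a    = np.array([0, 0, 0, 1, 1, 1])
masks = {"a=0": a == 0, "a=1": a == 1}

C  = confusion(y, yhat, K=2)
Cg = group_confusions(y, yhat, masks, K=2)

# partition identity: C = sum_g pi_g * C^g for non-overlapping groups
pi = {g: m.mean() for g, m in masks.items()}
recomposed = sum(pi[g] * Cg[g] for g in masks)
print(np.allclose(C, recomposed))     # True
```

For a randomized classifier, each update would instead add the probability vector $\vect h({\mathbf{x}}_i)$ to row $y_i$ rather than a single indicator.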
{\bf Fairness constraints.}
Let $\mathcal{G}_{\text{fair}}$ represent the (potentially overlapping) set of groups across which we wish to enforce fairness. The following states our formal assumptions on $\mathcal{G}_{\text{fair}}$.
\begin{assumption}
$\mathcal{G}_{\text{fair}}$ is a function of the sensitive attributes $\mathcal{A}$ only.
\label{ass:gps}
\end{assumption}
We will focus the discussion on common cases in the literature. These include non-overlapping (unrestricted, intersectional), and overlapping (independent, gerrymandering) group partitions.
\begin{itemize}
\item {\em Unrestricted case.} The simplest case is where the group is defined by a single sensitive attribute (when there are multiple sensitive attributes, all but one are ignored). These have been the primary settings addressed by past literature \citep{Hardt16, Narasimhan18, agarwal18}. Thus for some fixed $i \in [M]$, $g_{j} =\{({\mathbf{z}}, {\mathbf{a}})|a_i = j \}$, so $|\mathcal{G}_\text{unrestricted}| = |\mathcal{A}_i|$. In the special case of binary sensitive attributes, $|\mathcal{G}_\text{unrestricted}| = 2$.
\item {\em Intersectional groups}. Here, the non-overlapping groups are associated with all possible combinations of sensitive features. Thus $g_{\mathbf{a}} =\{({\mathbf{z}}, {\mathbf{a}}')|{\mathbf{a}}'={\mathbf{a}}\} \, \forall {\mathbf{a}} \in \mathcal{A}$, so $|\mathcal{G}_\text{intersectional}| = \prod_{m \in [M]}|\mathcal{A}_m|$. In the special case of binary sensitive attributes, $|\mathcal{G}_\text{intersectional}| = 2^M$.
\item {\em Independent groups}. Here, the groups are overlapping, with a set of groups associated with each fairness attribute separately. It is convenient to denote the groups based on indices representing each attribute, and each potential setting. Thus $g_{i,j} =\{({\mathbf{z}}, {\mathbf{a}})|a_i = j \}$, so $|\mathcal{G}_\text{independent}| = \sum_{m \in [M]}|\mathcal{A}_m|$. In the special case of binary sensitive attributes, $|\mathcal{G}_\text{independent}| = 2M$.
\item {\em Gerrymandering intersectional groups}. Here, group intersections are defined by any subset of the sensitive attributes, leading to overlapping subgroups. $\mathcal{G}_\text{gerrymandering} = \{\{({\mathbf{z}}, {\mathbf{a}}): {\mathbf{a}}_I={\mathbf{s}}\}: I\subseteq[M],\, {\mathbf{s}} \in \mathcal{A}_I\}$
\end{itemize}
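Concretely, for $M$ binary sensitive attributes the families above can be enumerated directly. The sketch below (illustrative Python, not part of the formal development) represents each group by the pair of an index set $I$ and values ${\mathbf{s}}$, and confirms the stated cardinalities $2M$, $2^M$, and $3^M$, as well as the inclusions $\mathcal{G}_\text{intersectional} \subseteq \mathcal{G}_\text{gerrymandering}$ and $\mathcal{G}_\text{independent} \subseteq \mathcal{G}_\text{gerrymandering}$:

```python
from itertools import combinations, product

# Each group is encoded as (I, s): the attributes in I are fixed to the values s.
def independent_groups(M):
    # one group per (attribute, value) pair: {a_i = j}
    return [((i,), (j,)) for i in range(M) for j in (0, 1)]

def intersectional_groups(M):
    # one group per full assignment a in {0,1}^M
    return [(tuple(range(M)), s) for s in product((0, 1), repeat=M)]

def gerrymandering_groups(M):
    # fix any subset I of attributes (possibly empty) to values s
    return [(I, s)
            for r in range(M + 1)
            for I in combinations(range(M), r)
            for s in product((0, 1), repeat=r)]

M = 3
print(len(independent_groups(M)),      # 2M  = 6
      len(intersectional_groups(M)),   # 2^M = 8
      len(gerrymandering_groups(M)))   # 3^M = 27
```

Representing each group by the pair $(I, {\mathbf{s}})$ makes the inclusions above immediate: the independent groups are exactly the $|I|=1$ cases, and the intersectional groups the $I=[M]$ cases.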
{\bf Fairness metrics.}
We formulate group fairness by upper bounding a fairness violation function $\mathcal{V}: \mathcal{H} \mapsto \mathbb{R}^J$ which can be represented as a linear function of the confusion matrices, i.e., $\mathcal{V}(\vect{h})= \Phi(\mat C[{\vect h}], \confsh{\vect h})$ where $\forall j\in [J],\; \mathcal{V}(\vect{h})_j = \phi_j(\mat C[{\vect h}], \confsh{\vect h}) = \ip{{\mathbf{U}}_j, \mat C} - \sum_{g\in\mathcal{G}_{\text{fair}}} \ip{{\mathbf{V}}_j^g, \mat C^g}$. This formulation is sufficiently flexible to include, as special cases, the fairness statistics in common use that we are aware of. For example, demographic parity for binary classifiers \citep{Dwork2012} can be defined by fixing the positive prediction rate $\mat{C}_{0,1}^g+\mat{C}_{1,1}^g$ across groups. Equal opportunity \citep{Hardt2016} is recovered by fixing the group-specific true positives, using population-specific weights, i.e.,
\begin{equation*}
\label{eq:dp}
\phi_\text{DP}^{\pm} = \pm \left(\mat{C}_{0,1}^g+\mat{C}_{1,1}^g
- \mat{C}_{0,1}-\mat{C}_{1,1}\right)-\nu,
\quad
\phi_\text{EO}^{\pm} = \pm\left(\frac{1}{\omega_1^g}\mat{C}_{1,1}^g
- \frac{1}{\omega_1}\mat{C}_{1,1}\right)-\nu,
\end{equation*}
using both a positive and a negative constraint to penalize both positive and negative deviations between the group and the population, with relaxation $\nu$.
{\bf Performance metrics.}
We consider an error metric $\mathcal{E}: \mathcal{H} \mapsto \mathbb{R}_+$ that is a linear function of the population confusion $\mathcal{E}(\mathbf{h}) = \psi(\mat{C}) = \langle {\mathbf{D}}, \mat{C}[{\vect h}] \rangle$. This setting has been studied in binary classification~\citep{pmlr-v80-yan18b}, multiclass classification~\citep{narasimhan2015consistent}, multilabel classification~\citep{koyejo2015consistent}, and multioutput classification~\citep{wang2019consistent}. For instance, standard classification error corresponds to setting ${\mathbf{D}} = 1- {\mathbf{I}}$.
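As a quick illustration of the linear error metric (a hypothetical numpy sketch, not the authors' code): with ${\mathbf{D}} = 1 - {\mathbf{I}}$, the metric $\ip{{\mathbf{D}}, \mat C[\vect h]}$ is exactly the empirical misclassification rate.

```python
import numpy as np

def confusion(y_true, y_pred, K):
    # C[i, j] = empirical probability of (true label i, predicted label j).
    C = np.zeros((K, K))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1.0
    return C / len(y_true)

def linear_metric(C, D):
    # E(h) = <D, C[h]>; with D = 1 - I this is the 0-1 error.
    return float(np.sum(D * C))

y_true = [0, 1, 2, 1]
y_pred = [0, 2, 2, 1]
C = confusion(y_true, y_pred, K=3)
err = linear_metric(C, D=1.0 - np.eye(3))  # one mistake out of four
```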
The goal is to learn the Bayes-optimal classifier with respect to the given metric, which, when it exists, is given by:
\begin{equation}
\vect{h}^* \in \operatorname{argmin}_{\vect{h}}\; \mathcal{E}(\vect{h}) \; \text{s.t.} \; \mathcal{V}(\vect{h}) \le \mathbf{0}.
\label{eq:Bayes}
\end{equation}
We denote the optimal error as $\mathcal{E}^* = \mathcal{E}(\vect{h}^*)$. We say a classifier $\vect{h}_n$ constructed using finite data of size $n$ is $\{\mathcal{E}, \mathcal{V}\}$-consistent if $ \mathcal{E}(\vect{h}_n)\xrightarrow{ \mathbb{P} }\mathcal{E}^*$ and $\mathcal{V}(\vect h_n) \xrightarrow{\P} \mathbf{0}$, as $n {\,\rightarrow\,} \infty$. We also consider empirical versions of the error $\hat\mathcal{E}(\vect h) = \psi(\wh{\mat C}[{\vect h}])$ and the fairness violation $\widehat\mathcal{V}(\vect h) = \Phi(\wh{\mat C}[{\vect h}], \honfsh{\vect h})$.
\begin{table}[h]
\caption{Examples of multiclass performance metrics and fairness metrics studied in this manuscript.}
\label{table-metrics}
\begin{center}
\begin{tabular}{llll}
\toprule
{Metric} & $\psi(\mat{\conf})$ & {Fairness Metric} & $\phi(\mat{\conf}, \{ {\mat{\conf}}^g \}_g )$ \\\midrule
Weighted Acc. & $\sum_{i=1}^{K}\sum_{j=1}^{K}b_{i,j}\conf_{i,j}$ &
Demographic Parity &
$(\mat{C}_{0,1}^g+\mat{C}_{1,1}^g - \mat{C}_{0,1}-\mat{C}_{1,1})-\nu$
\\[5pt]
Ordinal Acc. &
$\sum_{i=1}^{K}\sum_{j=1}^{K}(1-\frac{1}{K-1} |i-j|)\conf_{i,j}$
&
Equalized Opportunity &
$\left(\frac{1}{\omega_1^g}\mat{C}_{1,1}^g - \frac{1}{\omega_1}\mat{C}_{1,1}\right)-\nu$
\\[5pt]
\bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Bayes-Optimal Classifiers}
\label{sec:bayesopt}
In this section, we identify a parametric form for the Bayes-optimal group-fair classifier under standard assumptions. To begin, we introduce the following general assumption on the joint distribution.
\begin{assumption}[$\eta$-continuity]
\label{assumption1}
Assume $\mathbb{P}(\{ \vect{\eta}({\mathbf{x}}) = \mathbf{c} \}) = 0 \; \forall \mathbf{c} \in \Delta^K.$
Furthermore, let $Q= \vect{\eta}({\mathbf{x}})$ be a random variable with density $p_{\eta}(Q)$, where $p_{\eta}(Q)$ is absolutely continuous with respect to the Lebesgue measure restricted to $\Delta^K$.
\label{ass:data}
\end{assumption}
This assumption imposes that the conditional probability, viewed as a random variable, has a well-defined density. Analogous regularity assumptions are widely employed in the literature on complex classification metrics and seem to be unavoidable (we refer the interested reader to~\citet{pmlr-v80-yan18b,narasimhan2015consistent} for details).
Next, we define the general form of weighted multiclass classifiers, which are the Bayes-optimal classifiers for linear metrics.
\begin{definition}[\citet{narasimhan2015consistent}]
\label{def:Min-Form}
Given a loss matrix $\mat{W}\in \mathbb{R}^{K\times K}$, a weighted classifier $\mathbf{h}$ satisfies $h_i({\mathbf{x}})>0$ only if $i \in \arg\min_{k\in[K]} \ip{\mat{W}_{k}, \vect{\eta}({\mathbf{x}})}$.
\end{definition}
Next we present our first main result identifying the Bayes-optimal group-fair classifier.
\begin{theorem}
\label{thrm:Min-Form}
Under Assumption~\ref{ass:gps} and Assumption~\ref{ass:data}, if \eqref{eq:Bayes} is feasible (i.e., a solution exists), the Bayes-optimal classifier is given by $\mathbf{h}^*({\mathbf{x}}) = \mathbf{h}^*({\mathbf{z}}, {\mathbf{a}}) = \beta_{{\mathbf{a}}}\mathbf{h}_1({\mathbf{x}}) + (1-\beta_{{\mathbf{a}}})\mathbf{h}_2({\mathbf{x}}),$ where $\beta_{{\mathbf{a}}} \in (0,1), \forall {\mathbf{a}} \in \mathcal{A}$ and $\mathbf{h}_i({\mathbf{x}})$ are weighted classifiers with weights $\{\{ {\mathbf{W}}_{i, {\mathbf{a}}} \}_{i \in \{1, 2\}}\}_{{\mathbf{a}} \in \mathcal{A}}$.
\label{thm:max_over_eta}
\end{theorem}
One key observation is that pointwise, the Bayes-optimal classifier can be decomposed based on intersectional groups $\mathcal{G}_{\text{intersectional}} = \mathcal{A}$, even when $\mathcal{G}_\text{fair}$ is overlapping. This observation will prove useful for algorithms.
\subsection{Intersectional group fairness implies overlapping group fairness}
Recent research~\cite{Kearns18} has shown, primarily via examples, that imposing overlapping group fairness using independent fairness restrictions can lead to violations of intersectional fairness. This observation led to the term {\em fairness gerrymandering}. Here, we examine this claim more formally, showing that enforcing intersectional fairness controls overlapping fairness, although the converse is not always true, i.e., enforcing overlapping fairness does not imply intersectional fairness. We show this result for the general case of quasi-convex fairness measures, with linear fairness metrics recovered as a special case.
\begin{proposition}
For any $\mathcal{G}_{\text{fair}}$ that satisfies Assumption~\ref{ass:gps}, suppose $\phi:[0,1]^{K\times K}\times[0,1]^{K\times K}{\,\rightarrow\,} \R_+$ is quasiconvex. Then
$\phi(\mat C, \mat C^g) \leq 0
\, \forall g\in\mathcal{G}_\text{intersectional}
\implies
\phi(\mat C,\mat C^g)\leq 0 \,
\forall g\in\mathcal{G}_{\text{fair}}. $
The converse does not hold.
\label{section:comparison}
\end{proposition}
\begin{remark}
Note that the converse claim of Proposition~\ref{section:comparison} does not apply to $\mathcal{G}_\text{gerrymandering}$. Controlling the gerrymandering fairness violation implies control of the intersectional fairness violation, since $\mathcal{G}_\text{intersectional} \subseteq \mathcal{G}_\text{gerrymandering}$.
\end{remark}
\begin{algorithm}[t]
\caption{{\tt GroupFair},
Group-fair classification with overlapping groups.
\label{alg:general}}
\KwIn{$\psi:[0,1]^{K\times K} {\,\rightarrow\,}[0,1],\,
\Phi: [0,1]^{K\times K}\times([0,1]^{K\times K})^{\mathcal{G}_{\text{fair}}}{\,\rightarrow\,}[0,1]^J$}
\myinput{samples $\{({\mathbf{x}}_1,y_1),\ldots, ({\mathbf{x}}_n, y_n)\}$.}
Initialize $\vec\vect{\lambda}^1\in [0,B]^{J}$\;
\For{$t=1,\ldots, T$}{
$h^t \gets \mino_{h\in\mathcal{H}}(\mathcal{L}(h,\vec\vect{\lambda}^t), z^n)$\;
$\vec\vect{\lambda}^{t+1}\gets \update_t(\vec\vect{\lambda}^t, \Phi(\wh{\mat C}[h^t], \honfsh{h^t})-\ve)$\;
}
$\bar{\vect{h}}^T \gets \frac{1}{T}\sum_{t=1}^T \vect h^t,\quad
\bar{\vec\vect{\lambda}}^T\gets \frac{1}{T}\sum_{t=1}^T\vec\vect{\lambda}^t$\;
\vspace{0.03in}
\Return{$(\bar{\vect h}^T, \bar{\vec\vect{\lambda}}^T)$}
\end{algorithm}
\section{Algorithms}
\label{section:algorithms}
Here we present {\tt GroupFair}, a general empirical procedure for solving \eqref{eq:Bayes}.
The Lagrangian of the constrained optimization problem \eqref{eq:Bayes} is $\mathcal{L}(\vect h, \vect{\lambda}) = \mathcal{E}(\vect h) + \vect{\lambda}^\top\mathcal{V}(\vect h)$ with empirical Lagrangian $\hat\mathcal{L}(\vect h,\vect{\lambda}) = \hat\mathcal{E}(\vect h) + \vect{\lambda}^\top(\widehat\mathcal{V}(\vect h)-\ve)$, where $\ve$ is a buffer for generalization.
Our approach involves finding a saddle point of the Lagrangian. The returned classifiers will be probabilistic combinations of classifiers in $\mathcal{H}$, i.e. the procedure returns a classifier in $\conv(\mathcal{H})$.
In the following, we first assume the dual parameter $\vect{\lambda}$ is fixed, and describe the primal solution as a classification oracle. We consider both plugin and weighted ERM oracles. In brief, the plugin estimator first proceeds assuming $\vect{\eta}({\mathbf{x}})$ is known, then we {\em plug in} the empirical estimator $\hat\vect{\eta}({\mathbf{x}})$ in its place. The plugin approach has the benefit of low computational complexity once $\hat\vect{\eta}({\mathbf{x}})$ is estimated. On the other hand, the weighted ERM estimator requires the solution of a weighted classification problem in each round, but avoids the need for estimating $\hat\vect{\eta}({\mathbf{x}})$.
\subsection{Weighted ERM Oracle}
\label{section:werm}
In the weighted ERM approach we parametrize $h:\mathcal{X} {\,\rightarrow\,}[K]$ by a function class $\mathcal{F}$ of functions ${\mathbf{f}}:\mathcal{X}{\,\rightarrow\,}\R^K$. The classification is the argmax of the predicted vector, $h({\mathbf{x}}) = \operatorname{argmax}_j({\mathbf{f}}({\mathbf{x}})_j)$, so we denote the set of classifiers as $\mathcal{H}^{werm} = \operatorname{argmax}\circ\mathcal{F}$.
The following special case of Definition 1 in \citep{ramaswamy2016convex} outlines the required conditions for weighted multiclass classification calibration. This is commonly referred to as cost-sensitive classification~\citep{agarwal18} when applied to binary classification.
\begin{definition}[${\mathbf{W}}$-calibration~\citep{ramaswamy2016convex}]
Let ${\mathbf{W}} \in {\mathbb{R}}_+^{K\times K}$. A surrogate function ${\mathbf{L}}: {\mathbb{R}}^K {\,\rightarrow\,} {\mathbb{R}}^K_+$ is said to be ${\mathbf{W}}$-calibrated if
$$
\forall {\mathbf{p}} \in \Delta^K: \inf_{{\mathbf{u}}: \operatorname{argmax}({\mathbf{u}}) \notin \operatorname{argmin}_k ({\mathbf{p}}^\top{\mathbf{W}})_k } {\mathbf{p}}^\top{\mathbf{L}}({\mathbf{u}}) > \inf_{{\mathbf{u}}} {\mathbf{p}}^\top{\mathbf{L}}({\mathbf{u}}).
$$
\label{def:calibration}
\end{definition}
Note that the weights are sample (group) specific -- which, while uncommon, is not new, e.g., \citet{pires13}.
\begin{proposition}
The weighted ERM estimator for average fairness violation is given by:
$
h({\mathbf{x}}) = \operatorname{argmax}_j({\mathbf{f}}^*({\mathbf{x}})_j), \;
{\mathbf{f}}^* = \operatorname{argmin}_{{\mathbf{f}}\in\mathcal{F}} \hat L({\mathbf{f}}); \;$
where $\hat L({\mathbf{f}}) = \hat {\mathbb{E}} [{\mathbf{y}}^\top {\mathbf{L}}({\mathbf{f}}({\mathbf{x}}))]$ is a multiclass classification surrogate for the weighted multiclass error with group-dependent weights $\forall {\mathbf{a}} \in \mathcal{A}$
\begin{align}
{\mathbf{W}}({\mathbf{x}}) =
\left[{\mathbf{D}} + \sum_{j=1}^J \vect{\lambda}_j\bigg({\mathbf{U}}_j-\sum_{g\in\mathcal{G}_{\text{fair}}}\frac{\1_{{\mathbf{a}}\in g}}{\hat\pi(g)}{\mathbf{V}}_j^g\bigg)\right].
\label{eq:weights}
\end{align}
\label{prop:wwerm}
\end{proposition}
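To make \eqref{eq:weights} concrete, here is a minimal numpy sketch of assembling ${\mathbf{W}}({\mathbf{x}})$ from its components; the data layout (constraint-indexed lists of matrices, groups as attribute-value dictionaries, empirical group probabilities in a list) is our illustrative choice rather than a prescribed implementation.

```python
import numpy as np

def weight_matrix(a, D, U, V, lam, groups, pi_hat):
    # W = D + sum_j lam_j * (U_j - sum_{g : a in g} V_j^g / pi_hat(g)),
    # where membership of x in a group g depends only on its attributes a.
    W = D.astype(float).copy()
    for j, lam_j in enumerate(lam):
        G = U[j].astype(float).copy()
        for k, g in enumerate(groups):
            if all(a[i] == v for i, v in g.items()):  # indicator 1_{a in g}
                G = G - V[j][k] / pi_hat[k]
        W = W + lam_j * G
    return W

# Toy example: one constraint, two groups on a single binary attribute.
D = 1.0 - np.eye(2)
W = weight_matrix(a=(0,), D=D, U=[np.eye(2)],
                  V=[[np.eye(2), np.zeros((2, 2))]],
                  lam=[1.0], groups=[{0: 0}, {0: 1}], pi_hat=[0.5, 0.5])
```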
\subsection{The Plugin Oracle}
\label{section:plugin}
The plugin hypothesis class consists of the weighted classifiers identified by Theorem~\ref{thrm:Min-Form}: $\mathcal{H}^{plg} = \{h({\mathbf{x}})=\operatorname{argmin}_{j\in [K]}( \hat\vect{\eta}({\mathbf{x}})^\top {\mathbf{B}}({\mathbf{x}}))_j: {\mathbf{B}}({\mathbf{x}})\in\R^{K\times K}\}$. Here, we focus on the average violation case only.
By simply reordering terms, the population problem can be determined as follows.
\begin{proposition}
The plugin estimator for average fairness violation is given by $\hat h({\mathbf{x}}) = \operatorname{argmin}_{k\in [K]} (\hat\vect{\eta}({\mathbf{x}})^\top {\mathbf{W}}({\mathbf{x}}))_k$, where ${\mathbf{W}}({\mathbf{x}})$ is defined in \eqref{eq:weights}.
\label{prop:plugin}
\end{proposition}
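Given $\hat\vect{\eta}({\mathbf{x}})$ and ${\mathbf{W}}({\mathbf{x}})$, the plugin rule is a single matrix-vector product followed by an argmin. A hypothetical sketch:

```python
import numpy as np

def plugin_classify(eta_hat, W):
    # h(x) = argmin_k (eta_hat(x)^T W(x))_k: predict the class with the
    # smallest expected weighted cost under the estimated posterior.
    costs = eta_hat @ W   # costs[k] = sum_i eta_hat[i] * W[i, k]
    return int(np.argmin(costs))

# With W = 1 - I (plain 0-1 error, no active fairness terms), the rule
# reduces to argmax of eta_hat.
eta = np.array([0.2, 0.7, 0.1])
pred = plugin_classify(eta, 1.0 - np.eye(3))
```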
\subsection{{\tt GroupFair}, a General Group-Fair Classification Algorithm}
We can now present {\tt GroupFair}, a general algorithm for group-fair classification with overlapping groups, outlined in Algorithm~\ref{alg:general}. Our approach proceeds in rounds, updating the classifier oracle and the dual variable. Interleaved with the primal update is a dual update $\update_t(\vect{\lambda}, {\mathbf{v}})$ via gradient ascent on the dual variable. The resulting classifier is the average over the oracle classifiers.
{\bf Recovery of existing methods.}
When the groups are non-overlapping, {\tt GroupFair}\ with the Plugin oracle and projected gradient ascent update recovers FairCOCO~\citep{Narasimhan18}. Similarly, when the groups are non-overlapping, and the labels are binary, {\tt GroupFair}\ with the weighted ERM oracle and exponentiated gradient update recovers FairReduction~\citep{agarwal18} (see also Table~\ref{tab:algs}). Importantly, {\tt GroupFair}\ enables a straightforward extension to overlapping groups.
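The saddle-point iteration of Algorithm~\ref{alg:general} with a projected-gradient dual update can be sketched generically, abstracting the classification oracle. The example below is a toy illustration under our own simplifications (a scalar ``classifier'' and a single constraint), not the experimental code.

```python
import numpy as np

def group_fair(oracle, violation, J, B=10.0, T=100, eps=0.0):
    # Primal-dual loop: the oracle returns a best response h^t to the
    # current dual lam^t; the dual ascends on the empirical fairness
    # violation and is projected back onto the box [0, B]^J.
    lam = np.zeros(J)
    step = 1.0 / (B * np.sqrt(T))
    classifiers, duals = [], []
    for _ in range(T):
        h = oracle(lam)
        lam = np.clip(lam + step * (violation(h) - eps), 0.0, B)
        classifiers.append(h)
        duals.append(lam.copy())
    # Return the averaged primal iterates and the averaged dual variable.
    return classifiers, np.mean(duals, axis=0)

# Toy illustration: a scalar "classifier" (a prediction rate), with one
# constraint asking that rate to stay at most 0.5.
oracle = lambda lam: max(0.0, 1.0 - lam[0])
violation = lambda h: np.array([h - 0.5])
hs, lam_bar = group_fair(oracle, violation, J=1, B=2.0, T=50)
```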
\section{Consistency}
\label{section:consistency}
Here we discuss the consistency of the weighted ERM and the plugin approaches.
For any class $\mathcal{H} = \{h:\mathcal{X}{\,\rightarrow\,}[K]\}$, denote $\mathcal{H}_k = \{\1_{\{h(x)=k\}}:h\in \mathcal{H}\}$. We assume WLOG that $\VC(\mathcal{H}_1)=\ldots=\VC(\mathcal{H}_K)$ and denote this quantity as $\VC(\mathcal{H})$.
Next, we give a theorem relating the performance and satisfaction of constraints of an empirical saddle point to an optimal fair classifier.
\begin{theorem}
\label{thm:saddle-main}
Suppose $\psi : [0,1]^{K\times K}{\,\rightarrow\,} [0, 1]$ and $\Phi:[0,1]^{K\times K}\times ([0,1]^{K\times K})^{\mathcal{G}_{\text{fair}}}{\,\rightarrow\,} [0,1]^J$ are $\rho$-Lipschitz w.r.t. $\|\cdot\|_{\infty}$.
Recall $\hat\mathcal{L}(\vect h,\vect{\lambda})=\hat\mathcal{E}(\vect h)+\vect{\lambda}^\top(\hat\mathcal{V}(\vect h)-\ve\vec 1)$. Define $\gamma(n',\mathcal{H},\delta) = \sqrt{\frac{\VC(\mathcal{H})\log(n')+\log(1/\delta)}{n'}}$. If $n_{\min} = \min_{g\in\mathcal{G}_{\text{fair}}} n_g,\, \ve = \Omega\left(\rho\gamma(n_{\min}, \mathcal{H}, \delta)\right)$ then w.p. $1-\delta$:
If $(\bar{\vect h}, \bar{\vec\vect{\lambda}})$ is a $\nu$-saddle point of $\max_{\vect{\lambda}\in[0,B]^J}\min_{\vect h\in\conv\mathcal{H}} \hat\mathcal{L}(\vect h, \vect{\lambda})$, in the sense that $\max_{\vect{\lambda}\in[0,B]^J} \hat\mathcal{L}(\bar{\vect h}, \vect{\lambda})-\min_{\vect h\in\conv(\mathcal{H})}\hat\mathcal{L}(\vect h, \bar{\vec\vect{\lambda}})\leq\nu$, and $\vect h^*\in\conv(\mathcal{H})$ satisfies $\mathcal{V}(\vect h^*)\leq 0$, then
\begin{equation*}
\mathcal{E}(\bar{\vect h})\leq \mathcal{E}(\vect h^*)+\nu+\mathcal{O}\left(\rho\gamma(n,\mathcal{H},\delta)\right)
, \quad
\|\mathcal{V}(\bar{\vect h})\|_{\infty}\leq \frac{1+\nu}{B}+\mathcal{O}\left(\rho\gamma(n_{\min}, \mathcal{H},\delta)\right)+\ve.
\end{equation*}
\end{theorem}
Thus, as long as we can find an arbitrarily good saddle point, which weighted ERM grants if $\mathcal{H}^{werm}$ is expressive enough while having finite VC dimension, then we obtain consistency. A saddle point can be found by running a gradient ascent algorithm on $\vect{\lambda}$ confined to $[0,B]^J$, which repeatedly computes $h^t = \operatorname{argmin}_{h\in\mathcal{H}} \hat\mathcal{L}(h, \vect{\lambda}^t)$; the final $(\bar{\vect h},\bar{\vec\vect{\lambda}})$ are the averages of the primal and dual variables computed throughout the algorithm.
Although Theorem~\ref{thm:saddle-main} captures the spirit of the argument for the plugin algorithm, it only applies naturally to the weighted ERM algorithm. This is because the plugin algorithm is solving a subtly different minimization problem: it returns $h^t$ as the \textit{population minimum}, \textit{if the estimated regression function $\hat\eta$ replaces the true regression function}.
\begin{theorem}
\label{thm:plugcon}
With probability at least $1-\delta$, if projected gradient ascent is run as $\update_t(\vec\vect{\lambda}, {\mathbf{v}}) = \proj_{[0,B]^J}(\vec\vect{\lambda}+\eta {\mathbf{v}})$ for $T$ iterations with step size $\eta= \frac{1}{B\sqrt{T}}$ and for $t=1,\ldots, T,\; h^t= \plugin(\hat\vect{\eta}, (\hat\pi_g)_{g\in\mathcal{G}_{\text{fair}}}, \psi, \Phi)$, letting $\rho = \max\{\|\psi\|_1, \|\phi_1\|_1,
\ldots, \|\phi_J\|_1\},\, \rho_g = \sum_{j=1}^J \|{\mathbf{V}}^g_j\|_{\infty},\, \rho_{\mathcal{X}} = \|{\mathbf{D}}\|_{\infty}+\sum_{j=1}^J \|{\mathbf{U}}_j\|_{\infty},\,\Delta\vect{\eta} = \E\|\vect{\eta}(x)-\hat\vect{\eta}(x)\|_1, \check{n} = \min_{g\in\mathcal{G}_{\text{fair}}} n_g$, then
\begin{gather*}
\kappa := \mathcal{O}\left(J\rho\sqrt{\frac{K^2\log(\check n) + \log(\frac{|\mathcal{G}_{\text{fair}}|K^2}{\delta})}{\check n}}\right) +\Delta\vect{\eta} \left(\rho_{\mathcal{X}} + \sum_{g\in\mathcal{G}_{\text{fair}}}\frac{\rho_g}{\pi_g}\right) + \sqrt{\frac{\log(\frac{|\mathcal{G}_{\text{fair}}|}{\delta})}{n}}\sum_{g\in\mathcal{G}_{\text{fair}}}\frac{\rho_g}{\pi_g^2} \\
\implies \mathcal{E}_{\psi}(\bar{\vect h}^T) \leq \mathcal{E}_{\psi}^*
+ \frac{JB}{\sqrt{T}}
+ \mathcal{O}\left(BJ\kappa\right),
\qquad \|\mathcal{V}_{\phi}(\bar{\vect h}^T)\|_{\infty} \leq \frac{2J}{\sqrt{T}} + \mathcal{O}\left(J\kappa\right).
\end{gather*}
\end{theorem}
A key point in the presented analyses (for both procedures) is that the dominating statistical properties depend on the number of fairness groups. We note that $|\mathcal{G}_{\text{fair}}| \ll |\mathcal{G}_\text{intersectional}| = |\mathcal{A}|$ for the independent case, so this significantly improves the statistical rates. More broadly, we conjecture that the statistical bounds depend on $\min (|\mathcal{G}_{\text{fair}}|, |\mathcal{G}_\text{intersectional}|)$, and leave the details to future work. We also note the statistical dependence on the size of the smallest group. This seems to be unavoidable, as we need an estimate of the group fairness violation in order to control it. To this end, group violations may be scaled by group size, which leads instead to a dependence on the VC dimension of $\mathcal{G}_{\text{fair}}$, improving statistical dependence with small groups at the cost of some fairness~\cite{Kearns18}. We expect that the bounds may be improved by a more refined analysis or by modified algorithms with stronger assumptions.
\begin{table}[t]
\centering
\begin{tabular}{ccc}
\toprule
& $\mino_{h\in\mathcal{H}}(\mathcal{L}(h,\vect{\lambda}^t), z^n)$ & $\update_t(\vect{\lambda}, {\mathbf{v}})$ \\\midrule
FairReduction & $H \circ \operatorname{argmin}_{f\in\mathcal{F}} \hat L(f)$ & $B\frac{\exp(\log\lambda_i+\eta_t v_i)}{B-\sum_{j=1}^J\lambda_j+\sum_{j=1}^J\exp(\log\lambda_j+\eta_t v_j)}$\\
FairCOCO & $\plugin(\hat\vect{\eta}, (\hat\pi_g)_{g\in\mathcal{G}_{\text{fair}}}, \psi, \Phi, \vect{\lambda}^t)$ &
$\proj_{[0,B]^J}(\vect{\lambda} + \eta_t {\mathbf{v}})$
\\\bottomrule
\end{tabular}
\vspace{0.03in}
\caption{The oracles shown are $\plugin$ \eqref{eq:plugsol} and ERM on the reweighted $\hat L$ \eqref{eq:rwl}. $H=[\operatorname{argmax}_{k\in[K]} (\cdot)_k]$ converts a function $\mathcal{X}{\,\rightarrow\,}\R^K$ to a classifier. In FairCOCO, $\hat\vect{\eta}$ is estimated from samples $z^{1:n/2}=\{(x_1,y_1),\ldots, (x_{n/2}, y_{n/2})\}$ and all of the other probability estimates $(\hat\pi_g)_g$ and $\honfsh{h^t}$ are estimated from $z^{n/2:}=z^n\setminus z^{1:n/2}$.}
\label{tab:algs}
\end{table}
\subsection{Additional Related Work}
\label{section:related}
Recent work by \citet{Foulds18, Kearns18} and \citet{hebert2018} were among the first to define and study intersectional fairness with respect to parity and calibration metrics respectively. \citet{Narasimhan18} provide a plugin algorithm for group fairness and generalization guarantees for the unrestricted case.
\citet{Menon2018} considered Bayes optimality of fair binary classification where the sensitive attribute is unknown at test time, using an additional sensitive attribute regressor.
\citet{2018cotterwoodwang} provide a proxy-Lagrangian algorithm with generalization guarantees, assuming strongly convex proxy constraint functions, and argue that better generalization is achieved by reserving part of the dataset for training primal parameters and part for training dual parameters. \citet{2018celis} provide an algorithm with generalization guarantees for independent group fairness based on solving a grid of interval-constrained programs; their work and that of \citet{Narasimhan18} are the most similar to ours.
\section{Experiments}
\label{section:experiments}
We consider demographic parity as the fairness violation, i.e., $\phi_\text{DP}^{\pm} = \pm (\mat{C}_{0,1}^g+\mat{C}_{1,1}^g
- \mat{C}_{0,1}-\mat{C}_{1,1})-\nu,$ combined with 0-1 error $\psi(\mat C) = \mat C_{0,1} + \mat C_{1,0}$ as the error metric. All labels and protected attributes are binary or binarized. We use the following datasets (details in the appendix): (i) Communities and Crime, (ii) Adult census, (iii) German credit and (iv) Law school.
{\bf Evaluation Metric.}
We compute the ``fairness frontier'' of each method -- that is, we vary the constraint level $\nu$.
We plot the fairness violation and the error rate on the train set and a test set. The fairness violation for demographic parity is defined by
\begin{equation*}
\text{fairviol}_\text{DP} = \max_{g\in\mathcal{G}_\text{fair}} |\wh{\mat C}^g_{0, 1}+\wh{\mat C}^g_{1, 1} - \wh{\mat C}_{0,1}-\wh{\mat C}_{1,1}|.
\end{equation*}
Observe that on the training set, it is always possible to achieve extreme points by ignoring either the classification error or the fairness violation.
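A hypothetical numpy sketch of the violation statistic above, with the group and population predicted-positive rates read off the confusion matrices:

```python
import numpy as np

def fairviol_dp(C, C_groups):
    # max over groups of |group predicted-positive rate - population rate|,
    # with each rate read off a confusion matrix as C[0, 1] + C[1, 1].
    pop_rate = C[0, 1] + C[1, 1]
    return max(abs(Cg[0, 1] + Cg[1, 1] - pop_rate) for Cg in C_groups)

C = np.array([[0.4, 0.1], [0.1, 0.4]])       # population rate 0.5
Cg1 = np.array([[0.5, 0.0], [0.1, 0.4]])     # group rate 0.4
Cg2 = np.array([[0.3, 0.2], [0.1, 0.4]])     # group rate 0.6
viol = fairviol_dp(C, [Cg1, Cg2])            # both groups deviate by 0.1
```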
{\bf Baseline: \texttt{Regularizer}} is a linear classifier implemented by using Adam to minimize logistic loss plus the following regularization function:
\begin{align}
\rho \sum_{j=1}^M\left(\frac{\sum_{i: (z_i)_j=1} \sigma(w^\top x_i)}{|\{i:(z_i)_j=1\}|}
- \frac{\sum_{i=1}^n \sigma(w^\top x_i)}{n}\right)^2
\label{eq:reg}
\end{align}
where $\sigma(r) = \frac{1}{1+e^{-r}}$ is the sigmoid function. This penalizes the squared differences between the average prediction probabilities for each group and the overall average prediction probability. Other existing methods we are aware of are either not applicable to overlapping groups, or are special cases of {\tt GroupFair}.
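The penalty in \eqref{eq:reg} can be sketched directly (numpy, illustrative; the actual baseline minimizes this penalty jointly with the logistic loss using Adam):

```python
import numpy as np

def sigmoid(r):
    return 1.0 / (1.0 + np.exp(-r))

def dp_regularizer(w, X, Z, rho):
    # Squared difference between each group's mean predicted probability
    # and the overall mean, summed over the M protected attributes.
    # Z[i, j] = 1 iff example i has protected attribute j.
    p = sigmoid(X @ w)
    overall = p.mean()
    penalty = 0.0
    for j in range(Z.shape[1]):
        mask = Z[:, j] == 1
        penalty += (p[mask].mean() - overall) ** 2
    return rho * penalty

# Sanity check: when every margin is 0, all probabilities equal 0.5, so
# every group mean matches the overall mean and the penalty vanishes.
X = np.zeros((4, 2))
w = np.ones(2)
Z = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
pen = dp_regularizer(w, X, Z, rho=1.0)
```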
{\bf Experiment 1: Independent group fairness.}
We consider independent group fairness, defined by considering protected attributes separately.
Our results compare extensions of FairCOCO~\citep{Narasimhan18} and FairReduction~\citep{agarwal18}, existing special cases of {\tt GroupFair}\ using the plugin and weighted ERM oracles respectively.
Results are shown in Figure~\ref{fig:plots}. We further present the differences in training time in Table~\ref{tab:times}. On all datasets, the variants of {\tt GroupFair}\ are much more effective than a generic regularization approach.
However, \texttt{Plugin} seems to violate fairness more often at test time -- perhaps this is due to the $\|\hat\eta-\eta\|_1$ term in the generalization bound in Theorem~\ref{thm:plugcon}. At the same time, \texttt{Plugin} is almost 2 orders of magnitude faster, since its $\mino$ essentially has a closed-form solution, while \texttt{Weighted-ERM} has to solve a new ERM problem in each iteration.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{smargtest.png}
\caption{Experiments on independent group fairness, showing the fairness frontier. The Pareto frontier closest to the bottom left represents the best fairness/performance tradeoff.}
\label{fig:plots}
\end{figure}
\begin{table}[t]
\centering
\caption{Average training times (averaged over the training sessions for each fairness parameter). The Plugin Oracle is significantly faster than other approaches.}
\begin{tabular}{cccccccc}
\toprule
&\multicolumn{4}{c}{Independent}&
\multicolumn{3}{c}{Gerrymandering}\\
& C\& C & Adult & German & Law school & Adult & German & Law school\\\midrule
\texttt{Weighted-ERM} & 684.4 s & 424.0 s & 187.0 s & 68.6 s & 817.0 s& 40.4 s & 49.4 s\\
\texttt{Plugin} & 11.5 s & 8.5 s &4.4 s &3.8 s & 699.8 s& 13.0 s & 17.7 s\\
\texttt{Regularizer} & 75.4 s & 87.4 s & 35.2 s & 68.0 s & N/A & N/A & N/A\\
\texttt{Kearns et al.} & N/A & N/A & N/A & N/A & 2213.7 s & 821.5 s & 1674.4 s\\\bottomrule
\end{tabular}
\label{tab:times}
\end{table}
\begin{figure}[th!]
\centering
\includegraphics[width=0.9\linewidth]{sgerrys.png}
\caption{Experiments on gerrymandering group fairness. The Pareto frontier closest to the bottom left represents the best fairness/performance tradeoff.}
\label{fig:gerry}
\end{figure}
{\bf Experiment 2: Gerrymandering group fairness.}
Unfortunately, intersectional fairness is not statistically estimable in most cases, as most intersections are empty. As a remedy, \citet{Kearns18} propose max-violation fairness constraints over $\mathcal{G}_\text{gerrymandering}$, where each group is weighted by group size, i.e., $
\max_{g\in\mathcal{G}_\text{gerrymandering}} \frac{|g|}{n}|\wh{\mat C}^g_{0, 1}+\wh{\mat C}^g_{1, 1} - \wh{\mat C}_{0,1}-\wh{\mat C}_{1,1}|
$, so empty groups are removed, and small groups have relatively low influence unless there is a very large fairness violation. We denote the approach of \citet{Kearns18} as \texttt{Kearns et al.} This approach is closely related to \texttt{Weighted-ERM} but searches for the maximally violated group by solving a cost-sensitive classification problem and uses fictitious play between $\lambda$ and $\vect h$. For the \texttt{Plugin} and \texttt{Weighted-ERM} approaches, we optimize the cost function directly using gradient ascent, precomputing the gerrymandering groups present in the data.
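The size-weighted max-violation statistic can be sketched as follows (numpy, illustrative; the group weights $|g|/n$ are supplied precomputed):

```python
import numpy as np

def weighted_max_violation(C, group_info):
    # Each group's demographic-parity gap is weighted by its empirical
    # size |g|/n, so empty groups vanish and small groups matter only
    # when their violation is large. group_info: list of (weight, C^g).
    pop_rate = C[0, 1] + C[1, 1]
    return max(w * abs(Cg[0, 1] + Cg[1, 1] - pop_rate)
               for w, Cg in group_info)

C = np.array([[0.4, 0.1], [0.1, 0.4]])                  # population rate 0.5
big = (0.5, np.array([[0.5, 0.0], [0.1, 0.4]]))         # rate 0.4, weight 0.5
small = (0.1, np.array([[0.05, 0.45], [0.05, 0.45]]))   # rate 0.9, weight 0.1
mv = weighted_max_violation(C, [big, small])
```

Here the small group has the larger raw gap (0.4 versus 0.1), but its low weight means the big group dominates the statistic.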
Results are shown in Figure~\ref{fig:gerry}. We further present the differences in training time in Table~\ref{tab:times}.
The results are roughly equivalent in terms of performance; however, both the \texttt{Weighted-ERM} and \texttt{Plugin} approaches are 1-2 orders of magnitude faster than \texttt{Kearns et al.}
\section{Conclusion}
This manuscript considered algorithmic fairness across multiple overlapping groups simultaneously. Using a probabilistic population analysis, we present the Bayes-optimal classifier, which motivates a general-purpose algorithm, {\tt GroupFair}. Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
Future work will include extensions beyond linear metrics, to consider more general fractional and convex metrics. We also wish to explore more complex prediction settings beyond classification.
\input{references.bbl}
\bibliographystyle{plainnat}
\newpage
A common feature of social networks is trait assortativity, the tendency of similar individuals to interact more intensely or frequently than dissimilar ones.
Assortativity can be beneficial, allowing communities of individuals who share common beliefs or experiences to pursue shared goals.
On the other hand, assortativity can also restrict flows of information and resources across heterogeneous populations.
Recent scrutiny, for example, has fallen on the role of online platforms in promoting political polarization by allowing users to micromanage their contacts and information sources \cite{Anagnostopoulos2014,Bakshy2015}.
The importance of trait assortativity has inspired various models of self-sorting populations.
Among the most influential of these is the classical Schelling model \cite{Schelling1979}, which models the emergence of spatial segregation through a preference of agents to live near a minimum number of similar neighbors.
Inspired by this model, the authors of \cite{Henry2011} consider the case of a social network in which agents are assigned an immutable attribute vector that may model demographics or opinions.
Agents are allowed to destroy their connections to dissimilar partners and create new connections to similar ones, with the aversion to dissimilarity governed by a tunable parameter.
The authors show that the model always generates segregated communities for any nonzero degree of dissimilarity aversion.
Because the fixed node attributes are generated exogenously to the system dynamics, this model is most appropriate for studying assortativity based on immutable or slowly-changing attributes, such as demographic variables.
Contrasting to these dynamics is the family of voter models \cite{Clifford1973,Holley1975}, which are also defined on networks.
In a typical voter model, each node is endowed with an opinion that evolves over time, usually via adoption of the opinion of a uniformly random neighbor.
In original formulations, the network topology of a voter model is held fixed as opinions evolve.
In many networks, we naturally expect the opinions of individuals to both influence and be influenced by the connections they form.
Over the past dozen years, a class of \emph{adaptive network} models has emerged to model such interacting influences.
Adaptive networks \cite{vazquez2008generic,Gross2008,Gross2009} are characterized by dynamical coupling between node attributes and edge topology.
Such models have been studied in contexts including epidemic spreading \cite{Gross2006,Marceau2010,Gross2017,Lee2017,Horstmeyer2018} and game theory \cite{Lee2018, pacheco2006coevolution}, but are most commonly deployed as models of opinion dynamics \cite{Holme2006,Durrett2012,Gross2012,Silk2014,Malik2016,Shi2013, pinheiro2016linking}.
In this setting, they often appear as \emph{adaptive} (or \emph{coevolutionary}) \emph{voter models} (AVMs), which add opinion-based edge-rewiring to the opinion-adoption dynamics of the base voter model.\footnote{Non-voter type updates are also possible in adaptive opinion-dynamics models; see e.g. \cite{Bhawalkar2013a} for a game-theoretic approach.}
The tunable interaction between opinion and topology updates generates polarized networks of opinion-based communities.
AVMs are therefore often considered ``model organisms'' \cite{Silk2014} of endogenous fragmentation, polarization, and segregation in social and information networks.
Mathematically, AVMs display rich behavior, including metastability and phase transitions.
However, the nonlinearity driving this rich behavior renders AVMs difficult to analyze even approximately.
Many extant methods are restricted in scope, tractability, or accuracy, and often fail to provide insight into observed behaviors.
Our aim in this article is to develop a class of approximation methods that both explain qualitative behaviors in these systems and provide unprecedented analytical scope, computational efficiency, and predictive accuracy.
\subsection{Outline of the Paper}
In \Cref{sec:AVMs}, we formulate the class of binary-state AVMs studied here, review their behavior, and survey previous approaches developed for approximating their macroscopic behaviors.
We study a model variant that includes a small amount of random opinion-switching (``mutation''), which renders the model ergodic.
Using ergodicity, we develop in \Cref{sec:analytic} an approximation scheme for the equilibrium macroscopic properties across the entirety of the model's phase space.
Our scheme offers predictions for the point of emergence of persistent disagreement, which corresponds to the ``fragmentation transition'' in non-ergodic model variants.
It also offers predictions for the density of disagreement once it emerges, including the ``arches'' characteristic of this class of models.
We close in \Cref{sec:discussion} with comparisons to the body of existing models, showing that we achieve favorable scope, accuracy, and computational complexity.
Finally, we discuss promising extensions, both to our approximation methodology and to the model itself.
\section{Adaptive Voter Models} \label{sec:AVMs}
Adaptive Voter Models (AVMs) constitute a class of first-order, discrete-time Markov processes on a space of states of the form $\mathcal{G} = (\mathcal{N}, \mathcal{L}, \mathcal{E})$, where $\mathcal{N}$ is a set of nodes and $\mathcal{E}$ a set of edges; $(u,v) \in \mathcal{E}$ means that an edge linking nodes $u$ and $v$ is present in $\mathcal{G}$.
We denote by $\mathcal{N}(u)$ the neighborhood of $u$ --- all nodes adjacent to $u$, including $u$ itself.
The vector $\mathcal{L}$ maps $\mathcal{N} \rightarrow \mathcal{X}$ where $\mathcal{X}$ is an alphabet of possible states or opinions.
We treat the node set $\mathcal{N}$ as fixed, while both $\mathcal{L}$ and $\mathcal{E}$ evolve stochastically.
We here restrict ourselves to the commonly-considered binary-state case, which we denote $\mathcal{X} = \{0,1\}$, though multi-state variants \cite{Holme2006, Shi2013} are also of interest.
The temporal evolution of an AVM is characterized by superimposed voting dynamics on $\mathcal{L}$ and edge-rewiring dynamics on $\mathcal{E}$.
To these, our model adds a third process in the form of random opinion switching or ``mutation'' in $\mathcal{L}$.
We specify the discrete-time stochastic dynamics $(\mathcal{E}(t), \mathcal{L}(t)) \mapsto (\mathcal{E}(t+1), \mathcal{L}(t+1))$ as follows:
\begin{enumerate}
\item With probability $\lambda \in [0,1]$, \textbf{mutate}: uniformly sample a node $u\in \mathcal{N}$ and set $\mathcal{L}_u(t+1) \gets \mathrm{uniformChoice}(\mathcal{X}\setminus \{\mathcal{L}_u(t)\})$.
Note that mutation does not add states to the opinion alphabet $\mathcal{X}$, which is fixed.
In the binary-state case, a mutation step deterministically maps $\mathcal{L}_u(t+1)\gets 1 - \mathcal{L}_u(t)$.
\item Otherwise (with probability $1-\lambda$), sample an edge $(u,v) \in \mathcal{E}(t)$ uniformly from the set $\{(u,v):\mathcal{L}_u(t) \neq \mathcal{L}_v(t)\}$ of \emph{active} edges
(also referred to in some studies as \emph{discordant} edges).
The orientation of $(u,v)$ is uniformly random.
Then,
\begin{enumerate}
\item With probability $\alpha \in [0,1]$, \textbf{rewire}: delete the (undirected) edge $(u,v)$ and add edge $(u,w)$ selected according to one of the following two rules depending on the model variant being used.
In the \emph{rewire-to-random} model variant, $w$ is chosen uniformly from $\mathcal{N}\setminus \mathcal{N}(u)$.
In the \emph{rewire-to-same} variant, $w$ is chosen uniformly from the set $S_u = \{w \in \mathcal{N}\setminus \mathcal{N}(u) \;|\; \mathcal{L}_{w}(t) = \mathcal{L}_u(t)\}$.
\item Otherwise (with probability $1-\alpha$) \textbf{vote}: $\mathcal{L}_u(t+1) \gets \mathcal{L}_v(t)$.
\end{enumerate}
\end{enumerate}
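For concreteness, the update rule above may be sketched in code. The following is a minimal illustrative simulator, not the implementation used to produce the figures in this paper; the function name and data layout are our own, and finite-size corner cases (e.g., no eligible rewiring target) are simply skipped.

```python
import random

def avm_step(labels, edges, alpha, lam, variant="rewire-to-random"):
    """One update of the binary-state AVM sketched above.

    labels: dict mapping node -> opinion in {0, 1}
    edges:  set of frozenset({u, v}) undirected edges
    """
    nodes = list(labels)
    if random.random() < lam:
        # Mutation: in the binary case, a uniformly random node flips.
        u = random.choice(nodes)
        labels[u] = 1 - labels[u]
        return
    active = [e for e in edges if len({labels[w] for w in e}) == 2]
    if not active:
        return  # nothing to update when R = 0
    # Sample an active edge uniformly, with uniformly random orientation.
    u, v = random.sample(sorted(random.choice(active)), 2)
    if random.random() < alpha:
        # Rewire: u drops (u, v) and attaches to an eligible node w.
        neighborhood = {u} | {w for e in edges if u in e for w in e}
        candidates = [w for w in nodes if w not in neighborhood]
        if variant == "rewire-to-same":
            candidates = [w for w in candidates if labels[w] == labels[u]]
        if candidates:  # skip if no eligible target (finite-size corner case)
            edges.remove(frozenset({u, v}))
            edges.add(frozenset({u, random.choice(candidates)}))
    else:
        # Vote: u adopts v's opinion.
        labels[u] = labels[v]
```

Note that both rewiring branches conserve the number of edges, as required by the model.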
From a modeling perspective, mutation may represent phenomena such as media influence, noisy communication, or finite agential memory.
The mutation mechanism is reminiscent of the ``noisy'' voter model of \cite{GRANOVSKY1995}, and was introduced in an adaptive model variant by \cite{Ji2013}.
The rewiring and voting steps both occur after sampling an active edge uniformly at random.
Other sampling schemes are also possible.
The sampling in \cite{Holme2006}, for example, selects a uniformly random node $u$ with nonzero degree.
Then, a uniformly random neighbor $v$ of $u$ is chosen.
Rewiring occurs with probability $\alpha$ and voting with probability $1-\alpha$ regardless of their respective opinions.
In the model introduced by \cite{vazquez2008generic} and further studied by \cite{Toruniewska2017, Ji2013, Kimura2008}, $u$ and $v$ are chosen similarly, but in the event that $\mathcal{L}_{u}(t) = \mathcal{L}_{v}(t)$ nothing happens and the sampling step is repeated.
Sampling via active edges as we do here was to our knowledge introduced in \cite{Durrett2012} and employed in many recent studies \cite{Demirel2012,Bohme2011, Bohme2012,Silk2014,Basu2015a,Rogers2013}.
The authors of \cite{Durrett2012} note that models with different sampling mechanisms nevertheless display similar qualitative -- and often quantitative -- macroscopic behaviors.
AVMs are usually studied through a standard set of summary statistics.
Let $n = \abs{\mathcal{N}}$ be the number of nodes, $m = \abs{\mathcal{E}(t)}$ the number of edges, and $c = {2m}/{n}$ the mean degree.
Since the dynamics conserve $n$ and $m$, $c$ is time-independent and may be regarded as an additional system parameter.
Let $N_i(t) = \abs{\{u \in \mathcal{N} \;|\;\mathcal{L}_u(t) = i \}}$ be the number of nodes holding opinion $i$ at time $t$.
Let $\q(t) = (q_0(t), q_1(t)) = n^{-1}\left(N_0(t), N_1(t)\right)$ be the vector of opinion densities.
For each pair $i$ and $j$ of opinions in $\mathcal{X}$, let $M_{ij}(t) = \abs{\{(u,v) \in \mathcal{E}(t) \;|\; \mathcal{L}_u(t) = i,\; \mathcal{L}_v(t) = j \}}$ be the number of \emph{oriented} edges between nodes of opinion $i$ and nodes of opinion $j$.
Note that $M_{ij}(t) = M_{ji}(t)$ and $\sum_{i,j \in \mathcal{X}} M_{ij}(t) = 2m$ at all times $t$, since each (undirected) edge is counted twice in the vector $\mathbf{M}$, once in each of two orientations.
Let $\X(t) = (X_{00}, X_{01}, X_{10}, X_{11}) = \mathbf{M}/(2m) = \left(M_{00}(t), M_{01}(t), M_{10}(t), M_{11}(t)\right)/(2m)$ be the vector of \emph{oriented} edge densities.
We define the scalar $R(t) = X_{01}(t) + X_{10}(t) = 2X_{01}(t)$ to be the overall density of active edges.
By construction, $R(t)$ is a random variable on the interval $[0,1]$.
Let $\x(t) = \E[\X(t)]$ and $\rho(t) = \E[R(t)]$, with expectations taken with respect to the time-dependent measure of the Markov process.
Note that the objects $\mathcal{L}(t)$, $\X(t)$, and $R(t)$ are random functions of time $t$, while $\x(t)$ and $\rho(t)$ are deterministic functions of time.
For notational compactness, we will suppress the argument $t$ when no possibility of confusion arises.
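These summary statistics are straightforward to compute from a configuration. The following sketch (function name and data layout are ours) mirrors the definitions above, counting each undirected edge once in each orientation so that $\sum_{ij} X_{ij} = 1$.

```python
from collections import Counter

def summary_stats(labels, edges):
    """Opinion densities q, oriented edge densities X, and active density R.

    labels: dict node -> opinion in {0, 1}
    edges:  set of frozenset({u, v}) undirected edges
    """
    n, m = len(labels), len(edges)
    counts = Counter(labels.values())
    q = (counts[0] / n, counts[1] / n)
    M = Counter()
    for e in edges:
        # Each undirected edge is counted twice, once per orientation.
        u, v = tuple(e)
        M[(labels[u], labels[v])] += 1
        M[(labels[v], labels[u])] += 1
    X = {ij: M[ij] / (2 * m) for ij in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    R = X[(0, 1)] + X[(1, 0)]
    return q, X, R
```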
Most previous studies have considered AVM variants without mutation, corresponding in our setting to $\lambda = 0$.
In this setting, any state with $R = 0$ is an absorbing state of the Markov chain.
Such a state consists of one or more connected components within each of which consensus reigns.
Letting $C(u)$ denote the connected component of node $u$ in the absorbing state in this regime, it holds that $C(u) = C(v)$ implies $\mathcal{L}_u = \mathcal{L}_v$ for any nodes $u$ and $v$.
As discussed in both \cite{Holme2006} and \cite{Durrett2012}, there is a phase transition in the (random) final value $\q^*$ of the opinion densities in the absorbing state.
In both model variants, there is a critical value $\alpha^*$, depending on $\q(0)$, such that, if
$\alpha \geq \alpha^*(\q(0))$, $\abs{\q^* - \q(0)}_1 = O\left(\frac{\log n}{n}\right)$ with high probability as $n$ grows large.
In the large $n$ limit, the opinion densities are not appreciably altered by the dynamics.
We refer to this as the ``subcritical'' parameter regime.
On the other hand, if $\alpha < \alpha^*(\q(0))$, $\q^*$ is governed by a bimodal distribution whose modes are independent of $\q(0)$, determined instead by $\alpha$, $c$, and the model rewiring variant.
In both models, the phase transition marks the point at which the voting dynamics outstrip the rewiring dynamics, in the sense that the rewiring dynamics are no longer fast enough to resolve most disagreements, and therefore also corresponds to a transition in the time to reach the final state \cite{Holme2006,Rogers2013}.
We refer to this regime as ``supercritical.''
In \cite{Durrett2012}, the authors show via simulation and analytical methods that this same phase transition marks the emergence of a \emph{quasistable manifold} along which the system dynamics evolve.
This manifold is well-approximated by a concave parabola in the $(q_1,\rho)$-plane, reflected by its colloquial name, ``the arch.''
Similar arches were observed for an AVM variant in \cite{vazquez2008generic} and for a non-adaptive voter model in \cite{vazquez2008analytical}.
When $\alpha > \alpha^*(\q(0))$, $\rho$ converges rapidly to $0$ while $\q$ remains nearly constant.
When $\alpha < \alpha^*$, on the other hand, the trajectory converges to a point on the arch, and then slowly diffuses along it until reaching an absorbing state at one of the two bases.
In the rewire-to-random arch, $\alpha^*$ depends on $q_1$, and the arch is therefore supported on a proper sub-interval of $[0,1]$.
On the other hand, the rewire-to-same transition is independent of $q_1$, and the associated arch is supported on the entirety of $[0,1]$.
The bases of the arch correspond to the modes in the long-run distribution of $\q^*$.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{fig/full_comparisons.png}
\caption{(a): Phase transition in the density $\rho$ of discordant edges when $\q = \left(\frac{1}{2},\frac{1}{2}\right)$, for varying mean degree $c$.
Grey ridges give the average density $\rho$ over multiple simulation runs.
Symbols give estimates of the phase transition from extant methods.
The solid line gives the estimate of our proposed method, obtained by solving \Cref{eq:transition_approx}.
(b): Quasi-stable arches in the $(q_1,\rho)$-plane for varying $\alpha$.
Points are sampled from simulations at intervals of $5,000$ time-steps.
Filled points on the rewire-to-same panel give the active-motif estimate of \cite{Demirel2012} for the symmetric top of the arch.
The solid lines give the approximate master equation estimates of \cite{Durrett2012}.
The arches shrink to the horizontal axis as $\alpha$ increases.
All simulations were performed with $n = 10^{4}$ nodes.
}
\label{fig:prev_results}
\end{figure}
While multiple studies have achieved insight via numerical study of simulation traces \cite{Yi2013, Shi2013, Ji2013}, analytical insight into the phenomenology of AVMs remains limited.
The central analytical project is to estimate the behavior of $\rho$ as a function of the parameters $\lambda$, $\alpha$, and $c$, as well as the opinion density $\q$.\footnote{Recent papers have studied other features of interest, such as approximate conservation laws \cite{Toruniewska2017} and topology near the phase transition $\alpha^*$ \cite{Horstmeyer2018}; however, we will not pursue these themes further.}
The most modest task is to estimate the phase transition $\alpha^*$ in the case of symmetric opinion densities $\q = \left(\frac{1}{2}, \frac{1}{2}\right)$.
\Cref{fig:prev_results}(a) shows a selection of extant methods to approximate the location of the phase transition in these model variants over the last decade and compares them to the observed emergence of the top of the arch in model simulations.
The pair approximation (PA) \cite{Kimura2008, Durrett2012} is an all-purpose method for binary-state models that usually produces qualitatively correct but quantitatively poor results.
Indeed, \Cref{fig:prev_results} shows that the pair approximation overestimates the location of the phase transition, performing especially poorly in the rewire-to-same model variant.
More specialized methods are required to obtain quantitatively reasonable estimates.
The method of \cite{Bohme2011} uses compartmental equations to accurately estimate the rewire-to-same phase transition with symmetric opinion densities, finding close agreement with observation in this restricted task.
In \cite{Basu2015a} the authors apply stopping-time arguments to give a rigorous proof of the existence of a phase transition in both model variants.
However, their results apply only in the context of dense limiting graphs and do not explicitly predict the value of $\alpha^*$.
Other schemes provide estimates not only of the transition but also of the quasistable supercritical active link density $\rho$ when $\q = \left(\frac{1}{2}, \frac{1}{2}\right)$.
The authors of \cite{Demirel2012} propose a compartmental approach based on \emph{active motifs} to estimate the phase transition and arch in the symmetric opinion rewire-to-same model variant.
An active motif consists of a node and a number of active links attached to it; a system of ordinary differential equations may be obtained by approximately tracking the evolution of active motif densities in continuous time.
The resulting estimate of the phase transition (\Cref{fig:prev_results}(a)) and of the top of the arch (\Cref{fig:prev_results}(b)) are both highly accurate, but require an active-link localization assumption specific to the rewire-to-same variant.
The authors of \cite{Silk2014} follow a related approach for the rewire-to-same variant based on more general \emph{active neighborhoods}.
Active neighborhoods count the numbers of both active and inactive links attached to a given node.
They obtain an analytic approximation by transforming the resulting system into a single partial differential equation governing the generating functions of the neighborhood densities.
The resulting estimate of the phase transition (\Cref{fig:prev_results}(a)) and the active link density (not shown) are uniformly dominated in accuracy by the explicit active-motif approach.
To our knowledge, the only methods for approximating the complete arch are the pair approximation and the approximate master equations (AMEs, \cite{Gleeson2011}) used in \cite{Durrett2012}.
Approximate master equations are similar to active-neighborhood techniques, but are formulated explicitly for the case of general opinion densities $\q$.
For small mean degree, approximate master equations can provide relatively accurate predictions of the rewire-to-random phase transition (\Cref{fig:prev_results}(a)) and qualitatively reasonable estimates of the arches (\Cref{fig:prev_results}(b)), though the shapes of the arches may be somewhat distorted.
Their estimates for $\alpha^*$ and $\rho$ in the rewire-to-same variant are substantially worse, although the qualitative shape of the arches appears correct.
AMEs are constrained by their computational cost: to obtain a solution requires the numerical solution of $\Theta(k_\mathrm{max}^2)$ coupled differential equations, where $k_\mathrm{max}$ is the largest node-degree expected to emerge in the course of a simulation, and therefore depends at least linearly on the mean degree $c$.
The scheme thus rapidly becomes impractical for high enough mean degree or for initially skewed degree distributions.
\subsection{AVMs with Mutation}
The approximation scheme we will develop in \Cref{sec:analytic} depends on the presence of mutation in the model -- that is, $\lambda > 0$.
The introduction of mutation has an important technical consequence: the process is ergodic, up to symmetry.
\begin{dfn}
A \emph{labeled graph isomorphism} of a state $\mathcal{G} = (\mathcal{N}, \mathcal{L}, \mathcal{E})$ is a permutation $\tau:\mathcal{N}\rightarrow \mathcal{N}$ such that $(u,v) \in \mathcal{E}$ iff $(\tau(u),\tau(v)) \in \mathcal{E}$ and $\mathcal{L}_u = \mathcal{L}_{\tau(u)}$ for all $u \in \mathcal{N}$.
We write $\overline{\mathcal{G}}$ for the equivalence class of $\mathcal{G}$ under labeled graph isomorphism.
\end{dfn}
\begin{theorem}
When $\lambda > 0$, if $\binom{n-4}{2} \geq m-1$, the process $\overline{\mathcal{G}}_t$ is ergodic.
\end{theorem}
\begin{proof}
We will first show aperiodicity by constructing cycles of lengths $2$ and $3$ in the state space.
To construct a cycle of length $2$, simply choose a node and perform two sequential mutation steps.
The construction of a cycle of length $3$ is slightly more involved.
Pick an edge $e \in \mathcal{E}$.
Label one end $u$, and the other end $v_1$.
Pick two more nodes $v_2$ and $v_3$.
Using mutation and rewiring steps, remove all edges connected to $u$, $v_1$, $v_2$, and $v_3$ except for $e$.
This can always be done by hypothesis, since the remaining $m-1$ edges may be placed among the $\binom{n-4}{2}$ pairs of remaining nodes via mutation and rewiring steps.
Using mutation steps, set $\mathcal{L}_u = \mathcal{L}_{v_2} = 0$ and $\mathcal{L}_{v_1} = \mathcal{L}_{v_3} = 1$.
Call this initial state $\mathcal{G}$.
Then, consider the following sequence:
\begin{enumerate}
\item Rewire $(u,v_1) \mapsto (u,v_2)$.
\item Mutate $\mathcal{L}_{v_2} \gets 1$.
\item Mutate $\mathcal{L}_{v_1} \gets 0$.
\end{enumerate}
Call the end state $\mathcal{G}'$.
Each of these steps is supported in both rewire-to-same and rewire-to-random model variants.
Furthermore, the permutation $\tau$ that interchanges $v_1$ and $v_2$ is a labeled isomorphism from $\mathcal{G}$ to $\mathcal{G}'$.
We have therefore constructed a supported cycle of length 3 in the state space of the process $\overline{\mathcal{G}}_t$, completing the proof of aperiodicity.
To show irreducibility, let $\mathcal{G}_1 = (\mathcal{N}, \mathcal{L}_1, \mathcal{E}_1)$ and $\mathcal{G}_2 = (\mathcal{N}, \mathcal{L}_2, \mathcal{E}_2)$ be elements of the state space of a single AVM.
Since $\abs{E_1} = \abs{E_2} = m$, we have $\abs{E_1 \setminus E_2} = \abs{E_2 \setminus E_1}$.
These sets may therefore be placed in bijective correspondence.
For each edge $e = (u,v) \in E_1\setminus E_2$, we arbitrarily identify $e' = (u',v') \in E_2 \setminus E_1$.
Perform the sequence of rewirings $(u,v)\mapsto (u, v') \mapsto (u', v')$ possibly with mutation steps in order to activate the edges.
Doing so reduces the set $E_1\setminus E_2$ by one edge.
Repeat this process inductively until $E_1\setminus E_2 = \emptyset$; that is, until $E_1 = E_2$.
Finally, perform mutation steps on all nodes $u$ on which $\mathcal{L}_1$ and $\mathcal{L}_2$ disagree.
The result is a path of nonzero probability through the state space of $\mathcal{G}_t$ and therefore of $\overline{\mathcal{G}}_t$, as was to be shown.
\end{proof}
Since the process $\overline{\mathcal{G}}_t$ is ergodic, it possesses an equilibrium measure $\eta$ supported on the entirety of its state space.
In the remainder of this paper, we will abuse notation by identifying $\mathcal{G}$ with $\bar{\mathcal{G}}$ and referring to $\eta$ as the equilibrium distribution of $\mathcal{G}_t$.
Ergodicity implies that states with $R = 0$ are no longer absorbing.
Instead, a typical sample from $\eta$ displays bifurcated structure closely aligned with the opinion groups, with dense connections between common opinions and sparser connections between differing opinions.
This behavior of the mutating AVM thus makes it a more realistic model of social processes in which long-standing disagreement influences connections.
In this article, we focus on the limit of small $\lambda$, which allows us to derive approximations for the non-mutating AVMs.
In particular, the equilibrium measure $\eta$ concentrates around the $\lambda=0$ arch, allowing us to describe the arch as the expected active link density $\rho^* = \E_\eta[R]$.
\section{Model Analysis} \label{sec:analytic}
We now derive a set of analytical methods for estimating the phase transition $\alpha^*$ and supercritical expected active link density $\rho^*$.
Our strategy is to study perturbations from the fully-fragmented state $R = 0$.
These perturbations are induced by mutation, without which the fully-fragmented state is absorbing.
While many existing techniques amount to continuous-time mass-action laws for system moments, our methods are fundamentally discrete and local in that we study changes in the edge density vector $\X$ stemming from a single mutation event.
Assume that $\lambda$ is small but positive.
Suppose that at time $t$, $R = 0$.
In this state, $\mathcal{G}_t$ consists of one or more connected components within which the opinion function $\mathcal{L}$ is constant.
Suppose now that, at time $t+1$, node $u$ on component $C(u)$ changes its opinion from $0$ to $1$ through mutation.
Because opinions on $C(u)$ are otherwise uniform, all active links present in component $C(u)$ are contained in the neighborhood of $u$ itself.
In particular, any additional active links that may be generated over short timescales will arise in the region local to $u$.
Let $T$ be the hitting time of the event $R = 0$; i.e., the amount of time required to return to the fully-fragmented state.
We can distinguish two regimes, depending on the scaling of $\E[T]$ with $n$.
\begin{enumerate}
\item \textbf{Subcritical}:
We have $\E[T] = O(1)$.
Intuitively, this occurs when $u$'s dissenting opinion is either snuffed out by voting events or ``quarantined'' by rewiring events in a small number of time steps.
This case always occurs when $\alpha = 1$, since $T$ is then simply the time until each active link has been rendered inactive via rewiring.
The expected number of active edges scales as $n\rho^* = \Theta(1)$, since there are only $\Theta(1)$ time steps in which active edges may be generated.
We therefore have $\rho^* \rightarrow 0$ as $n$ grows large.
\item \textbf{Supercritical}:
We have $\E[T] = O(n^2)$, corresponding to the consensus-time of the non-adaptive voter model \cite{Holley1975};
as such, this case always occurs for $\alpha = 0$.
Mechanistically, $u$'s dissenting opinion triggers a cascade of active edge-generation through voting and rewiring events with nonzero probability.
In this case, the number $R$ of active edges scales with $n$ (see, e.g. \cite{vazquez2008analytical}), and the equilibrium active edge density $\rho^*$ is nonzero as $n$ grows large.
\end{enumerate}
These two regimes are separated by critical values in the parameters $\alpha$, $\lambda$, and $c$.
Indeed, the transition in $\alpha$ is precisely that described previously for the $\lambda = 0$ case.
The situation is thus reminiscent of the standard Galton-Watson branching process \cite{athreya2004branching}, in which the criticality of the aggregate process can be characterized locally by the reproductive potential of a single node.
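The branching-process analogy can be made concrete with a toy computation. The sketch below is our own illustration, not part of the AVM itself: each individual produces two potential offspring, each realized independently with probability $p$, so the mean offspring number $2p$ controls criticality, with near-certain rapid extinction for $2p < 1$ and a positive survival probability for $2p > 1$.

```python
import random

def extinction_fraction(p, generations=50, trials=500, seed=1):
    """Fraction of binary branching processes that die out quickly.

    Each individual has two potential children, each realized with
    probability p, so the mean offspring number is 2p: subcritical for
    p < 1/2 and supercritical for p > 1/2.
    """
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        pop = 1
        for _ in range(generations):
            pop = sum(1 for _ in range(2 * pop) if rng.random() < p)
            if pop == 0:
                extinct += 1
                break
            if pop > 200:  # population has effectively escaped extinction
                break
    return extinct / trials
```

For $p = 0.3$ essentially every lineage dies out, while for $p = 0.7$ a substantial fraction survives indefinitely, mirroring the sub- and supercritical regimes above.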
To develop quantitative approximations, we therefore study the local dynamics around node $u$.
In the scenario above, we can distinguish a \emph{local majority} of nodes with opinion $0$.
Despite the fact that $C$ is no longer in consensus, local neighborhoods are dominated by opinion $0$ nodes.
Similarly, we can distinguish a local minority --- initially comprising node $u$ alone --- of opinion $1$.
In the subcritical regime, every connected component possesses a local minority and local majority.
In the supercritical regime, these distinctions degrade as opinions become increasingly well-mixed.
We will use this physical intuition to formulate a closed-form approximation in the neighborhood of the critical point.
Write $\mathbf{m}(t) = \E[\mathbf{M}(t)]$ for the expected global edge count vector.
Then, the dynamics in the edge count may be written
\begin{align}
\mathbf{m}(t+1) - \mathbf{m}(t) =\; &\lambda \mathbf{w}(\mathcal{G}(t)) + (1-\lambda)\alpha \mathbf{r}(\mathcal{G}(t)) +\\ & (1-\lambda)(1-\alpha) \mathbf{v}(\mathcal{G}(t))\;, \label{eq:dynamics}
\end{align}
where $\mathbf{w}$, $\mathbf{r}$, and $\mathbf{v}$ are functions of the graph state $\mathcal{G}(t)$ giving the expected increments in $\mathbf{m}$ due to mutation, rewiring, and voting, respectively.
Importantly, $\mathbf{w}$ and $\mathbf{r}$ depend only on $\q$ and $\X$, the first and second moments of $\mathcal{L}$.
The entries of the mutation term may be written
\begin{align}
\mathbf{w}(\mathcal{G}) = \mathbf{w}(\X) = c
\left[
\begin{matrix}
X_{10} - X_{00} \\
X_{00} - X_{10}+X_{11}-X_{01}\\
X_{00} - X_{10}+X_{11}-X_{01}\\
X_{01} - X_{11}
\end{matrix}
\right]\,.
\end{align}
We illustrate by deriving the expression for $\mathbf{w}_{00}(\mathbf{X})$.
Edges between nodes of opinion $0$ are created when an opinion-$1$ node on an active edge mutates.
The mutating node is sampled uniformly at random; it holds opinion $1$ with probability $q_1$, in which case it carries $cX_{10}/q_1$ active edges on average, so that in expectation $cX_{10}$ active edges are available to transform into $0$-$0$ edges upon mutation.
Similarly, $0$-$0$ edges are destroyed when one of the incident nodes mutates.
By the same argument, a mutation destroys in expectation $cX_{00}$ edges of type $0$-$0$.
The expressions for the other entries of $\mathbf{w}$ are derived by parallel arguments.
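As a quick sanity check on $\mathbf{w}$, note that in a fully mixed state with $\q = \left(\frac{1}{2}, \frac{1}{2}\right)$ and independently assigned opinions, every oriented edge density equals $\frac{1}{4}$ and each component of $\mathbf{w}$ vanishes. A short numerical sketch (function name ours) evaluating the display above:

```python
def mutation_increment(X, c):
    """Entries of w(X) in the display above; X = (X00, X01, X10, X11)."""
    X00, X01, X10, X11 = X
    mid = c * (X00 - X10 + X11 - X01)
    return (c * (X10 - X00), mid, mid, c * (X01 - X11))

# Fully mixed state: all oriented edge densities equal 1/4, so the
# expected mutation increment is identically zero.
w_mixed = mutation_increment((0.25, 0.25, 0.25, 0.25), c=4.0)
assert all(abs(wi) < 1e-12 for wi in w_mixed)
```

When $0$-$0$ edges dominate, the first component is negative: mutation erodes the surplus edge type, as expected.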
The rewiring terms $\mathbf{r}$ are written as follows:
\begin{align*}
\mathbf{r}(\mathcal{G}) = \mathbf{r}(\q) =
\begin{cases}
\left(q_0, -\frac{1}{2}, -\frac{1}{2}, q_1\right)^T & \text{rewire-to-random}\\
\left(1, -1, -1, 1\right)^T & \text{rewire-to-same}.
\end{cases}
\end{align*}
Notably, the rewiring function depends on the opinion densities $\q$ only in the rewire-to-random case, because the rewire-to-same variant always removes exactly one active edge, replacing it with an inactive one in a rewiring step.
To derive the expression for the rewire-to-random case, we can condition on the opinion of the node that ``keeps'' the edge.
If the $0$-opinion node keeps the edge, then with probability $q_0$ the new edge joins to another opinion $0$ node, destroying the active edge and creating a $0$-$0$ edge.
A similar argument accounts for the $q_1$ term.
Summing up the ways for an active edge to be removed, we have
\begin{align*}
r_{01}(\q) = -\frac{1}{2}\left[q_0 + q_1\right] = -\frac{1}{2},
\end{align*}
as was to be shown.
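Since rewiring conserves the total number of edges, the components of $\mathbf{r}$ must sum to zero in both variants, for any opinion density $\q$. This is easy to verify numerically (sketch, names ours):

```python
def rewiring_increment(q0, q1, variant):
    """Expected increment r of the oriented edge counts from one
    rewiring step, per the display above."""
    if variant == "rewire-to-random":
        return (q0, -0.5, -0.5, q1)
    return (1.0, -1.0, -1.0, 1.0)  # rewire-to-same

# Rewiring conserves the number of edges: r sums to zero for any q.
for q0 in (0.2, 0.5, 0.9):
    for variant in ("rewire-to-random", "rewire-to-same"):
        assert abs(sum(rewiring_increment(q0, 1 - q0, variant))) < 1e-12
```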
We are interested in estimating the arch, which is in turn a function of the expected edge density vector $\x = \E[\X]$.
The computations above show that $\mathbf{w}(\mathcal{G}) = \mathbf{w}(\X)$ and $\mathbf{r}(\mathcal{G}) = \mathbf{r}(\q)$.
The mutation and rewiring dynamics in $\X$ are therefore Markovian: for any fixed $\q$, when $\alpha = 1$, $\mathbf{X}$ is a Markov process.
Indeed, we further have $\E[\mathbf{w}(\mathcal{G})] = \mathbf{w}(\x)$.
Because of this, computing $\x$ in the $\alpha = 1$ case for fixed $\q$ reduces to solving a four-dimensional linear system.
Unfortunately, the voting function $\mathbf{v}$ cannot be similarly parsed in terms of $\X$, because the voting dynamics are non-Markovian in these variables.
We may therefore view the short-timescale dynamics of $\X$ for fixed $\q$ as a mixture of Markovian opinion-switching and rewiring processes with a non-Markovian voting process.
Our strategy is to approximate the expectation of the non-Markovian voting term with a Markovian approximation near the phase transition, using the asymmetry between local minorities and majorities.
This approximation supposes that $\mathbf{v}(\mathcal{G}(t)) \approx \hat{\mathbf{v}}(\q, \x)$ for $R \ll \frac{1}{2}$, with the function $\hat{\mathbf{v}}$ of $\q$ and $\x$ to be determined.
To construct $\hat{\mathbf{v}}$, we study the local neighborhood of a node $u$ that has just changed its opinion from $\bar{\imath} \in \{0,1\}$ to $i \in \{1,0\}$.
We denote expectations conditioned on this event using the shorthand $\E[\cdot|i]$.
Immediately after this event, $u$ possesses an initial random number $J_0$ of inactive and $K_0$ of active edges.
The distributions of $J_0$ and $K_0$ depend on $\q$, $\x$, $c$, and their moments, as well as the conditions under which node $u$ changed its opinion.
If $u$ changes its opinion due to a mutation on an otherwise constant-opinion component, then $J_0 = 0$.
On the other hand, if $u$ changes its opinion through a voting event, then $J_0 \geq 1$, since there must have been a node to pass on the opinion to $u$.
To compute $\hat{\mathbf{v}}$, we track each of the $K_0$ active edges until each of them has been rendered inactive, counting voting events along the way.
Under timescale-separation and mean-field assumptions, these calculations may be carried out in closed form.
The assumption of timescale-separation supposes that $\mathcal{G}$ changes slowly relative to the neighborhood of node $u$, so that only update steps that sample the initial $K_0$ edges require accounting.
The mean-field assumption supposes that nodes in the local majority have degree distributions governed by the global network average $\x$, reflecting the fact that, by definition, most nodes are members of their respective local majorities.
These assumptions are approximately correct when the active edge density $R$ and mutation rate $\lambda$ are both small, and will tend to degrade when either quantity is increased.
Define the vector $\mathbf{c}$ with components $c_{ij} = cx_{ij}/q_i$, which denotes the average number of neighbors of type $j$ of a node of type $i$.
Note that, though $x_{ij} = x_{ji}$, it is not the case that $c_{ij} = c_{ji}$ unless $\q = \left(\frac{1}{2}, \frac{1}{2}\right)$.
The random variable $K_0$ is the number of opinion $\bar{\imath}$ neighbors incident to $u$ immediately prior to $u$ changing opinion; under the mean-field assumption, we therefore have $\E[K_0|i] = c_{\bar{\imath}\bar{\imath}}$.
Meanwhile, $\E[J_0|i] = 1 + c_{\bar{\imath} i}$ if $u$ changed its opinion due to voting and $\E[J_0|i]=0$ if $u$ changed its opinion due to mutation.
Since we assume $\lambda$ to be small and mutations to therefore be slow, we focus on the former case.
We need to track multiple types of voting events, and we define random variables for each.
\begin{enumerate}
\item Neighbors of $u$ may vote. By the assumption of timescale-separation, each such vote occurs along one of the $K_0$ initial active edges. Let $E$ denote the (random) number of such votes.
\item Nodes not attached to $u$ may vote. In the rewire-to-random model, such events may occur after an active edge attached to $u$ is rewired away from $u$ but remains active, allowing for
a later time at which one of the two nodes on this edge votes to render the edge inactive.
Let $F$ denote the (random) number of such voting events.
\item Node $u$ itself may vote prior to all of its $K_0$ active edges becoming inactive or removed from $u$. Let $G$ denote the indicator random variable for this event.
\end{enumerate}
We next write down vectors tracking the impact of each of the above voting event types on $\mathbf{m}$, the vector of expected global edge counts.
We first compute the impact vector $\mathbf{e}_i(\C)$ of a Type 1 event.
Since votes occur along active edges, a Type 1 event consists of a neighboring node $v$ changing opinion from $\mathcal{L}_v = \bar{\imath}$ to $\mathcal{L}_v = i$.
In this event, edge $(u,v)$ is rendered inactive.
At node $v$, $c_{\bar{\imath}\bar{\imath}}$ edges are activated in expectation, and $c_{\bar{\imath}i}$ edges are rendered inactive as $i$-$i$ edges.
We therefore have
\begin{align}
\mathbf{e}_i(\C) &= \frac{1}{2}\left(-2\E[K_0|i] , \E[K_0|i] - \E[J_0|i], \E[K_0|i] - \E[J_0|i], 2\E[J_0|i]\right) \nonumber\\
&= \frac{1}{2}\left(-2c_{\bar{\imath}\bar{\imath}} , c_{\bar{\imath}\bar{\imath}} - c_{\bar{\imath}i} - 1, c_{\bar{\imath}\bar{\imath}} - c_{\bar{\imath}i} - 1, 2(1+c_{\bar{\imath}i})\right)\;. \label{eq:neighbor_vote}
\end{align}
Type 2 events are again mean-field approximated.
Since these edges are no longer connected to $u$, their impact on $\mathbf{m}$ is independent of $\mathcal{L}_u$, and we therefore have
\begin{align} \label{eq:wild_vote}
\mathbf{f}(\C) = \frac{\mathbf{e}_0(\C) + \mathbf{e}_1(\C)}{2}\;.
\end{align}
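Since a neighbor vote only converts edges between types, the components of $\mathbf{e}_i$ sum to zero, as may be read off from \Cref{eq:neighbor_vote}. The following sketch (ours) evaluates $\mathbf{e}_1$, with components ordered as in the edge-count vector, and checks this conservation property:

```python
def e_vec_i1(c00, c01):
    """Impact vector e_1 of a Type 1 (neighbor-vote) event, with
    E[K_0|1] = c00 active and E[J_0|1] = 1 + c01 inactive edges,
    following the display above."""
    return (-c00, (c00 - c01 - 1) / 2, (c00 - c01 - 1) / 2, 1 + c01)

# A neighbor vote converts edges between types without changing their
# total number, so the components sum to zero.
assert abs(sum(e_vec_i1(3.0, 1.5))) < 1e-12
```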
The analysis for Type 3 events is more subtle.
For $i = 1$, this term has components
\begin{align}
\mathbf{g}_1(\q, \C) = \frac{1}{2}\left(2\E[GK|1], \E[G(J-K)|1], \E[G(J-K)|1], -2\E[GJ|1]\right)\;, \label{eq:g_components}
\end{align}
where $J$ (respectively, $K$) is the number of inactive (respectively, active) edges attached to $u$ at the time of voting, and $G$ is the indicator of the event that $u$ votes prior to deactivation.
To complete the approximation scheme, it is necessary to compute the expectations appearing in \Cref{eq:g_components} and then compute the expected number of events of each type.
We begin with $K$, the active edge count at the time that $u$ votes.
Conditioned on a fixed initial number $K_0$ of active edges and $u$'s opinion $i$, $K$ is distributed as a truncated geometric distribution:
\begin{align*}
\prob(K = k|i, K_0) =
\begin{cases}
(1-\beta_i)\beta_i^{K_0 - k} &\quad 1\leq k \leq K_0\\
\beta_i^{K_0} &\quad k = 0,
\end{cases}
\end{align*}
where $\beta_i$ is the probability that an event is not a vote by $u$, given that it removes a discordant edge from $u$ and that $u$ has opinion $i$.
This probability is given explicitly by
\begin{align}
\beta_i =
\begin{cases}
\frac{1+\alpha q_i}{2-\alpha(1-q_i)} &\quad \text{rewire to random}\\
\frac{1+\alpha}{2} &\quad \text{rewire to same.} \label{eq:beta}
\end{cases}
\end{align}
To derive the rewire-to-random expression, we enumerate the events that remove an active edge $(u,v)$ from $u$, given that $(u,v)$ is sampled for update.
A vote by either node $u$ or node $v$ deactivates the edge, and occurs with probability $1-\alpha$.
A rewiring event in which $v$ maintains the edge removes the edge from $u$ and occurs with probability $\alpha/2$.
A rewiring event in which $u$ maintains the edge occurs with probability $\alpha/2$, and deactivates the edge with probability $q_i$ in the rewire-to-random case.
The total rate of active edge removal from $u$ is therefore $2-\alpha(1-q_i)$.
The rate of active edge removal, excluding Type 3 voting events, is $2-\alpha(1-q_i) - (1-\alpha) = 1+\alpha q_i$.
A similar derivation yields the expression for the rewire-to-same variant.
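As a quick sanity check (an illustrative sketch, not part of the derivation), the truncated geometric law can be tabulated directly; the probabilities sum to one, and $\prob(K \geq 1) = 1 - \beta_i^{K_0}$ recovers the voting probability used below.

```python
# Illustrative tabulation of the truncated geometric law for K (a sketch,
# not part of the paper's derivation). beta plays the role of beta_i: the
# probability that an active-edge removal from u is not a vote by u.

def truncated_geometric_pmf(beta, K0):
    """P(K = k | K0): (1-beta)*beta**(K0-k) for 1 <= k <= K0, beta**K0 at k = 0."""
    pmf = {k: (1.0 - beta) * beta ** (K0 - k) for k in range(1, K0 + 1)}
    pmf[0] = beta ** K0  # u loses all active edges before voting
    return pmf

beta, K0 = 0.7, 5
pmf = truncated_geometric_pmf(beta, K0)
total_mass = sum(pmf.values())  # should be exactly 1
p_vote = 1.0 - pmf[0]           # P(K >= 1) = 1 - beta**K0
```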
The probability of $u$ voting prior to deactivation, conditioned on $K_0$, is
\begin{align*}
\E[G|K_0, i] = \prob(K \geq 1) = 1-\beta_i^{K_0}\;.
\end{align*}
Averaging over $K_0$ yields
\begin{align*}
\E[G|i] = \sum_{k_0}\prob(K_0 = k_0)(1-\beta_i^{k_0}) = 1 - \phi_{K_0}(\beta_i)\;,
\end{align*}
where $\phi_{K_0}(z) = \sum_{k = 1}^\infty \prob(K_0 = k) z^{k}$ is the probability generating function of $K_0$.
Some previous work (e.g. \cite{vazquez2008generic}) explicitly models quantities such as $K_0$ as binomial or Poisson random variables.
In our experiments, the crude approximation $\E[G|i] \approx 1 - \beta_{i}^{\E[K_0|i]} = 1 - \beta_{i}^{c_{\bar{\imath}\bar{\imath}}}$ yields similar results with much faster computations, and is therefore used in the results presented below.
The expected number of active edges at the time that $u$ votes is
\begin{align*}
\E[GK|i] &= \E_{K_0}\E[GK|i, K_0] \\
&= \E_{K_0}\left[K_0 - \frac{\beta_i(1-\beta_i^{K_0})}{1-\beta_i}\bigg|i\right] \\
&= \E[K_0|i] - \frac{\beta_i}{1-\beta_i}\E[G|i]\;.
\end{align*}
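The second line follows from summing $k\,\prob(K = k)$ over the truncated geometric law; a direct numerical check (illustrative) confirms the closed form:

```python
# Illustrative check (not from the paper) that the direct sum over the
# truncated geometric law matches the closed form for E[GK | K0].

def expected_GK_direct(beta, K0):
    """E[GK | K0] = sum_{k=1}^{K0} k * (1-beta) * beta**(K0-k)."""
    return sum(k * (1.0 - beta) * beta ** (K0 - k) for k in range(1, K0 + 1))

def expected_GK_closed(beta, K0):
    """E[GK | K0] = K0 - beta * (1 - beta**K0) / (1 - beta)."""
    return K0 - beta * (1.0 - beta ** K0) / (1.0 - beta)

max_err = max(abs(expected_GK_direct(b, k) - expected_GK_closed(b, k))
              for b in (0.3, 0.7, 0.9) for k in (1, 5, 12))
```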
We have accounted for the decay in the local active edge density around $u$.
It remains to compute $\E[E|i]$, $\E[F|i]$, and $\E[GJ|i]$.
To do so, it is useful to introduce the coefficients
\begin{align}
\varepsilon_i =
\begin{cases}
\frac{1-\alpha(1-q_i)}{1+\alpha q_i}\,, \\
\frac{1}{1+\alpha}\,,
\end{cases} \quad
\sigma_i =
\begin{cases} q_1\frac{2(1-\alpha)}{2-\alpha}\,, &\quad \text{rewire to random}\,, \\
0\,, &\quad \text{rewire to same.}
\end{cases} \label{eq:coefs}
\end{align}
The coefficient $\varepsilon_i$ gives the probability that an event that removes an active edge from $u$, other than a vote by $u$, produces an inactive edge either through rewiring or through a vote by a neighbor of $u$.
The coefficient $\sigma_i$ gives the probability that an active edge which is rewired but not immediately deactivated is ultimately deactivated via a voting event.
The derivations of these coefficients are similar to that of $\beta_i$ above.
Node $u$ begins with an initial number $J_0$ of inactive edges, and gains more via rewiring and voting.
At the time that $u$ votes, in expectation $\E[K_0|i] - \E[GK|i]$ active links have been removed; each has a probability $\varepsilon_i$ of being deactivated while remaining attached to $u$.
The expected number of inactive edges at the time that $u$ votes is therefore
\begin{align*}
\E[GJ|i] &= \E[J_0|i] + \varepsilon_i \left(\E[K_0|i] - \E[GK|i]\right)\;.
\end{align*}
To compute $\E[E|i]$, the expected number of Type 1 events, we note that a voting event along edge $(u,v)$ has equal probability to change $\mathcal{L}_u$ as $\mathcal{L}_v$.
The expected number of Type 1 events is therefore equal to the expected number of Type 3 events, and we have $\E[E|i] = \E[G|i] = 1 - \phi_{K_0}(\beta_i)$.
Finally, we compute the expected number of Type 2 events.
By definition, for a Type 2 event to occur, the edge must no longer be attached to $u$.
The expected number of such edges is $\E[K_0 + J_0 - G(K + J)|i]$.
The probability that such an edge was removed by $u$ by a rewiring event that did not deactivate the edge is $\sigma_i$.
We obtain
\begin{align*}
\E[F|i] = \sigma_i\E[K_0 + J_0 - G(K + J)|i]\;.
\end{align*}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/symmetry_breaking_random.pdf}
\caption{Illustration of the asymmetry between Type 1 and Type 3 events.
Histograms give the impact of a voting event on $M_{01}$, the number of active edges.
Each panel corresponds to a different value of the expected active edge density $\rho$.
The expected impact of Type 1 (blue) and Type 3 (orange) events are shown in the horizontal margin, as well as the simulation mean (black).
Simulations performed on a rewire-to-random AVM of $n = 10^4$ nodes and $c = 8$, with varying rewiring rate $\alpha$.
Events are tallied only for $0.49 < q_1 < 0.51$.}
\label{fig:symmetry_breaking}
\end{figure}
An important prediction of this formalism is that Type 1 and Type 3 events, though they occur at the same rate, have different impacts on the active edge density.
Since $\E[K|i] < \E[K_0|i]$ and $\E[J|i] > \E[J_0|i]$, we have
\begin{align} \label{eq:differential_impacts}
\mathbf{e}_i(\C)_{\bar{\imath}i} = \frac{\E[K_0|i] - \E[J_0|i]}{2} > \frac{\E[K|i] - \E[J|i]}{2} =
-\mathbf{g}_i(\q,\C)_{\bar{\imath}i}\;.
\end{align}
\Cref{eq:differential_impacts} states that Type 1 events increase the active edge density more than Type 3 events decrease it.
This reflects a local asymmetry in the subcritical regime, between votes that increase the census of a local minority opinion and votes that reduce it.
The asymmetry is due to the intervening rewiring-steps, which tend to remove edges from the focal node $u$ prior to a Type 3 event.
Since Type 1 and Type 3 events occur at the same rate, our formalism predicts that voting events tend to increase the active edge-density when $\rho$ is small.
In \Cref{fig:symmetry_breaking}, we check this prediction by comparing the expressions in \Cref{eq:differential_impacts} to the distribution of all impacts $\Delta M_{01}$ on the active edge count due to voting events.
In the subcritical regime, the mean increment (black) is positive, reflecting the fact that Type 1 events (blue) outstrip Type 3 events (orange) in expected generation of active edges.
As $\rho$ grows, the separation-of-timescales assumption degrades, and the asymmetry between Type 1 and Type 3 events breaks down.
For large $\rho$, Type 1 and Type 3 events have similar increments in expectation and the distribution of $\Delta M_{01}$ becomes symmetric.
Finally, we average over events of Types 1-3 to obtain the approximate expected increment in edge counts per voting event.
It is given by the four-vector
\begin{align}
\hat{\mathbf{v}}(\q, \x) = \frac{1}{2} \sum_{i \in \{0,1\}} \frac{\E[E|i]\mathbf{e}_i(\C) + \E[F|i]\mathbf{f}(\C) + \E[G|i]\mathbf{g}_i(\q,\C)}{\E[E + F + G|i]}\;. \label{eq:v_hat}
\end{align}
For convenience, we summarize the expressions appearing in \Cref{eq:v_hat} in \Cref{tab:summary}.
\begin{table}
\centering
\begin{tabular}{l|l}
Term & Expression \\
\hline
Type 1 expected increment & $\mathbf{e}_i(\C)_{01} = c_{\bar{\imath}i} - c_{ii} - 1$ \\
Type 2 expected increment & $\mathbf{f}(\C)_{01} = \frac{\mathbf{e}_0(\C)_{01} + \mathbf{e}_1(\C)_{01}}{2}$ \\
Type 3 expected increment & $\mathbf{g}_i(\q,\C)_{01} = c_{\bar{\imath}i} + \varepsilon_i\frac{\beta_i}{1 -\beta_i}\left(1-\phi_{K_0}(\beta_i)\right)$ \\
Type 1 expected count & $\E[E|i] = 1 - \phi_{K_0}(\beta_i)$ \\
Type 2 expected count & $\E[F|i] = \sigma_i(c_{\bar{\imath}\bar{\imath}} + c_{\bar{\imath}i} - \mathbf{g}_{i}(\q,\C)_{01})$\\
Type 3 expected count & $\E[G|i] = 1 - \phi_{K_0}(\beta_i)$
\end{tabular}
\caption{Summary of the terms appearing in \Cref{eq:v_hat}, our approximation to the voter term $\E[\mathbf{v}(\mathcal{G})]$.
Only the $01$ components (corresponding to active edges) are shown.}
\label{tab:summary}
\end{table}
Combining \Cref{eq:dynamics} with \Cref{eq:neighbor_vote,eq:wild_vote,eq:v_hat} yields our Markovian approximation to the expected edge count dynamics:
\begin{align}
\mathbf{m}(t+1) - \mathbf{m}(t) = \lambda \mathbf{w}(\x) + (1-\lambda)\alpha \mathbf{r}(\q) + (1-\lambda)(1-\alpha) \hat{\mathbf{v}}(\q, \x)\;. \label{eq:approx_dynamics}
\end{align}
We emphasize that this approximation is derived under assumptions that are only approximately correct in and near the subcritical regime.
Recalling that $\mathbf{m} = 2m\x$, we see that
\Cref{eq:approx_dynamics} is a closed, deterministic difference equation in $\x$.
We then seek $\hat{\x}(\q;\alpha, \lambda)$, the limit point of the approximate dynamics under \Cref{eq:approx_dynamics}.\footnote{In principle, \Cref{eq:approx_dynamics} may admit multiple limit points. Throughout our numerical experiments, we have found the limit point to be unique.}
The approximation indicates the subcritical case when $\hat{\rho}(\q;\alpha,\lambda) = 2\hat{\x}_{01}(\q;\alpha, \lambda) \leq 0$, and the supercritical case otherwise.
Solving
\begin{align}
\alpha^*(\q, \lambda) = \max \{\alpha : \hat{\rho}(\q; \alpha, \lambda) = 0\} \label{eq:transition_approx}
\end{align}
then gives our approximation for the phase transition in $\alpha$.
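Numerically, \Cref{eq:transition_approx} reduces to a one-dimensional root search in $\alpha$. The sketch below uses a hypothetical closed-form stand-in for $\hat{\rho}$; in the actual computation, each evaluation of $\hat{\rho}$ requires iterating the approximate dynamics to a limit point.

```python
# Schematic bisection for the phase transition alpha* (a sketch; rho_hat
# here is a hypothetical stand-in for the fixed-point active-edge density,
# which in practice comes from iterating the approximate dynamics).

def rho_hat(alpha):
    return 0.6 - alpha  # placeholder: positive (supercritical) for alpha < 0.6

def alpha_star(rho, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection assuming rho(lo) > 0 and rho(hi) <= 0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha_transition = alpha_star(rho_hat)
```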
\Cref{fig:phase_transition_heatmap} compares numerical solutions of \Cref{eq:transition_approx} to simulation data for the complete range of $q_1 \in [0,1]$.
The accuracy of the approximation is strongest for $\q \approx \left(\frac{1}{2}, \frac{1}{2} \right)$ and on the rewire-to-random model variant.
See also \Cref{fig:prev_results}(a) for comparisons of the solutions of \Cref{eq:transition_approx} to extant approximation schemes in the case $\q = \left(\frac{1}{2},\frac{1}{2}\right)$.
\Cref{fig:phase_transition_heatmap} highlights one of the qualitative differences between the rewire-to-random and rewire-to-same model variants.
As discussed in \Cref{sec:AVMs}, while $\alpha^*$ depends strongly on $\q$ in the rewire-to-random model variant, it is independent of $\q$ in the rewire-to-same variant.
This behavior is reflected algebraically in \Cref{eq:beta,eq:coefs}, which in turn govern the terms appearing in \Cref{eq:approx_dynamics}.
The quantities $\beta$, $\varepsilon$, and $\sigma$ depend directly on $\q$ in the rewire-to-random model, regardless of the value of $\x$.
However, in the rewire-to-same model, dependence on $\q$ emerges only when $\rho > 0$.
This in turn implies that the phase transition is itself independent of $\q$, as is indeed observed in both the data and our approximation.
Beyond the algebra, the localized approximation scheme we have developed gives, to our knowledge, the first mechanistic explanation of this difference in the phase transitions of the two models.\footnote{The pair-approximation (PA) equations of \cite{Durrett2012} predict this difference but the mechanism therein is less clear to us.}
Consider the emergence of dissenting node $u$ with opinion $1$ on a component of majority opinion $0$.
In the rewire-to-random model, the fast local rewiring dynamics depend explicitly on $\q$, the global opinion densities.
When $q_1$ is large, an edge rewired away from $u$ is more likely to become inactive, resulting in fewer active edges in the neighborhood of $u$.
This is in turn reflected by the term $\mathbf{g}_1(\q,\C)_{01} = \frac{1}{2}\left(\E[J|1] - \E[K|1]\right)$ governing the impact of Type 3 events, whose magnitude enters into the calculation of the phase transition via \Cref{eq:approx_dynamics,eq:transition_approx}.
In the rewire-to-same case, however, the fast rewiring does not explicitly depend on $\q$.
An active edge attached to $u$ that rewires becomes inactive with probability $1$.
As a result, there is no dependence of Type 3 events on $\q$, and the phase transition is independent of $\q$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/phase_transition_heatmap.pdf}
\caption{Approximation of the phase transition $\alpha^*$ for rewire-to-random and rewire-to-same systems for varying $c$ and $\q$.
Color gives the equilibrium density of active edges.
Dashed lines give solutions to \Cref{eq:transition_approx}.
Some numerical artifacts are visible in the rewire-to-random case for large $c$.
Simulations carried out on AVMs with $n = 10^4$ and $\lambda = 2^{-5}$.}
\label{fig:phase_transition_heatmap}
\end{figure}
We now turn to the approximation of $\rho^*(\q;\alpha, \lambda)$, the equilibrium density of active edges in the supercritical regime.
In this regime, the distinction between local minority and majority nodes progressively erodes, as does the validity of the timescale-separation assumption.
One way to view this erosion is in terms of decay of the impact of Type 3 events, as discussed in \Cref{fig:symmetry_breaking}.
As $\rho$ increases, the impact of a single Type 3 event progressively diminishes due to re-randomization of the focal node's local neighborhood.
We model this re-randomization via a simple interpolation to a mean-field approximation of the arch in the case $\alpha = 0$, which corresponds to a variant of the voter model without rewiring.
We begin by deriving this approximation.
When $\alpha = \lambda = 0$, active edges enter and exit the system only through voting events.
We have already written the mean-field approximation for the expected impact of a voting event in \Cref{eq:neighbor_vote}.
When only these events take place, the equilibrium condition is $e_i(\C) = 0$ for $i = 0,1$.
It suffices to solve the system
\begin{align*}
0 &= 1 + c_{10} - c_{00} \\
0 &= 1 + c_{01} - c_{11}
\end{align*}
for $\C$ and subsequently for $\x$.
We recall that $c_{ij} = c x_{ij}/q_i$ and that $2x_{01} = 1 - x_{00} - x_{11}$.
Substituting these relations we obtain
\begin{align*}
\frac{2q_0q_1}{c}\left(\begin{matrix}1 \\ 1\end{matrix}\right) + \q = \left[\begin{matrix} 1 + q_1 & q_0 \\ q_1 & 1 + q_0 \end{matrix}\right]\left(\begin{matrix}x_{00} \\ x_{11}\end{matrix}\right)\;.
\end{align*}
The unique solution is
\begin{align*}
\left(\begin{matrix}x^*_{00} \\ x^*_{11}\end{matrix}\right) = \frac{q_0 q_1}{c} \mathbf{e} + \frac{1}{2}\left(\begin{matrix} q_0(1 + q_0 - q_1) \\ q_1(1 + q_1 - q_0) \end{matrix}\right)\;.
\end{align*}
We may then compute the mean-field approximation for the $\alpha = 0$ arch:
\begin{align*}
\hat{\rho}^*(\q) = 2x_{01}
= 1 - x^*_{00} - x^*_{11}
= 2q_0 q_1 \frac{c-1}{c}.
\end{align*}
We note that this result is identical to that derived in \cite{vazquez2008analytical} for a node-updating non-adaptive voter model.
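This algebra is easy to verify numerically. The sketch below (illustrative) solves the $2\times 2$ linear system by Cramer's rule and checks that the resulting active-edge density matches $2q_0q_1(c-1)/c$.

```python
# Illustrative check of the alpha = 0 mean-field arch (not from the paper).
# Solve  M (x00, x11)^T = (2 q0 q1 / c)(1, 1)^T + (q0, q1)^T  by Cramer's
# rule, with M = [[1 + q1, q0], [q1, 1 + q0]], and compare
# rho = 1 - x00 - x11 against 2 q0 q1 (c - 1) / c.

def arch_density(q1, c):
    q0 = 1.0 - q1
    b0 = 2.0 * q0 * q1 / c + q0
    b1 = 2.0 * q0 * q1 / c + q1
    det = (1.0 + q1) * (1.0 + q0) - q0 * q1  # equals 2 when q0 + q1 = 1
    x00 = (b0 * (1.0 + q0) - q0 * b1) / det
    x11 = ((1.0 + q1) * b1 - q1 * b0) / det
    return 1.0 - x00 - x11

errors = [abs(arch_density(q1, c) - 2.0 * q1 * (1.0 - q1) * (c - 1) / c)
          for q1, c in [(0.5, 4), (0.3, 8), (0.1, 16)]]
```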
We now introduce the interpolation function
\begin{align}
s(\q,\x) = \frac{\hat{\rho}^*(\q) - \rho}{\hat{\rho}^*(\q)}\;, \label{eq:interpolation}
\end{align}
to quantify the distance of the system state from the estimated $\alpha = 0$ arch.
We then use this interpolation function to introduce decay in Type 3 events, replacing $\mathbf{g}(\q, \C)$ in \Cref{eq:v_hat} with $\tilde{\mathbf{g}}(\q, \C) = \mathbf{g}(\q, \C)s(\q,\x)$.
The corresponding solution for $\hat{\x}$ yields the supercritical approximation of $\rho$.
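The net effect of the interpolation is to switch the Type 3 contribution off smoothly as the state approaches the estimated arch; schematically (illustrative values):

```python
# Schematic behavior of the interpolation factor s (illustrative values).

def interpolation(rho, rho_arch):
    """s = (rho_arch - rho) / rho_arch: 1 deep in the subcritical regime,
    0 on the estimated alpha = 0 arch."""
    return (rho_arch - rho) / rho_arch

rho_arch = 0.375  # 2 * q0 * q1 * (c - 1) / c for q1 = 1/2, c = 4
s_subcritical = interpolation(0.0, rho_arch)   # full Type 3 impact
s_on_arch = interpolation(rho_arch, rho_arch)  # Type 3 impact switched off
```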
\Cref{fig:arch_approximations} shows the resulting approximations for the arch in both models, across a range of parameter regimes and both model variants.
The arches for the rewire-to-random model agree well with the data on both the support of the arch and the equilibrium active edge density.
The rewire-to-same arches are somewhat less precise.
The arches do correctly span the complete interval $[0,1]$.
The overall numerical agreement with the data is comparable to extant methods, but the parabolic shape of the arch is not completely reproduced --- there is some warping near the base.
The reason for this warping is not clear to us at present, and further investigation into this phenomenon may yield theoretical and computational progress.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/arch_approx.pdf}
\caption{Approximations to the arch for varying $\alpha$, $\q$, and $c$.
Points give averages over simulation runs on AVMs with $n = 10^4$ and $\lambda = 2^{-5}$.
Solid lines give the equilibrium value of $\hat{\rho}$ obtained by numerically solving for the fixed points of \Cref{eq:dynamics} using the interpolation function given by \Cref{eq:interpolation}.}
\label{fig:arch_approximations}
\end{figure}
\section{Discussion} \label{sec:discussion}
The Markovian approximation technique we have developed offers predictions for the equilibrium active edge density $\rho^*$ across the entirety of parameter space, and for varying opinion densities $\q$.
Its accuracy in these tasks is generally comparable to that of the best extant methods.
For example, \Cref{fig:prev_results}(a) shows that our Markovian approximation is at least as accurate as AMEs \cite{Durrett2012} in predicting the $c = 4$ phase transition for the rewire-to-random model, and grows more accurate as $c$ grows large.
Our approximation is substantially more accurate than AMEs for the rewire-to-same phase transition, and only slightly less accurate than the compartmental approach of \cite{Bohme2011} for this model variant.
Because relatively few approximation schemes make predictions for the arch, comparisons are sparser.
The compartmental method of \cite{Demirel2012} approximates the equilibrium active edge density at $\q = \left(\frac{1}{2},\frac{1}{2}\right)$ more accurately than our method (\Cref{fig:prev_results}(b)), but does not make predictions for asymmetric opinion densities.
AME predictions \cite{Durrett2012} recover the asymmetric phase transition and arches reasonably well in the rewire-to-random case, but are much less accurate for the rewire-to-same variant.
Whereas the AME arches display warping in the rewire-to-random variant, our Markovian approximation displays warping in the rewire-to-same variant.
In the $c = 4$ case shown in \Cref{fig:prev_results}(b), the present method offers overall accuracy in computing the arches similar to that of the AMEs, and improves as $c$ grows large.
\subsection{Computational Considerations}
Solving \cref{eq:approx_dynamics} requires finding the solution of a system of four coupled nonlinear equations, which may be done efficiently using a standard numerical solver.
Notably, the dimensionality of the approximation is independent of the mean degree $c$.
This contrasts with compartmental methods \cite{Durrett2012,Bohme2011,Silk2014,Demirel2012}, whose dimension generally displays quadratic or higher scaling in $c$.
For example, AMEs comprise a system of $\Theta\left(k_\mathrm{max}^2\right)$ coupled differential equations, where $k_\mathrm{max}$ is the highest node degree expected to be encountered in simulation; in the $c = 4$ case, the authors of \cite{Durrett2012} used $272$ such equations.
This scaling renders the approximations computationally prohibitive even for modest mean degrees $c$.
Similarly, the method of \cite{Bohme2011} for approximating the rewire-to-same phase transition requires a bisection search in $\alpha$ for which each function evaluation corresponds to finding the largest eigenvalue of a $(c-1) \times (c-1)$ matrix.
The scaling is thus at least $O\left((\log \frac{1}{\epsilon})(c-1)^2\right)$, where $\epsilon$ is the desired approximation accuracy.
Because our proposed method scales independently of $c$, it can be used to approximate AVMs with arbitrarily large mean degrees.
\subsection{Conclusions}
Adaptive voter models offer a simple set of mechanisms that generate emergent opinion-based assortativity in complex networks.
While the underlying rules are simple to state, the coevolving nature of the dynamics renders these systems very interesting and challenging to analyze.
We have considered an ergodic adaptive voter model variant which enables a local perspective on fragmentation transitions and other model properties.
The local perspective allows us to use the asymmetry of voting events to develop Markovian approximations based on the fast timescale dynamics around single nodes.
The resulting approach is conceptually intuitive, computationally tractable, and empirically accurate.
One of the most puzzling issues raised by our results is the difference between the accuracies of our approach for the rewire-to-random and rewire-to-same adaptive voter models.
While we succeed in characterizing the rewire-to-random arch nearly exactly, the same methods produce poorer results for the rewire-to-same model.
We conjecture that the rapid local sorting produced in rewire-to-same dynamics violates our mean-field assumption on Type 1 events, which would lead to approximation degradation.
It would be interesting to extend our methodology to see whether refinements are possible that better characterize the rewire-to-same behavior and shed further light on the essential features governing the dramatic difference in the nature of the phase transitions in these two models.
It is also of interest to consider extensions and generalizations.
The most natural extension is to the case of multiple opinion states and structured opinion spaces.
Previous work on multi-opinion models has been restricted to either characterization of the various phase transitions \cite{Bohme2012} or empirical discussion of supercritical equilibrium behavior \cite{Shi2013}.
One explanation for this limitation in scope is computational, in that the number of operations required to compute approximations under active-motif and AME approaches is exponential in the number $\abs{\mathcal{X}}$ of opinion states, rendering both methods infeasible.
In contrast, likely extensions of our moment-closure methods scale as $\Theta(\abs{\mathcal{X}}^2)$, which could be computationally tractable.
If accuracy were preserved, such extensions would present the first scalable analytic methods for multi-opinion models.
While we developed our approximations for the specific case of the binary-state AVM, that development relies only on ergodicity, timescale-separation, and the mean-field assumption.
We conjecture that these ingredients should be present in any adaptive model with homophilic dynamics in which rewiring steps involve uniform selection from an extensive subset of the graph, such as a subset sharing a given node state.
An example of a more complex system in which these ingredients are present is the networked evolutionary prisoner's dilemma game of \cite{Lee2018}, in which nodes display richer strategic behavior in their opinion update and rewiring behavior.
Despite this, the existence of a phase transition driven by homophily may allow for the deployment of our novel methods in such cases as well.
\section*{Acknowledgments}
We are grateful to Feng (Bill) Shi for contributing code used for simulations, Hsuan-Wei Lee for contributing code used to construct the approximate master equation solutions shown in \Cref{fig:prev_results}, and Patrick Jaillet for helpful discussions.
\section*{Code}
Documented code for running simulations and computing the approximations described in this paper may be found at \url{https://github.com/PhilChodrow/AVM}.
\bibliographystyle{siamplain}
\section{Introduction}
\label{Introduction}
Electron-transfer (ET) chemical reactions in solution constitute a paradigmatic class of chemical reactions \cite{pauling1988general}.
In the simplest case, an electron of charge $-e$ ($e$ is the elementary charge) is transferred from an anion $(A^-)$ to a cation $(C^+)$, following the ET reaction $A^- + C^+ \longrightarrow A + C$.
In the more general class of charge-transfer (CT) chemical reactions, a modification of the local-charge density of states occurs between different chemical groups of the reacting molecules, thus resulting in a partially-transferred (shifted) charge $\delta_{\rm{e}}$ during the CT process.
Such is the case for intramolecular CT reactions in $D-A$ molecules where an electron donor (D) group is connected to an electron acceptor (A) group through a molecular bridge ($-$), thus resulting in the following intramolecular CT mechanism $D-A \longrightarrow D^{+\delta_{\rm{e}}}-A^{-\delta_{\rm{e}}}$.
The theoretical description of ET and CT reactions has a long history \cite{pauling1988general}, which was fully developed after the works of Marcus \cite{marcus_theory_1956,marcus_electron_1993,siders_quantum_1981,marcus_nonadiabatic_1984}, and Kestner et al. \cite{kestner_thermal_1974}, with later successful applications for biological molecules \cite{hopfield_electron_1974}.
At the heart of Marcus theory is the necessity of taking the solvent explicitly into account in the modelling of ET reaction rates.
Out-of-equilibrium fluctuations in the solvent nuclear coordinates are indeed necessary to reach the crossing point of the reactant (R) and product (P) potential energy surfaces (PES), at which the electron-transfer occurs.
The reaction rate $k_{\rm{ET}}$ is given by the simple result of Marcus \cite{marcus_theory_1956}
$k_{\rm{ET}}=k_e \exp{ \left( -\Delta_r G^*/k_B T \right)}$, with $k_e$ a reaction-dependent global rate, $T$ the temperature, $k_B$ the Boltzmann constant and $\Delta_r G^* = \left( \Delta_r G^0 + \lambda_{\rm{S}} \right)^2/4\lambda_{\rm{S}}$ an effective activation energy depending on the solvent reorganization energy $\lambda_{\rm{S}}$ and variation of the thermodynamic Gibbs potential $\Delta_r G^0$ (reaction driving-force).
Although this expression takes a form similar to transition-state theory \cite{eyring_activated_1935,evans_applications_1935,wigner_transition_1938,kramers_brownian_1940},
it is important to notice that its derivation using quantum mechanical first principles does not involve the concept of activated-complex or transition-state \cite{kestner_thermal_1974,siders_quantum_1981}.
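For concreteness, the rate expression can be evaluated directly (all parameter values below are illustrative placeholders): the activation energy vanishes at $\Delta_r G^0 = -\lambda_{\rm{S}}$, and the rate falls off symmetrically on either side of this point, the far side being the well-known Marcus inverted region.

```python
import math

# Illustrative evaluation of the Marcus rate (all values are placeholders).
KT = 0.025  # thermal energy k_B * T in eV, roughly room temperature

def marcus_rate(dG0, lam, k_e=1.0, kT=KT):
    """k_ET = k_e * exp(-(dG0 + lam)**2 / (4 * lam * kT))."""
    activation = (dG0 + lam) ** 2 / (4.0 * lam)
    return k_e * math.exp(-activation / kT)

lam = 0.5                          # solvent reorganization energy (eV)
k_max = marcus_rate(-lam, lam)     # barrierless point: rate equals k_e
k_normal = marcus_rate(-0.2, lam)  # normal region
k_inverted = marcus_rate(-0.8, lam)  # inverted region, same |dG0 + lam|
```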
The experimental investigation of a wide variety of ground-state chemical reactions, photoreactions, and ET and CT reactions has recently undergone a strong revival, owing to the ability to confine molecular ensembles inside micro- or nano-scale optical and plasmonic cavities \cite{ebbesen_hybrid_2016,ribeiro2018polariton}, resulting in an alteration of their chemical reactivity.
For instance, it was shown that electromagnetic microcavities can be tailored such that a single cavity-mode of frequency $\omega_c \approx 2.2 \mbox{ eV}/\hbar$ (with $\hbar$ the reduced Planck constant) can be tuned to resonance with the electronic transition between the ground and excited states of the cavity-confined molecules \cite{schwartz_reversible_2011,hutchison_modifying_2012}.
The conjunction of a low cavity volume $V$, a large number of confined molecules $N$, and a strong molecular electric dipole $\mu$ results in sizable vacuum quantum fluctuations of the cavity electric field $E_0=\sqrt{\hbar\omega_c/2\varepsilon_0 V}$, with $\varepsilon_0$ the electromagnetic vacuum permittivity.
The resulting light-matter coupling strength between the molecular dipoles and the cavity mode, as quantified by the collective vacuum Rabi splitting frequency $\tilde{\Omega}_R=\mu E_0 \sqrt{N}/\hbar$ \cite{haroche_cavity_nodate,dicke_coherence_1954,tavis_exact_1968}, can be as high as $\tilde{\Omega}_R \approx 0.7 \mbox{ eV}/\hbar$ \cite{schwartz_reversible_2011}, thus exceeding the cavity optical losses $\kappa \approx 0.2 \mbox{ eV}/\hbar$.
In this regime of electronic light-matter strong coupling, a collective hybrid excitation is formed between the resonant cavity-mode and the embedded molecules called \textit{polariton}.
As was reported and investigated in depth for optical spectra in semiconducting microcavities \cite{weisbuch_observation_1992,houdre_early_2005}, polariton excitations are
characterized by the vacuum Rabi-splitting of cavity optical absorption spectra \cite{schwartz_reversible_2011}.
It is remarkable that the strong-coupling regime (with $\tilde{\Omega}_R \approx 110 \mbox{ meV}/\hbar$ and $\kappa \approx 60 \mbox{ meV}/\hbar$) was also recently achieved in liquid phase, in which optically active molecules are confined inside a nanofluidic Fabry-P\'{e}rot cavity \cite{bahsoun_electronic_2018}.
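The orders of magnitude quoted above follow directly from these expressions; the sketch below (with placeholder values for $V$, $\mu$, and $N$, not taken from the cited experiments) evaluates the vacuum field and verifies the collective $\sqrt{N}$ scaling of $\tilde{\Omega}_R$.

```python
import math

# Illustrative estimate of the cavity vacuum field E0 and collective Rabi
# splitting (all numerical inputs are placeholders, not experimental values).
HBAR = 1.054571817e-34   # reduced Planck constant, J s
EPS0 = 8.8541878128e-12  # vacuum permittivity, F / m
EV = 1.602176634e-19     # 1 eV in J

def vacuum_field(omega_c, volume):
    """E0 = sqrt(hbar * omega_c / (2 * eps0 * V)), in V / m."""
    return math.sqrt(HBAR * omega_c / (2.0 * EPS0 * volume))

def rabi_splitting(mu, omega_c, volume, N):
    """Collective splitting Omega_R = mu * E0 * sqrt(N) / hbar, in rad / s."""
    return mu * vacuum_field(omega_c, volume) * math.sqrt(N) / HBAR

omega_c = 2.2 * EV / HBAR  # cavity mode at ~2.2 eV
volume = 1.0e-18           # ~(100 nm)^3 mode volume, placeholder
mu = 10.0 * 3.33564e-30    # ~10 debye transition dipole, placeholder
scaling = (rabi_splitting(mu, omega_c, volume, 4_000_000)
           / rabi_splitting(mu, omega_c, volume, 1_000_000))
```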
The formation of cavity polaritons has deep consequences for the chemical reactivity of the embedded molecules.
It was shown experimentally that the potential energy landscape of a photoisomerization chemical reaction is strongly altered under resonant conditions between an electromagnetic cavity mode and the electronic transition between ground and excited states of the reaction product \cite{schwartz_reversible_2011}, thus resulting in a significant slowing-down of the reaction kinetics \cite{hutchison_modifying_2012}.
Theoretical investigations have described and computed the polariton potential energy surface (PPES) of such class of reactions, taking into account the role of nuclear degrees of freedom in describing the light-matter interaction mechanism \cite{galego_cavity-induced_2015}, the former inducing a vibrational dressing of the polaritons \cite{cwik_excitonic_2016}.
The resulting alteration of the PPES was shown to be responsible for the cavity-induced slowing-down of photochemical reactions \cite{galego_suppressing_2016,galego_many-molecule_2017}.
On the other hand, ET chemical reactions for molecular populations in cavity were also predicted to be accelerated by orders of magnitude, as a result of both the modulation of the PPES and the influence of the collective decoupling between nuclear motion and electronic degrees of freedom in the light-matter strong-coupling regime \cite{herrera_cavity-controlled_2016}.
Recent theoretical works on ET reactions in confined electromagnetic environments reported a significant cavity-induced enhancement of the ET reaction rate \cite{semenov2019electron}, and emphasized the role of counter-rotating terms \cite{mandal2020polariton} (beyond the rotating-wave approximation) and self-dipole energy terms \cite{semenov2019electron,mandal2020polariton} in writing the interaction Hamiltonian: both are necessary for preserving gauge invariance \cite{craig1998molecular,di2019resolution} and for accurately computing the PPES upon entering the ultra-strong coupling regime of cavity quantum electrodynamics ($\tilde{\Omega}_R \geq \omega_c$).
In this paper, we revisit the theoretical description of the kinetics of CT chemical reactions in nanofluidic Fabry-P\'{e}rot cavities.
We investigate on the same footing the collective coupling between molecular populations and a single electromagnetic cavity-mode, taking into account dissipation and dephasing mechanisms induced by the solvent and cavity losses.
The organization of the paper is the following.
In Sec.\ref{Theoretical_Modelling}, we introduce our theoretical microscopic model of solvated molecules interacting with a single electromagnetic cavity-mode.
We develop an analytical scheme based on the Born-Oppenheimer approximation that makes it possible to compute the PPES analytically in the regime for which the collective vacuum Rabi splitting $\tilde{\Omega}_R$ is larger than the intramolecular vibrational reorganization energy $\lambda_{\rm{v}}$.
In this regime, we obtain approximate many-body wave functions for the polaritons and dark states of the molecules-cavity ensemble, in presence of coupling to the reaction coordinate and solvent bath.
In some limits, we recover the results of Refs.\cite{herrera_cavity-controlled_2016,wu2016polarons,zeb2017exact}, that are based on the use of the variational polaron ansatz \cite{toyozawa1954theory,silbey_electronic_1976,bera2014generalized}.
We interpret physically this result by introducing the concept of \textit{reacton}, which is the collective excitation of the reactant molecules interacting strongly with the cavity-mode and dressed by its interaction with the solvent.
In Sec.\ref{Charge_transfer_reaction_rate}, we derive a generalization of Marcus theory \cite{marcus_electron_1993} adapted to the \textit{reacton}'s formation inside the electromagnetic cavity.
We improve on the existing theory of Ref.\cite{herrera_cavity-controlled_2016} by adapting a theoretical framework derived by Kestner et al. \cite{kestner_thermal_1974} for describing ET reactions in solution.
This makes it possible to incorporate the solvent explicitly into the reaction mechanism by using the separation of time-scales between fast intramolecular vibrational modes along the reaction coordinate and slow vibrational modes of the solvent bath.
Compared to the more recent Refs.\cite{semenov2019electron,mandal2020polariton}, we improve several points of the theory by explicitly including both the collective coupling of $N$ molecules to the cavity-mode (rather than of a single molecule) and the presence of dissipation by the environment.
We then compute the modification of the CT reaction rate due to the formation of the \textit{reacton} inside the cavity, for a specific model of photoreaction involving a charge-transfer process in the electronic excited-state.
We show that the \textit{reacton} opens new channels for the charge-transfer mechanism.
Depending on the range of parameters, the reaction kinetics can be either slower or faster inside the cavity than outside it.
In Sec.\ref{Dissipation}, we derive the dissipation and dephasing rates induced by the cavity optical losses, non-radiative relaxation induced by molecular vibrations, and dephasing of the \textit{reacton} by the solvent bath.
For this purpose, we extend the approach derived from quantum optics in Ref.\cite{canaguier-durand_non-markovian_2015} using the dressed-atom approach \cite{cohen1998atom} to our case of a many-body \textit{reacton} basis.
In Sec.\ref{Ultrafast_reaction_kinetics}, we solve numerically the
whole ultrafast picosecond kinetics of the photoreaction.
We develop a rate-equation approach that we solve numerically, obtaining the
time-dependent evolution of the reactant and product concentrations inside the cavity after a
single photon has been absorbed to initiate the reaction.
Despite strong cavity losses and dissipation induced by the solvent, we predict fingerprints of the \textit{reacton} formation that should be visible on picosecond time-scales.
Finally, we develop in Sec.\ref{Perspectives} some open perspectives in this field that are of interest for the design and engineering of a new generation of open chemical reactors, the kinetics of which is modulated by vacuum quantum fluctuations of the cavity electromagnetic field.
\section{Theoretical modelling}
\label{Theoretical_Modelling}
\subsection{Microscopic Hamiltonian}
\label{Microscopic_Hamiltonian}
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{fig1.png}
\caption{
(a) Pictorial representation of molecules of (E)-4-(2-(1-methylpyridin-1-ium-4-yl)vinyl)phenolate, in solution inside a nanofluidic Fabry-P\'{e}rot cavity.
%
The nomenclature describes this photoactive molecule in its aromatic form.
%
%
%
$\lambda_c/2 = \pi c/ n \omega_c$ is half the wavelength of the fundamental cavity electromagnetic mode, with $c$ the speed of light and $n$ the refractive index of the medium.
%
(b) Sketch of the PES for such molecules as a function of the RC.
%
The electronic ground-state minima $g$ and $g'$ and excited-state minima $e$ and $f$ for the molecule are presented as well as their typical energies $\varepsilon_g$, $\varepsilon_{g'}$, $\varepsilon_e$ and $\varepsilon_f$ in eV.
%
The grey arrow stands for the cavity-mode of frequency $\omega_c$ that is resonant with the $g-e$ electric dipole transition.
%
}
\label{fig:Fig1}
\end{figure}
We investigate the chemical reactivity of a solution of molecules inside a Fabry-P\'{e}rot nanofluidic cavity.
For this purpose, biphenyl molecules have been studied extensively \cite{maus_excited_2002,herrera_cavity-controlled_2016}, since they have interesting photochemical properties due to a rotational degree of freedom around a C-C bond connecting the phenyl groups, as well as a possibility of being functionalized by various chemical groups with electron donating or accepting character.
Other donor-acceptor molecules with an internal high-frequency vibrational mode are also good candidates for investigating CT reaction rates in solution.
In our paper, we consider typical organic molecules with interesting photoactive properties, embedded inside the cavity.
Such is the case for the molecule represented in Fig.\ref{fig:Fig1}a, named (E)-4-(2-(1-methylpyridin-1-ium-4-yl)vinyl)phenolate; this nomenclature describes the structure of the molecule in its aromatic form.
We show in Fig.\ref{fig:Fig1}b a sketch of the PES for such a molecule, described within the Born-Oppenheimer approximation \cite{tully_perspective_2000}, as a function of the reaction coordinate (RC).
The RC corresponds to an intra-molecular vibration or a rotation mode of the molecule.
The electronic structure of this molecule is described by an electronic ground-state with two relative minima labelled $g$ and $g'$, and an electronic excited-state with two minima $e$ and $f$.
Upon photoexcitation from $g$ to $e$, the molecule can reach the more stable excited-state $f$,
by changing its conformation and undergoing an elementary CT process.
For simplicity, we approximate the complex electronic structure of the molecule by displaced parabolic PES \cite{herrera_cavity-controlled_2016}, in the spirit of the parabolic approximation in Marcus theory \cite{marcus_electron_1993}.
We consider the system made of N molecules in solution coupled to a single electromagnetic cavity-mode (see Fig.\ref{fig:Fig1}a).
We write the microscopic Hamiltonian $\mathcal{H}$ describing this system
\begin{eqnarray}
\mathcal{H} = H_{\rm{CaM}} + V_{\rm{M-Ca}} + V_{\rm{CT}}
\label{H_CaM1}\,,
\end{eqnarray}
as the sum of the Hamiltonian $H_{\rm{CaM}}$ describing the free electromagnetic cavity-mode (Ca) and quadratic PES of the solvated molecules (M), plus the Hamiltonian $V_{\rm{M-Ca}}$ standing for electromagnetic interactions between the molecules and the cavity-mode.
We denote $V_{\rm{CT}}$ the Hamiltonian describing weak-coupling between
electronic excited-states $e$ and $f$ of the molecule, at the origin of charge-transfer.
Each of these Hamiltonians is given by
\begin{eqnarray}
&&H_{\rm{CaM}} = \sum_{i=1}^{N} \sum_{r=g,g',e,f} \varepsilon_{ri}
\ket{r_i} \bra{r_i} + \hbar\omega_c\left( a^\dagger a + \frac{1}{2} \right)
\,,\label{H_CaM2} \\
&&\varepsilon_{ri} = \varepsilon_{r} + \frac{\omega_\mathrm{v}^2}{2}\left(
Q_{\mathrm{v},i} - \overline{Q}_{\mathrm{v},r}\right)^2
+
\sum_{k}\frac{\omega_k^2}{2}\left(
Q_{\mathrm{S},ik} - \overline{Q}_{\mathrm{S},rk}\right)^2
\,, \nonumber \\
\label{H_CaM3}\\
&&V_{\rm{M-Ca}} = \frac{\hbar\Omega_R}{2}\sum_{i=1}^{N} \left(
\ket{e_i} \bra{g_i} a
+
a^\dagger \ket{g_i} \bra{e_i}
\right)
\,,\label{H_CaM4} \\
&&V_{\rm{CT}} = \sum_{i=1}^{N} \left(
V_{ef} \ket{e_i} \bra{f_i}
+
V^*_{ef} \ket{f_i} \bra{e_i}
\right)
\,,\label{H_CaM5}
\end{eqnarray}
with $\varepsilon_{ri}$ the PES corresponding to the electronic state $\ket{r_i}$, $r=g,g',e,f$, of molecule number $i=1,\cdots,N$.
The PES in Eq.\ref{H_CaM3} is the sum of an electronic part $\varepsilon_{r}$ (bottom of the parabola in Fig.\ref{fig:Fig1}b), plus a quadratic dependence along the nuclear coordinate $Q_{\mathrm{v},i}$ corresponding to the intra-molecular vibration mode of molecule $i$, plus molecular vibrations $Q_{\mathrm{S},ik}$ of the bath of solvent molecules labelled with a quasi-continuum index $k$.
We suppose that each molecule has the same intra-molecular vibration frequency $\omega_{\mathrm{v}}$ and bath mode frequency $\omega_k$ along the RC, independently of its electronic state $r$ (same curvature around each minimum of the bare PES in Fig.\ref{fig:Fig1}b).
We label $\overline{Q}_{\mathrm{v},r}$ and $\overline{Q}_{\mathrm{S},rk}$ the displaced nuclear equilibrium positions associated respectively to the intra-molecular and solvent modes, both depending on the electronic state $r$.
The free electromagnetic mode of the cavity is described in Eq.\ref{H_CaM2} by $a$ ($a^\dagger$) the annihilation (creation) operator of a photon excitation inside the cavity of frequency $\omega_c$.
The light-matter interaction Hamiltonian in Eq.\ref{H_CaM4} is an electric-dipole coupling term, written within rotating-wave approximation (RWA) \cite{cohen1998atom,dicke_coherence_1954,tavis_exact_1968}.
It couples the electronic ground-state $g$ to the excited-state $e$ of each molecule $i$ through the same cavity-mode, with a coupling strength given by the bare vacuum Rabi frequency $\Omega_R\equiv \mu E_0/\hbar$.
We suppose for simplicity that there is no direct dipole coupling between the $g'$ and $f$ states, either because the corresponding dipole matrix elements are weak, or the cavity frequency is detuned from the corresponding electronic transition.
We note that counter-rotating and self-dipole energy terms have been neglected in Eq.\ref{H_CaM4}.
Those terms are derived in Ref.\cite{craig1998molecular} and their effects have been investigated in depth in recent Refs.\cite{semenov2019electron,mandal2020polariton}.
They both give rise to energy shifts of the PES of relative order $\tilde{\Omega}_R/\omega_c$ compared to the standard RWA.
Those terms are thus weak but sizable in the strong (but not ultra-strong) coupling regime $(\hbar\kappa < \hbar\tilde{\Omega}_R < \hbar\omega_c)$.
As a first approximation, we neglect them in the Hamiltonian, in order to be able to derive tractable analytical approximations for computing the polaritonic PES and reaction rates.
For typical values of the collective Rabi frequency $\hbar\tilde{\Omega}_R\approx 0.2-0.7\mbox{ eV}$ and cavity frequency $\hbar\omega_c\approx 2.8 \mbox{ eV}$ in a nanofluidic cavity, the corresponding corrections are of order $7-25\%$.
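As a quick numerical check of the quoted range (a minimal sketch; the parameter values are the ones stated above), the relative correction $\tilde{\Omega}_R/\omega_c$ evaluates to:

```python
# Relative order of the counter-rotating and self-dipole terms neglected
# under the RWA, estimated as Omega_R_tilde / omega_c (all energies in eV).
def rwa_correction(rabi_eV, cavity_eV):
    return rabi_eV / cavity_eV

omega_c = 2.8                                  # cavity photon energy (eV)
low = rwa_correction(0.2, omega_c)             # weak end of the Rabi range
high = rwa_correction(0.7, omega_c)            # strong end of the Rabi range
print(f"corrections: {100 * low:.0f}% to {100 * high:.0f}%")  # 7% to 25%
```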
Finally, the matrix element $V_{ef}$ in Eq.\ref{H_CaM5} is at the origin of the intramolecular CT process between any $e$ and $f$ state of one molecule.
The Hamiltonian $V_{\rm{CT}}$ is supposed to be a weak perturbation to the Hamiltonian $\mathcal{H}_0= H_{\rm{CaM}} + V_{\rm{M-Ca}}$ containing the molecular population coupled to the cavity-mode, but uncoupled to the excited-states $f$ and $g'$.
This approach holds in the incoherent regime of electron-transfer for which $|V_{ef}| \ll k_B T$.
In the following, we denote $\Delta_{gr}= \varepsilon_r - \varepsilon_g$, the
difference of electronic energies between the molecular ground-state $g$ and the excited-state $r$.
The detuning between the cavity-mode frequency and the targeted electronic dipole transition $g-e$ is written as $\delta = \omega_c - \Delta_{ge}/\hbar$.%
\subsection{Polaritonic Potential Energy Surfaces (PPES)}
\label{PPES}
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{fig2.pdf}
\caption{
PPES for the lower polariton $\mathcal{E}_{-}$ (red triangle down), upper polariton $\mathcal{E}_{+}$ (blue triangle up) and dark states $\mathcal{E}_{D}$ (black dots).
%
Dotted curves are computed from numerical diagonalization of $\mathcal{H}_0$ (within RWA and absence of coupling to the solvent).
%
The corresponding plain curves are obtained from analytical formula in Eq.\ref{PPES1} and
Eq.\ref{DS2}.
%
The plain green and dashed yellow curves are the PES for the ground-state $\mathcal{G}$ and excited-state $\mathcal{F}$ respectively.
%
Parameters are: $N = 50$, $Q_{\mathrm{v},i}$ fixed for all $i=2,\cdots, N$ with a value equals to $10 x_{0v}$ while $Q_{\mathrm{v},1}$ is varied, $\varepsilon_{g}= 0 \mbox{ eV}$, $\varepsilon_{e}= 2.8 \mbox{ eV}$, $\varepsilon_{f}= 2.6 \mbox{ eV}$, $\hbar\omega_{c}=2.8 \mbox{ eV}$, $\hbar\omega_{\mathrm{v}}=50 \mbox{ meV}$, $\hbar\Omega_R = 0.1 \mbox{ eV}$ ($\hbar\tilde{\Omega}_{R} = 0.7 \mbox{ eV}$), $\hbar\delta= 0 \mbox{ eV}$, $\lambda_{\mathrm{v},e}=0.1 \mbox{ meV}$.
%
}
\label{fig:Fig2}
\end{figure}
In this section, we compute the polariton PES (PPES), assuming a vanishing Hamiltonian $V_{\rm{CT}}$ in Eq.\ref{H_CaM1}.
Upon quantization of the intra-molecular and solvent vibrational modes, $\mathcal{H}_0$ becomes identical to the Holstein-Tavis-Cummings Hamiltonian \cite{wu2016polarons,zeb2017exact,herrera_absorption_2017}.
In general, its eigenvalues and eigenstates have to be computed numerically.
In order to gain analytical insight into the physics underlying this diagonalization, we make use of a generalized Born-Oppenheimer approximation \cite{tully_perspective_2000,galego_cavity-induced_2015}, taking into account the time-scale separation between slow nuclear motion $(\hbar\omega_{\mathrm{v}} \approx 50 \mbox{ meV})$ and the fast dynamics of strongly-coupled electrons and cavity-mode $(\Delta_{ge} \approx \hbar\omega_c \approx 2.8 \mbox{ eV})$.
We introduce the following notations for $q_{\mathrm{v},i}=Q_{\mathrm{v},i}-\overline{Q}_{\mathrm{v},g}$ and $q_{\mathrm{S},ik}=Q_{\mathrm{S},ik}-\overline{Q}_{\mathrm{S},gk}$ the displacements of the intra-molecular and solvent vibrational modes with respect to the ground-state equilibrium nuclear configuration.
The shift of the equilibrium nuclear positions $\Delta\overline{Q}_{\mathrm{v},r}=\overline{Q}_{\mathrm{v},r}-\overline{Q}_{\mathrm{v},g}$
and $\Delta\overline{Q}_{\mathrm{S},rk}=\overline{Q}_{\mathrm{S},rk}-\overline{Q}_{\mathrm{S},gk}$ in each excited electronic state $r$ (see displaced parabolas in Fig.\ref{fig:Fig1}b), is due to electron-phonon interactions.
The corresponding electron-phonon coupling strengths are given by the reorganisation energies \cite{marcus_theory_1956,marcus_electron_1993,kestner_thermal_1974} of intra-molecular and solvent vibrations, defined respectively as $\lambda_{\mathrm{v},r}=\omega_\mathrm{v}^2\Delta\overline{Q}^2_{\mathrm{v},r}/2$
and $\lambda_{\mathrm{S},r}= \sum_k \lambda_{\mathrm{S},rk}$, with $\lambda_{\mathrm{S},rk}=\omega_k^2 \Delta\overline{Q}^2_{\mathrm{S},rk}/2$.
We introduce the usual dimensionless Huang-Rhys factors \cite{huang2000theory} $g_{\mathrm{v},r} = \Delta\overline{Q}_{\mathrm{v},r}/2x_{0\mathrm{v}}$ and $g_{\mathrm{S},rk} = \Delta\overline{Q}_{\mathrm{S},rk}/2x_{0\mathrm{S},k}$, which are nothing but the shifts of the modes' equilibrium positions in units of the zero-point motions $x_{0\mathrm{v}}=\sqrt{\hbar/2\omega_\mathrm{v}}$ and $x_{0\mathrm{S},k}=\sqrt{\hbar/2\omega_k}$.
Huang-Rhys factors are related to reorganisation energies by the relations
$g^2_{\mathrm{v},r} = \lambda_{\mathrm{v},r}/\hbar\omega_{\mathrm{v}}$ and $g^2_{\mathrm{S},rk} = \lambda_{\mathrm{S},rk}/\hbar\omega_k$.
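As a numerical illustration of these relations (the reorganization energy below is an illustrative value, not one taken from the text):

```python
import math

hbar_omega_v = 0.050   # intra-molecular vibrational quantum (eV), as in the text
lam_v = 0.080          # reorganization energy (eV); illustrative value

# Huang-Rhys factor from the reorganization energy: g^2 = lambda / (hbar*omega)
g_v = math.sqrt(lam_v / hbar_omega_v)
print(g_v)             # ~1.26 zero-point widths of equilibrium-position shift

# Round trip back to the reorganization energy:
assert abs(g_v**2 * hbar_omega_v - lam_v) < 1e-12
```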
We proceed by first pre-diagonalizing $\mathcal{H}_0$ in the limit of vanishing electron-phonon coupling $(\lambda_{\mathrm{v},r}=\lambda_{\mathrm{S},r}=0)$.
In this limit, nuclear motion and polariton dynamics factorize, so that
an exact solution can be given for the eigenenergies and eigenstates of $\mathcal{H}_0$
\cite{tavis_exact_1968}.
Finally, we compute analytically by perturbation theory \cite{cohen1998mecanique} the lowest non-vanishing order corrections in the electron-phonon coupling strength and add them to the zero-order terms to find approximate expressions of the PPES.
\subsubsection{Ground-state}
\label{Ground_state}
The exact many-body ground-state $\ket{\mathcal{G}}$ and eigenenergy $\mathcal{E}_{\mathcal{G}}$ of $\mathcal{H}_0$ are given by
\begin{eqnarray}
\ket{\mathcal{G}} &=& \ket{G}\otimes\ket{0_c}
\,,\label{Gs1} \\
\mathcal{E}_{\mathcal{G}} &=& \varepsilon_{\rm{G}} +
\sum_{i=1}^N \frac{\omega_{\mathrm{v}}^2}{2} q_{\mathrm{v},i}^2
+
\sum_{i=1}^N\sum_{k}\frac{\omega_k^2}{2} q_{\mathrm{S},ik}^2
\,,\label{Gs2}
\end{eqnarray}
with $\ket{G}=\ket{g_1,\cdots,g_N}$ the product of the electronic ground-states for $N$ molecules,
and $\ket{0_c}$ the vacuum state of the electromagnetic cavity-mode.
The ground-state PES of Eq.\ref{Gs2} is shown in Fig.\ref{fig:Fig2} (plain green curve).
It is the sum of an electronic part $\varepsilon_{\rm{G}} = N \varepsilon_{g}+\hbar\omega_c/2$, corresponding to the energy of $N$ independent molecules in their ground-state $g$ and the cavity-mode in its vacuum ground-state, plus a quadratic contribution of vibrational oscillations around the ground-state equilibrium configurations of intramolecular and solvent modes.
We note that the inclusion of counter-rotating terms in Eq.\ref{H_CaM4} would induce a
Lamb-shift of the ground-state energy that can be taken into account either by second-order perturbation theory \cite{mandal2020polariton}, or by full numerical diagonalization.
Such an effect (not considered here) becomes important in the ultrastrong coupling regime, when the collective vacuum Rabi splitting is a significant portion or larger than the optical frequency $\tilde{\Omega}_R \geq \omega_c$ \cite{ciuti_quantum_2005}.
\subsubsection{Upper and lower polaritons}
\label{Upper_lower_polaritons}
The RWA in Eq.\ref{H_CaM4} makes it possible to separate the energy sector
corresponding to at most one cavity-photon or one molecular excitation from the higher-energy sectors and from the ground-state one.
We obtain the first upper ($\ket{\rho=+}$) and lower ($\ket{\rho=-}$) polariton many-body eigenstates
as
\begin{eqnarray}
\ket{+} &=& \cos(\theta) \ket{G}\otimes\ket{1_c}
+ \sin(\theta)\ket{E_1}\otimes\ket{0_c}
\,,\label{LUP1} \\
\ket{-} &=& -\sin(\theta) \ket{G}\otimes\ket{1_c}
+ \cos(\theta) \ket{E_1}\otimes\ket{0_c}
\,,\label{LUP2}
\end{eqnarray}
with
\begin{eqnarray}
\ket{E_1} &=& \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\ket{(e_i)}
\,.\label{LUP3}
\end{eqnarray}
The coefficients in front of the many-body states in Eqs.\ref{LUP1} and \ref{LUP2} are $\cos(\theta) = \sqrt{\alpha_{-}}$ and $\sin(\theta) = \sqrt{\alpha_{+}}$ with
\begin{eqnarray}
\alpha_{\rho=\pm} &=& \frac{1}{2}\left( 1 -\rho \frac{\delta}{\tilde{\Omega}_R} \right)
\,. \label{Alpha}
\end{eqnarray}
The totally symmetric molecular state $\ket{E_1}$ is obtained as the sum of all states containing $N-1$ molecules in the ground-state and one molecule $i$ in the excited state $\ket{(e_i)}\equiv\ket{g_1,\cdots,g_{i-1},(e_i),g_{i+1},\cdots,g_N}$.
The electronic excitation in this $\ket{E_1}$ Dicke-state is thus delocalized on the whole molecular ensemble, the former playing the role of a giant collective dipole oscillating in phase with the electromagnetic cavity-mode \cite{dicke_coherence_1954}.
The polaritons in Eq.\ref{LUP1} and Eq.\ref{LUP2} are linear combinations of two states: one involving the many-body electronic ground-state $\ket{G}$ with one photon populating the cavity, and the other the collective Dicke-state $\ket{E_1}$ with the cavity in its quantum-mechanical ground-state.
The coefficients $\cos(\theta)$ and $\sin(\theta)$ are functions of both the cavity-molecule detuning $\delta$ and the collective vacuum Rabi splitting $\tilde{\Omega}_R$, given by
\begin{eqnarray}
\tilde{\Omega}_R = \sqrt{\delta^2 + \left( \Omega_R\sqrt{N}\right)^2}
\,.\label{LUP6}
\end{eqnarray}
As expected, $\tilde{\Omega}_R$ scales with $\sqrt{N}$, or more precisely with the
square-root of the molecular concentration $\sqrt{N/V}$ \cite{tavis_exact_1968,haroche_cavity_nodate,houdre_early_2005}.
At resonance between the cavity-mode frequency and the molecular transition, $\delta=0$ and $\cos(\theta)=\sin(\theta)=1/\sqrt{2}$, such that the polaritons are half-matter, half-light hybrid excitations.
At strong-detuning ($\delta \rightarrow \pm \infty$), the polariton states coincide back with the bare molecular ground and excited states.
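These statements can be checked by direct diagonalization of the single-excitation sector. A minimal sketch (zero electron-phonon coupling, energies in eV, parameters chosen to match those of Fig.\ref{fig:Fig2}):

```python
import numpy as np

# Single-excitation sector of the Tavis-Cummings Hamiltonian, in the basis
# {|G, 1_c>, |(e_1), 0_c>, ..., |(e_N), 0_c>}, with all energies in eV.
N, omega_c, delta_ge, Omega_R = 50, 2.8, 2.8, 0.1

H = np.diag([omega_c] + [delta_ge] * N)    # photon and N molecular excitations
H[0, 1:] = Omega_R / 2                     # electric-dipole coupling, RWA
H[1:, 0] = Omega_R / 2

E = np.sort(np.linalg.eigvalsh(H))
splitting = E[-1] - E[0]                   # upper minus lower polariton
# Expected: sqrt(delta^2 + N * Omega_R^2), with delta = omega_c - delta_ge = 0
print(splitting)                           # ~0.707 eV = sqrt(50) * 0.1
print(np.allclose(E[1:-1], delta_ge))      # N-1 degenerate dark states: True
```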
We obtain the PPES $\mathcal{E}_{\rho=\pm}$ corresponding to $\ket{\rho=\pm}$
\begin{eqnarray}
&&\mathcal{E}_{\rho} = \varepsilon_{\rho} +
\sum_{i=1}^N \frac{\omega_{\mathrm{v}}^2}{2} \left( q_{\mathrm{v},i} - \Delta \overline{Q}_{\mathrm{v},\rho}\right)^2
\, \nonumber \\
&& \qquad \quad \; +
\sum_{i=1}^N\sum_{k}\frac{\omega_k^2}{2} \left( q_{\mathrm{S},ik} - \Delta \overline{Q}_{\mathrm{S},\rho k}\right)^2
\,, \label{PPES1}
\end{eqnarray}
with $\varepsilon_{\rho}$ the polariton energy
\begin{eqnarray}
\varepsilon_{\rho} &=& \varepsilon_{\rm{G}} + \hbar\omega_c -
\frac{\hbar}{2}\left( \underline{\delta} - \rho \underline{\tilde{\Omega}}_R \right)
\,, \label{PPES4} \\
\underline{\delta} &=& \delta - \frac{\lambda_{\mathrm{v},e}+\lambda_{\mathrm{S},e}}{\hbar}
\left(
1 - \frac{\alpha^2_+ + \alpha^2_-}{N}
\right)
\,, \label{PPES5} \\
\underline{\tilde{\Omega}}_R &=& \tilde{\Omega}_R - \frac{\lambda_{\mathrm{v},e}+\lambda_{\mathrm{S},e}}{\hbar}\frac{\delta}{\tilde{\Omega}_R}
\left(
1 - \frac{1}{N}
\right)
\,, \label{PPES6}
\end{eqnarray}
and $\Delta \overline{Q}_{\mathrm{v},\rho}$ and $\Delta \overline{Q}_{\mathrm{S},\rho k}$ the respective shifts in the intra-molecular and solvent modes' equilibrium positions
\begin{eqnarray}
\Delta \overline{Q}_{\mathrm{v},\rho} &=& \alpha_{\rho} \frac{\Delta \overline{Q}_{\mathrm{v},e}}{N}
\,, \label{PPES2} \\
\Delta \overline{Q}_{\mathrm{S},\rho k} &=& \alpha_{\rho}\frac{\Delta \overline{Q}_{\mathrm{S},ek}}{N}
\,. \label{PPES3}
\end{eqnarray}
As expected, the polariton energy in Eq.\ref{PPES4} depends
on both the molecule-cavity detuning $\underline{\delta}$ and collective vacuum Rabi frequency $\underline{\tilde{\Omega}}_R$.
However, at lowest-order in the electron-phonon coupling strength, both quantities are renormalized in Eq.\ref{PPES5} and Eq.\ref{PPES6}, and become explicitly dependent on the
intra-molecular and solvent reorganization energies (respectively $\lambda_{\mathrm{v},e}$ and $\lambda_{\mathrm{S},e}$), as well as on the number $N$ of molecules.
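Setting $\hbar = 1$ (energies in eV), Eq.\ref{PPES5} and Eq.\ref{PPES6} can be evaluated directly; a minimal sketch with illustrative numbers:

```python
import math

# Renormalized detuning and collective Rabi frequency, Eqs. (PPES5)-(PPES6),
# with hbar = 1 and all energies in eV.
def renormalized(delta, Omega_R, N, lam_tot):
    Om = math.sqrt(delta**2 + N * Omega_R**2)   # bare collective splitting
    a_p = 0.5 * (1 - delta / Om)                # alpha_+
    a_m = 0.5 * (1 + delta / Om)                # alpha_-
    d_u = delta - lam_tot * (1 - (a_p**2 + a_m**2) / N)
    Om_u = Om - lam_tot * (delta / Om) * (1 - 1 / N)
    return d_u, Om_u

# Illustrative numbers: resonance, N = 50, Omega_R = 0.1 eV, lam = 0.1 meV
d_u, Om_u = renormalized(0.0, 0.1, 50, 1e-4)
print(d_u)    # -lam_tot * (1 - 1/(2N)): the detuning acquires a small shift
print(Om_u)   # sqrt(50) * 0.1: the Rabi correction vanishes at resonance
```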
At this level of approximation, the contribution of nuclear motion to the PPES in Eq.\ref{PPES1} is still quadratic, but with new equilibrium positions $\Delta \overline{Q}_{\mathrm{v},\rho}$ and $\Delta \overline{Q}_{\mathrm{S},\rho k}$ for the intra-molecular and solvent modes, both depending on detuning, collective Rabi frequency and number of molecules.
The shifts in equilibrium positions in Eq.\ref{PPES2} and Eq.\ref{PPES3} are the same for each molecule, thus corresponding to the excitation of a long-range vibrational mode, in which each molecular vibration couples in phase with the same polariton.
In the large-$N$ limit, we recover the collective decoupling mechanism between nuclear motion and the polariton derived in Ref.\cite{herrera_cavity-controlled_2016}, for which the nuclear equilibrium positions return to the ground-state configuration ($\Delta \overline{Q}_{\mathrm{v},\rho} \approx 0$ and $\Delta \overline{Q}_{\mathrm{S},\rho k} \approx 0$ when $N \gg 1$).
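A one-line numerical check of this $1/N$ suppression (Eq.\ref{PPES2}, at resonance where $\alpha_{\pm}=1/2$; arbitrary units):

```python
# Equilibrium-position shift of the polariton PES, Eq. (PPES2):
# Delta_Qbar_rho = alpha_rho * Delta_Qbar_e / N, with alpha = 1/2 at resonance.
def polariton_shift(dQ_e, N, alpha=0.5):
    return alpha * dQ_e / N

dQ_e = 1.0                                   # bare excited-state shift (a.u.)
for N in (1, 10, 100, 5000):
    print(N, polariton_shift(dQ_e, N))       # vanishes as 1/N for N >> 1
```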
Eq.\ref{PPES1} is a direct physical consequence of the generalized Born-Oppenheimer approximation and of the perturbation expansion at lowest order in the electron-phonon coupling strength.
This approach generalizes previous results of Refs.\cite{herrera_cavity-controlled_2016,semenov2019electron,mandal2020polariton} by taking into account on the same footing the finite number $N$ of molecules, finite molecule-cavity detuning, and dressing of the polariton by molecular vibrations of the solvent environment.
The PPES in Eq.\ref{PPES1} interpolates smoothly between the limits of single-molecule $N=1$ and large number of molecules $N\gg 1$ inside the cavity (terms of leading order $\approx 1/N$).
It is also consistent with previous methods of approximation based on the use of the variational polaron ansatz \cite{toyozawa1954theory,silbey_electronic_1976,bera2014generalized,herrera_cavity-controlled_2016}.
We present in Fig.\ref{fig:Fig2} the PPES for the lower polariton state $\mathcal{E}_{\rm{-}}$
(plain red curve) and upper polariton state $\mathcal{E}_{\rm{+}}$ (plain blue curve) as obtained
from Eq.\ref{PPES1}.
For comparison, the PPES obtained by numerical diagonalization of $\mathcal{H}_0$ (within RWA) are plotted in Fig.\ref{fig:Fig2} as lower and upper triangles, standing respectively for the lower and upper PPES.
We show a very good matching between the exact numerical curves and the analytical results of Eq.\ref{PPES1}, in the moderate to strong-coupling regime for which the effective Rabi frequency
satisfies $\hbar \omega_c > \hbar \tilde{\Omega}_R > \lambda_{\mathrm{v},e}, \lambda_{\mathrm{S},e}, \hbar\omega_{\mathrm{v}}$.
\subsubsection{Dark states}
\label{Dark_States}
The spectrum of $\mathcal{H}_0$ in the single-photon excitation sector also contains a manifold of
$N-1$ degenerate states uncoupled to the cavity-mode.
The expression of those dark states $\ket{\mathcal{D}_{p}}$ is more complex than that of the bright polaritons \cite{dukalski2013high,ozhigov2016space}.
It can be obtained exactly in the case of vanishing electron-phonon coupling strength
\begin{eqnarray}
\ket{\mathcal{D}_{p}} &=& \frac{1}{\sqrt{p+1}} \left(
\frac{1}{\sqrt{p}}\sum_{j=1}^{p} \ket{\left(e_j\right)}
-
\sqrt{p} \ket{\left(e_{p+1}\right)}
\right)
\otimes\ket{0_c}
\,, \nonumber \\
\label{DS1} \\
\mathcal{E}_{\mathcal{D}_{p}} &=& \varepsilon_{\rm{D}} +
\sum_{i=1}^N
\frac{\omega_{\mathrm{v}}^2}{2} q_{\mathrm{v},i}^2
+
\sum_{i=1}^N\sum_{k}\frac{\omega_k^2}{2} q_{\mathrm{S},ik}^2
\,,\label{DS2}
\end{eqnarray}
with $p=1,\cdots,N-1$ an index labelling the dark state, and $\varepsilon_{\rm{D}}\equiv \varepsilon_{\mathrm{G}} + \Delta_{ge}$ the dark state energy (independent of $p$).
Within RWA, those states do not couple directly to the optical cavity-mode.
Their PES in Eq.\ref{DS2} is thus independent of the collective vacuum Rabi splitting.
In the case of finite arbitrary electron-phonon interactions, the dark PES can only be computed numerically, similarly to the Holstein polaron problem \cite{holstein1959studies}.
We obtain numerically a lifting of the dark PES degeneracy, with the creation of a miniband of states between the lower and upper polaritons.
The miniband width is proportional to the total reorganization energy $\lambda_{\mathrm{v},e}+\lambda_{\mathrm{S},e}$.
The coupling to molecular vibrations thus broadens the manifold of dark states, in the same way as inhomogeneous static disorder \cite{houdre_vacuum-field_1996}.
We plot in Fig.\ref{fig:Fig2} the miniband of dark PES $\mathcal{E}_{\mathcal{D}_{p}}$ obtained numerically (black dots), compared to the analytical PES given by Eq.\ref{DS2} (plain black curve).
The former is a good approximation to the average position of the miniband.
In the rest of the paper, we will use the analytical expression given by Eq.\ref{DS2},
even in cases for which the electron-phonon interaction is finite, which is a good approximation
if the broadening of the miniband is smaller than the vacuum Rabi splitting.
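The broadening of the dark manifold can be illustrated numerically. A minimal sketch, modelling the vibrational dressing (as the comparison with static disorder in the text suggests) by random diagonal shifts of strength comparable to the reorganization energy; all parameters are illustrative:

```python
import numpy as np

# Single-excitation sector as before, but with random diagonal shifts of
# order lam added to each molecular excitation energy (static-disorder model
# of the vibrational dressing). Energies in eV.
rng = np.random.default_rng(0)
N, omega_c, delta_ge, Omega_R, lam = 50, 2.8, 2.8, 0.1, 0.05

H = np.diag([omega_c] + list(delta_ge + lam * rng.uniform(-1, 1, size=N)))
H[0, 1:] = Omega_R / 2
H[1:, 0] = Omega_R / 2

E = np.sort(np.linalg.eigvalsh(H))
miniband_width = E[-2] - E[1]     # spread of the N-1 formerly dark states
print(miniband_width)             # of order lam: a miniband opens
print(E[-1] - E[0])               # polariton splitting still ~ sqrt(N)*Omega_R
```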
Finally, there are additional eigenstates of $\mathcal{H}_0$ that do not couple to the optical cavity-mode and are thus ``dark'', but play an important role regarding the chemical reactivity of the confined molecules.
Such is the case for the excited-states $\ket{(r_i)}\equiv\ket{g_1,\cdots,g_{i-1},(r_i),g_{i+1},\cdots,g_N}$ containing the molecule number $i$ in the excited electronic state $r=f$ or $r=g'$, while the remaining $N-1$ molecules are in the ground-state $g$.
The corresponding many-body state $\ket{(\mathcal{R}_i)}$ and eigenenergy $\mathcal{E}_{\rm{\mathcal{R}_i}}$ for $r=f,\,g'$ are given by
\begin{eqnarray}
&&\ket{\mathcal{R}_i} = \ket{\left(r_i\right)}\otimes\ket{0_c}
\,, \label{DS3} \\
&&\mathcal{E}_{\rm{\mathcal{R}_i}} = \varepsilon_{\rm{R}} +
\sum_{j=1,j\neq i}^N
\frac{\omega_{\mathrm{v}}^2}{2} q_{\mathrm{v},j}^2
+
\sum_{j=1,j\neq i}^N \sum_{k}\frac{\omega_k^2}{2} q_{\mathrm{S},jk}^2
\nonumber \\
&& \quad +
\frac{\omega_{\mathrm{v}}^2}{2} \left( q_{\mathrm{v},i} - \Delta \overline{Q}_{\mathrm{v},r} \right)^2
+
\sum_{k}\frac{\omega_k^2}{2} \left( q_{\mathrm{S},ik} - \Delta \overline{Q}_{\mathrm{S},rk} \right)^2
\,, \nonumber \\
\label{DS4}
\end{eqnarray}
with $\varepsilon_{\rm{R}}\equiv \varepsilon_{\mathrm{G}} + \Delta_{gr}$ the $r$-state energy.
The corresponding PES $\mathcal{E}_{\rm{\mathcal{R}_i}}$ are $N$-fold degenerate.
We plot $\mathcal{E}_{\rm{\mathcal{F}_i}}$ in Fig.\ref{fig:Fig2} as a dashed yellow curve.
\subsubsection{The concept of \textit{reacton}}
\label{Reacton}
The PPES in the subsections Sec.\ref{Upper_lower_polaritons} and Sec.\ref{Dark_States} have a simple interpretation.
They arise from the collective dipole coupling between the electronic $g$ and $e$ states
of the molecules and a single electromagnetic cavity-mode, resulting in the formation of a polariton.
This polariton gets further dressed by interactions with a bath of intra-molecular and solvent vibrational modes, thus sharing some similarities with the concept of polaron \cite{holstein1959studies} in solid-state physics.
The dressed polariton is however more complex than a single polaron excitation, since it involves many different energy scales \cite{hutchison_modifying_2012} ranging from molecular vibrational frequencies $\hbar\omega_{\mathrm{v}} \approx 10 \mbox{ meV}$, electronic transitions and cavity optical frequency $\Delta_{ge} \approx \hbar\omega_c \approx 2 \mbox{ eV}$, as well as the collective vacuum Rabi frequency $\hbar\tilde{\Omega}_R\approx 0.7 \mbox{ eV}$ that is intermediate between the vibronic and optical frequency scales.
We call this dressed and collective polariton excitation a \textit{reacton}, since, as we will show later, the formation of this entity modifies significantly the chemical properties of confined and resonant molecules inside the cavity.
The concept of \textit{reacton} is a key concept that generalizes and unifies several previous investigations in the field of polaritonic chemistry \cite{galego_many-molecule_2017,herrera_cavity-controlled_2016,cwik_excitonic_2016},
and shares conceptual similarities to the \textit{dressed-atom} approach in quantum optics \cite{canaguier-durand_non-markovian_2015, cohen1998atom}.
While in this paper we compute the \textit{reacton} properties within the
range of validity of the Born-Oppenheimer approximation \cite{galego_cavity-induced_2015}, in general, those have to be computed numerically self-consistently \cite{cwik_excitonic_2016}.
\section{Charge-transfer reaction rate}
\label{Charge_transfer_reaction_rate}
In this section, we investigate the modification of chemical reactivity for cavity-confined molecules, induced by the \textit{reacton} formation.
Due to the weak but non-vanishing matrix elements ($V_{ef} \ne 0$) in the Hamiltonian $V_{\rm{CT}}$ (see Eq.\ref{H_CaM5}), molecules that are in the excited electronic state $e$ (valley of reactant) may undergo a CT process towards the other excited electronic state $f$ (valley of product), assisted by a reorganization of the molecular nuclei configuration.
The theoretical framework for describing the kinetics of such CT chemical reactions in solution was developed mainly by the works of Marcus \cite{marcus_theory_1956,marcus_electron_1993,siders_quantum_1981},
Kestner et al. \cite{kestner_thermal_1974}, Freed et al. \cite{freed_multiphonon_1970} and Hopfield \cite{hopfield_electron_1974}.
Our approach generalizes this framework to the case of PPES for the chemical reaction written in the \textit{reacton} basis (see Sec.\ref{PPES}), rather than in the bare molecular basis.
\subsubsection{Marcus theory applied to the \textit{reacton}}
\label{Marcus_Theory_Reacton}
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{fig3.png}
\caption{CT thermal reaction rate inside cavity $k_{\rm{CT}}$ (yellow dotted curve) as a function of the bare reaction driving-force $\Delta_{ef}$.
The total rate $k^{(\rm{tot})}_{\mathrm{CT}}$ is presented as a purple plain curve.
Classical contributions of the PPES to $k_{\rm{CT}}$ are shown as dashed curves for the rates $k^{(\mathrm{cl})}_{\rm{CT},\mathcal{F}-}$ (in red), $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}+}$ (in blue), and $k^{(\rm{cl})}_{\mathrm{CT},\mathcal{FD}}$ (in black).
The thermal rate $k^{(0)}_{\rm{CT}}$ and classical rate $k^{(0,\mathrm{cl})}_{\rm{CT}}$ outside cavity (for $\hbar\tilde{\Omega}_R \approx 0.0 \mbox{ eV}$) are shown respectively as
dashed-dotted green and cyan curves.
The grey $(a)$ and $(b)$ arrows are two specific values of $\Delta_{ef}$, the first one corresponding to the molecule of Fig.\ref{fig:Fig1}.
Chosen parameters are: $N=5000$, $k_BT= 26 \mbox{ meV}$, $\varepsilon_{g}=0 \mbox{ eV}$, $\varepsilon_{e}=2.8 \mbox{ eV}$, $\varepsilon_{f}=2.6 \mbox{ eV}$, $\hbar\omega_c= 2.8 \mbox{ eV}$, $\hbar\omega_{\mathrm{v}}= 50 \mbox{ meV}$, $\hbar\omega_k = 0.1 \mbox{ meV}$, $\hbar\Omega_R= 10 \mbox{ meV}$ ($\hbar\tilde{\Omega}_R= 0.7 \mbox{ eV}$), $\hbar\delta= 0 \mbox{ eV}$, $\lambda_{\mathrm{v},e}=0.1 \mbox{ meV}$ ($\tilde{\lambda}_{\mathrm{v},\rho\mathcal{F}}= 80 \mbox{ meV}$), $\lambda_{\mathrm{S},e}=0 \mbox{ meV}$ ($\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}= 10 \mbox{ meV}$).
}
\label{fig:Fig3}
\end{figure}
Using Fermi's golden rule, we compute the CT thermal reaction rate $k_{\rm{CT}}$ \cite{kestner_thermal_1974,siders_quantum_1981}.
In the \textit{reacton} basis, $k_{\rm{CT}}$ is the sum
of the partial rates $k_{\mathrm{CT},\mathcal{F}\rho}$ from each PPES $\rho=\pm,\mathcal{D}$ of the reactant valley towards the product valley $\mathcal{F}$.
This sum is weighted by Boltzmann factors accounting for the thermal occupation of the reactant valley \footnote{We generalize this approach in Sec.\ref{Ultrafast_reaction_kinetics} to cases where the occupations of the PPES are out-of-equilibrium.}
\begin{eqnarray}
k_{\rm{CT}} &=& \sum_{\rho=\pm,\mathcal{D}} \frac{e^{-\varepsilon_{\rho}/k_BT}}{Z_e} k_{\mathrm{CT},\mathcal{F}\rho}
\,, \label{kET1} \\
k_{\mathrm{CT},\mathcal{F}\rho} &=& \alpha_{\rho}\frac{2\pi}{\hbar}|V_{ef}|^2
\mathcal{L}_{\mathrm{v},\rho\mathcal{F}} \star
\mathcal{L}_{\rm{cl}}\left(\Delta_{\rho\mathcal{F}},\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}\right)
\,, \label{kET2}
\end{eqnarray}
with $\Delta_{\rho\mathcal{F}}=\varepsilon_{\rm{F}}-\varepsilon_{\rho}$ the driving-force of the chemical reaction, and $\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}$ the solvent reorganization energies renormalized by the \textit{reacton} formation, given by
\begin{eqnarray}
\tilde{\lambda}_{\mathrm{S},\pm\mathcal{F}} &=& \sum_k \hbar\omega_k\left( g_{\mathrm{S},fk} - \alpha_\pm \frac{g_{\mathrm{S},ek}}{N} \right)^2 + \alpha_\pm \frac{\lambda_{\mathrm{S},e}}{2N}\left(1-\frac{1}{N}\right)
\,, \nonumber \\
\label{Reorg1}
\end{eqnarray}
and $\tilde{\lambda}_{\mathrm{S},\mathcal{D}\mathcal{F}} = \lambda_{\mathrm{S},f}$.
We denote by $Z_e$ the partition function of the reactant valley, and by $\alpha_{\rho}$ the prefactors, given by Eq.\ref{Alpha} for $\alpha_{\pm}$, with $\alpha_{\mathcal{D}}=1$.
Interestingly, the CT rate in Eq.\ref{kET2} is the convolution $\mathcal{L}_{\mathrm{v},\rho\mathcal{F}} \star \mathcal{L}_{\rm{cl}}\left(E,\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}\right)\equiv\int dE' \mathcal{L}_{\mathrm{v},\rho\mathcal{F}}\left(E'\right)
\mathcal{L}_{\rm{cl}}\left(E-E',\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}\right)$ between an intra-molecular vibrational lineshape $\mathcal{L}_{\mathrm{v},\rho\mathcal{F}}\left(E \right)$ and a solvent lineshape $\mathcal{L}_{\rm{cl}}\left(E \right)$.
As expected, the bath of solvent modes broadens the intra-molecular vibrational lineshape along the RC.
We note here the usual separation of time scales between ``fast" intra-molecular vibrons
$(\hbar\omega_{\mathrm{v}} \approx 50 \mbox{ meV} > k_BT)$ and ``slow" vibrational modes of the solvent
$(\hbar\omega_{k} \approx 0.1 \mbox{ meV} < k_BT)$.
This implies that in general, $\mathcal{L}_{\mathrm{v},\rho\mathcal{F}}\left(E \right)$ has to be computed considering quantum mechanical vibrational modes \cite{kestner_thermal_1974}, while $\mathcal{L}_{\rm{cl}}\left(E \right)$ is obtained in the limit of classical vibrational modes by the standard Gaussian lineshape \cite{o1953absorption,lax1952franck,kubo1955generating,marcus_electron_1993,kestner_thermal_1974}
\begin{eqnarray}
\mathcal{L}_{\mathrm{v},\rho\mathcal{F}}\left(E \right) &=&
\sum_{n_{\mathrm{v}},m_{\mathrm{v}}=0}^{+\infty}
F_{n_{\mathrm{v}} m_{\mathrm{v}}}
\delta\left\lbrack E + \hbar\omega_{\mathrm{v}}\left( m_{\mathrm{v}} - n_{\mathrm{v}} \right) \right\rbrack
\,, \label{kET3bis} \\
\mathcal{L}_{\rm{cl}}\left(E,\lambda\right) &=&
\frac{1}{\sqrt{4\pi \lambda k_B T}}
\exp{\left\lbrack
-\frac{ \left( E + \lambda \right)^2}{4\lambda k_B T} \right\rbrack}
\,. \label{kET3}
\end{eqnarray}
The coefficient $F_{n_{\mathrm{v}} m_{\mathrm{v}}}$ in Eq.\ref{kET3bis} is defined by
\begin{eqnarray}
F_{n_{\mathrm{v}} m_{\mathrm{v}}} = e^{-g^2_{\mathrm{v},\rho\mathcal{F}}\left( 1 + 2 \overline{n}_{\mathrm{v}} \right)}
\frac{g^{2\left( n_{\mathrm{v}} + m_{\mathrm{v}} \right)}_{\mathrm{v},\rho\mathcal{F}}}{n_{\mathrm{v}}! m_{\mathrm{v}}!}
\left( 1 + \overline{n}_{\mathrm{v}} \right)^{m_{\mathrm{v}}}
\overline{n}_{\mathrm{v}}^{n_{\mathrm{v}}}
\,, \label{kET5}
\end{eqnarray}
with $\overline{n}_{\mathrm{v}} \equiv n_{\rm{B}}\left( \hbar\omega_{\mathrm{v}} \right)$ the thermal equilibrium Bose distribution $n_{\rm{B}}\left( E \right)=\left( e^{E/k_BT} - 1\right)^{-1}$ for the intra-molecular vibrational modes.
It involves the Franck-Condon overlap $|\left\langle n_{\mathrm{v}}|\tilde{m}_{\mathrm{v}}\right\rangle|^2$ between the vibrational state $\ket{n_{\mathrm{v}}}$ belonging to the valley of reactants and the vibrational state $\ket{\tilde{m}_{\mathrm{v}}}$ belonging to the valley of products \cite{siders_quantum_1981}, the two modes being mutually displaced according to the renormalized Huang-Rhys factors
\begin{eqnarray}
g^2_{\mathrm{v},\pm\mathcal{F}} = \left( g_{\mathrm{v},f} - \alpha_\pm \frac{g_{\mathrm{v},e}}{N} \right)^2 + \alpha_\pm \frac{g^2_{\mathrm{v},e}}{2N}\left(1-\frac{1}{N}\right)
\,, \label{Reorg2}
\end{eqnarray}
and $g^2_{\mathrm{v},\mathcal{D}\mathcal{F}} = g^2_{\mathrm{v},f}$.
Using Eq.\ref{kET2}, Eq.\ref{kET3bis} and Eq.\ref{kET3}, we derive the final form for the CT thermal reaction rates
\begin{eqnarray}
k_{\rm{CT},\mathcal{F}\rho} &=& \alpha_{\rho}\frac{2\pi}{\hbar} |V_{ef}|^2
\sum_{n_{\mathrm{v}},m_{\mathrm{v}}=0}^{+\infty}
F_{n_{\mathrm{v}} m_{\mathrm{v}}}
\mathcal{L}_{\rm{cl}}\left(\Delta^{n_{\mathrm{v}}m_{\mathrm{v}}}_{\rho\mathcal{F}},\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}\right)
\,, \nonumber \\
\label{kET4}
\end{eqnarray}
with $\Delta^{n_{\mathrm{v}}m_{\mathrm{v}}}_{\rho\mathcal{F}}$ the partial driving-force of the CT reaction involving the exchange of $m_{\mathrm{v}} - n_{\mathrm{v}}$ molecular phonons
\begin{eqnarray}
\Delta^{n_{\mathrm{v}}m_{\mathrm{v}}}_{\rho\mathcal{F}} = \Delta_{\rho\mathcal{F}} + \hbar\omega_{\mathrm{v}}\left( m_{\mathrm{v}} - n_{\mathrm{v}} \right)
\,. \label{kET6}
\end{eqnarray}
Eq.\ref{kET4} is one of the main results of this paper.
Compared to standard Marcus theory \cite{marcus_electron_1993} and previous works in polaritonic chemistry \cite{herrera_cavity-controlled_2016,semenov2019electron,mandal2020polariton},
we derived the CT reaction rate taking into account the \textit{reacton} formation, which
includes the contributions of the collective PPES $\rho=\pm,\mathcal{D}$, delocalized over the whole molecular ensemble, that are available to the chemical reaction.
We notice that due to the collective nature of the \textit{reacton}, not only the reaction driving-force $\Delta_{\rho\mathcal{F}}$ is modified (see Eq.\ref{kET6}), but also the intra-molecular vibrational Huang-Rhys factors $g^2_{\mathrm{v},\rho\mathcal{F}}$ (see Eq.\ref{Reorg2}) and the solvent reorganization energies $\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}$ (see Eq.\ref{Reorg1}).
Finally, in the limit of ``slow" intra-molecular vibrational mode $\omega_{\mathrm{v}}<k_BT/\hbar$, Eq.\ref{kET1} formally recovers the ``semi-classical" approximation derived by Marcus \cite{siders_quantum_1981}.
In this limit, we obtain the classical CT thermal rate $k^{(\rm{cl})}_{\rm{CT}}$
\begin{eqnarray}
k^{(\rm{cl})}_{\rm{CT}} &=& \sum_{\rho=\pm,\mathcal{D}} \frac{e^{-\varepsilon_{\rho}/k_BT}}{Z_e} k^{(\rm{cl})}_{\rm{CT},\mathcal{F}\rho} \,, \label{kET7} \\
k^{(\rm{cl})}_{\rm{CT},\mathcal{F}\rho} &=& \alpha_{\rho}\frac{2\pi}{\hbar} |V_{ef}|^2
\mathcal{L}_{\rm{cl}}\left( \Delta_{\rho\mathcal{F}},\tilde{\Lambda}_{\rho\mathcal{F}}\right)
\,,\label{kET8}
\end{eqnarray}
with total reorganization energy $\tilde{\Lambda}_{\rho\mathcal{F}}=\tilde{\lambda}_{\mathrm{v},\rho\mathcal{F}}+\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}$.
\subsubsection{CT reaction rate inside the cavity}
\label{Reaction_Rate_Inside_Cavity}
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{fig4.png}
\caption{
Thermal reaction rate $k_{\rm{CT}}$ (plain yellow curve) and partial classical reaction rate $k^{(\rm{cl})}_{\mathrm{CT},\mathcal{F}-}$ (dashed red curve) as a function of the number of coupled molecules $N$.
%
The thermal rate out of cavity $k^{(0)}_{\rm{CT}}$ is shown as a green dotted curve.
%
Parameters are those of Fig.\ref{fig:Fig3}, except for $N$, with the reaction driving-force
fixed to the value $\Delta_{ef}=-0.2 \mbox{ eV}$.
%
The case $N=5000$ is shown by the grey arrow ($a$) as in Fig.\ref{fig:Fig3}.
%
}
\label{fig:Fig4}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\linewidth]{fig5.png}
\caption{
Same plot as in Fig.\ref{fig:Fig4}, but with the reaction driving-force
fixed to the value $\Delta_{ef}=-0.4 \mbox{ eV}$.
%
The case $N=5000$ is shown by the grey arrow ($b$) as in Fig.\ref{fig:Fig3}. %
}
\label{fig:Fig5}
\end{figure}
In the following, we focus on the case of room temperature $k_B T=26 \mbox{ meV}$ and a cavity frequency $\omega_c$ that is resonant ($\delta=0$) with the molecular transition $\Delta_{ge}/\hbar = 2.8 \mbox{ eV}/\hbar$ in Fig.\ref{fig:Fig1}b.
For a typical Fabry-P\'{e}rot cavity of area $10^4 \mbox{ }\mu m^2$, with mirrors separated by half the fundamental optical cavity-mode wavelength $\lambda_c/2 \approx \pi c/\omega_c \approx 0.221 \mbox{ }\mu m$ (for $n\approx 1$), and for molecules of electric dipole moment $\mu \approx 5 \mbox{ D}$, we estimate a very weak bare vacuum Rabi-splitting $\hbar\Omega_R \approx 0.35 \mbox{ }\mu \mbox{eV}$.
In the best cases, for which the molecules are on average packed $25\,\AA$ away from each other and equally coupled to the cavity mode, we estimate a maximum number of embedded molecules $N \approx 10^{11}$, leading to an upper bound for the collective vacuum Rabi-splitting of about $\hbar\tilde{\Omega}_R = 0.11 \mbox{ eV}$.
This value is consistent with reported experimental values of $\tilde{\Omega}_R$ in nanofluidic Fabry-P\'{e}rot cavities \cite{bahsoun_electronic_2018}.
For simplicity and illustrative purposes, we adopt a much larger value of the bare vacuum Rabi-splitting $\hbar\Omega_R = 10 \mbox{ meV}$, consistent with the highest single-molecule-cavity couplings $(\approx 100 \mbox{ meV})$ reported in plasmonic cavities \cite{chikkaraddy2016single}.
We consider a population of $N=5000$ molecules coherently coupled to the same optical cavity mode,
for which the collective vacuum Rabi-splitting $\hbar\tilde{\Omega}_R = 0.7 \mbox{ eV}$ is close to reported experimental values in optical microcavities \cite{hutchison_modifying_2012}.
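The quoted collective splitting follows from the $\sqrt{N}$ scaling $\tilde{\Omega}_R \approx \Omega_R\sqrt{N}$; a one-line numerical check under the stated parameters (variable names are ours):

```python
import math

Omega_R = 0.010  # bare vacuum Rabi-splitting, eV
N = 5000         # number of coherently coupled molecules

# Collective vacuum Rabi-splitting: ~0.707 eV, the value quoted in the text
Omega_coll = Omega_R * math.sqrt(N)
```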
Finally, we choose the frequency of intra-molecular vibrational modes $\omega_{\mathrm{v}} \approx 50 \mbox{ meV}$ and solvent ones $\omega_k \approx 0.1 \mbox{ meV}$.
The dressed reorganization energies are fixed to $\tilde{\lambda}_{\mathrm{v},\rho\mathcal{F}} = 80 \mbox{ meV}$ and $\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}} = 10 \mbox{ meV}$ leading to a total reorganization energy $\tilde{\Lambda}_{\rho\mathcal{F}}=90\mbox{ meV}$.
This choice corresponds to a solvent that is sufficiently apolar \cite{leontyev2005reorganization} not to screen electric interactions in solution too strongly, yet still sufficiently polar for solvent fluctuations to impact the kinetics of the CT reaction.
We present in Fig.\ref{fig:Fig3}, the evolution of the CT thermal reaction rate $k_{\rm{CT}}$ (yellow dotted curve) computed from Eq.\ref{kET1} in units of
\begin{eqnarray}
k_{e}\equiv\frac{2\pi}{\hbar} |V_{ef}|^2/\sqrt{4 \pi \tilde{\Lambda}_{-\mathcal{F}}k_B T}
\,,\label{ke_init}
\end{eqnarray}
as a function of the bare reaction driving-force $\Delta_{ef}\equiv \varepsilon_f - \varepsilon_e$, at fixed $\tilde{\Omega}_R = 0.7 \mbox{ eV}$.
For comparison, we plot the total rate $k^{(\rm{tot})}_{\rm{CT}}=\sum_{\rho=\pm,\mathcal{D}} k_{\rm{CT},\mathcal{F}\rho} $ (purple plain curve), which is the sum of the contributions of each PPES to the reaction rate.
The partial and classical CT rates $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}\rho}$ given by Eq.\ref{kET8} are also plotted as dashed curves.
We find that the contribution of dark states $k^{(\rm{cl})}_{\rm{CT},\mathcal{FD}}$ (in black) dominates over the two polariton satellite peaks of half amplitude, $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}-}$ (in red) and $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}+}$ (in blue).
These satellite peaks depend strongly on both the detuning $\delta$ and the collective vacuum Rabi frequency $\tilde{\Omega}_R$.
They are given by two Gaussian satellite peaks centered on
$\Delta_{ef} \approx - \tilde{\Lambda}_{\pm\mathcal{F}} + \left( \lambda_{\mathrm{v},e} + \lambda_{\mathrm{S},e} \pm \hbar\tilde{\Omega}_R \right)/2$, thus $\approx\pm 350\mbox{ meV}$ away from the main dark state peak.
The standard deviation of those curves is $\approx\sqrt{2\tilde{\Lambda}_{\pm\mathcal{F}} k_BT}$, corresponding to a full width at half maximum (FWHM) of $\approx 161 \mbox{ meV}$.
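The quoted width follows directly from the Gaussian standard deviation $\sqrt{2\tilde{\Lambda}_{\pm\mathcal{F}} k_BT}$; a minimal arithmetic check using the parameters of Fig.\ref{fig:Fig3}:

```python
import math

Lam, kBT = 0.090, 0.026            # total reorganization energy and k_B*T, eV
sigma = math.sqrt(2 * Lam * kBT)   # Gaussian standard deviation, ~68 meV

# FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma, ~0.161 eV as quoted in the text
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma
```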
We remark that the actual CT thermal rate $k_{\rm{CT}}$ is very well approximated by the classical contribution of the lower polariton $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}-}$.
On the one hand, this is due to the fact that $\hbar \tilde{\Omega}_R \gg k_B T$, so that only the
lowest-energy PPES channel is significantly populated at thermal equilibrium and is thus open for the ET reaction: the other channels $k^{(\rm{cl})}_{\rm{CT},\mathcal{FD}}$ and $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}+}$ are far away in energy and thus do not contribute significantly to $k_{\rm{CT}}$ \footnote{This can be different in out-of-equilibrium situations, like the one of Sec.\ref{Ultrafast_reaction_kinetics}.}.
On the other hand, we do not expect a priori the classical approximation in Eq.\ref{kET8} to hold, since for our range of parameters the intra-molecular vibrational modes are quantum mechanically frozen $(k_B T < \hbar\omega_{\mathrm{v}})$.
Departures from the Gaussian limit are indeed seen in the numerical plots, manifesting as weak vibrational sidebands and asymmetries in the tails of the $k_{\rm{CT}}(\Delta_{ef})$ curve.
These features are partially smeared out by the convolution of the intra-molecular lineshape with the solvent lineshape in Eq.\ref{kET2}, which explains the unexpectedly good qualitative agreement of the CT rate with its classical limit (see also Ref.\cite{siders_quantum_1981}).
\subsubsection{Tuning the CT reaction rate}
\label{Tuning_ET_Rate}
\begin{table}[h!]
\centering
\footnotesize
\begin{tabular}{c|c c c c}
\hline\hline
Rates &
\colorbox{orange}{$k_{\mathcal{G}\mathcal{G}^{\prime}}$} &
\colorbox{orange}{$k_{\mathrm{CT},\mathcal{F}-}$} &
\colorbox{orange}{$k_{\mathrm{CT},\mathcal{FD}}$} & \colorbox{orange}{$k_{\mathrm{CT},\mathcal{F}+}$} \\ [0.5ex]
\hline
meV & 3.7 & 41.4 & 42.2 & 0.001 \\
THz & 0.9 & 10 & 10.2 & 0.0003 \\ [1ex]
\hline
\end{tabular}
\caption{Table of computed and dominant thermal reaction rates.
The parameters are those of Fig.\ref{fig:Fig3}, for $\Delta_{ef} = -0.2 \mbox{ eV}$ (see grey arrow $(a)$).
}
\label{table:Table1}
\end{table}
To complete the picture of the reaction kinetics, we show in Fig.\ref{fig:Fig4} (plain yellow curve) the CT thermal rate $k_{\rm{CT}}$ inside the cavity (with $\hbar\Omega_R=10\mbox{ meV}$) and the same rate $k^{(0)}_{\rm{CT}}$ outside the cavity (for which $\Omega_R=0$), as a function of the number $N$ of molecules coupled to the cavity-mode.
The parameters are those of Fig.\ref{fig:Fig3}, with the reaction driving-force fixed at $\Delta_{ef}\approx -0.2 \mbox{ eV}$.
This choice of $\Delta_{ef}$ corresponds to the PES for the chosen molecule in Fig.\ref{fig:Fig1}b.
For the case $(a)$ labelled by a grey arrow and corresponding to $N=5000$ and $\Delta_{ef} = -0.2 \mbox{ eV}$, we find in both Fig.\ref{fig:Fig3} and Fig.\ref{fig:Fig4} that $k_{\rm{CT}} \ll k^{(0)}_{\rm{CT}}$, so that the reaction kinetics gets much slower inside than outside cavity.
Interestingly, in Fig.\ref{fig:Fig4}, the CT rate does not evolve monotonically with $N$.
It first increases with $N$, reaching a maximum at $N \approx 500$ for which $k_{\rm{CT}} > k^{(0)}_{\rm{CT}}$, and finally decreases towards $0$ with $k_{\rm{CT}} \ll k^{(0)}_{\rm{CT}}$ at large $N$.
There is thus an optimal value of $N$ (and thus of the concentration $N/V$ of coupled molecules) for which the effect of vacuum quantum fluctuations of the cavity mode is maximum.
We attribute this behavior to the modulation of the reaction driving-force $\Delta_{\rho\mathcal{F}}$ by the collective vacuum Rabi splitting $\tilde{\Omega}_R \approx \Omega_R \sqrt{N}$.
The maximum of $k^{(\rm{cl})}_{\rm{CT},\mathcal{F}-}$ in Eq.\ref{kET8} is obtained
at the transition point to the inverted Marcus region, as shown in Fig.\ref{fig:Fig4} (dashed red curve).
This optimal sensitivity of the CT reaction rate close to the inverted region of the Marcus parabola is in contrast to Ref.\cite{herrera_cavity-controlled_2016}, which reported a monotonic increase of the reaction rate with $N$ in the resonant nuclear tunneling regime.
We provide in Table \ref{table:Table1} typical values of the cavity-induced CT reaction rates $k_{\rm{CT},\mathcal{F}\rho}$ associated with case $(a)$.
Furthermore, we estimate the reaction rate $k_{\mathcal{G'}\mathcal{G}}$ from the manybody ground-state $\mathcal{G}$ to the other manybody state $\mathcal{G}'$, using transition-state theory \cite{evans_applications_1935,wigner_transition_1938,kramers_brownian_1940,eyring_activated_1935}
\begin{eqnarray}
k_{\mathcal{G}'\mathcal{G}} &=& k_0 e^{-\frac{\Delta_{\mathcal{G}\mathrm{TS}}}{k_BT}}
\,,\label{TST1}
\label{arrh}
\end{eqnarray}
with the energy barrier $\Delta_{\mathcal{G}\mathrm{TS}}=\varepsilon_{\mathrm{TS}}-\varepsilon_{\mathcal{G}}$ between the ground-state and the transition-state, and the typical reaction rate $k_0 \approx k_B T/2\pi\hbar$.
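Numerically, the prefactor $k_0 \approx k_BT/2\pi\hbar$ sets an attempt rate of a few THz at room temperature. The following short sketch of Eq.\ref{TST1} (variable names are ours) makes this explicit:

```python
import math

hbar = 6.582119569e-16  # reduced Planck constant, eV*s
kBT = 0.026             # room temperature, eV

# Typical attempt rate k_0 = k_B*T / (2*pi*hbar), ~6.3e12 s^-1
k0 = kBT / (2 * math.pi * hbar)

def k_tst(barrier):
    """Transition-state-theory rate of Eq. (TST1); barrier in eV."""
    return k0 * math.exp(-barrier / kBT)
```

A barrier of a few $k_BT$ already suppresses the rate by an order of magnitude, as expected from the Arrhenius form.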
Finally, for completeness, we show in Fig.\ref{fig:Fig5} the evolution of the CT thermal rate $k_{\rm{CT}}$ inside the cavity as a function of the number $N$ of coupled molecules, but for a different value of the reaction driving-force, fixed at $\Delta_{ef}\approx -0.4 \mbox{ eV}$.
For case $(b)$ shown as a grey arrow, for which $N=5000$ and $\Delta_{ef} = -0.4 \mbox{ eV}$, the kinetics of the CT reaction is much faster inside than outside cavity ($k_{\rm{CT}} \gg k^{(0)}_{\rm{CT}})$ in both Fig.\ref{fig:Fig3} and Fig.\ref{fig:Fig5}.
We find a similar trend as in Fig.\ref{fig:Fig4}, with a non-monotonous evolution of the CT rate with $N$.
It is thus interesting to notice that depending on the range of parameters (reaction driving-force, number of molecules, detuning), the reaction kinetics can be either slowed down or accelerated significantly by interaction with the cavity mode.
\section{Dissipation}
\label{Dissipation}
\subsubsection{Microscopic model for dissipation}
\label{Microscopic_model_dissipation}
\begin{figure}[tbh]
\includegraphics[width=0.8\linewidth]{fig6.png}
\caption{Schematics of dissipation and dephasing rates originating from interaction between the \textit{reacton} states and the external environment.
Radiative relaxation rates are presented as gold arrows, while non-radiative relaxation and dephasing rates are both respectively shown with light-blue arrows.
The reaction rates involved in the photochemical reaction are pictured with orange double arrows.
}
\label{fig:Fig6}
\end{figure}
In this section, we introduce a minimal microscopic model of dissipation and dephasing, induced by coupling of the \textit{reacton} states to the external environment (see Fig.\ref{fig:Fig6}).
We consider two main external environments, namely the electromagnetic environment (EM) of the cavity-mode described by the Hamiltonian $H_{\rm{EM}}$ in Eq.\ref{H_Diss2}, and the solvent vibrational environment (ph) modelled by the Hamiltonian $H_{\rm{ph}}$ in Eq.\ref{H_Diss3}.
We denote $V_{\rm{Ca-EM}}$ (in Eq.\ref{H_Diss4}) the interaction Hamiltonian between the cavity-mode and the external EM environment at the origin of photon-losses out of the cavity \footnote{We did not take into account terms at the origin of spontaneous emission in Eq.\ref{H_Diss1}, since the former occurs on nanosecond time scale while we investigate here the picosecond relaxation dynamics of the \textit{reacton}. Including spontaneous emission to our model would be straightforward.}, and $V_{\rm{M-ph}}$ (in Eq.\ref{H_Diss5}) the general Hamiltonian describing coupling between the solvated molecules and the vibrational modes of the solvent.
The total Hamiltonian $\mathcal{H}_{\mathcal{R}-\rm{env}}$ describing the external bath environments (env) and their coupling to the \textit{reacton} ($\mathcal{R}$) is given by
\begin{eqnarray}
&&\mathcal{H}_{\mathcal{R}-\rm{env}} = H_{\rm{EM}} + H_{\rm{ph}} + V_{\rm{Ca-EM}} + V_{\rm{M-ph}}
\label{H_Diss1}\,,\\
&&H_{\rm{EM}} = \sum_{q} \hbar\omega_q a^\dagger_q a_q
\,,\label{H_Diss2} \\
&&H_{\rm{ph}} = \sum_{i=1}^{N} \sum_{k} \hbar\omega_k b^\dagger_{ik} b_{ik}
\,,\label{H_Diss3} \\
&&V_{\rm{Ca-EM}} = i\hbar\sum_{q} \left( f_q a_q^\dagger a -
f^*_q a^\dagger a_q \right)
\,,\label{H_Diss4} \\
&&V_{\rm{M-ph}} = \sum_{i=1}^{N} \sum_{k} \left( b_{ik} + b^\dagger_{ik} \right) \lbrace
\lambda_{e,ik} \ket{e_i} \bra{e_i}
\nonumber \\
&& +
\lambda_{ge,ik} \left( \ket{g_i} \bra{e_i} + \ket{e_i} \bra{g_i} \right)
\rbrace
\,,\label{H_Diss5}
\end{eqnarray}
with $\omega_q$ and $\omega_k$, the respective frequencies of the electromagnetic and vibrational modes of the baths.
$a^\dagger_q$ is the creation operator for a photon in the external EM mode with momentum $q$, while $b^\dagger_{ik}$ is the creation operator for a vibron in the solvent bath associated with molecule $i$, with quasi-momentum $k$.
In Eq.\ref{H_Diss4}, $f_q$ is the probability amplitude for a cavity-photon to tunnel out
of the cavity to the EM bath \cite{gardiner1985input,ciuti2006input}.
The electron-phonon interactions in Eq.\ref{H_Diss5} couple the quantized phonon displacement operators $ b_{ik} + b^\dagger_{ik}$ to both the electronic density of the excited state $e$ of molecule $i$ with amplitude $\lambda_{e,ik}$ (Holstein-like term \cite{holstein1959studies}) and to the off-diagonal hopping terms between states $e$ and $g$ with amplitude $\lambda_{ge,ik}$ (Su-Schrieffer-Heeger-like terms \cite{su1979solitons}).
We note that the bare PES given by Eq.\ref{H_CaM3} in Sec.\ref{Microscopic_Hamiltonian} arises (before second quantization) from electron-phonon interactions similar to the ones described by the Holstein-like terms of Eq.\ref{H_Diss5}.
There thus seems to be a redundancy in the writing of $V_{\rm{M-ph}}$.
However, this is not the case, since the manybody \textit{reacton} wavefunctions derived in Sec.\ref{Upper_lower_polaritons} and Sec.\ref{Dark_States} are not the exact eigenstates of the Hamiltonian $\mathcal{H}$ (in Eq.\ref{H_CaM1}), but only approximate ones.
Moreover, Eq.\ref{H_CaM3} does not contain the off-diagonal coupling terms
which are present in Eq.\ref{H_Diss5} and which induce contributions to the vibrational relaxation rates.
\subsubsection{Radiative relaxation}
\label{Radiative_relaxation}
\begin{table}[h!]
\centering
\footnotesize
\begin{tabular}{c|c c}
\hline\hline
Rates & \colorbox{oldgold}{$\Gamma_{\mathcal{G}-}$} & \colorbox{oldgold}{$\Gamma_{\mathcal{G}+}$} \\ [0.5ex]
\hline
meV & 28 & 28 \\
THz & 6.8 & 6.8 \\ [1ex]
\hline
\end{tabular}
\caption{Table of computed radiative relaxation rates due to cavity losses.
Parameters: same as in Fig.\ref{fig:Fig3}, for $\Delta_{ef} = -0.2 \mbox{ eV}$ (see grey arrow $(a)$).
The cavity quality factor is $Q=50$, which corresponds to a bare cavity damping rate $\kappa\approx 56 \mbox{ meV}$.
}
\label{table:Table2}
\end{table}
We consider the interaction Hamiltonian $V_{\rm{Ca-EM}}$ as a perturbation to the Hamiltonian $\mathcal{H} + H_{\rm{EM}} + H_{\rm{ph}}$ (see Eq.\ref{H_CaM1} and Eq.\ref{H_Diss1}).
We use Fermi's Golden Rule to compute the radiative relaxation rate $\Gamma_{\mathcal{G}\rho}$ from the manybody PPES state $\rho=\pm$ to the manybody ground-state $\mathcal{G}$ induced by $V_{\rm{Ca-EM}}$
(see Fig.\ref{fig:Fig6}, downward gold arrows).
We obtain
\begin{eqnarray}
\Gamma_{\mathcal{G}\rho} = \alpha_{-\rho}
\int dE \kappa \left(\frac{E}{\hbar}\right) J^{(em)} \left( E \right)
\mathcal{L}_{\mathrm{ph},\mathcal{G}\rho}\left(E-\Delta_{\mathcal{G}\rho}\right)
\,, \nonumber \\
\label{Rad_Rate_1}
\end{eqnarray}
with $\Delta_{\mathcal{G}\rho} = \varepsilon_{\rho} - \varepsilon_{\mathrm{G}}$
and $\kappa(\omega=E/\hbar)= 2\pi |f(\omega)|^2 \nu_{\rm{EM}}(\omega)$
the energy-dependent radiative dissipation rate of the cavity,
given by the product of the matrix-element square $|f_{q}|^2$ evaluated at energy $\hbar\omega_q\equiv\hbar\omega$, and the density of states of the external electromagnetic bath $\nu_{\rm{EM}}(\omega)=\sum_q \delta\left( \omega - \omega_q \right)$.
The factor $J^{(em)} \left( E \right) = 1 + n_{\rm{B}}(E)$ is associated to the emission (em) process of a photon into the electromagnetic environment that assists the downward transition.
The decay rate $\Gamma_{\mathcal{G}\rho}$ is the convolution between the cavity spectral distribution $\kappa(E/\hbar)J^{(em)} \left( E \right)$ and the generalized vibrational lineshape $\mathcal{L}_{\mathrm{ph},\mathcal{G}\rho}\left(E\right)\equiv \mathcal{L}_{\mathrm{v},\rho\mathcal{F}} \star \mathcal{L}_{\rm{cl}}\left(E,\tilde{\lambda}_{\mathrm{S},\rho\mathcal{F}}\right)$ obtained in Sec.\ref{Marcus_Theory_Reacton}.
Eq.\ref{Rad_Rate_1} is a generalization of Refs.\cite{canaguier-durand_non-markovian_2015,pino_quantum_2015,Martinez_2018} to the case of the manybody \textit{reacton} states.
We now use the simplifying assumptions that i) the energy-dependent vibrational lineshape $\mathcal{L}_{\rm{ph},\mathcal{G}\rho}\left( E \right)$ is narrower than the cavity lineshape $\kappa\left( E/\hbar \right)$, such that $\Gamma_{\mathcal{G}\rho} \approx \alpha_{-\rho} J^{(em)} \left( \Delta_{\mathcal{G}\rho}\right) \kappa \left(\Delta_{\mathcal{G}\rho}\right)$, and ii) the energy dependence of $\kappa(\omega)\approx \kappa(\omega_c) \equiv \kappa$ can be neglected on the scale of the energy difference $\Delta_{\mathcal{G}\rho}$ of the considered radiative transition (Markovian assumption), such that
\begin{eqnarray}
\Gamma_{\mathcal{G}\rho} \approx \alpha_{-\rho} J^{(em)} \left( \Delta_{\mathcal{G}\rho} \right) \kappa
\,. \label{Rad_Rate_2}
\end{eqnarray}
Within assumptions i) and ii), we obtain the corresponding upward transition rates $\Gamma_{\rho\mathcal{G}}$ from the ground-state $\mathcal{G}$ to the polariton state $\rho=\pm$ as
\begin{eqnarray}
\Gamma_{\rho\mathcal{G}} &\approx& \alpha_{-\rho} J^{(abs)} \left( \Delta_{\mathcal{G}\rho} \right) \kappa
\,, \label{Rad_Rate_3}
\end{eqnarray}
with $J^{(abs)} \left( E \right) = n_{\rm{B}}(E)$ associated to the absorption (abs) process of a photon of the electromagnetic environment during the upward transition.
We notice, however, that at room temperature $\hbar\omega_c \gg k_B T$ for the cavity mode, so that in practice $n_{\rm{B}}\left( \Delta_{\mathcal{G}\rho} \right) \ll 1$ and
$\Gamma_{\mathcal{G}\rho} \approx \alpha_{-\rho}\kappa \gg \Gamma_{\rho\mathcal{G}} \approx 0$.
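This hierarchy of downward and upward rates can be checked numerically. The sketch below assumes resonance, where each polariton is half photonic ($\alpha_{-\rho}=1/2$), and uses the parameters of Table \ref{table:Table2}; variable names are ours.

```python
import math

def n_bose(E, kBT=0.026):
    """Thermal Bose occupation, energies in eV."""
    return 1.0 / (math.exp(E / kBT) - 1.0)

kappa = 0.056    # bare cavity damping for Q = 50 and hbar*omega_c = 2.8 eV, in eV
alpha = 0.5      # photonic weight alpha_{-rho} of each polariton at resonance
delta_G = 2.8    # transition energy Delta_{G,rho}, eV

# Eq. (Rad_Rate_2): downward (emission) rate, ~alpha*kappa = 28 meV as in Table 2
gamma_down = alpha * (1 + n_bose(delta_G)) * kappa
# Eq. (Rad_Rate_3): upward (absorption) rate, exponentially suppressed at 2.8 eV
gamma_up = alpha * n_bose(delta_G) * kappa
```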
We note that by relaxing assumption ii) while keeping assumption i) valid, one recovers the non-Markovian calculation of the radiative relaxation made in Ref.\cite{canaguier-durand_non-markovian_2015}, which was postulated to be at the origin of the much shorter observed lifetime of the upper polariton compared to the lower one.
In the following, we make use of both approximations i) and ii), since they are the ones that require the least knowledge about the microscopic damping mechanism.
Generalization of Eq.\ref{Rad_Rate_1} is possible if additional information about the energy-dependence of both the optical-cavity and vibrational lineshapes becomes available from experiments.
We estimate in Table \ref{table:Table2} the values of typical radiative relaxation rates $\Gamma_{\mathcal{G}\rho}$ written in the \textit{reacton} basis (downward gold arrows in Fig.\ref{fig:Fig6}), from the knowledge of the bare cavity damping rate $\kappa$ and optical-cavity quality factor $Q$ in experiments \cite{schwartz2013polariton,wang2014quantum,canaguier-durand_non-markovian_2015,bahsoun_electronic_2018}.
\subsubsection{Non-radiative relaxation}
\label{Non_Radiative_relaxation}
\begin{table}[h!]
\centering
\footnotesize
\begin{tabular}{c|c c c c}
\hline\hline
Rates & \colorbox{richelectricblue}{$\gamma_{\mathcal{G^{\prime}}\mathcal{F}}$} & \colorbox{richelectricblue}{$\gamma_{\mathcal{G}\pm}$} & \colorbox{richelectricblue}{$\gamma_{\mathcal{G}\mathcal{D}}$} & \colorbox{richelectricblue}{$\gamma_{\mathcal{D}+}$} \\ [0.5ex]
\hline
meV & 6.6 & 3 & 6 & 41.3 \\
THz & 1.6 & 0.7 & 1.4 & 10 \\ [1ex]
\hline
\end{tabular}
\caption{Table of computed and dominant non-radiative relaxation rates due to electron-phonon interactions.
The parameters are those of Fig.\ref{fig:Fig3}, for $\Delta_{ef} = -0.2 \mbox{ eV}$ (see grey arrow $(a)$).
The bare vibronic relaxation rate is $\gamma_{\mathrm{v}}\approx 6 \mbox{ meV}$ and the dephasing rate is chosen to be $\gamma_\phi\approx 82.7 \mbox{ meV}$.
}
\label{table:Table3}
\end{table}
We compute the non-radiative relaxation rates induced by the SSH-like contributions to the electron-phonon interaction Hamiltonian $V_{\rm{M-ph}}$ in Eq.\ref{H_Diss5}.
We suppose for simplicity that the off-diagonal matrix elements $\lambda_{ge,ik}\equiv \lambda_{ge} (\omega_k)$ are independent of the molecular index.
Using approximations similar to i) and ii) of Sec.\ref{Radiative_relaxation}, we obtain for the
dominant downward rates
\begin{eqnarray}
\gamma_{\mathcal{G}\rho} &\approx& \alpha_{\rho} \gamma_{\mathrm{v}} \,, \label{Non_Rad_Rate_1} \\
\gamma_{\mathcal{G}\mathcal{D}} &\approx& \gamma_{\mathrm{v}}
\,, \label{Non_Rad_Rate_2}
\end{eqnarray}
with $\gamma_{\mathrm{v}}(\omega)= 2\pi |\lambda_{ge}(\omega)|^2 \nu_{\mathrm{v}}(\omega)/\hbar^2$ the vibronic relaxation rate given by the product of the matrix-element square $|\lambda_{ge}(\omega_k)|^2$ evaluated at energy $\hbar\omega_k\equiv\hbar\omega$ and the density of states of the vibronic bath $\nu_{\mathrm{v}}(\omega)=\sum_k \delta\left( \omega - \omega_k \right)$.
From Eq.\ref{Non_Rad_Rate_2}, we see that the SSH coupling terms in Eq.\ref{H_Diss5} open a relaxation channel between the dark states manifold $\mathcal{D}$ and the ground-state $\mathcal{G}$.
Finally, the remaining Holstein-like terms in Eq.\ref{H_Diss5} induce
additional vibrationally-assisted relaxation rates.
Adopting the same approximation for the diagonal matrix elements $\lambda_{e,ik}\equiv \lambda_{e} (\omega_k)$, we obtain
\begin{eqnarray}
\gamma_{-+} &\approx& \frac{\alpha_+\alpha_-}{N} J^{(em)} \left( \Delta_{-+}\right)\gamma_\phi
\,, \label{Dephasing_Rate_1} \\
\gamma_{+-} &\approx& \frac{\alpha_+\alpha_-}{N} J^{(abs)} \left( \Delta_{-+}\right)\gamma_\phi
\,, \label{Dephasing_Rate_2} \\
\gamma_{\mathcal{D}+} &\approx& \alpha_{+} \left( 1 - \frac{1}{N}\right)
J^{(em)} \left( \Delta_{\mathcal{D}+}\right)\gamma_\phi \,, \label{Dephasing_Rate_3} \\
\gamma_{+\mathcal{D}} &\approx& \frac{\alpha_+}{N} J^{(abs)} \left( \Delta_{\mathcal{D}+} \right)
\gamma_\phi \,, \label{Dephasing_Rate_4} \\
\gamma_{\mathcal{D}-} &\approx& \alpha_-\left( 1 - \frac{1}{N}\right)
J^{(abs)} \left( \Delta_{-\mathcal{D}}\right)\gamma_\phi \,, \label{Dephasing_Rate_5} \\
\gamma_{-\mathcal{D}} &\approx& \frac{\alpha_-}{N} J^{(em)} \left( \Delta_{-\mathcal{D}}\right)
\gamma_\phi \,, \label{Dephasing_Rate_6}
\end{eqnarray}
with the transition energies $\Delta_{-+}=\varepsilon_+ - \varepsilon_-$, $\Delta_{\mathcal{D}+}=\varepsilon_+ - \varepsilon_{\mathrm{D}}$,
$\Delta_{-\mathcal{D}}=\varepsilon_{\mathrm{D}} - \varepsilon_-$, and
the dephasing rate $\gamma_\phi(\omega)= 2\pi |\lambda_{e}(\omega)|^2 \nu_{\mathrm{v}}(\omega)/\hbar^2$.
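As an illustration of how these dressed rates reproduce the values of Table \ref{table:Table3}, the following sketch evaluates Eq.\ref{Dephasing_Rate_3} and Eq.\ref{Dephasing_Rate_4} assuming resonance ($\alpha_+=1/2$, $\Delta_{\mathcal{D}+}=\hbar\tilde{\Omega}_R/2=0.35$ eV) and $\gamma_\phi = 82.7$ meV; variable names are ours.

```python
import math

def n_bose(E, kBT=0.026):
    """Thermal Bose occupation, energies in eV."""
    return 1.0 / (math.exp(E / kBT) - 1.0)

gamma_phi = 0.0827   # bare dephasing rate, eV
alpha_plus = 0.5     # polariton weight at resonance
N = 5000             # number of coupled molecules
delta_Dp = 0.35      # transition energy Delta_{D+}, eV

# Eq. (Dephasing_Rate_3): upper-polariton to dark-states relaxation, ~41.3 meV
gamma_Dp = alpha_plus * (1 - 1 / N) * (1 + n_bose(delta_Dp)) * gamma_phi
# Eq. (Dephasing_Rate_4): reverse process, thermally activated and 1/N-suppressed
gamma_pD = (alpha_plus / N) * n_bose(delta_Dp) * gamma_phi
```

The strong asymmetry $\gamma_{\mathcal{D}+} \gg \gamma_{+\mathcal{D}}$ reflects both the Bose factor at $\Delta_{\mathcal{D}+} \gg k_BT$ and the $1/N$ suppression of the reverse channel.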
We note that the Holstein coupling terms in Eq.\ref{H_Diss5}, being diagonal in the bare (uncoupled) molecular basis, induce pure dephasing in this initial basis (a contribution to the decay of the off-diagonal matrix elements of the molecular density matrix).
However, when expressed in the dressed (coupled) \textit{reacton} manybody basis, those terms become responsible for the opening of additional relaxation channels between the polariton states $\pm$ and the dark-state manifold $\mathcal{D}$, as well as for relaxation between the upper and lower polaritons.
In reference to the nature of the initial dephasing mechanism in the uncoupled basis, we choose to keep designating the bare rate $\gamma_\phi$ and the dressed rates derived above as ``dephasing" rates.
This is in contrast to the convention used, for instance, in Ref.\cite{pino_quantum_2015}.
Our theoretical approach to computing the vibrational relaxation rates is consistent with Refs.\cite{pino_quantum_2015,Martinez_2018}, which focused on the vibrational strong-coupling regime in microcavities.
We provide in Table \ref{table:Table3} typical values \cite{schwartz2013polariton,wang2014quantum,canaguier-durand_non-markovian_2015,bahsoun_electronic_2018} for the bare vibronic relaxation rate $\gamma_{\mathrm{v}}$, the bare vibronic dephasing rate $\gamma_\phi$, as well as for the computed dominant dressed relaxation rates obtained from Eq.\ref{Non_Rad_Rate_1} to Eq.\ref{Dephasing_Rate_6} (see blue arrows in Fig.\ref{fig:Fig6}).
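As an illustration, the dressed rates of Eq.\ref{Dephasing_Rate_1} to Eq.\ref{Dephasing_Rate_6} can be evaluated numerically. The sketch below uses purely illustrative Hopfield weights $\alpha_\pm$, molecule number $N$, transition energies and bare rate $\gamma_\phi$ (none taken from the tables), and it \emph{assumes} detailed-balance forms $J^{(abs)}(\Delta)=n_B(\Delta)$ and $J^{(em)}(\Delta)=1+n_B(\Delta)$ for the absorption/emission weights; these forms are a common modelling choice, not the definitions used in the text.

```python
import math

# Illustrative (hypothetical) parameters -- NOT values from the paper.
alpha_p, alpha_m = 0.5, 0.5   # Hopfield weights of upper/lower polariton
N = 100                       # number of coupled molecules
gamma_phi = 1.0               # bare vibronic dephasing rate (arbitrary units)
kT = 25.0                     # thermal energy (meV), ~room temperature
D_mp = 1.4                    # Delta_{-+} = eps_+ - eps_-  (meV)
D_Dp = 0.7                    # Delta_{D+} = eps_+ - eps_D  (meV)
D_mD = 0.7                    # Delta_{-D} = eps_D - eps_-  (meV)

def n_B(w):
    """Bose-Einstein occupation at energy w."""
    return 1.0 / math.expm1(w / kT)

# Assumed detailed-balance forms of the emission/absorption weights
J_em  = lambda w: 1.0 + n_B(w)
J_abs = lambda w: n_B(w)

rates = {
    "-+": alpha_p * alpha_m / N * J_em(D_mp) * gamma_phi,   # UP -> LP
    "+-": alpha_p * alpha_m / N * J_abs(D_mp) * gamma_phi,  # LP -> UP
    "D+": alpha_p * (1 - 1 / N) * J_em(D_Dp) * gamma_phi,   # UP -> dark
    "+D": alpha_p / N * J_abs(D_Dp) * gamma_phi,            # dark -> UP
    "D-": alpha_m * (1 - 1 / N) * J_abs(D_mD) * gamma_phi,  # LP -> dark
    "-D": alpha_m / N * J_em(D_mD) * gamma_phi,             # dark -> LP
}
for name, value in rates.items():
    print(f"gamma_{name} = {value:.4e}")
```

Under the assumed forms, the downward/upward ratios obey detailed balance, $\gamma_{-+}/\gamma_{+-}=e^{\Delta_{-+}/k_BT}$, and the funneling rate $\gamma_{\mathcal{D}+}$ exceeds the reverse rate $\gamma_{+\mathcal{D}}$ by a factor $\sim N$, reflecting the $(1-1/N)$ versus $1/N$ prefactors of Eq.\ref{Dephasing_Rate_3} and Eq.\ref{Dephasing_Rate_4}.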
\section{Ultrafast reaction kinetics}
\label{Ultrafast_reaction_kinetics}
\subsubsection{Rate-equation in the \textit{reacton} basis}
\label{RE_Reacton}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth]{fig7.pdf}
\caption{Probabilities $P_i(t)$ of occupying the \textit{reacton} states $i$, as a function of time $t$ in units of $1/k_{0}$ (defined in Eq.\ref{arrh}).
%
The parameters are those of Fig.\ref{fig:Fig3} case $(a)$, and Tables \ref{table:Table1}, \ref{table:Table2} and \ref{table:Table3}.
%
}
%
\label{fig:Fig7}
\end{figure}
In this section, we compute the (out-of-equilibrium) occupation probabilities $P_i(t)$ of the \textit{reacton} states $i$ involved in the whole photochemical process, as a function of time $t$.
Chemical reactions (see Sec.\ref{Tuning_ET_Rate}), radiative relaxation (see Sec.\ref{Radiative_relaxation}) and non-radiative relaxation mechanisms (see Sec.\ref{Non_Radiative_relaxation}) induced by the environment drive incoherent transitions amongst the
\textit{reacton} states (see arrows in Fig.\ref{fig:Fig6}).
We describe the resulting time-evolution of the populations by a rate-equation, written in the \textit{reacton} basis
\begin{eqnarray}
\dot{\underline{P}}(t) &=& \mathbb{\Gamma} \underline{P}(t)
\,, \label{RE_1} \\
\underline{P}(0) &=& \frac{1}{2}\left\lbrack 0,0,1,0,1,0 \right\rbrack
\,, \label{RE_2}
\end{eqnarray}
with $\underline{P}(t)=\left\lbrack P_{\mathcal{G}}(t), P_{\mathcal{G}'}(t), P_-(t), P_{\mathcal{D}}(t), P_+(t), P_{\mathcal{F}}(t)\right\rbrack$, the vector of populations
$P_i(t)$, and $\mathbb{\Gamma}$ the rate-matrix with matrix-elements $\mathbb{\Gamma}_{ij}$ corresponding to the total transition rate (including chemical reaction rates, radiative and non-radiative relaxation rates) from the manybody state $j$ to the manybody state $i$.
The initial condition $\underline{P}(0)$ corresponds physically to an initial photon that has been absorbed at $t=0^-$ in order to initiate the photoreaction at $t=0^+$.
For a resonant situation $(\delta=0)$, this leads to the choice $P_-(0)=P_+(0)=1/2$ in Eq.\ref{RE_2}.
The solution of Eq.\ref{RE_1} with the initial condition of Eq.\ref{RE_2} is found by computing numerically $\underline{P}(t) = e^{\mathbb{\Gamma} t} \underline{P}(0)$.
The vector of populations can be expressed more conveniently as a linear combination of exponentially damped eigenmodes characterizing the whole photochemical process
\begin{eqnarray}
\underline{P}(t) &=& \underline{P}^{(st)} + \sum_{\lambda \ne 0} c_\lambda \underline{v}_\lambda e^{\lambda t}
\,, \label{RE_3} \\
c_\lambda &=& ^t\underline{w}_\lambda \: \underline{P}(0) \equiv \frac{w_{\lambda,-}+ w_{\lambda,+}}{2}
\,, \label{RE_4}
\end{eqnarray}
with $\underline{v}_\lambda$ the right-eigenvector and $^t\underline{w}_\lambda$ the left-eigenvector of the $\mathbb{\Gamma}$-matrix, associated with the real negative eigenvalue $\lambda$.
The left and right eigenvectors of $\mathbb{\Gamma}$ form a bi-orthogonal basis \cite{brody2013biorthogonal}, which enables one to find, by projection, the unique coefficient $c_\lambda$ in Eq.\ref{RE_4} as a function of the initial condition.
The constant vector $\underline{P}^{(st)}\equiv \underline{v}_0$ in Eq.\ref{RE_3} is the null right-eigenvector (solution of $\mathbb{\Gamma} \underline{P}^{(st)} = \underline{0}$), providing the stationary populations of the \textit{reacton} states.
We finally get for $\underline{P}^{(st)}$ and $P_{\mathcal{F}}(t)$
\begin{eqnarray}
\underline{P}^{(st)} &=& \frac{1}{k_{\mathcal{G}'\mathcal{G}}+k_{\mathcal{G}\mathcal{G}'}} \left\lbrack k_{\mathcal{G}\mathcal{G}'},k_{\mathcal{G}'\mathcal{G}},0,0,0,0 \right\rbrack
\,, \label{RE_5} \\
P_{\mathcal{F}}(t) &=& \sum_{\lambda \ne 0} \frac{w_{\lambda,-}+ w_{\lambda,+}}{2} v_{\lambda,\mathcal{F}} e^{\lambda t}
\,. \label{RE_6}
\end{eqnarray}
The stationary state in Eq.\ref{RE_5} corresponds to a chemical equilibrium between the electronic ground-state populations $P^{(st)}_{\mathcal{G}}$ and $P^{(st)}_{\mathcal{G}'}$.
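A minimal numerical sketch of Eq.\ref{RE_1} to Eq.\ref{RE_5} is given below. The rate matrix is built from purely illustrative transition rates, not the values of Tables \ref{table:Table1}, \ref{table:Table2} and \ref{table:Table3}; it serves only to demonstrate the propagation $\underline{P}(t) = e^{\mathbb{\Gamma} t} \underline{P}(0)$ and the relaxation towards the stationary ground-state chemical equilibrium.

```python
import numpy as np
from scipy.linalg import expm

# State ordering [G, G', -, D, +, F], as in the text.
idx = {s: i for i, s in enumerate(["G", "G'", "-", "D", "+", "F"])}

# Illustrative transition rates (arbitrary units) -- placeholders,
# NOT the values of the paper's tables.
transitions = [
    ("+", "G", 5.0),    # Gamma_{G+}: radiative decay of the upper polariton
    ("-", "G", 5.0),    # Gamma_{G-}: radiative decay of the lower polariton
    ("+", "D", 4.0),    # gamma_{D+}: dephasing-induced funneling to dark states
    ("D", "G", 0.5),    # gamma_{GD}: non-radiative decay of the dark states
    ("D", "F", 1.0),    # k_{CT,FD}: forward CT reaction from the dark states
    ("-", "F", 0.3),    # k_{CT,F-}: forward CT reaction from the lower polariton
    ("F", "-", 0.1),    # k_{CT,-F}: backward CT reaction
    ("F", "G'", 0.8),   # gamma_{G'F}: non-radiative relaxation to G'
    ("G", "G'", 0.02),  # k_{G'G}: thermal ground-state reaction G -> G'
    ("G'", "G", 0.01),  # k_{GG'}: backward thermal reaction G' -> G
]

n = len(idx)
Gamma = np.zeros((n, n))
for src, dst, rate in transitions:
    Gamma[idx[dst], idx[src]] += rate  # gain of state dst
    Gamma[idx[src], idx[src]] -= rate  # loss of state src (columns sum to 0)

P0 = np.zeros(n)
P0[idx["-"]] = P0[idx["+"]] = 0.5      # resonant initial condition, Eq. (RE_2)

P = lambda t: expm(Gamma * t) @ P0     # formal solution of Eq. (RE_1)
for t in (0.0, 0.5, 2.0, 500.0):
    print(f"t = {t:6.1f}:", np.round(P(t), 4))
```

With these (arbitrary) rates, all excited-state populations vanish at long times and the ground-state populations reach the chemical equilibrium $P^{(st)}_{\mathcal{G}} = k_{\mathcal{G}\mathcal{G}'}/(k_{\mathcal{G}'\mathcal{G}}+k_{\mathcal{G}\mathcal{G}'})$ and $P^{(st)}_{\mathcal{G}'} = k_{\mathcal{G}'\mathcal{G}}/(k_{\mathcal{G}'\mathcal{G}}+k_{\mathcal{G}\mathcal{G}'})$ of Eq.\ref{RE_5}.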
\subsubsection{Time-evolution of the photoreaction}
\label{Evolution_Populations}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth]{fig8.pdf}
\caption{
Probability $P_{\mathcal{F}}(t)$ of occupying the product-state $\mathcal{F}$ inside the cavity ($\tilde{\Omega}_R=0.7\mbox{ meV}$), shown as a solid yellow curve, as a function of time $t$ in units of $1/k_{0}$ (defined in Eq.\ref{arrh}).
%
The corresponding occupation probability $P^{(0)}_{\mathcal{F}}(t)$ outside the cavity ($\tilde{\Omega}_R\approx 0.0\mbox{ meV}$) is shown as a dotted yellow curve.
%
For comparison, we plot the difference of occupations $P_{\mathcal{F}}(t)-P_{\mathcal{F}}^{(0)}(t)$ as a dashed yellow line.
%
Parameters are those of Fig.\ref{fig:Fig3} case $(a)$, and Tables \ref{table:Table1}, \ref{table:Table2} and \ref{table:Table3}.
%
}
\label{fig:Fig8}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth]{fig9.pdf}
\caption{
Same as Fig.\ref{fig:Fig8}, but with a modified reaction driving-force $\Delta_{ef}=-0.4\mbox{ eV}$, indicated by the arrow ($b$) in Fig.\ref{fig:Fig3}.
%
}
\label{fig:Fig9}
\end{figure}
We present in Fig.\ref{fig:Fig7} the time-evolution of $P_i(t)$, corresponding to the molecule of Fig.\ref{fig:Fig1} and case ($a$) in Fig.\ref{fig:Fig3}.
As shown in Table \ref{table:Table2} and \ref{table:Table3}, the dominating relaxation rates are the radiative ones $\Gamma_{\mathcal{G}\pm}$ (see gold downward arrows in Fig.\ref{fig:Fig6}) and the dephasing rate $\gamma_{\mathcal{D}+}$ (downward blue arrow in Fig.\ref{fig:Fig6}).
We find that on time scales $t \gg 1/\Gamma_{\mathcal{G}\pm},1/\gamma_{\mathcal{D}+}$, all excited-state populations vanish, the stationary regime being the chemical equilibrium between the states $\mathcal{G}$ and $\mathcal{G}'$ of Eq.\ref{RE_5} (see solid green and green-diamond curves in Fig.\ref{fig:Fig7}).
The population of the upper polariton $P_ {+}(t)$ (solid blue curve in Fig.\ref{fig:Fig7}) is a monotonically decreasing function of time, well approximated by a single exponential decay $P_+(t) \approx e^{-\Gamma_{+} t}/2$.
The upper polariton lifetime $1/\Gamma_{+} = 1/\left(\Gamma_{\mathcal{G}+} + \gamma_{\mathcal{G}+} + \gamma_{\mathcal{D}+}\right)$ results mainly from both optical cavity damping ($\Gamma_{\mathcal{G}+}$) and fast relaxation ($\gamma_{\mathcal{D}+}$) towards the dark-state manifold mediated by the vibrational dephasing mechanism.
The dark states thus play the role of a sink for the upper polariton state (this feature was already noticed in Ref.\cite{pino_quantum_2015}).
The population of the dark-states $P_{\mathcal{D}}(t)$ is shown as a solid black curve in Fig.\ref{fig:Fig7}.
Its time-evolution is not monotonic but is well approximated by $P_{\mathcal{D}}(t) \approx \gamma_{\mathcal{D}+} \left( e^{-\Gamma_{+} t} - e^{-\Gamma_{\mathcal{D}}t} \right)/\left\lbrack 2\left( \Gamma_\mathcal{D} - \Gamma_{+} \right)\right\rbrack$, with the additional dark-state lifetime $1/\Gamma_{\mathcal{D}}=1/\left(\gamma_{\mathcal{G}\mathcal{D}}+k_{\mathrm{CT},\mathcal{F}\mathcal{D}}\right)$.
The existence of a maximum of $P_{\mathcal{D}}(t)$ results from a competition between the filling of the dark-state from the upper polariton with a rate $\Gamma_{+}$, and its emptying towards the ground-state $\mathcal{G}$ and excited-state $\mathcal{F}$ with rate $\Gamma_{\mathcal{D}}$.
Compared to the upper polariton, the occupation of the lower polariton $P_ {-}(t)$ (solid red curve in Fig.\ref{fig:Fig7}) is also a monotonically decreasing function of time, but with a slower rate, due to the absence of ultrafast relaxation towards the dark-state manifold.
Of particular interest for photochemistry is the time-evolution of the occupation probability of the reaction product $P_ {\mathcal{F}}(t)$ (solid yellow curve in Fig.\ref{fig:Fig7}).
We show in Fig.\ref{fig:Fig8} a close-up of $P_ {\mathcal{F}}(t)$ inside the cavity ($\tilde{\Omega}_R=0.7\mbox{ meV}$, solid yellow curve) and of the same quantity $P^{(0)}_ {\mathcal{F}}(t)$ outside the cavity ($\tilde{\Omega}_R \approx 0.0\mbox{ meV}$, dotted yellow curve).
For our range of parameters, corresponding to the reaction driving-force $\Delta_{ef}=-0.2\mbox{ eV}$ (the molecule of Fig.\ref{fig:Fig1} and case $(a)$ in Fig.\ref{fig:Fig3}) and our choice of initial condition, we predict that $P_ {\mathcal{F}}(t) \leq P^{(0)}_ {\mathcal{F}}(t)$ at all times.
The cavity-molecule coupling thus slows down the photochemical reaction compared to what is obtained outside the cavity.
The same curve is plotted in Fig.\ref{fig:Fig9}, for a different value of the driving-force,
$\Delta_{ef}=-0.4\mbox{ eV}$, corresponding to case $(b)$ in Fig.\ref{fig:Fig3}.
In contrast to the previous case, one observes at all times that $P_ {\mathcal{F}}(t) \geq P^{(0)}_ {\mathcal{F}}(t)$: the effect of coupling the reactant to the vacuum quantum fluctuations of the electromagnetic cavity-mode is to significantly speed up (and thus enhance) the formation of the reaction product, compared to the case outside the cavity.
The cavity-induced slowing-down or acceleration of the appearance rate of the photoreaction product thus depends crucially on the reaction driving-force $\Delta_{ef}$ (and thus on the choice of the coupled molecules), which is consistent with the analysis of the thermal CT-rate performed in Sec.\ref{Marcus_Theory_Reacton}.
The main feature observed in both Fig.\ref{fig:Fig8} and Fig.\ref{fig:Fig9} is the non-monotonic dependence of $P_ {\mathcal{F}}(t)$ on time $t$.
We find an accurate analytical approximation of Eq.\ref{RE_6} describing
$P_ {\mathcal{F}}(t)$ in Fig.\ref{fig:Fig8}
\begin{eqnarray}
P_ {\mathcal{F}}(t) &\approx& \sum_{\rho=\pm} c_{\lambda_\rho} e^{-\lambda_\rho t}
+
c_\mathcal{D} e^{-\Gamma_\mathcal{D} t}
+
c_{+} e^{-\Gamma_{+} t}
\,, \label{Analytical_P_F_1} \\
c_\mathcal{D} &=& \sum_{\rho=\pm}\frac{\eta_\rho}{ \Gamma_\mathcal{D} - \lambda_\rho}
\,, \label{Analytical_P_F_2} \\
c_+ &=& -\sum_{\rho=\pm}\frac{\eta_\rho}{ \Gamma_+ - \lambda_\rho}
\,, \label{Analytical_P_F_3} \\
c_{\lambda_\rho} &=& \rho\frac{k_{\mathrm{CT},\mathcal{F}-}}{4 \mu}
+
\eta_\rho
\left\lbrack
\frac{1}{ \Gamma_{+} - \lambda_\rho} - \frac{1}{\Gamma_\mathcal{D} - \lambda_\rho}
\right\rbrack
\,, \label{Analytical_P_F_4}
\end{eqnarray}
with two additional decay rates $\lambda_{\rho=\pm}$ given by
\begin{eqnarray}
\lambda_\rho &=& \frac{\Gamma_{\mathcal{F}}+\Gamma_-}{2} - \rho \mu
\,,\label{Analytical_P_F_5} \\
\mu &=& \sqrt{ \left( \frac{\Gamma_{\mathcal{F}}-\Gamma_-}{2} \right)^2 + k_{\mathrm{CT},-\mathcal{F}}k_{\mathrm{CT},\mathcal{F}-}}
\,,\label{Analytical_P_F_6}
\end{eqnarray}
and prefactor
\begin{eqnarray}
\eta_\rho &=&
\frac{k_{\mathrm{CT},\mathcal{F}\mathcal{D}}\gamma_{\mathcal{D}+}}{4 \left(\Gamma_\mathcal{D} - \Gamma_+\right)\mu} \left( \mu - \rho \frac{\Gamma_{\mathcal{F}}-\Gamma_-}{2} \right)
\,. \label{Analytical_P_F_7}
\end{eqnarray}
These expressions involve the decay-rate of the lower polariton, $\Gamma_-=\Gamma_{\mathcal{G}-}+\gamma_{\mathcal{G}-}+k_{\mathrm{CT},\mathcal{F}-}$, and that of the
$\mathcal{F}$ excited-state, $\Gamma_\mathcal{F}=\gamma_{\mathcal{G}'\mathcal{F}}+k_{\mathrm{CT},-\mathcal{F}}$.
Compared to $P_ {+}(t)$ and $P_ {\mathcal{D}}(t)$, the time-evolution of $P_ {\mathcal{F}}(t)$ as given by Eq.\ref{Analytical_P_F_1} is more complex, as it involves
four different relaxation time-scales ($1/\lambda_\pm$, $1/\Gamma_\mathcal{D}$ and $1/\Gamma_{+}$).
Initially, $P_ {\mathcal{F}}(0)=0$, since the two polariton states are equally populated ($P_ {\pm}(0)=1/2$).
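This initial cancellation is exact at the level of Eq.\ref{Analytical_P_F_2} to Eq.\ref{Analytical_P_F_7}: the $\rho k_{\mathrm{CT},\mathcal{F}-}/4\mu$ terms cancel between $\rho=\pm$, and the $\eta_\rho$ terms cancel against $c_\mathcal{D}$ and $c_+$. It can be checked numerically with the short sketch below, whose rate values are purely illustrative (not those of the tables).

```python
import math

# Illustrative rate values (arbitrary units) -- placeholders only.
G_F, G_m = 1.0, 0.8        # Gamma_F, Gamma_-
G_D, G_p = 2.0, 5.0        # Gamma_D, Gamma_+
k_mF, k_Fm = 0.3, 0.2      # k_{CT,-F}, k_{CT,F-}
k_FD, g_Dp = 0.5, 4.0      # k_{CT,FD}, gamma_{D+}

# Eqs. (Analytical_P_F_5)-(Analytical_P_F_7)
mu = math.sqrt(((G_F - G_m) / 2) ** 2 + k_mF * k_Fm)
lam = {+1: (G_F + G_m) / 2 - mu, -1: (G_F + G_m) / 2 + mu}
eta = {r: k_FD * g_Dp / (4 * (G_D - G_p) * mu) * (mu - r * (G_F - G_m) / 2)
       for r in (+1, -1)}

# Eqs. (Analytical_P_F_2)-(Analytical_P_F_4)
c_D = sum(eta[r] / (G_D - lam[r]) for r in (+1, -1))
c_p = -sum(eta[r] / (G_p - lam[r]) for r in (+1, -1))
c_lam = {r: r * k_Fm / (4 * mu)
         + eta[r] * (1 / (G_p - lam[r]) - 1 / (G_D - lam[r]))
         for r in (+1, -1)}

def P_F(t):
    """Analytical approximation of Eq. (Analytical_P_F_1)."""
    return (sum(c_lam[r] * math.exp(-lam[r] * t) for r in (+1, -1))
            + c_D * math.exp(-G_D * t) + c_p * math.exp(-G_p * t))

print(P_F(0.0))   # cancels identically
print(P_F(1.0))
```

Since $\lambda_\pm$, $\Gamma_\mathcal{D}$ and $\Gamma_+$ are all positive, $P_\mathcal{F}(t)$ in this approximation also decays to zero at long times, as expected from the stationary state of Eq.\ref{RE_5}.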
At short times $t \leq 1/\Gamma_{\mathcal{G}\pm},1/\gamma_{\mathcal{D}+}$,
the upper polariton decays towards the dark-state manifold.
When the $\mathcal{D}$-states are significantly filled, the CT chemical reaction is initiated,
mainly through the dominant reaction rate $k_{\mathrm{CT},\mathcal{F}\mathcal{D}}$ (see Table \ref{table:Table1}),
which is modulated by the strong light-matter coupling inside the cavity.
This results in a short-time increase of the $\mathcal{F}$ product-state occupancy.
The maximum of $P_ {\mathcal{F}}(t)$ reached at $t\approx 1/k_0$, followed by a later decrease of the
product-state occupancy, is due to the onset of relaxation back to $\mathcal{G}'$ through the non-radiative relaxation rate $\gamma_{\mathcal{G}'\mathcal{F}}$ (see Table \ref{table:Table3}) and the cavity-mediated backward reaction rate $k_{\mathrm{CT},-\mathcal{F}}$ (see Table \ref{table:Table2}).
We note the importance of taking into account the losses induced by dissipation and non-radiative relaxation towards the environment when describing the photoreaction kinetics.
The non-monotonic behavior of $P_ {\mathcal{F}}(t)-P^{(0)}_ {\mathcal{F}}(t)$ in Fig.\ref{fig:Fig8}
and Fig.\ref{fig:Fig9} is a signature of the \textit{reacton} formation, which should be observable using pump-probe spectroscopy.
Its sign indicates whether the strong-coupling of the reactants to the cavity-mode enhances or inhibits the formation of the reaction product.
There is thus ample room to engineer and optimize this reaction kinetics by fine-tuning the system parameters.
\section{Conclusion and Perspectives}
\label{Perspectives}
We have investigated the chemical reactivity of solvated molecules confined inside a nanofluidic Fabry-P\'{e}rot electromagnetic cavity.
We studied the archetypal model of a photochemical reaction for which a charge-transfer process occurs from one electronic excited-state $e$ to another excited-state $f$ of the molecule, followed by a reorganisation of the nuclei molecular conformation.
Upon tuning the cavity-frequency $\omega_c$ in resonance with the molecular transition $\Delta_{ge}$ between the electronic ground and excited states, a collective polariton excitation is formed as
soon as the collective vacuum Rabi splitting $\tilde{\Omega}_R$ becomes larger than the total losses $\kappa$ of the cavity.
We have shown that, as a result of the interaction of the molecules and cavity with the external environment, the polariton gets dressed by both intra-molecular and solvent vibrational degrees of freedom.
We called the resulting collective excitation, shared coherently by all the reactant molecules, a \textit{reacton}, by analogy with the polaron excitation in solid-state physics.
We computed and studied in detail the modification of the polariton potential energy surfaces as well as of the equilibrium positions of the molecular vibrational modes induced by the \textit{reacton} formation.
These modifications are responsible for a change in the chemical reactivity of confined molecules compared to unconfined ones.
We derived an extension of Marcus theory of electron-transfer reactions, taking into account the \textit{reacton} formation, and computed the kinetics of CT reaction rates for molecular populations confined in the nanofluidic electromagnetic cavity.
We have shown the possibility to tune (acceleration or slowing down) the CT thermal reaction rate $k_{\rm{CT}}$ by changing the bare vacuum Rabi frequency $\Omega_R$, the molecule-cavity detuning $\delta$, the number of reacting molecules $N$, the driving-force of the chemical reaction $\Delta_{ef}$ and the reorganization energies $\lambda_{\mathrm{v}}$ and $\lambda_{\mathrm{S}}$.
Our approach paves the way for new possibilities in molecular engineering, using strong-coupling of the molecules to vacuum quantum fluctuations of the electromagnetic cavity-modes.
Finally, we derived the kinetics of the whole photochemical process, in which the CT process is one of many elementary steps.
To do so, we explicitly included in the theoretical description the relaxation rates due to optical damping of the cavity, as well as the dissipation and dephasing induced by the intra-molecular and solvent vibrational modes.
We developed for this purpose a generalized rate-equation approach expressed in the basis of manybody \textit{reacton} states, the solution of which provides the ultrafast picosecond dynamics of the
photochemical reaction.
Inside the cavity, we predict either an increase or a decrease of the occupation probability $P_{\mathcal{F}}(t)$ for the product-state $\mathcal{F}$ compared to outside cavity, depending on the bare reaction driving-force.
We show that the time at which the maximum amount of reaction product is obtained results from a delicate balance between environment-induced dissipation, which tends to decrease the net rate of product formation, and the enhanced chemical reactivity due to the formation of the \textit{reacton}.
The signature of the CT reaction should be visible on time-scales ranging from hundreds of femtoseconds to a few picoseconds, and in some cases up to several hundreds of picoseconds \cite{patrizi2020synergistic}; these time-scales are easily attainable in standard pump-probe experiments.
We foresee several perspectives for extending the present work.
One of them is to investigate how to properly define a thermodynamical potential describing the \textit{reacton} thermodynamic properties inside the nanofluidic cavity.
Although pioneering studies \cite{canaguier2013thermodynamics} investigated the thermodynamics of cavity-confined molecules, a proper definition and quantitative calculation of the corresponding
\textit{reacton} chemical potential is still missing.
This task requires including in the theoretical description the spatial dependence of the cavity-mode electric field, which is responsible for spatial inhomogeneities \cite{houdre_vacuum-field_1996} in the vacuum Rabi frequency $\Omega_R$ and detuning $\delta$ experienced by each coupled molecule.
Moreover, thermal fluctuations of each molecular dipole with respect to the local electric-field direction make it necessary to perform an additional rotational averaging \cite{craig1998molecular}, on top of the previous spatial one.
Another interesting direction of research is to investigate the case of an open chemical reactor, namely a flow of reactants in solution that enters the optical cavity, undergoes a chemical reaction inside, and finally leaves the cavity, with the reaction products being collected outside.
In the case of a hydrodynamic Poiseuille flow \cite{guyon1991hydrodynamique,landau1987course}, there is a characteristic time-scale $t_{L} \approx L/4v_0$, with $L$ the longitudinal dimension of the nanofluidic cavity and $v_0 = 3 D_m/2 \rho_m$ the maximum velocity at the center of the flow ($D_m$ is the mass flow rate, and $\rho_m$ the mass density of the liquid).
The ratio of $t_{L}$ to the typical time-scale of the chemical reaction, $t_\chi \approx 1/k_{\rm{CT}}$, provides a dimensionless parameter $\xi=k_{\mathrm{CT}} L/4v_0$.
While in our paper the CT reaction is very fast compared to the hydrodynamic transit time, resulting
in $\xi \gg 1$, it would be of interest to look for other kinds of chemical reactions
for which $\xi \approx 1$.
The latter case would result in an interesting non-linear dependence of the reaction rate on the hydrodynamic flow and reactant concentration.
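For orientation, a back-of-the-envelope estimate of $\xi$ is sketched below. All numbers ($L$, $D_m$, $\rho_m$, $k_{\rm CT}$) are purely illustrative, and, for $v_0 = 3D_m/2\rho_m$ to carry velocity units, $D_m$ is treated here as a mass flux per unit area; both are assumptions of this sketch, not values or conventions taken from the text.

```python
# Illustrative order-of-magnitude estimate of xi = k_CT * L / (4 v0).
L_cav = 1e-3          # longitudinal cavity dimension L (m), assumed
D_m   = 1.0           # mass flux per unit area (kg m^-2 s^-1), assumed
rho_m = 1e3           # liquid mass density (kg/m^3), water-like
k_CT  = 1e12          # CT reaction rate (1/s), picosecond-scale reaction

v0 = 3 * D_m / (2 * rho_m)   # maximum (center) flow velocity (m/s)
t_L = L_cav / (4 * v0)       # hydrodynamic transit time-scale (s)
t_chi = 1.0 / k_CT           # chemical reaction time-scale (s)
xi = t_L / t_chi             # dimensionless parameter

print(f"v0 = {v0:.3e} m/s, t_L = {t_L:.3e} s, xi = {xi:.3e}")
```

With these (assumed) numbers, $t_L$ is a sizeable fraction of a second while $t_\chi$ is in the picosecond range, so $\xi \gg 1$ by many orders of magnitude, consistent with the fast-reaction regime discussed above.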
We hope that our study will stimulate further theoretical and experimental investigations along those directions.
\section*{Acknowledgments}
\label{Acknowledgments}
We acknowledge financial support by Agence Nationale de la Recherche project CERCa, ANR-18-CE30-0006 and the LIGHT S\&T Graduate Program (PIA3 Investment for the Future Program, ANR-17-EURE-0021).
Initial support for this work and fruitful discussions led in the Euskampus Transnational Common Laboratory QuantumChemPhys are acknowledged.
The authors would like to thank Xun Chen, Yu-Chen Wang,
and Hai-Bo Yu for useful discussions.
This work is supported by the Double First Class start-up fund
(WF220442604), the Shanghai Pujiang Program (20PJ1407800),
National Natural Science Foundation of China (No. 12090064),
and Chinese Academy of Sciences Center for Excellence in Particle
Physics (CCEPP). XGH was also supported in part by the MOST
(Grant No. MOST 106-2112-M-002-003-MY3).
Chemical characterisation of exoplanetary atmospheres is rapidly entering a golden era. Robust detections of C, H, and O-bearing molecules from infrared spectroscopy are now commonplace \citep[e.g.][]{Snellen2010,Deming2013,Sing2016}. Optical transmission spectra have offered detections of Na, K, and TiO \citep[e.g.][]{Snellen2008,Wilson2015,Sing2015,Sedaghati2017}, though are often plagued by clouds or hazes \citep{Knutson2014,Kreidberg2014,Ehrenreich2014}. H$_2$O is the most frequently observed molecule in exoplanetary atmospheres \citep{Madhusudhan2016a}, enabled by high-precision spectra from the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3). However, H$_2$O is not the only molecule with strong features in the $\sim$1.1-1.7$\micron$ WFC3 range. Additional features, due to CH$_4$, NH$_3$, and HCN, need to be considered when modelling exoplanetary atmospheres \citep{MacDonald2017}.
Nitrogen chemistry is expected to exist in exoplanetary atmospheres \citep{Burrows1999,Lodders2002}. However, the anticipated equilibrium abundances of such species in the upper atmospheres of hot Jupiters are small: $\sim 10^{-7}$ and $\sim 10^{-8}$ for NH$_3$ and HCN respectively -- assuming solar composition, C/O = 0.5, and N/O = 0.2 at $\sim 1500$ K \citep{Madhusudhan2012,Heng2016}. Detecting such trace abundances is impractical with current observations, often leading to the exclusion of such molecules from exoplanetary spectral analyses. However, observable nitrogen chemistry may occur under some circumstances. One avenue is enhanced elemental ratios: HCN abundances increase by $\sim 10^{4}$ for C/O $\gtrsim$ 1 \citep{Madhusudhan2012}; both NH$_3$ and HCN weakly increase with N/O \citep{Heng2016}. Such enhanced ratios could be remnants of planetary formation \citep{Oberg2011,Madhusudhan2014a,Mordasini2016,Piso2016}.
Alternatively, disequilibrium chemistry can enhance NH$_3$ and HCN abundances by $\ga$ 2 orders of magnitude over equilibrium expectations at altitudes probed by transmission spectra \citep{Zahnle2009,Moses2011,Moses2013,Venot2012}. There are two principal disequilibrium avenues: transport-induced quenching (e.g., via vertical mixing) and photochemistry (e.g., by UV irradiation). Quenching occurs in atmospheric regions where a dynamical transport process is characteristically faster than a certain chemical reaction (e.g. N$_2$ + 3H$_2$ $\rightleftharpoons$ 2NH$_3$). The transport process then fixes the chemical abundances to equilibrium values from atmospheric regions where local conditions result in a commensurate chemical reaction timescale. For NH$_3$ and HCN, this occurs in the deep atmosphere \citep[pressures $\sim 1$bar --][]{Moses2011}, where equilibrium abundances are considerably higher. Vertical mixing then dredges up these molecules to the upper atmosphere.
Photochemistry can enhance HCN abundances, at the expense of NH$_3$, CH$_4$ and N$_2$, at pressures $\la 10^{-3}$ bar \citep{Zahnle2009,Moses2011}. Photochemical deviations should become more pronounced for lower temperature planets, due to deeper quench points and slower reaction rates impeding attempts to drive products back towards equilibrium \citep{Moses2011}. These conclusions are relatively insensitive to the C/O ratio \citep{Moses2013}. An atmosphere subjected to extreme photochemistry may display abundant HCN and depleted NH$_3$ in the photosphere, whilst one with strong vertical mixing and minimal photochemistry could display abundant NH$_3$ and / or HCN.
The impact of disequilibrium nitrogen chemistry on transmission spectra has been considered before \citep[e.g.][]{Shabram2011,Moses2011}. \citet{Shabram2011} identified HCN absorption features at $\sim$ 1.5, 3.3 and 7 $\micron$, suggesting that the \emph{James Webb Space Telescope} (JWST) NIRSpec prism will be able to observe the former two. \citet{Moses2011} strongly recommended including HCN and NH$_3$ within spectral analyses. Without including these disequilibrium products, as is somewhat common in atmospheric retrievals, the prospect of detecting nitrogen chemistry has been artificially quenched.
Recently, in \citet{MacDonald2017}, we reported tentative evidence of nitrogen chemistry in the hot Jupiter HD 209458b. We identified a slope from $\sim$ 1.5-1.7 $\micron$ in the WFC3 transmission spectrum, suggesting NH$_3$ or HCN as possible contributors. At the precision of the data, either molecule provided reasonable fits. However, qualitatively different WFC3 features become apparent at higher resolution: an `NH$_3$ shoulder' redwards of the 1.4 $\micron$ H$_2$O feature, vs. a `HCN peak' around 1.55 $\micron$. The NH$_3$ feature appears to have been missed in prior studies, possibly due to not including it in models \citep{Deming2013,Madhusudhan2014b,Barstow2017} or \emph{a priori} assumed chemistry \citep{Benneke2015,Sing2016}. Incomplete opacity data below $\sim 3 \micron$ \citep[e.g.][Fig. 5]{Shabram2011} could also contribute, as many studies pre-date the latest NH$_3$ and HCN line-lists \citep{Tennyson2016}. This initial evidence has motivated retrievals to include nitrogen chemistry for other planets. For example, \citet{Kilpatrick2017} observed an apparent absorption feature at 1.55 $\micron$ in WASP-63b's transmission spectrum. Atmospheric retrievals by four different groups identified this as consistent with super-solar HCN.
In this letter, we identify spectral signatures of nitrogen chemistry in exoplanetary atmospheres that are detectable with present and upcoming instruments. We then examine transmission spectra of 9 hot Jupiters for signs of nitrogen chemistry.
\begin{figure*}[ht!]
\epsscale{1.21}
\plotone{N_chem_Fig_1}
\caption{Near-infrared NH$_3$ and HCN absorption cross sections (smoothed for clarity). Left: NH$_3$ cross section (red solid) compared to H$_2$O (blue dashed), at 1000, 1500 and 2000 K (darker implies higher temperature) and 1 mbar. Right: HCN cross section (orange solid) compared to H$_2$O under the same conditions. Shading indicates the WFC3, K-band, and Spitzer IRAC bandpasses. The plotted wavelength range is observable by JWST NIRISS / NIRSpec. Diagnostic strong NH$_3$ and HCN absorption features are annotated.}
\label{fig:cross_sections}
\end{figure*}
\newpage
\section{Atmospheric Modelling and Retrieval Framework} \label{sec:methods}
We model transmission spectra in a plane-parallel geometry for planetary atmospheres under hydrostatic equilibrium, using the POSEIDON radiative transfer and retrieval algorithm \citep{MacDonald2017}. We assume uniform-in-altitude terminator-averaged mixing ratios, terminator-averaged temperature structure, and allow for inhomogeneous clouds / hazes. We consider the major sources of opacity expected in H$_2$ dominated atmospheres: H$_2$O, CH$_4$, NH$_3$, HCN, CO, CO$_2$, Na, K \citep{Madhusudhan2016a}, along with H$_2$-H$_2$ and H$_2$-He collision-induced absorption (CIA). The opacity sources are described in \citet{Gandhi2017} and use molecular line lists from the EXOMOL \citep{Tennyson2016} and HITEMP databases \citep{Rothman2010}.
We use this forward model in two ways. In section \ref{sec:detectability}, we first generate transmission spectra to investigate signatures of nitrogen chemistry over a range of planetary parameters. Secondly, in section \ref{sec:planets}, we couple the forward model with a Bayesian parameter estimation and model comparison retrieval code. This enables us to derive constraints on nitrogen chemistry from observed spectra of a sample of hot Jupiters.
Our models have a maximum of 19 free parameters: 6 for the temperature profile, 8 for mixing ratios, 4 encoding clouds / hazes, and a reference pressure, $P_{\mathrm{ref}}$. For each parameter set, we generate transmission spectra at R=1000 from 0.2-5.2 $\micron$. The model spectra are convolved with the relevant instrument point-spread-functions and binned to the data resolution. The parameter space is mapped via the MultiNest \citep{Feroz2008,Feroz2009,Feroz2013} multimodal nested sampling algorithm, implemented by PyMultiNest \citep{Buchner2014}.
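The convolution-and-binning step described above can be sketched as follows. This toy example uses a fabricated Gaussian absorption feature and an assumed Gaussian PSF width; none of the numbers correspond to POSEIDON's actual kernels or model grids.

```python
import numpy as np

# Toy model spectrum on a fine wavelength grid (~R = 1000 sampling).
wl = np.linspace(1.1, 1.7, 601)                              # micron
model = 0.0145 + 2e-4 * np.exp(-0.5 * ((wl - 1.4) / 0.05) ** 2)  # fake 1.4 um bump

# Gaussian PSF kernel (std of ~2 model pixels: an assumed instrument width).
sigma_pix = 2.0
x = np.arange(-10, 11)
kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(model, kernel, mode="same")

# Bin the convolved spectrum onto coarser "data" channels.
edges = np.linspace(1.1, 1.7, 31)                            # 30 WFC3-like bins
idx = np.digitize(wl, edges) - 1
binned = np.array([smoothed[idx == i].mean() for i in range(len(edges) - 1)])
print(binned.round(5))
```

The binned output peaks in the channel(s) containing the injected 1.4 $\micron$ feature, mimicking how a high-resolution model is compared with band-integrated transit depths.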
\section{Detectability of Nitrogen Chemistry} \label{sec:detectability}
We first examine the optimum near-infrared regions to search for nitrogen chemistry. We begin by comparing the cross sections of NH$_3$ and HCN to H$_2$O. We then explore how atmospheric properties alter NH$_3$ and HCN absorption signatures. Finally, we consider how these findings can be employed by ground and space based facilities to uniquely detect NH$_3$ and HCN in exoplanetary atmospheres.
\begin{figure*}[ht!]
\epsscale{1.2}
\plotone{N_chem_Fig_2}
\caption{Nitrogen chemistry absorption features in near-infrared transmission spectra. A reference model with enhanced nitrogen chemistry (section \ref{subsec:detection_factors}), is perturbed in mixing ratio, temperature, and cloud fraction. The `transit depth excess' results from subtracting a model with enhanced nitrogen chemistry from an identical model without nitrogen chemistry (see inset). The intermediate shading shows this subtraction for the unperturbed reference model. Left: reference model with enhanced NH$_3$. Right: reference model with enhanced HCN. Dashed lines indicate covered regions. The WFC3, K-band, and Spitzer IRAC bandpasses are indicated.}
\label{fig:absorption_strength}
\end{figure*}
\subsection{NH$_3$ / HCN Absorption Features} \label{subsec:cross_sections}
Figure \ref{fig:cross_sections} contrasts the NH$_3$ and HCN cross sections with H$_2$O from 1-5$\, \micron$ at 1000, 1500, and 2000 K. Where the H$_2$O cross section possesses local minima, the cross sections of nitrogen-bearing molecules may exceed H$_2$O by $\sim$ 2 orders of magnitude. The WFC3 bandpass contains NH$_3$ and HCN features around $\sim$ 1.5-1.6 $\micron$ \citep{MacDonald2017}, along with a weaker unique NH$_3$ feature at $\sim$ 1.2 $\micron$. NH$_3$ possesses a prominent feature at $\sim$ 2.2 $\micron$ (K-band), whilst HCN has an especially strong feature at $\sim$ 3.1 $\micron$. Both molecules absorb at 4 $\micron$, between the two Spitzer IRAC bands. The K-band NH$_3$ feature is a powerful diagnostic, coinciding with minima for both H$_2$O and HCN. The cross section contrast between NH$_3$ or HCN and H$_2$O tends to increase at lower temperatures, suggesting that lower temperature planets may possess amplified nitrogen chemistry features (see section \ref{subsubsec:det_temp}). HCN peaks become sharper than NH$_3$ features at lower temperatures, which can enable unique identification in regions of overlapping absorption (e.g. the WFC3 bandpass).
\subsection{Factors Influencing Detectability} \label{subsec:detection_factors}
The relative strengths of absorption cross sections are not the only consideration governing the imprint of nitrogen chemistry into transmission spectra. We henceforth illustrate how the \emph{transit depth excess} -- here the difference between a transmission spectrum model with and without enhanced nitrogen chemistry -- varies as a function of NH$_3$ / HCN abundance, atmospheric temperature, and across the transition from clear to cloudy atmospheres. We perturb a reference hot Jupiter system with the following properties: $R_p$ = 1.2 $R_J$, $R_{*} = R_{\odot}$, $g$ = 10 $\mathrm{m \, s^{-2}}$, $T$ = 1500 K (isothermal). The volume mixing ratios, with the exception of NH$_3$ and HCN, are representative of chemical equilibrium: $\mathrm{log(H_{2}O)}$ = -3.3, $\mathrm{log(CH_{4})}$ = -6.0, $\mathrm{log(CO)}$ = -3.3, $\mathrm{log(CO_{2})}$ = -7.0 \citep{Madhusudhan2012}. These `background' abundances are held constant throughout. The reference model considers NH$_3$/H$_2$O or HCN/H$_2$O = 0.1. We take $P_{\mathrm{cloud}}$ = 1 mbar, $P_{\mathrm{ref}}$ = 10 mbar, and a terminator cloud fraction of 50\%.
\subsubsection{Mixing Ratio} \label{subsubsec:det_mix}
Figure \ref{fig:absorption_strength} (top) demonstrates that the transit depth excess is strongly correlated with the relative mixing ratios of each nitrogen-bearing species to water -- dictated by the relative cross section differences (Figure \ref{fig:cross_sections}). Since the cross sections of NH$_3$ and HCN are rarely enhanced by more than 100$\times$ over H$_2$O from 1-5$\, \micron$, it is unsurprising that absorption signatures become negligible for relative mixing ratios below $10^{-2}$. However, when nitrogen chemistry abundances become commensurate with H$_2$O, a plethora of features $\gtrsim$ 300 ppm emerge throughout the near-infrared.
\subsubsection{Temperature} \label{subsubsec:det_temp}
Figure \ref{fig:absorption_strength} (middle), illustrates two effects compete as temperatures lower: i) the H$_2$O cross section minima deepen (Figure \ref{fig:cross_sections}); ii) the atmospheric scale height decreases proportionally. The combined effect is for many NH$_3$ / HCN features to initially intensify from 2000 K $\rightarrow$ 1500 K, before the stronger features dampen from 1500 K $\rightarrow$ 1000 K as the atmosphere contracts. Generally, HCN features become sharper for cooler temperatures, as expected from the cross sections (Figure \ref{fig:cross_sections}). Overall, nitrogen chemistry absorption features remain potent over a wide range of temperatures expected in hot Jupiter atmospheres ($\sim$ 1000-2000 K), especially in the WFC3 bandpass for cooler temperatures. K-band is a robust NH$_3$ diagnostic for $T \gtrsim$ 1000 K, whilst the $\sim$ 3.1 and 4.0 $\micron$ HCN features are prominent for $T \gtrsim$ 1500 K. Thus enhanced nitrogen chemistry, if present, may even be detectable in some of the higher temperature hot Jupiters.
\subsubsection{Clouds} \label{subsubsec:det_cloud}
Figure \ref{fig:absorption_strength} (bottom), demonstrates that clouds generally dampen absorption contrasts. This is unsurprising, as a high-altitude grey cloud deck with near-complete terminator coverage indiscriminatingly blocks electromagnetic radiation. Despite this, the strongest absorption features (K-band NH$_3$ and $\sim$ 3.1 and 4.0 $\micron$ HCN) can remain prominent even for uniform terminator clouds (dark shading). Increased dampening can result from higher altitude clouds, though it is unclear if grey cloud decks can exist at $P_{\mathrm{cloud}} <$ 1 mbar \citep{Fortney2010,Parmentier2013}. Absorption features located near H$_2$O cross section minima strengthen considerably as the terminator becomes cloud-free, as NH$_3$ / HCN, rather than clouds, become the dominant opacity source in these regions. Where H$_2$O absorption is also prominent, (e.g. 3.1 $\micron$), features are less sensitive to the cloud fraction. This change in the relative amplitudes of NH$_3$ or HCN absorption features (especially 3.1 $\micron$ vs. 4.0 $\micron$) may offer an avenue to constrain the terminator cloud fraction.
\begin{figure*}[ht!]
\epsscale{1.05}
\plotone{N_chem_Fig_3}
\caption{Nitrogen chemistry candidate spectra. Best-fitting transmission spectra (plotted at R=5000) from three atmospheric retrievals are shown: no nitrogen chemistry (blue), including NH$_3$ (red), and including HCN (orange). Spectra are shown only where the transit depth excess of NH$_3$ or HCN in the best-fitting model exceeds 30ppm. The dark curves are smoothed model representations.}
\label{fig:nir_spectra}
\end{figure*}
\begin{figure*}[ht!]
\centering
\epsscale{1.11}
\plotone{N_chem_Fig_4}
\caption{Evidence of nitrogen chemistry in WFC3 hot Jupiter transmission spectra. Left: weak detection of NH$_3$ in WASP-31b (2.2$\sigma$). Right: weak detection of HCN in WASP-63b (2.3$\sigma$). The blue spectra result from removing nitrogen-bearing molecules from the best-fitting model. The dark curves are smoothed model representations.}
\label{fig:wfc3_spectra}
\end{figure*}
\subsection{A Strategy to Uniquely Detect NH$_3$ and HCN} \label{subsec:unique_detection}
Figure \ref{fig:absorption_strength} indicates that WFC3 spectra can enable detections of nitrogen chemistry. In particular, absorption at $\sim$ 1.2 $\micron$ and in K-band uniquely indicates NH$_3$. HCN absorbs strongly around 3.1 and 4.0 $\micron$. We suggest ground-based K-band photometry and/or spectroscopy as a promising avenue to assess the presence of NH$_3$. Null detections in K-band could rule out NH$_3$, whilst suggesting HCN as a possible cause of 1.55$\, \micron$ WFC3 absorption. Furthermore, robust detections of HCN via the $\sim$ 3.1 and 4.0$\, \micron$ features will be feasible with JWST.
\section{Evidence of Nitrogen Chemistry in Known Hot Jupiters} \label{sec:planets}
Having identified prominent nitrogen chemistry absorption features, we present tentative signs of these in 3 hot Jupiter atmospheres. We apply a uniform atmospheric retrieval analysis to 9 hot Jupiter spectra, spanning visible to near-infrared wavelengths ($\sim$ 0.3-5.0 $\micron$). After briefly describing our planet selection, we examine the extent to which current observations can constrain nitrogen chemistry in exoplanetary atmospheres.
\subsection{Planet Selection} \label{subsec:targets}
We focus on the \citet{Sing2016} hot Jupiter transmission spectra with WFC3, STIS, and Spitzer observations: WASP-12b, WASP-17b, WASP-19b, WASP-31b, HAT-P-1b, HAT-P-12b, HD 189733b, HD 209458b. We also retrieve the WFC3 spectrum of WASP-63b \citep{Kilpatrick2017}, where indications of HCN have been considered. Our goal is to identify planets with plausible suggestions of nitrogen chemistry, such that follow-up observations can robustly establish whether nitrogen chemistry is present in these objects. An initial retrieval was performed for each planet including all model parameters. Candidates were identified wherever the best-fitting retrieved spectrum featured a WFC3 transit depth excess due to NH$_3$ or HCN >30ppm. This resulted in 3 candidates planets: WASP-31b, WASP-63b, and HD 209458b. We provide the posterior distributions from these retrievals online\footnote{\href{https://doi.org/10.5281/zenodo.1014847}{Online posteriors}}. For each candidate, we ran three additional retrievals: with NH$_3$ (no HCN), with HCN (no NH$_3$), and without nitrogen chemistry.
\newpage
\subsection{Inferences of Nitrogen Chemistry} \label{subsec:n_chem_inference}
Figure \ref{fig:nir_spectra} displays the best-fitting spectra from retrievals with and without nitrogen chemistry from 1-5$\, \micron$. WASP-31b and WASP-63b feature large nitrogen chemistry transit depth excesses: $\sim$ 400 ppm NH$_3$ (WASP-31b) and $\sim$ 200 ppm HCN (WASP-63b) at $\sim$ 1.55 $\micron$ (Figure \ref{fig:wfc3_spectra}). HD 209458b has an $\sim$ 50 ppm NH$_3$ transit depth excess.
Uniquely identifying nitrogen-bearing species is challenging at the resolution and precision of present WFC3 observations, given overlapping NH$_3$ and HCN absorption features. This difficulty is particularly apparent for HD 209458b, as shown in \citet{MacDonald2017}. Moreover, in the present work we report more conservative estimates of evidence for nitrogen chemistry by marginalising over the cloud fraction. We also utilise higher resolution cross sections ($0.1 \mathrm{cm}^{-1}$) and Spitzer observations. As such, the significance of nitrogen chemistry in HD 209458b is lower than in \citet{MacDonald2017}. However, for moderate NH$_3$ or HCN mixing ratios relative to H$_2$O, nitrogen signatures become sufficiently strong to permit unique detections. This is the case for both WASP-31b and WASP-63b, where strong WFC3 features permit unique identification of signatures attributable respectively to NH$_3$ and HCN (Figure \ref{fig:wfc3_spectra}).
We report a weak detection of NH$_3$ in WASP-31b (2.2$\sigma$). Nested model comparison, whereby we computed the Bayesian evidences of retrievals with NH$_3$ + HCN and without NH$_3$, establishes a Bayes factor of 3.8 for NH$_3$; uniquely identifying it as the cause of the $\sim$ 400 ppm WFC3 feature around 1.5 $\micron$. Previous studies of WASP-31b were unable to fit this feature, either due to excluding NH$_3$ \citep{Sing2015,Barstow2017} or assuming chemical equilibrium \citep{Sing2016}. Our retrieval without NH$_3$ (Figure \ref{fig:nir_spectra}, blue) similarly struggles to fit these elevated points. We predict a $\sim$ 500 ppm K-band NH$_3$ feature for this planet (Figure \ref{fig:nir_spectra}). If confirmed, this represents the first inference of ammonia in an exoplanetary atmosphere.
We further reassert a weak detection of HCN in WASP-63b (2.3$\sigma$, Bayes factor = 4.7), due to a $\sim$ 200 ppm peak around 1.55 $\micron$. We predict a $\sim$ 400 ppm feature near 3.1 $\micron$ and $\sim$ 200 ppm absorption near 4.0 $\micron$ (Figure \ref{fig:nir_spectra}). These detection significances include integration over the entire parameter space, including inhomogeneous clouds; thus the transmission spectra of WASP-31b and WASP-63b \emph{cannot} be adequately fit without disequilibrium nitrogen chemistry.
\begin{figure*}[ht!]
\epsscale{1.15}
\plotone{N_chem_Fig_5}
\caption{Posterior distributions of NH$_3$ and HCN volume mixing ratios (VMR) inferred from current transmission spectra. NH$_3$ posteriors are light red, and HCN light orange.}
\label{fig:posteriors}
\end{figure*}
Derived mixing ratio posteriors for NH$_3$ and HCN are shown in Figure \ref{fig:posteriors}. The maximum a posteriori modes show abundances enhanced by $\sim$ 3-4 orders of magnitude over equilibrium expectations for WASP-31b and WASP-63b, and $\sim$ 1 order of magnitude for HD 209458b.
\subsection{Resolving Degenerate Solutions} \label{subsec:degeneracy}
The limited wavelength range of current observations permits a range of degenerate solutions, especially for WASP-63b, where the lack of Spitzer or optical data precludes determining the spectral continuum (Figure \ref{fig:nir_spectra}). With low resolution or limited precision data, retrievals have flexibility in adjusting other parameters to partially compensate for removing NH$_3$ or HCN. For example, molecular abundances can be degenerate with terminator cloud coverage. Such degenerate solutions cause the mixing ratio `tails' in Figure \ref{fig:posteriors}. However, present observations are sufficient to distinguish NH$_3$ / HCN features from CH$_4$, due to a lack of absorption at $\sim$ 1.7 $\micron$.
Despite WFC3 degeneracies, Figure \ref{fig:nir_spectra} indicates that model differences arise at longer wavelengths. Observing WASP-31b in K-band and WASP-63b with Spitzer will permit tighter constraints on their NH$_3$ and HCN abundances. HD 209458b is more challenging, as the low inferred NH$_3$ abundance only predicts $\sim$ 25 ppm K-band absorption. Ultimately, observations in K-band and at 3.1 or 4.0$\, \micron$ are critical to resolving model degeneracies.
\newpage
\section{Summary and Discussion} \label{sec:discussion}
Nitrogen chemistry will open a new window into disequilibrium atmospheric chemistry and planetary formation mechanisms. High NH$_3$ abundances are indicative of vertical mixing, with abundance measurements constraining the eddy diffusion coefficient \citep{Moses2011}. High HCN abundances can also indicate vertical mixing, enhanced C/O, or, through an absence of CH$_4$ and NH$_3$, photochemistry \citep{Zahnle2009,Moses2011,Venot2012}.
We have demonstrated that nitrogen-bearing molecules can be observed in WFC3 spectra. We identified a $\sim$ 400 ppm NH$_3$ feature in WASP-31b (2.2$\sigma$), and a $\sim$ 200 ppm HCN feature in WASP-63b (2.3$\sigma$). Nitrogen chemistry is potentially present on HD 209458b; though current WFC3 observations are insufficient to definitively identify a specific species, given overlapping NH$_3$ and HCN features. Ambiguities may be resolved by observing strong NH$_3$ absorption at $\sim$ 2.2$\, \micron$ (K-band) and strong HCN absorption at $\sim$ 3.1 and 4.0$\, \micron$. JWST will be ideally suited to observing the plethora of features exceeding the $\sim$ 10 ppm precision expected of NIRISS / NIRSpec \citep{Beichman2014}. Such observations will enable unique detections of NH$_3$ and HCN in many exoplanetary atmospheres.
\newpage
Observable nitrogen chemistry signatures result when NH$_3$ or HCN exceed $\sim 10^{-2} \, \times$ the H$_2$O mixing ratio. HCN features at $\sim$ 3.1 and 4.0$\, \micron$ weaken and become sharply peaked for lower temperatures, whilst most NH$_3$ features, especially in the WFC3 bandpass, strengthen and remain broad. Extensively cloudy atmospheres have dampened absorption features, though some can exceed $\sim$ 100 ppm even for uniform clouds at 1 mbar.
Our inferred NH$_3$ and HCN abundances are enhanced over equilibrium values by $\sim$ 3-4 orders of magnitude. Such high values suggest that chemical equilibrium is violated in hot Jupiter atmospheres, and should not be imposed \emph{a priori} in atmospheric retrievals. Though more work is needed to explore scenarios producing enhanced NH$_3$ or HCN, the unexpected should be embraced, not shunned, as we seek to elucidate the nature of these worlds.
\acknowledgments
R.J.M. acknowledges financial support from the Science and Technology Facilities Council (STFC), UK, towards his doctoral programme. We thank Siddharth Gandhi for sharing high-resolution opacities, Arazi Pinhas for retrieval comparisons, and the anonymous referee for helpful comments.
\vspace{5mm}
\section{Introduction} \label{sec:intro}
Chemical characterisation of exoplanetary atmospheres is rapidly entering a golden era. Robust detections of C, H, and O-bearing molecules from infrared spectroscopy are now commonplace \citep[e.g.][]{Snellen2010,Deming2013,Sing2016}. Optical transmission spectra have offered detections of Na, K, and TiO \citep[e.g.][]{Snellen2008,Wilson2015,Sing2015,Sedaghati2017}, though are often plagued by clouds or hazes \citep{Knutson2014,Kreidberg2014,Ehrenreich2014}. H$_2$O is the most frequently observed molecule in exoplanetary atmospheres \citep{Madhusudhan2016a}, enabled by high-precision spectra from the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3). However, H$_2$O is not the only molecule with strong features in the $\sim$1.1-1.7$\micron$ WFC3 range. Additional features, due to CH$_4$, NH$_3$, and HCN, need to be considered when modelling exoplanetary atmospheres \citep{MacDonald2017}.
Nitrogen chemistry is expected to exist in exoplanetary atmospheres \citep{Burrows1999,Lodders2002}. However, the anticipated equilibrium abundances of such species in the upper atmospheres of hot Jupiters are small: $\sim 10^{-7}$ and $\sim 10^{-8}$ for NH$_3$ and HCN respectively -- assuming solar composition, C/O = 0.5, and N/O = 0.2 at $\sim 1500$ K \citep{Madhusudhan2012,Heng2016}. Detecting such trace abundances is impractical with current observations, often leading to the exclusion of such molecules from exoplanetary spectral analyses. However, observable nitrogen chemistry may occur under some circumstances. One avenue is enhanced elemental ratios: HCN abundances increase by $\sim 10^{4}$ for C/O $\gtrsim$ 1 \citep{Madhusudhan2012}; both NH$_3$ and HCN weakly increase with N/O \citep{Heng2016}. Such enhanced ratios could be remnants of planetary formation \citep{Oberg2011,Madhusudhan2014a,Mordasini2016,Piso2016}.
Alternatively, disequilibrium chemistry can enhance NH$_3$ and HCN abundances by $\ga$ 2 orders of magnitude over equilibrium expectations at altitudes probed by transmission spectra \citep{Zahnle2009,Moses2011,Moses2013,Venot2012}. There are two principal disequilibrium avenues: transport-induced quenching (e.g., via vertical mixing) and photochemistry (e.g., by UV irradiation). Quenching occurs in atmospheric regions where a dynamical transport process is characteristically faster than a certain chemical reaction (e.g. N$_2$ + H$_2$ $\rightleftharpoons$ NH$_3$). The transport process then fixes the chemical abundances to the equilibrium values of the atmospheric region where local conditions result in a commensurate chemical reaction timescale. For NH$_3$ and HCN, this occurs in the deep atmosphere \citep[pressures $\sim 1$ bar --][]{Moses2011}, where equilibrium abundances are considerably higher. Vertical mixing then dredges up these molecules to the upper atmosphere.
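The quenching argument can be made concrete with a toy timescale comparison: abundances freeze where the vertical mixing timescale $t_{\mathrm{mix}} \sim H^2 / K_{zz}$ crosses the chemical relaxation timescale. Every numerical value in the sketch below (the eddy coefficient $K_{zz}$, the Arrhenius-like form of $t_{\mathrm{chem}}$, and the temperature profile) is hypothetical, chosen only to illustrate the mechanism, not drawn from any kinetics network:

```python
import numpy as np

# Toy quench-level estimate: abundances freeze where t_mix = H^2 / K_zz
# crosses the chemical relaxation timescale. All values are hypothetical.
k_B, amu = 1.380649e-23, 1.66054e-27
g, mu = 10.0, 2.3 * amu
K_zz = 1e8 * 1e-4                              # 1e8 cm^2 s^-1 -> m^2 s^-1 (toy)

P = np.logspace(-6, 2, 200)                    # pressure [bar]
T = 1200.0 + 150.0 * np.log10(P / 1e-6) / 8.0  # mild temperature gradient (toy)

H = k_B * T / (mu * g)                         # local scale height [m]
t_mix = H**2 / K_zz                            # vertical mixing timescale [s]
t_chem = 1e4 * np.exp(2.5e4 / T) / (P * 1e5)   # Arrhenius-like timescale (toy)

i_q = np.argmin(np.abs(np.log(t_chem / t_mix)))  # timescale crossing point
print(f"quench pressure ~ {P[i_q]:.2g} bar")
```

For these illustrative numbers the crossing lands near $\sim$ 1 bar; at lower pressures chemistry is slower than mixing, so the deep, NH$_3$-rich composition is carried upwards unchanged.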
Photochemistry can enhance HCN abundances, at the expense of NH$_3$, CH$_4$ and N$_2$, at pressures $\la 10^{-3}$ bar \citep{Zahnle2009,Moses2011}. Photochemical deviations should become more pronounced for lower temperature planets, due to deeper quench points and slower reaction rates impeding attempts to drive products back towards equilibrium \citep{Moses2011}. These conclusions are relatively insensitive to the C/O ratio \citep{Moses2013}. An atmosphere subjected to extreme photochemistry may display abundant HCN and depleted NH$_3$ in the photosphere, whilst one with strong vertical mixing and minimal photochemistry could display abundant NH$_3$ and / or HCN.
The impact of disequilibrium nitrogen chemistry on transmission spectra has been considered before \citep[e.g.][]{Shabram2011,Moses2011}. \citet{Shabram2011} identified HCN absorption features at $\sim$ 1.5, 3.3 and 7 $\micron$, suggesting that the \emph{James Webb Space Telescope} (JWST) NIRSpec prism will be able to observe the former two. \citet{Moses2011} strongly recommended including HCN and NH$_3$ within spectral analyses. Without including these disequilibrium products, as is somewhat common in atmospheric retrievals, the prospect of detecting nitrogen chemistry has been artificially quenched.
Recently, in \citet{MacDonald2017}, we reported tentative evidence of nitrogen chemistry in the hot Jupiter HD 209458b. We identified a slope from $\sim$ 1.5-1.7 $\micron$ in the WFC3 transmission spectrum, suggesting NH$_3$ or HCN as possible contributors. At the precision of the data, either molecule provided reasonable fits. However, qualitatively different WFC3 features become apparent at higher resolution: an `NH$_3$ shoulder' redwards of the 1.4 $\micron$ H$_2$O feature, vs. an `HCN peak' around 1.55 $\micron$. The NH$_3$ feature appears to have been missed in prior studies, possibly due to not including it in models \citep{Deming2013,Madhusudhan2014b,Barstow2017} or \emph{a priori} assumed chemistry \citep{Benneke2015,Sing2016}. Incomplete opacity data below $\sim 3 \micron$ \citep[e.g.][Fig. 5]{Shabram2011} could also contribute, as many studies pre-date the latest NH$_3$ and HCN line-lists \citep{Tennyson2016}. This initial evidence has motivated retrievals to include nitrogen chemistry for other planets. For example, \citet{Kilpatrick2017} observed an apparent absorption feature at 1.55 $\micron$ in WASP-63b's transmission spectrum. Atmospheric retrievals by four different groups identified this as consistent with super-solar HCN.
In this letter, we identify spectral signatures of nitrogen chemistry in exoplanetary atmospheres that are detectable with present and upcoming instruments. We then examine transmission spectra of 9 hot Jupiters for signs of nitrogen chemistry.
\begin{figure*}[ht!]
\epsscale{1.21}
\plotone{N_chem_Fig_1}
\caption{Near-infrared NH$_3$ and HCN absorption cross sections (smoothed for clarity). Left: NH$_3$ cross section (red solid) compared to H$_2$O (blue dashed), at 1000, 1500 and 2000 K (darker implies higher temperature) and 1 mbar. Right: HCN cross section (orange solid) compared to H$_2$O under the same conditions. Shading indicates the WFC3, K-band, and Spitzer IRAC bandpasses. The plotted wavelength range is observable by JWST NIRISS / NIRSpec. Diagnostic strong NH$_3$ and HCN absorption features are annotated.}
\label{fig:cross_sections}
\end{figure*}
\newpage
\section{Atmospheric Modelling and Retrieval Framework} \label{sec:methods}
We model transmission spectra in a plane-parallel geometry for planetary atmospheres under hydrostatic equilibrium, using the POSEIDON radiative transfer and retrieval algorithm \citep{MacDonald2017}. We assume uniform-in-altitude terminator-averaged mixing ratios, terminator-averaged temperature structure, and allow for inhomogeneous clouds / hazes. We consider the major sources of opacity expected in H$_2$ dominated atmospheres: H$_2$O, CH$_4$, NH$_3$, HCN, CO, CO$_2$, Na, K \citep{Madhusudhan2016a}, along with H$_2$-H$_2$ and H$_2$-He collision-induced absorption (CIA). The opacity sources are described in \citet{Gandhi2017} and use molecular line lists from the ExoMol \citep{Tennyson2016} and HITEMP databases \citep{Rothman2010}.
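The geometry underlying such transmission spectrum models can be illustrated with a minimal sketch: an isothermal atmosphere with a single grey opacity, where the transit depth is the opaque disk below the reference radius plus the partially transparent annuli above it, using the slant optical depth approximation $\tau(b) \approx \kappa \rho(b) \sqrt{2\pi b H}$ of Fortney (2005). The opacity value is hypothetical; POSEIDON itself treats wavelength-dependent opacities and a parametrised temperature structure:

```python
import numpy as np

# Toy transmission-spectrum integral at a single wavelength: an isothermal
# atmosphere with one grey opacity (kappa is a hypothetical value).
k_B, amu = 1.380649e-23, 1.66054e-27
R_J, R_sun = 7.1492e7, 6.957e8
R_p, R_s, g, T, mu = 1.2 * R_J, R_sun, 10.0, 1500.0, 2.3 * amu

H = k_B * T / (mu * g)                         # scale height [m]
P_ref = 10e-3 * 1e5                            # 10 mbar reference pressure [Pa]
kappa = 1e-3                                   # grey opacity [m^2 kg^-1] (toy)
rho_ref = P_ref * mu / (k_B * T)               # density at radius R_p

# Impact parameters from R_p upwards; slant optical depth (Fortney 2005)
b = R_p + np.linspace(0.0, 30.0, 3001) * H
rho = rho_ref * np.exp(-(b - R_p) / H)
tau = kappa * rho * np.sqrt(2.0 * np.pi * b * H)

# Opaque disk below R_p plus partially transparent annuli above it
integrand = b * (1.0 - np.exp(-tau))
annuli = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(b))
depth = (R_p**2 + 2.0 * annuli) / R_s**2
print(f"transit depth ~ {depth * 1e6:.0f} ppm")
```

Raising $\kappa$ at wavelengths where a molecule absorbs strongly raises the effective radius by a few scale heights, which is the origin of the spectral features discussed below.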
We use this forward model in two ways. In section \ref{sec:detectability}, we first generate transmission spectra to investigate signatures of nitrogen chemistry over a range of planetary parameters. Secondly, in section \ref{sec:planets}, we couple the forward model with a Bayesian parameter estimation and model comparison retrieval code. This enables us to derive constraints on nitrogen chemistry from observed spectra of a sample of hot Jupiters.
Our models have a maximum of 19 free parameters: 6 for the temperature profile, 8 for mixing ratios, 4 encoding clouds / hazes, and a reference pressure, $P_{\mathrm{ref}}$. For each parameter set, we generate transmission spectra at R=1000 from 0.2-5.2 $\micron$. The model spectra are convolved with the relevant instrument point-spread-functions and binned to the data resolution. The parameter space is mapped via the MultiNest \citep{Feroz2008,Feroz2009,Feroz2013} multimodal nested sampling algorithm, implemented by PyMultiNest \citep{Buchner2014}.
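The final step above -- mapping the R=1000 model onto the observed channels -- can be sketched generically. The version below uses simple top-hat bins and illustrative numbers; the real pipeline first convolves with each instrument's point-spread-function and weights by its sensitivity curve:

```python
import numpy as np

def bin_model_to_data(wl_model, depth_model, bin_centers, bin_widths):
    """Average a high-resolution model spectrum into top-hat observation bins.

    A real pipeline would first convolve with each instrument's
    point-spread-function and weight by its sensitivity curve.
    """
    binned = np.empty(len(bin_centers))
    for i, (c, w) in enumerate(zip(bin_centers, bin_widths)):
        in_bin = (wl_model >= c - 0.5 * w) & (wl_model < c + 0.5 * w)
        binned[i] = depth_model[in_bin].mean()
    return binned

# Toy R ~ 1000 model binned onto three WFC3-like channels (numbers illustrative)
wl = np.arange(1.1, 1.7, 0.0005)               # wavelength [micron]
model = 0.0150 + 0.0003 * np.sin(2.0 * np.pi * wl / 0.1)
obs = bin_model_to_data(wl, model,
                        np.array([1.25, 1.40, 1.55]),
                        np.array([0.05, 0.05, 0.05]))
```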
\section{Detectability of Nitrogen Chemistry} \label{sec:detectability}
We first examine the optimum near-infrared regions to search for nitrogen chemistry. We begin by comparing the cross sections of NH$_3$ and HCN to H$_2$O. We then explore how atmospheric properties alter NH$_3$ and HCN absorption signatures. Finally, we consider how these findings can be employed by ground and space based facilities to uniquely detect NH$_3$ and HCN in exoplanetary atmospheres.
\begin{figure*}[ht!]
\epsscale{1.2}
\plotone{N_chem_Fig_2}
\caption{Nitrogen chemistry absorption features in near-infrared transmission spectra. A reference model with enhanced nitrogen chemistry (section \ref{subsec:detection_factors}), is perturbed in mixing ratio, temperature, and cloud fraction. The `transit depth excess' results from subtracting a model with enhanced nitrogen chemistry from an identical model without nitrogen chemistry (see inset). The intermediate shading shows this subtraction for the unperturbed reference model. Left: reference model with enhanced NH$_3$. Right: reference model with enhanced HCN. Dashed lines indicate covered regions. The WFC3, K-band, and Spitzer IRAC bandpasses are indicated.}
\label{fig:absorption_strength}
\end{figure*}
\subsection{NH$_3$ / HCN Absorption Features} \label{subsec:cross_sections}
Figure \ref{fig:cross_sections} contrasts the NH$_3$ and HCN cross sections to H$_2$O from 1-5$\, \micron$ at 1000, 1500, and 2000 K. Where the H$_2$O cross section possesses local minima, the cross sections of nitrogen-bearing molecules may exceed H$_2$O by $\sim$ 2 orders of magnitude. The WFC3 bandpass contains NH$_3$ and HCN features around $\sim$ 1.5-1.6 $\micron$ \citep{MacDonald2017}, along with a weaker unique NH$_3$ feature at $\sim$ 1.2 $\micron$. NH$_3$ possesses a prominent feature at $\sim$ 2.2 $\micron$ (K-band), whilst HCN has an especially strong feature at $\sim$ 3.1 $\micron$. Both molecules absorb at 4 $\micron$, between the two Spitzer IRAC bands. The K-band NH$_3$ feature is a powerful diagnostic, coinciding with minima for both H$_2$O and HCN. The cross section contrast between NH$_3$ or HCN and H$_2$O tends to increase at lower temperatures, suggesting that lower temperature planets may possess amplified nitrogen chemistry features (see section \ref{subsubsec:det_temp}). HCN peaks become sharper than NH$_3$ features at lower temperatures, which can enable unique identification in regions of overlapping absorption (e.g. the WFC3 bandpass).
\subsection{Factors Influencing Detectability} \label{subsec:detection_factors}
The relative strengths of absorption cross sections are not the only consideration governing the imprint of nitrogen chemistry into transmission spectra. We henceforth illustrate how the \emph{transit depth excess} -- here the difference between a transmission spectrum model with and without enhanced nitrogen chemistry -- varies as a function of NH$_3$ / HCN abundance, atmospheric temperature, and across the transition from clear to cloudy atmospheres. We perturb a reference hot Jupiter system with the following properties: $R_p$ = 1.2 $R_J$, $R_{*} = R_{\odot}$, $g$ = 10 $\mathrm{m \, s^{-2}}$, $T$ = 1500 K (isothermal). The volume mixing ratios, with the exception of NH$_3$ and HCN, are representative of chemical equilibrium: $\mathrm{log(H_{2}O)}$ = -3.3, $\mathrm{log(CH_{4})}$ = -6.0, $\mathrm{log(CO)}$ = -3.3, $\mathrm{log(CO_{2})}$ = -7.0 \citep{Madhusudhan2012}. These `background' abundances are held constant throughout. The reference model considers NH$_3$/H$_2$O or HCN/H$_2$O = 0.1. We take $P_{\mathrm{cloud}}$ = 1 mbar, $P_{\mathrm{ref}}$ = 10 mbar, and a terminator cloud fraction of 50\%.
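As a quick order-of-magnitude check of this reference system (a back-of-envelope sketch, not POSEIDON output), the scale height and the transit depth change per scale height follow directly from the stated parameters:

```python
# Order-of-magnitude amplitude check for the reference system above
k_B, amu = 1.380649e-23, 1.66054e-27
R_J, R_sun = 7.1492e7, 6.957e8
R_p, R_s, g, T, mu = 1.2 * R_J, R_sun, 10.0, 1500.0, 2.3 * amu

H = k_B * T / (mu * g)                 # scale height: ~540 km
per_H = 2.0 * R_p * H / R_s**2         # transit depth change per scale height
print(f"H = {H / 1e3:.0f} km, depth change per scale height = {per_H * 1e6:.0f} ppm")
```

At roughly 190 ppm per scale height, the $\gtrsim$ 300 ppm features discussed below correspond to about 1.5-2 scale heights of additional NH$_3$ / HCN opacity.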
\subsubsection{Mixing Ratio} \label{subsubsec:det_mix}
Figure \ref{fig:absorption_strength} (top) demonstrates that the transit depth excess is strongly correlated with the relative mixing ratios of each nitrogen-bearing species to water -- dictated by the relative cross section differences (Figure \ref{fig:cross_sections}). Since the cross sections of NH$_3$ and HCN are rarely enhanced by more than 100$\times$ over H$_2$O from 1-5$\, \micron$, it is unsurprising that absorption signatures become negligible for relative mixing ratios below $10^{-2}$. However, when nitrogen chemistry abundances become commensurate with H$_2$O, a plethora of features $\gtrsim$ 300 ppm emerge throughout the near-infrared.
\subsubsection{Temperature} \label{subsubsec:det_temp}
Figure \ref{fig:absorption_strength} (middle) illustrates that two effects compete as temperatures decrease: i) the H$_2$O cross section minima deepen (Figure \ref{fig:cross_sections}); ii) the atmospheric scale height decreases proportionally. The combined effect is for many NH$_3$ / HCN features to initially intensify from 2000 K $\rightarrow$ 1500 K, before the stronger features dampen from 1500 K $\rightarrow$ 1000 K as the atmosphere contracts. Generally, HCN features become sharper at cooler temperatures, as expected from the cross sections (Figure \ref{fig:cross_sections}). Overall, nitrogen chemistry absorption features remain potent over the wide range of temperatures expected in hot Jupiter atmospheres ($\sim$ 1000-2000 K), especially in the WFC3 bandpass for cooler temperatures. K-band is a robust NH$_3$ diagnostic for $T \gtrsim$ 1000 K, whilst the $\sim$ 3.1 and 4.0 $\micron$ HCN features are prominent for $T \gtrsim$ 1500 K. Thus enhanced nitrogen chemistry, if present, may even be detectable in some of the higher temperature hot Jupiters.
\subsubsection{Clouds} \label{subsubsec:det_cloud}
Figure \ref{fig:absorption_strength} (bottom) demonstrates that clouds generally dampen absorption contrasts. This is unsurprising, as a high-altitude grey cloud deck with near-complete terminator coverage indiscriminately blocks electromagnetic radiation. Despite this, the strongest absorption features (K-band NH$_3$ and $\sim$ 3.1 and 4.0 $\micron$ HCN) can remain prominent even for uniform terminator clouds (dark shading). Increased dampening can result from higher altitude clouds, though it is unclear if grey cloud decks can exist at $P_{\mathrm{cloud}} <$ 1 mbar \citep{Fortney2010,Parmentier2013}. Absorption features located near H$_2$O cross section minima strengthen considerably as the terminator becomes cloud-free, as NH$_3$ / HCN, rather than clouds, become the dominant opacity source in these regions. Where H$_2$O absorption is also prominent (e.g. 3.1 $\micron$), features are less sensitive to the cloud fraction. This change in the relative amplitudes of NH$_3$ or HCN absorption features (especially 3.1 $\micron$ vs. 4.0 $\micron$) may offer an avenue to constrain the terminator cloud fraction.
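The cloud-fraction dependence can be sketched with the simple patchy-cloud idea of combining cloudy and clear terminator sectors linearly in transit depth. The transit depths below are toy values, chosen only to show how a grey deck raises the continuum and mutes a feature:

```python
import numpy as np

def patchy_depth(depth_cloudy, depth_clear, f_cloud):
    """Terminator-averaged transit depth for cloud fraction f_cloud,
    combining cloudy and clear limbs linearly (patchy-cloud sketch)."""
    return f_cloud * depth_cloudy + (1.0 - f_cloud) * depth_clear

# Toy wavelengths: [continuum, feature peak] transit depths
depth_clear = np.array([15000e-6, 15400e-6])   # clear limb
depth_cloudy = np.array([15250e-6, 15400e-6])  # grey deck raises the continuum
for f in (0.0, 0.5, 1.0):
    d = patchy_depth(depth_cloudy, depth_clear, f)
    print(f"f_cloud = {f:.1f}: feature amplitude = {(d[1] - d[0]) * 1e6:.0f} ppm")
```

The feature amplitude shrinks smoothly from 400 ppm (clear) to 150 ppm (uniform clouds) as the cloud fraction grows, which is the degeneracy-breaking behaviour described above.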
\begin{figure*}[ht!]
\epsscale{1.05}
\plotone{N_chem_Fig_3}
\caption{Nitrogen chemistry candidate spectra. Best-fitting transmission spectra (plotted at R=5000) from three atmospheric retrievals are shown: no nitrogen chemistry (blue), including NH$_3$ (red), and including HCN (orange). Spectra are shown only where the transit depth excess of NH$_3$ or HCN in the best-fitting model exceeds 30 ppm. The dark curves are smoothed model representations.}
\label{fig:nir_spectra}
\end{figure*}
\begin{figure*}[ht!]
\centering
\epsscale{1.11}
\plotone{N_chem_Fig_4}
\caption{Evidence of nitrogen chemistry in WFC3 hot Jupiter transmission spectra. Left: weak detection of NH$_3$ in WASP-31b (2.2$\sigma$). Right: weak detection of HCN in WASP-63b (2.3$\sigma$). The blue spectra result from removing nitrogen-bearing molecules from the best-fitting model. The dark curves are smoothed model representations.}
\label{fig:wfc3_spectra}
\end{figure*}
\subsection{A Strategy to Uniquely Detect NH$_3$ and HCN} \label{subsec:unique_detection}
Figure \ref{fig:absorption_strength} indicates that WFC3 spectra can enable detections of nitrogen chemistry. In particular, absorption at $\sim$ 1.2 $\micron$ and in K-band uniquely indicates NH$_3$. HCN absorbs strongly around 3.1 and 4.0 $\micron$. We suggest ground-based K-band photometry and/or spectroscopy as a promising avenue to assess the presence of NH$_3$. Null detections in K-band could rule out NH$_3$, whilst suggesting HCN as a possible cause of 1.55$\, \micron$ WFC3 absorption. Furthermore, robust detections of HCN via the $\sim$ 3.1 and 4.0$\, \micron$ features will be feasible with JWST.
\section{Evidence of Nitrogen Chemistry in Known Hot Jupiters} \label{sec:planets}
Having identified prominent nitrogen chemistry absorption features, we present tentative signs of these in 3 hot Jupiter atmospheres. We apply a uniform atmospheric retrieval analysis to 9 hot Jupiter spectra, spanning visible to near-infrared wavelengths ($\sim$ 0.3-5.0 $\micron$). After briefly describing our planet selection, we examine the extent to which current observations can constrain nitrogen chemistry in exoplanetary atmospheres.
\subsection{Planet Selection} \label{subsec:targets}
We focus on the \citet{Sing2016} hot Jupiter transmission spectra with WFC3, STIS, and Spitzer observations: WASP-12b, WASP-17b, WASP-19b, WASP-31b, HAT-P-1b, HAT-P-12b, HD 189733b, HD 209458b. We also retrieve the WFC3 spectrum of WASP-63b \citep{Kilpatrick2017}, where indications of HCN have been considered. Our goal is to identify planets with plausible suggestions of nitrogen chemistry, such that follow-up observations can robustly establish whether nitrogen chemistry is present in these objects. An initial retrieval was performed for each planet including all model parameters. Candidates were identified wherever the best-fitting retrieved spectrum featured a WFC3 transit depth excess due to NH$_3$ or HCN exceeding 30 ppm. This resulted in 3 candidate planets: WASP-31b, WASP-63b, and HD 209458b. We provide the posterior distributions from these retrievals online\footnote{\href{https://doi.org/10.5281/zenodo.1014847}{Online posteriors}}. For each candidate, we ran three additional retrievals: with NH$_3$ (no HCN), with HCN (no NH$_3$), and without nitrogen chemistry.
\newpage
\subsection{Inferences of Nitrogen Chemistry} \label{subsec:n_chem_inference}
Figure \ref{fig:nir_spectra} displays the best-fitting spectra from retrievals with and without nitrogen chemistry from 1-5$\, \micron$. WASP-31b and WASP-63b feature large nitrogen chemistry transit depth excesses: $\sim$ 400 ppm NH$_3$ (WASP-31b) and $\sim$ 200 ppm HCN (WASP-63b) at $\sim$ 1.55 $\micron$ (Figure \ref{fig:wfc3_spectra}). HD 209458b has a $\sim$ 50 ppm NH$_3$ transit depth excess.
Uniquely identifying nitrogen-bearing species is challenging at the resolution and precision of present WFC3 observations, given overlapping NH$_3$ and HCN absorption features. This difficulty is particularly apparent for HD 209458b, as shown in \citet{MacDonald2017}. Moreover, in the present work we report more conservative estimates of evidence for nitrogen chemistry by marginalising over the cloud fraction. We also utilise higher resolution cross sections ($0.1 \mathrm{cm}^{-1}$) and Spitzer observations. As such, the significance of nitrogen chemistry in HD 209458b is lower than in \citet{MacDonald2017}. However, for moderate NH$_3$ or HCN mixing ratios relative to H$_2$O, nitrogen signatures become sufficiently strong to permit unique detections. This is the case for both WASP-31b and WASP-63b, where strong WFC3 features permit unique identification of signatures attributable respectively to NH$_3$ and HCN (Figure \ref{fig:wfc3_spectra}).
We report a weak detection of NH$_3$ in WASP-31b (2.2$\sigma$). Nested model comparison, whereby we computed the Bayesian evidences of retrievals with NH$_3$ + HCN and without NH$_3$, establishes a Bayes factor of 3.8 for NH$_3$, uniquely identifying it as the cause of the $\sim$ 400 ppm WFC3 feature around 1.5 $\micron$. Previous studies of WASP-31b were unable to fit this feature, either due to excluding NH$_3$ \citep{Sing2015,Barstow2017} or assuming chemical equilibrium \citep{Sing2016}. Our retrieval without NH$_3$ (Figure \ref{fig:nir_spectra}, blue) similarly struggles to fit these elevated points. We predict a $\sim$ 500 ppm K-band NH$_3$ feature for this planet (Figure \ref{fig:nir_spectra}). If confirmed, this represents the first inference of ammonia in an exoplanetary atmosphere.
We further reassert a weak detection of HCN in WASP-63b (2.3$\sigma$, Bayes factor = 4.7), due to a $\sim$ 200 ppm peak around 1.55 $\micron$. We predict a $\sim$ 400 ppm feature near 3.1 $\micron$ and $\sim$ 200 ppm absorption near 4.0 $\micron$ (Figure \ref{fig:nir_spectra}). These detection significances include integration over the entire parameter space, including inhomogeneous clouds; thus the transmission spectra of WASP-31b and WASP-63b \emph{cannot} be adequately fit without disequilibrium nitrogen chemistry.
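The detection significances quoted above can be related to the reported Bayes factors through the calibration $B = -1/(e\,p\ln p)$ commonly used in retrieval studies to map Bayesian evidence ratios onto frequentist significance levels. The short sketch below, which assumes this calibration (the function names are ours, purely illustrative), recovers the quoted pairs to within rounding:

```python
import math

def p_value(sigma):
    # two-sided Gaussian p-value corresponding to a significance level
    return math.erfc(sigma / math.sqrt(2.0))

def bayes_factor(sigma):
    # B = -1 / (e * p * ln p), valid for p < 1/e (i.e. sigma >~ 0.9)
    p = p_value(sigma)
    return -1.0 / (math.e * p * math.log(p))

def sigma_from_bayes(B, lo=1.0, hi=10.0):
    # invert bayes_factor by bisection; it is monotone increasing on [lo, hi]
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bayes_factor(mid) < B:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(sigma_from_bayes(3.8), 1))  # ~2.2 (NH3 in WASP-31b)
print(round(sigma_from_bayes(4.7), 1))  # ~2.3 (HCN in WASP-63b)
```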
\begin{figure*}[ht!]
\epsscale{1.15}
\plotone{N_chem_Fig_5}
\caption{Posterior distributions of NH$_3$ and HCN volume mixing ratios (VMR) inferred from current transmission spectra. NH$_3$ posteriors are light red, and HCN light orange.}
\label{fig:posteriors}
\end{figure*}
Derived mixing ratio posteriors for NH$_3$ and HCN are shown in Figure \ref{fig:posteriors}. The maximum a posteriori modes show abundances enhanced by $\sim$ 3-4 orders of magnitude over equilibrium expectations for WASP-31b and WASP-63b, and $\sim$ 1 order of magnitude for HD 209458b.
\subsection{Resolving Degenerate Solutions} \label{subsec:degeneracy}
The limited wavelength range of current observations permits a range of possibilities, especially for WASP-63b, where the lack of Spitzer or optical data precludes determining the spectral continuum (Figure \ref{fig:nir_spectra}). With low resolution or limited precision data, retrievals have flexibility in adjusting other parameters to partially compensate for removing NH$_3$ or HCN. For example, molecular abundances can be degenerate with terminator cloud coverage. Such degenerate solutions cause the mixing ratio `tails' in Figure \ref{fig:posteriors}. However, present observations are sufficient to distinguish NH$_3$ / HCN features from CH$_4$, due to a lack of absorption at $\sim$ 1.7 $\micron$.
Despite WFC3 degeneracies, Figure \ref{fig:nir_spectra} indicates that model differences arise at longer wavelengths. Observing WASP-31b in K-band and WASP-63b with Spitzer will permit tighter constraints on their NH$_3$ and HCN abundances. HD 209458b is more challenging, as the low inferred NH$_3$ abundance only predicts $\sim$ 25 ppm K-band absorption. Ultimately, observations in K-band and at 3.1 or 4.0$\, \micron$ are critical to resolving model degeneracies.
\newpage
\section{Summary and Discussion} \label{sec:discussion}
Nitrogen chemistry will open a new window into disequilibrium atmospheric chemistry and planetary formation mechanisms. High NH$_3$ abundances are indicative of vertical mixing, with abundance measurements constraining the eddy diffusion coefficient \citep{Moses2011}. High HCN abundances can also indicate vertical mixing, enhanced C/O, or, through an absence of CH$_4$ and NH$_3$, photochemistry \citep{Zahnle2009,Moses2011,Venot2012}.
We have demonstrated that nitrogen-bearing molecules can be observed in WFC3 spectra. We identified a $\sim$ 400 ppm NH$_3$ feature in WASP-31b (2.2$\sigma$), and a $\sim$ 200 ppm HCN feature in WASP-63b (2.3$\sigma$). Nitrogen chemistry is potentially present on HD 209458b; though current WFC3 observations are insufficient to definitively identify a specific species, given overlapping NH$_3$ and HCN features. Ambiguities may be resolved by observing strong NH$_3$ absorption at $\sim$ 2.2$\, \micron$ (K-band) and strong HCN absorption at $\sim$ 3.1 and 4.0$\, \micron$. JWST will be ideally suited to observing the plethora of features exceeding the $\sim$ 10 ppm precision expected of NIRISS / NIRSpec \citep{Beichman2014}. Such observations will enable unique detections of NH$_3$ and HCN in many exoplanetary atmospheres.
\newpage
Observable nitrogen chemistry signatures result when NH$_3$ or HCN exceed $\sim 10^{-2} \, \times$ the H$_2$O mixing ratio. HCN features at $\sim$ 3.1 and 4.0$\, \micron$ weaken and become sharply peaked for lower temperatures, whilst most NH$_3$ features, especially in the WFC3 bandpass, strengthen and remain broad. Extensively cloudy atmospheres have dampened absorption features, though some can exceed $\sim$ 100 ppm even for uniform clouds at 1 mbar.
Our inferred NH$_3$ and HCN abundances are enhanced over equilibrium values by $\sim$ 3-4 orders of magnitude. Such high values suggest that chemical equilibrium is violated in hot Jupiter atmospheres, and should not be imposed \emph{a priori} in atmospheric retrievals. Though more work is needed to explore scenarios producing enhanced NH$_3$ or HCN, the unexpected should be embraced, not shunned, as we seek to elucidate the nature of these worlds.
\acknowledgments
R.J.M. acknowledges financial support from the Science and Technology Facilities Council (STFC), UK, towards his doctoral programme. We thank Siddharth Gandhi for sharing high-resolution opacities, Arazi Pinhas for retrieval comparisons, and the anonymous referee for helpful comments.
\vspace{5mm}
\section{Introduction}
We consider the one-dimensional Euler-Poisson system in a non-dimensional form:
\begin{subequations}\label{EP}
\begin{align}[left = \empheqlbrace\,]
& \rho_t + (\rho u)_x = 0, \label{EP_1} \\
& \rho(u_t + u u_x) + K\rho_x = - \rho\phi_x, \label{EP_2} \\
& - \phi_{xx} = \rho - e^\phi. \label{EP_3}
\end{align}
\end{subequations}
Here $\rho>0$, $u$ and $\phi$ are the unknown functions of $(x,t) \in \mathbb{R}\times \mathbb{R}^+$ representing the ion density, the fluid velocity for ions, and the electric potential, respectively. $K = T_i/T_e \geq 0$ is a constant of the ratio of the ion temperature $T_i$ to the electron temperature $T_e$. The system \eqref{EP} is referred to as the \textit{isothermal} model when $K>0$, and the \textit{pressureless} model when $K=0$, respectively.
The Euler-Poisson system \eqref{EP} is a fundamental fluid model describing the dynamics of ions in an electrostatic plasma \cite{Ch,Dav,Pecseli}, and it is often employed to study various phenomena of plasma such as plasma sheaths \cite{hk, suzuki} and plasma solitons \cite{BK2,BK,HS}. Especially, to study plasma waves, the limit problems seeking the connections with some well-known dispersive models have been investigated, for instance, KdV limit \cite{BK2, Guo,HNS,LS}, KP-II and Zakharov-Kuznetsov limits \cite{LLS, Pu}, and NLS limit \cite{PuNLS}.
The Euler-Poisson system \eqref{EP} is the one-fluid model of ions, where the electron density $\rho_e$ is assumed to satisfy the \textit{Boltzmann relation}
\begin{equation*}\label{Boltzmann}
\rho_e=e^{\phi}.
\end{equation*}
Based on the physical fact that
the electron mass $m_e$ is much lighter than the ion mass $m_i$, i.e, $m_e/m_i \ll 1$,
the relation can be formally derived from the two-fluid model of ions and electrons by suppressing the constant of electron mass $(m_e=0)$, referred to as the \textit{massless electron} assumption.
We refer to \cite{Ch} for more details of physicality and derivation, and also to \cite{GGPS} for a mathematical justification of the massless electron limit.
Due to the nature of electrically reactive fluids, plasmas exhibit unique phenomena different from the usual gas.
Correspondingly, the Euler-Poisson system \eqref{EP}, where the electrical effect is described by the Poisson equation with the Boltzmann relation, exhibits interesting and rich dynamics, significantly different from that of the compressible Euler equations. One of the most interesting features is that \eqref{EP} admits special types of solutions such as traveling solitary waves \cite{Cor,LS,Sag}, whose linear stability has been studied in \cite{HS} and \cite{BK} for the pressureless case and for the isothermal case, respectively.
As far as existence of smooth solutions is concerned, while in general smooth solutions to nonlinear hyperbolic equations fail to exist globally in time, it is interesting that this special solution can persist globally.
A question of global existence or finite time blow-up of smooth solutions naturally arises in the study of large-time dynamics of the Euler-Poisson system \eqref{EP}, including nonlinear stability of the solitary waves.
In the present paper, we investigate formation of singularities for the 1D Euler-Poisson system \eqref{EP}. For the isothermal case, i.e., $K>0$, we show that smooth solutions to \eqref{EP} develop $C^1$ blow-up in a finite time when the gradients of the Riemann functions are initially large. When the blow-up occurs, we find that the density and velocity stay bounded, while their derivatives blow up. For the pressureless case, i.e., $K=0$, we propose a condition for formation of singularities, requiring no largeness of the gradient of velocity. It is known that if the initial velocity has a negatively large gradient at some point, the smooth solution to the pressureless system \eqref{EP} leaves the $C^1$ class in finite time \cite{Liu}.
In contrast, our condition does not require a large gradient of the initial velocity. In particular, our result demonstrates that the density and the derivative of velocity blow up even if the initial velocity has a trivial gradient. In fact, it is the electric potential that induces the development of singularities.
For instance, when the initial local density is sufficiently low compared to the background density, i.e., the ion density is sufficiently rarefied, the electrostatic potential is determined by the distribution of ions in such a way that a fluid momentum with negative gradient is generated at later times, resulting in finite-time singularity formation. We refer to \cite{PHGOA} for a relevant numerical study of the pressureless Euler-Poisson system.
We present several numerical experiments supporting our results in Section~\ref{numerical},
where we also provide numerical examples showing that the pressureless model and the isothermal model exhibit the radically different behaviors in the solutions, see Table \ref{Table2}.
In the literature of plasma physics, the isothermal Euler-Poisson system is the most common and important. Yet the pressureless Euler-Poisson system, i.e., \eqref{EP} with $K=0$, is often considered as a simplified model for ions in a certain physical situation where the ion temperature $T_i$ is much smaller than the electron temperature $T_e$.
In other words, the pressureless Euler-Poisson system is an ideal model for \textit{cold ions} (a plasma with $T_i/T_e \ll 1$).
From a mathematical point of view, the pressureless model is weakly coupled (the hyperbolic part is decoupled) so that one can exploit its simpler structure in the analysis. However, the presence of the pressure makes the hyperbolic part of \eqref{EP} strongly coupled, which makes it harder to analyze mathematically. Not surprisingly, properties of solutions to the isothermal model are significantly different from those to the pressureless model in certain regimes. We shall discuss these issues in detail, in particular, in terms of examples of the blow-up solutions and solitary waves in Section~\ref{numerical}.
To the best of our knowledge, there is no result on the global well-posedness of smooth solutions to the Euler-Poisson system with the Boltzmann relation for the 1D and 2D cases. In fact, global existence of weak entropy solutions for the 1D isothermal case is proved in \cite{CP}. For the 3D isothermal case, smooth irrotational flows can exist globally in time \cite{GP}. We remark that
our numerical experiments demonstrate that some smooth solutions converge to a background constant state $(\rho, u, \phi)=(1,0,0)$ as time goes by,
see Figure~\ref{Fig4}.
If the smooth solution exists globally in time, one can further ask whether the solution scatters to the constant state. This can be conjectured by the dispersion relation of the associated linear system,
\begin{equation*}\label{dispersion}
\omega(\xi)
= \pm i \xi \sqrt{K+ \frac{1}{1+\xi^2}}.
\end{equation*}
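The weak dispersion is visible directly in the phase speed $|\omega(\xi)|/\xi = \sqrt{K + 1/(1+\xi^2)}$, which decreases monotonically from $\sqrt{K+1}$ for long waves to the constant $\sqrt{K}$ for short waves. The following quick numerical check (illustrative only; the value $K=0.5$ is an arbitrary choice of ours) confirms this:

```python
import math

def phase_speed(xi, K):
    # c(xi) = |omega(xi)| / xi = sqrt(K + 1 / (1 + xi^2))
    return math.sqrt(K + 1.0 / (1.0 + xi * xi))

K = 0.5
xis = [0.01, 0.1, 1.0, 10.0, 100.0]
speeds = [phase_speed(x, K) for x in xis]

# long waves travel near sqrt(K+1), short waves near sqrt(K)
print(speeds[0], math.sqrt(K + 1))   # both ~1.2247
print(speeds[-1], math.sqrt(K))      # both ~0.7071
assert all(a > b for a, b in zip(speeds, speeds[1:]))  # strictly decreasing
```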
The questions of global existence of smooth solutions and their long time behavior are intriguing and challenging since the system is \textit{weakly dispersive}.
\subsection{Main results}
We consider the Euler-Poisson system \eqref{EP} around a constant state, i.e.,
\begin{equation}\label{Farfield1}
(\rho,u,\phi)(x,t) \to (1,0,0) \quad \text{as } |x| \to \infty.
\end{equation}
We remark that any constant state $(\rho_*, u_*, \phi_*)$ can be normalized to $(\rho_*, u_*, \phi_*)= (1,0,0)$ due to the Galilean transformation for the velocity, normalization of the density, and the electrostatic potential reference determined by the density $\phi_* = \ln \rho_*$ with $\rho_*> 0$.
The system \eqref{EP}--\eqref{Farfield1} admits a unique smooth solution locally in time for sufficiently smooth initial data, see \cite{LLS}.\footnote{For instance, $(\rho_0-1,u_0)\in H^2(\mathbb{R})\times H^3(\mathbb{R})$ when $K=0$, and $(\rho_0-1,u_0)\in H^2(\mathbb{R})\times H^2(\mathbb{R})$ when $K>0$.} Furthermore, as long as the smooth solution exists, the energy
\begin{equation}\label{H-def}
H(t):= \int_\mathbb{R} \frac{1}{2}\rho u^2 + P(\rho) + \frac12 |\partial_x\phi|^2 +(\phi-1)e^\phi + 1 \,dx,
\end{equation}
where
\begin{equation*}
P(\rho):=K(\rho\ln\rho - \rho + 1), \quad (K \geq 0),
\end{equation*}
is conserved, that is,
\begin{equation}\label{EnergyConser}
H(t)=H(0).
\end{equation}
Here we note that when $K>0$, the {\it{relative pressure}} $P(\rho)$ verifies that
\begin{equation}\label{pos_pressure}
P(\rho) >0 \text{ for } \rho \in (0,1)\cup (1,\infty),
\end{equation}
and that $P(1)=P'(1)=0$ and $P''(\rho)= K\rho^{-1}>0$.
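The stated properties of the relative pressure $P$ can be verified by a quick numerical check (illustrative only, with a hypothetical value $K=1$; any $K>0$ behaves the same up to scaling):

```python
import math

K = 1.0  # hypothetical value for illustration

def P(rho):
    # relative pressure P(rho) = K (rho ln rho - rho + 1)
    return K * (rho * math.log(rho) - rho + 1.0)

# P(1) = 0 and P > 0 away from rho = 1
assert P(1.0) == 0.0
assert all(P(r) > 0 for r in (0.1, 0.5, 0.9, 1.1, 2.0, 10.0))

# P'(1) = 0 and P''(rho) = K / rho > 0, checked by finite differences
h = 1e-6
assert abs((P(1 + h) - P(1 - h)) / (2 * h)) < 1e-5
for r in (0.5, 1.0, 2.0):
    d2 = (P(r + h) - 2 * P(r) + P(r - h)) / (h * h)
    assert abs(d2 - K / r) < 1e-2
```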
\subsubsection{Isothermal Case}
To state our first theorem, we introduce the Riemann functions \cite{Rie} associated with the isothermal Euler equations:
\begin{subequations}\label{RI}
\begin{align}
& r = r(\rho,u) := u + \int_1^\rho \frac{\sqrt{p'(\xi)}}{\xi}\,d\xi = u + \sqrt{K}\ln \rho, \\
& s = s(\rho,u) := u - \int_1^\rho \frac{\sqrt{p'(\xi)}}{\xi}\,d\xi = u - \sqrt{K}\ln \rho,
\end{align}
\end{subequations}
where $p(\rho) := K \rho$ is the pressure in \eqref{EP}.
We note that the solution to \eqref{EP}--\eqref{Farfield1} satisfies that
\begin{equation}\label{RI-infty}
( r, s )(x,t) \to (0, 0) \quad \text{as } |x| \to \infty.
\end{equation}
In what follows, let $(r_0, s_0)(x) := (r,s)(x,0)$.
\begin{theorem}[Isothermal case, $K>0$]\label{MainThm_Warm}
For any given positive numbers $T_0$ and $\varepsilon$, there exist $\delta_0(T_0,\varepsilon) = \delta_0 \in (0,\varepsilon)$ and $M(T_0,\delta_0)=M>0$ such that for all $\delta\in(0,\delta_0)$, the following statement holds: if
\begin{subequations}\label{Thm_Con}
\begin{align}
& \sup_{x\in\mathbb{R}} |\rho_0(x)-1| \leq \delta , \label{Thm_Con_1}\\
& \sup_{x\in\mathbb{R}}|u_0(x)| \leq \delta, \label{Thm_Con_2}\\
& H(0) \leq \delta, \label{Thm_Con_3}\\
& -\rho_0^{-1/2}(x)\partial_x r_{0}(x) \geq M \text{ or } -\rho_0^{-1/2}(x)\partial_x s_{0}(x) \geq M \quad \text{for some } x\in\mathbb{R}, \label{Thm_Con_4}
\end{align}
\end{subequations}
then the maximal existence time $T_\ast$ of the classical solution to the isothermal Euler-Poisson system \eqref{EP} satisfying \eqref{Farfield1} does not exceed $T_0$.
Moreover, it holds that
\begin{equation}\label{gradient-blowup-u}
\| ( \partial_x \rho , \partial_x u) (\cdot,t) \|_{L^\infty(\mathbb{R})} \nearrow \infty \quad \text{ as } t\nearrow T_\ast
\end{equation}
while
\begin{equation}\label{rs-l-infty-bd}
\sup_{t\in[0, T_\ast)} \| ( \rho , u, \phi, \partial_x \phi, \partial_x^2 \phi ) (\cdot,t) \|_{L^\infty(\mathbb{R})} <\infty.
\end{equation}
\end{theorem}
Theorem \ref{MainThm_Warm} indicates that smooth solutions to the isothermal model \eqref{EP} develop $C^1$ blow-up in a finite time when the initial state $(\rho_0,u_0)$ {\it{near the electrically neutral regime}} has relatively large gradient.
In fact, the condition \eqref{Thm_Con_3} with small $\delta>0$, i.e., $H(0)$ being small, implies $\phi\approx 0$ initially (see Lemma \ref{phi-bd}).
We note that $H(0)$ is controlled by $\| (\rho_0-1, u_0)\|_{L^2}$ due to the elliptic estimates for the Poisson equation \eqref{EP_3} (see Section \ref{Appen1}):
\begin{equation}\label{EnergyBd-0}
0 \leq H(0) \leq \frac{\sup_{x \in \mathbb{R}}\rho_0}{2} \int_{\mathbb{R}} |u_0|^2\,dx + \left(\frac{1}{\kappa_0}+C\delta\right) \int_{\mathbb{R}} |\rho_0-1|^2\,dx,
\end{equation}
where
$\kappa_0:= (1-\inf\rho_0)/(-\log \inf\rho_0)$.
When the singularity occurs, we find that the density and velocity stay bounded, while their derivatives blow up. This is one of the interesting features when the pressure is present, while the pressureless case exhibits the blow-up of $L^\infty$ norm of the density. We present some numerical experiments supporting our result in Section~\ref{numerical}, see Figure \ref{Fig3} and \ref{Fig2}.
Along the characteristics associated with the distinct eigenvalues of the hyperbolic part of \eqref{EP},
\begin{equation}\label{Eigen}
\lambda^+ = \lambda^+(\rho,u) := u + \sqrt{K}, \quad \lambda^- = \lambda^-(\rho,u) := u - \sqrt{K},
\end{equation}
the corresponding Riemann functions \eqref{RI} satisfy
\begin{equation}\label{RI_1}
r' = -\phi_x, \quad s^\backprime = -\phi_x,
\end{equation}
where
\[
' := \partial_t + \lambda^+ \partial_x, \quad ^\backprime :=\partial_t + \lambda^- \partial_x,
\]
respectively.
Following the elegant calculation of Lax \cite{Lax}, we obtain that
\begin{subequations}\label{RI_1a}
\begin{align}
& (-\rho^{-1/2}r_x)' - \rho^{1/2}\frac{(\rho^{-1/2}r_x)^2}{2} = \rho^{-1/2}\phi_{xx} = \rho^{-1/2}(e^\phi-\rho), \\
& (-\rho^{-1/2}s_x)^\backprime - \rho^{1/2} \frac{(\rho^{-1/2}s_x)^2}{2} = \rho^{-1/2}\phi_{xx} = \rho^{-1/2}(e^\phi-\rho).
\end{align}
\end{subequations}
For the Euler equations,
it is a well-known result of \cite{Lax} that if $r_x$ or $s_x$ is initially negative at some point, $\rho_x$ and $u_x$ will blow up in a finite time.
However, for the case of \eqref{EP}, where the non-local effect due to the Poisson equation comes into play, the Riemann functions are not conserved along the characteristics, so that the aforementioned blow-up analysis for the Euler equations is no longer applicable.
To resolve this issue, we borrow the idea developed in \cite{Daf1} to keep track of the time-evolution of the $C^1$ norms of the Riemann functions along the characteristics. A similar approach is also adopted in \cite{Daf2,DH,WC}, and we refer to \cite{WC} for the Euler-Poisson system with heat diffusion and damping relaxation, which governs electron dynamics with a fixed background ion.
In our analysis, we obtain the uniform bounds for $\phi$ and $\phi_x$ by making use of the energy conservation. More precisely, we first show that the amplitude of $\phi$ is bounded \textit{uniformly in $x$ and $t$} as long as the smooth solution exists (Lemma \ref{phi-bd}) and that this uniform bound can be controlled only by the size of initial energy $H(0)$. With the aid of the convexity of $P(\rho)$, this fact further implies that the uniform bound for $\phi_x$ is also controlled by the initial energy $H(0)$ (Lemma \ref{Lemma_P1}). We remark that in contrast to the proof of Lemma \ref{phi-bd}, the proof of Lemma \ref{Lemma_P1} relies on the fact that $K>0$.
\subsubsection{Pressureless Case}
To state our second theorem, let us define a function $V_-: (-\infty,0] \to [0,\infty)$ by
\[
V_-(z):= \int_z^0 \sqrt{2\left((\tau-1)e^\tau + 1\right)}\,d\tau \; \text{ for } z \in (-\infty,0].
\]
By inspection, we see that $V_-$ is well-defined since $(\tau-1)e^\tau + 1$ is nonnegative. Moreover, $V_-$ is strictly decreasing on $(-\infty,0]$, and hence it has the inverse function $V_-^{-1}:[0,+\infty) \to (-\infty,0]$.
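For readers who wish to evaluate the blow-up condition of the theorem below concretely, $V_-$ and its inverse can be approximated numerically. The quadrature and bisection in the following sketch are illustrative only and play no role in the proofs:

```python
import math

def U(tau):
    return (tau - 1.0) * math.exp(tau) + 1.0  # nonnegative for all tau

def V_minus(z, n=4000):
    # V_-(z) = int_z^0 sqrt(2 U(tau)) d tau, composite trapezoidal rule
    assert z <= 0.0
    h = (0.0 - z) / n
    s = 0.5 * (math.sqrt(2 * U(z)) + math.sqrt(2 * U(0.0)))
    for k in range(1, n):
        s += math.sqrt(2 * U(z + k * h))
    return s * h

def V_minus_inv(y, lo=-50.0, hi=0.0):
    # V_- is strictly decreasing on (-inf, 0], so bisection applies
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if V_minus(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sanity checks
assert V_minus(0.0) == 0.0
assert abs(V_minus_inv(V_minus(-1.0)) - (-1.0)) < 1e-6
# exp(V_-^{-1}(zeta)) -> 1 as zeta -> 0, so small-energy data qualify
print(math.exp(V_minus_inv(0.01)))  # ~0.87
```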
\begin{theorem}[Pressureless case, $K=0$]\label{MainTheorem}
For the initial data satisfying
\begin{equation}\label{ThmCon2}
\exp\left(V_-^{-1}(H(0))\right) > 2\rho_0(\alpha) \text{ for some } \alpha \in \mathbb{R},
\end{equation}
the maximal existence time $T_*$ of the classical solution to the pressureless Euler-Poisson system \eqref{EP} satisfying \eqref{Farfield1} is finite. In particular, it holds that
\[
\lim_{t \nearrow T_\ast}\sup_{x \in \mathbb{R}}\rho(x,t) = +\infty \quad \text{ and } \quad \inf_{x \in \mathbb{R}}u_x(x,t) \approx \frac{1}{t-T_\ast}
\]
for all $t<T_\ast$ sufficiently close to $T_\ast$.
\end{theorem}
Theorem \ref{MainTheorem} demonstrates that singularities in solutions to the pressureless model \eqref{EP} can occur in a finite time if the initial density at some point is small compared to the initial energy.
In fact, the negativity of the initial velocity gradient is not required.
We remark that there is a fairly wide class of the initial data satisfying the condition \eqref{ThmCon2}. From the elliptic estimates for the Poisson equation \eqref{EP_3}, we have (see Section \ref{Appen1})
\begin{equation}\label{EnergyBd}
0 \leq H(0) \leq \frac{\sup_{x \in \mathbb{R}}\rho_0}{2} \int_{\mathbb{R}} |u_0|^2\,dx + \frac{1}{K_0} \int_{\mathbb{R}} |\rho_0-1|^2\,dx =: C(\rho_0,u_0),
\end{equation}
where
$K_0:= (1-\inf\rho_0)/(-\log \inf\rho_0)$.
On the other hand, since $\lim_{\zeta \searrow 0} V_-^{-1}(\zeta) = 0$, for any given constant $0<c<1/2$, there is $\delta_c>0$ such that $\zeta<\delta_c$ implies $\exp(V_-^{-1}(\zeta))>2c$. Thus, \eqref{ThmCon2} holds for all initial data satisfying $\inf \rho_0=c \in (0,1/2) $ and $C(\rho_0,u_0)<\delta_c \ll 1$. In particular, one can take $u_0 \equiv 0$.
For the pressureless case, along the characteristic curve $x(\alpha,t)$ associated with the fluid velocity $u$, issuing from an initial point $\alpha \in \mathbb{R}$ (see \eqref{CharODE}), one can easily obtain from \eqref{EP} that
\begin{equation}\label{Diff_Eq_1}
D\rho/Dt = -u_x \rho, \quad Du_x/Dt = -u_x^2 + \rho - e^\phi, \quad (D/Dt:= \partial_t + u \partial_x).
\end{equation}
The behavior of $\rho$ and $u_x$ depends not only on the initial data, but the potential $\phi$ along the characteristic curve due to the nonlocal nature of the system \eqref{EP}.
In \cite{Liu}, one sufficient condition for blow-up was obtained by discarding $e^\phi$ in \eqref{Diff_Eq_1} and solving the resulting (closed) system of differential inequalities for $\rho$ and $u_x$. The solution blows up if the initial data satisfy $\partial_x u_0 \leq -\sqrt{2\rho_0}$ at some point, i.e., the velocity gradient is negatively large compared to the density.
On the other hand, our analysis takes account of the non-local effect, and as such, the blow-up criterion \eqref{ThmCon2} involves the non-local quantity.
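The closed reduction of \cite{Liu}, obtained by discarding the $e^\phi$ term in \eqref{Diff_Eq_1}, can be illustrated numerically. With threshold data $u_x(0) = -\sqrt{2\rho_0}$, the quantity $u_x^2 - 2\rho$ vanishes identically along the flow, so that $\rho$ solves $\dot{\rho} = \sqrt{2}\,\rho^{3/2}$ and blows up at $t_* = \sqrt{2}\,\rho_0^{-1/2}$. The sketch below (illustrative only; it integrates the reduced ODE system, not \eqref{EP} itself) reproduces this:

```python
import math

def step(rho, v, dt):
    # one RK4 step for d(rho)/dt = -v*rho, dv/dt = -v**2 + rho
    def f(r, w):
        return (-w * r, -w * w + r)
    k1 = f(rho, v)
    k2 = f(rho + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = f(rho + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = f(rho + dt * k3[0], v + dt * k3[1])
    rho += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return rho, v

rho, v = 1.0, -math.sqrt(2.0)   # threshold data: u_x(0) = -sqrt(2 rho_0)
t, dt = 0.0, 1e-4
while rho < 1e3 and t < 2.0:
    rho, v = step(rho, v, dt)
    t += dt
print(t)  # rho exceeds 10^3 shortly before t_* = sqrt(2) ~ 1.414
```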
As in the isothermal case, one can invoke the energy conservation to show that $\phi$ is uniformly bounded in $x$ and $t$ (Lemma \ref{phi-bd}).
Next, we define
\[
w(\alpha,t):= \frac{\partial x}{\partial \alpha}(\alpha,t)
\]
and derive a second-order ODE \eqref{2ndOrdODE} for $w$. Using Lemma \ref{phi-bd}, we find that $w$ vanishes at a finite time $T_\ast$ if and only if the solution blows up in the $C^1$ topology, i.e., $u_x\searrow -\infty$ as $t \nearrow T_\ast$ at a non-integrable order in time $t$ (Lemma~\ref{Lem_Blowup}). Our goal is then to find some sufficient conditions guaranteeing $w$ vanishes in a finite time. By applying Lemma \ref{phi-bd}, we employ a comparison argument for the differential inequality to study the behavior of $w$.
The derivation of \eqref{2ndOrdODE} is related to the well-known fact that the Riccati equation can be reduced to a second-order linear ODE (\cite{Ince}, pp.~23--25). The Lagrangian formulation is also adopted for some simplified Euler-Poisson systems, for instance, the ones with zero background \cite{HJL} and constant background \cite{Dav2}. Due to the absence of the Boltzmann relation, the ODE systems for these models corresponding to \eqref{Diff_Eq_1} do not involve the nonlocal term, and one obtains exact solutions of the associated ODEs. (See also Chapter 3 in \cite{Dav} or p.~301 in \cite{Pecseli}.) The works of \cite{CCTT, ELT, LT1, LT} study the so-called critical threshold for some types of the pressureless Euler-Poisson systems. An interesting open question is whether such a critical threshold exists for the pressureless Euler-Poisson system with the Boltzmann relation.
The paper is organized as follows. In Section \ref{Sect2.1}, we prove the uniform bounds of $\phi$ and $\phi_x$ in $x$ and $t$. Theorem \ref{MainThm_Warm} and Theorem \ref{MainTheorem} are proved in Section \ref{Sect2.2} and Section \ref{Sect2.3}, respectively. In Section \ref{numerical}, we present several numerical experiments supporting our results as well as numerical examples in which the solutions to the pressureless model and the isothermal model behave differently.
\section{Proof of Main Theorems}\label{Sect2}
This section is devoted to the proof of our main theorems. We first present some preliminary lemmas that will be crucially used later. We establish the uniform bounds of $\phi$ and $\phi_x$ in $x$ and $t$.
\subsection{Uniform bounds of $\phi$ and $\phi_x$.}\label{Sect2.1}
Let us define the functions
\begin{equation*}
V(z):=
\left\{
\begin{array}{l l}
V_+(z):= \displaystyle{ \int_0^z \sqrt{2U(\tau)}\,d\tau } \; \text{ for } z \geq 0, \\
V_-(z):= \displaystyle{ \int_z^0 \sqrt{2U(\tau)}\,d\tau } \; \text{ for } z \leq 0,
\end{array}
\right.
\end{equation*}
where $U(\tau):=(\tau-1)e^\tau + 1$ is nonnegative for all $\tau\in \mathbb{R}$ and satisfies
\[
U(\tau) \to +\infty \text{ as } \tau \to +\infty, \quad U(\tau)\to 1 \text{ as } \tau \to -\infty
\]
(see Figure \ref{U&f}). Hence, $V_+$ and $V_-$ have the inverse functions $V_+^{-1}:[0,+\infty) \to [0,+\infty)$ and $V_-^{-1}:[0,+\infty) \to (-\infty,0]$, respectively. Furthermore, $V$ belongs to $C^2(\mathbb{R})$.
\begin{lemma} \label{phi-bd}
As long as the smooth solution to \eqref{EP}--\eqref{Farfield1} exists for $t\in[0,T]$, it holds that
\[
V_-^{-1}\left( H(0) \right) \leq \phi(x,t) \leq V_+^{-1}\left( H(0) \right) \quad \text{ for all } (x,t) \in \mathbb{R} \times [0,T].
\]
\end{lemma}
\begin{proof}
Since $V \in C^1(\mathbb{R})$ and $V \geq 0$, we have that for all $t \geq 0$ and $x\in\mathbb{R}$,
\begin{equation*}
\begin{split}
0\leq V \left(\phi(x,t) \right)
& = \int_{-\infty}^x \frac{dV}{dz}(\phi(y,t))\phi_y\,dy \\
& \leq \int_{-\infty}^x \left| \frac{dV}{dz}(\phi(y,t))\right| |\phi_y|\,dy \\
& \leq \int_{-\infty}^\infty U(\phi)\,dy + \frac{1}{2} \int_{-\infty}^\infty |\phi_y|^2\,dy \\
& \leq \int_\mathbb{R} \frac{1}{2}\rho u^2 + K(\rho\ln\rho-\rho+1) + \frac12 |\partial_x\phi|^2 +(\phi-1)e^\phi + 1 \,dx \\
& = H(t) = H(0),
\end{split}
\end{equation*}
where the second inequality follows from Young's inequality together with $\left|\frac{dV}{dz}\right| = \sqrt{2U}$, the last inequality holds due to \eqref{pos_pressure}, and the final equality holds due to the energy conservation \eqref{EnergyConser}. This completes the proof.
\end{proof}
\begin{lemma}\label{Lemma_P1}
Let $K>0$. Assume that $|\rho(x,t)-1|<1$ for all $(x,t) \in \mathbb{R} \times [0,T]$. Then there is a constant $C_0(K)>0$ such that
\begin{equation}\label{Lemma_P1_eq}
|\phi_x (x,t)|^2 \leq C_0(K) \cdot O\big(|H(0)|\big) \quad \text{as} \quad |H(0)| \to 0
\end{equation}
for all $(x,t) \in \mathbb{R} \times [0,T]$.
\end{lemma}
\begin{proof}
Multiplying the Poisson equation by $-\phi_x$, and then integrating in $x$,
\begin{equation}
\begin{split}\label{phi-x-2}
\frac{\phi_x^2}{2}
& = \int_{-\infty}^x(\rho-1)(-\phi_x)\,dx + \int_{-\infty}^x(e^\phi-1)\phi_x\,dx \\
& \leq \frac{1}{2} \int_{\mathbb{R}} | \rho-1 |^2 dx + \frac{1}{2} \int_{\mathbb{R}} | \phi_x|^2 dx + e^\phi - \phi -1 \\
& \le \frac{1}{2} \int_{\mathbb{R}} | \rho-1 |^2 dx + O\big(|H(0)|\big) \quad \text{as} \quad |H(0)| \to 0.
\end{split}
\end{equation}
Here we have used the fact that
\[ \frac12 \int_{\mathbb{R}} |\phi_x |^2 dx \le H(t) = H(0)\]
and, by the Taylor expansion with Lemma~\ref{phi-bd}, that
\[ e^\phi - 1 - \phi \le O\big(|H(0)|\big) \quad \text{as} \quad |H(0)| \to 0.\]
As long as $|\rho(x,t)-1|<1$ for all $(x,t) \in \mathbb{R} \times [0,T]$, it holds that
\begin{equation}\label{phi-x-3}
\int_{-\infty}^\infty \frac{1}{4}|\rho-1|^2\,dx \leq \int_{-\infty}^\infty \rho \ln \rho - \rho +1 \,dx \leq \frac{H(t)}{K} = \frac{H(0)}{K}
\end{equation}
for all $t\in[0,T]$.
The first inequality in \eqref{phi-x-3} holds thanks to the Taylor expansion, and the second inequality in \eqref{phi-x-3} holds due to \eqref{H-def}. Combining \eqref{phi-x-2} and \eqref{phi-x-3}, we obtain \eqref{Lemma_P1_eq}. We are done.
\end{proof}
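The pointwise inequality $\frac{1}{4}|\rho-1|^2 \leq \rho\ln\rho - \rho + 1$ used in \eqref{phi-x-3} for $|\rho-1|<1$ follows from the fact that $g(\rho) := \rho\ln\rho - \rho + 1 - \frac14(\rho-1)^2$ satisfies $g(1)=g'(1)=0$ and $g''(\rho) = \rho^{-1} - \tfrac12 > 0$ on $(0,2)$. A quick numerical check (illustrative only):

```python
import math

def lhs(rho):
    return 0.25 * (rho - 1.0) ** 2

def rhs(rho):
    return rho * math.log(rho) - rho + 1.0

# check (rho-1)^2/4 <= rho*ln(rho) - rho + 1 on a grid covering (0, 2)
samples = [0.05 + 0.05 * k for k in range(39)]  # 0.05 .. 1.95
assert all(lhs(r) <= rhs(r) + 1e-12 for r in samples)
```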
\begin{figure}[h]
\resizebox{110mm}{!}{\includegraphics{U_f1.eps}}
\caption{(a): The graph of $U(\tau)=(\tau-1)e^\tau +1$. (b): The graph of $V(z)$. By Lemma \ref{phi-bd}, $\phi$ is confined in the interval $[V_{-}^{-1}(H(0)), V_{+}^{-1}(H(0))]$. } \label{U&f} \end{figure}
Now we are ready to prove the main theorems.
\subsection{Proof of Theorem \ref{MainThm_Warm}}\label{Sect2.2}
Let
\begin{equation}
W := r_x, \quad Z := s_x.
\end{equation}
By taking $\partial_x$ of \eqref{RI}, we have
\begin{equation}\label{RI_2}
u_x = \frac{W+Z}{2}, \quad \rho_x = \frac{\rho(W-Z)}{2\sqrt{K}}.
\end{equation}
Taking $\partial_x$ of \eqref{RI_1}, and then using \eqref{Eigen} and \eqref{RI_2}, we get
\begin{align}\label{RI_3}
\begin{split}
\begin{split}
-\phi_{xx}
& = W' + \lambda^+_\rho \rho_x W + \lambda^+_u u_x W \\
& = W' + \frac{W^2}{2} + \frac{ZW}{2},
\end{split} \\
\begin{split}
-\phi_{xx}
& = Z^\backprime + \lambda^-_\rho \rho_x Z + \lambda^-_u u_x Z \\
& = Z^\backprime + \frac{Z^2}{2} + \frac{ ZW}{2}.
\end{split}
\end{split}
\end{align}
On the other hand, from \eqref{EP_1} and \eqref{RI_2}, we have
\begin{equation}\label{RI_4}
\rho' = -\rho Z, \quad \rho^\backprime = -\rho W.
\end{equation}
Multiplying \eqref{RI_3} by the integrating factor $\rho^{-1/2}$, and then using \eqref{RI_4} and the Poisson equation \eqref{EP_3}, we obtain
\begin{subequations} \label{RI_6}
\begin{align}
& f' = \rho^{1/2}\frac{f^2}{2} + \rho^{-1/2}(e^\phi - \rho), \\
& g^\backprime = \rho^{1/2} \frac{g^2}{2} + \rho^{-1/2}(e^\phi - \rho),
\end{align}
\end{subequations}
where
\begin{equation*}
f := -\rho^{-1/2}W, \quad g:= -\rho^{-1/2}Z.
\end{equation*}
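Near the constant state ($\rho \approx 1$, $\phi \approx 0$), the Riccati equations \eqref{RI_6} reduce heuristically to $f' \approx f^2/2$, whose solution $f(t) = f_0/(1 - f_0 t/2)$ blows up at $t = 2/f_0$; this suggests the scaling $M \sim 1/T_0$ behind the largeness condition \eqref{Thm_Con_4}. The following minimal sketch (our illustrative reduction, not part of the proof) compares a forward Euler integration of the reduced equation with the closed-form solution:

```python
def f_exact(f0, t):
    # solution of f' = f**2 / 2, f(0) = f0: blows up at t = 2 / f0 (f0 > 0)
    return f0 / (1.0 - 0.5 * f0 * t)

def f_euler(f0, t_end, n=200000):
    # forward Euler integration of f' = f**2 / 2
    dt = t_end / n
    f = f0
    for _ in range(n):
        f += dt * 0.5 * f * f
    return f

f0, T = 10.0, 0.19          # blow-up time 2/f0 = 0.2; stop just before it
approx, exact = f_euler(f0, T), f_exact(f0, T)
print(approx, exact)        # both close to 200
assert exact > 100 and abs(approx - exact) / exact < 0.05
```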
We define the Lipschitz functions $R(t)$ and $S(t)$ on $[0,T]$ by
\begin{equation}\label{Def_RS}
R(t) := \max_{x\in \mathbb{R}}|r(x,t)|, \quad S(t) := \max_{x\in \mathbb{R}}|s(x,t)|.
\end{equation}
Here $R(t)$ and $S(t)$ exist as long as the smooth solution to \eqref{EP} satisfies \eqref{RI-infty}.
Let $\varepsilon \in (0,\tfrac{1}{4})$ be a given number. There is $T_1>0$ such that the solution to \eqref{EP} with the initial data satisfying \eqref{Thm_Con_1}--\eqref{Thm_Con_3} (with $\delta<\varepsilon$) satisfies that for all $(x,t)\in \mathbb{R}\times [0,T_1]$,
\begin{equation}\label{AP1}
|\rho(x,t) - 1| \leq 2\varepsilon.
\end{equation}
We fix $t\in [0,T_1)$ and choose points $\hat{x},\check{x} \in \mathbb{R}$ such that
\begin{equation*}
R(t) = |r(\hat{x},t)|, \quad S(t) = |s(\check{x},t)|.
\end{equation*}
For any $h \in (0,t)$, we have from \eqref{Def_RS} that
\begin{align*}
\begin{split}
R(t - h) \geq |r(\hat{x} - h \lambda^+(\rho(\hat{x},t),u(\hat{x},t)) ,t - h)|, \\
S(t - h) \geq |s(\check{x} - h \lambda^-(\rho(\check{x},t),u(\check{x},t)) ,t - h)|.
\end{split}
\end{align*}
Then, it is straightforward to check that
\begin{equation}\label{AP_R1}
\lim_{h \to 0^+} \frac{R(t-h) - R(t)}{-h} \leq |r'|(\hat{x},t),
\end{equation}
provided that the limit on the LHS of \eqref{AP_R1} exists. Indeed, if $r(\hat{x},t) \neq 0$, then
\[
\begin{split}
\lim_{h \to 0^+} \frac{R(t-h) - R(t)}{-h}
& \leq |r|'(\hat{x},t) \\
& = \frac{r(\hat{x},t)}{|r|(\hat{x},t)}r'(\hat{x},t) \\
& \leq |r'|(\hat{x},t),
\end{split}
\]
and if $r(\hat{x},t)=0$, then
\[
\begin{split}
\lim_{h \to 0^+}\frac{R(t-h) - R(t)}{-h}
& = \lim_{h \to 0^+}\frac{R(t-h) }{-h} \\
& \leq 0 \\
& \leq |r'|(\hat{x},t).
\end{split}
\]
In a similar fashion, we have that
\begin{equation}\label{AP_R1_1}
\lim_{h \to 0^+} \frac{S(t-h) - S(t)}{-h} \leq |s^\backprime|(\check{x},t),
\end{equation}
provided that the limit on the LHS of \eqref{AP_R1_1} exists. Using \eqref{RI_1}, Lemma~\ref{Lemma_P1} and \eqref{Thm_Con_3}, we obtain from \eqref{AP_R1} and \eqref{AP_R1_1} that there is a constant $C_1>0$ such that
\begin{equation}\label{AP_R2}
\begin{split}
\frac{d}{dt}\left( R(t)+ S(t) \right)
& \leq |r'|(\hat{x},t) + |s^\backprime|(\check{x},t) \\
& \leq 2\max_{x}|\phi_x(\cdot,t)| \\
& \leq C_1 \delta^{1/2}
\end{split}
\end{equation}
for almost all $t\in [0,T_1]$. Integrating \eqref{AP_R2} in $t$, we get
\begin{equation}\label{AP_R3}
R(t) + S(t) \leq R(0) + S(0) + t C_1 \delta^{1/2}
\end{equation}
for all $t\in[0,T_1]$. Then, from \eqref{RI}, \eqref{Def_RS} and \eqref{AP_R3}, we notice that
\begin{equation}\label{AP11}
\begin{split}
|\rho-1|
& =|\exp\left(\frac{r-s}{2\sqrt{K}} \right) - 1| \\
& \leq \exp \left| \frac{r-s}{2\sqrt{K}} \right| - 1 \\
& \leq \exp \left( \frac{R(0) + S(0) + t C_1\delta^{1/2}}{2\sqrt{K}}\right) - 1 \\
& \leq \exp \left( \frac{2\max|u_0| + 2\sqrt{K}\max|\ln\rho_0| + t C_1\delta^{1/2} }{2\sqrt{K}}\right) - 1.
\end{split}
\end{equation}
For any given $T_0>0$, we choose $\delta_0=\delta_0(T_0,\varepsilon) \in (0,\varepsilon)$ sufficiently small such that for all $\delta\in(0,\delta_0)$, it holds that
\begin{equation}\label{AP11_1}
\exp \left( \frac{2\max|u_0| + 2\sqrt{K}\max|\ln\rho_0| + T_0 C_1\delta^{1/2} }{2\sqrt{K}}\right) - 1 < \varepsilon,
\end{equation}
provided that $\max|u_0|<\delta$ and $\max|\rho_0-1| <\delta$.
Let $\alpha$ and $\beta$ be positive numbers such that
\begin{equation}\label{AP2}
2\beta \geq \rho^{1/2} \geq 2\alpha >0 \quad \textrm{ for all } \rho \in [1-2\varepsilon,1+2\varepsilon]
\end{equation}
holds. For instance, let $\alpha=\frac{\sqrt{1-2\varepsilon}}{2 }$ and $\beta = \frac{\sqrt{1+2\varepsilon} }{2 }$; note that $\varepsilon<\tfrac14$ guarantees $1-2\varepsilon>\tfrac12$.
Now, with $\delta \in (0,\delta_0)$, we solve \eqref{EP} with the initial data satisfying \eqref{Thm_Con_1}--\eqref{Thm_Con_3}. Let $T_\ast$ be the maximal existence time for the classical solution. We suppose to the contrary that $T_0<T_\ast$.
Using the continuity argument, we claim that
\begin{equation}\label{AP1_1}
|\rho(x,t) - 1| \leq 2\varepsilon \quad \text{ for all } (x,t)\in\mathbb{R}\times[0,T_0].
\end{equation}
We define the continuous function $Y(t):=\sup_{0 \leq \tau \leq t}\sup_{x \in \mathbb{R}}|\rho(x,\tau)-1|$. Then, $Y(0)\leq \delta<\varepsilon$.
Suppose to the contrary that $Y(t) > 2\varepsilon$ for some $t\in[0,T_0]$. Then by continuity, there is $t_0\in[0,T_0]$ such that $Y(t_0)=2\varepsilon$ and
\[
|\rho(x,t)-1| \leq 2\varepsilon \quad \text{ for all } (x,t)\in\mathbb{R}\times[0,t_0].
\]
Then, from the previous calculation, \eqref{AP11} and \eqref{AP11_1}, we have that
\begin{equation*}
|\rho(x,t)-1| \leq \varepsilon \quad \text{ for all } (x,t)\in\mathbb{R}\times[0,t_0].
\end{equation*}
Hence we obtain that $2\varepsilon = Y(t_0) \leq \varepsilon $, which is a contradiction. This proves \eqref{AP1_1}.
Let $C_2 = C_2(\delta_0) := \max\{ e^{ V^{-1}_+(\delta_0)}-1 , 1- e^{ V^{-1}_-(\delta_0)} \}>0$. Note that $C_2(\delta_0) \to 0$ as $\delta_0 \to 0$. Let $\gamma=\gamma(\varepsilon) := (2\alpha)^{-1} ( C_2 + 2\varepsilon)$. Then, using \eqref{AP1_1} and \eqref{AP2}, we get that
\begin{equation}\label{gam-bd}
\begin{split}
|\rho^{-1/2}(e^\phi - \rho)|
& \leq \rho^{-1/2}\left(|e^\phi -1| + |1 - \rho| \right) \\
& \leq (2\alpha)^{-1} ( C_2 + 2\varepsilon)\\
& = \gamma
\end{split}
\end{equation}
for all $(x,t)\in\mathbb{R}\times[0,T_0]$.
Now we choose $M(T_0,\varepsilon)>0$ sufficiently large such that
\begin{equation}\label{AP12}
M \geq \gamma T_0 + \frac{1}{\alpha T_0}.
\end{equation}
We define the Lipschitz functions
\begin{equation}\label{Def_F+G+-0}
F^+(t):=\max_{y\in\mathbb{R}} f(y,t), \ \ G^+(t):=\max_{y\in\mathbb{R}} g(y,t).
\end{equation}
Notice that $F^+(t)$ and $G^+(t)$ exist as long as the smooth solution to \eqref{EP} satisfies the end-state condition \eqref{RI-infty}. In fact, if $f(x,\cdot)<0$ for all $x \in \mathbb{R}$, then $\partial_x r(x,\cdot)>0$ for all $x \in \mathbb{R}$, which contradicts \eqref{RI-infty}. Hence, $f(x_0,\cdot)\geq 0$ for some $x_0\in\mathbb{R}$, and it follows from \eqref{RI-infty} that $F^+(t)$ exists.
We fix $t\in[0,T_0]$ and choose points $\hat{y},\check{y} \in \mathbb{R}$ such that
\begin{equation}\label{Def_F+G+}
F^+(t) = f(\hat{y},t), \quad G^+(t) = g(\check{y},t).
\end{equation}
For any $h \in (0,T_0 - t)$, we have
\begin{align}\label{AP3}
\begin{split}
F^+(t + h) \geq f(\hat{y} + h \lambda^+(\rho(\hat{y},t),u(\hat{y},t)) ,t + h), \\
G^+(t + h) \geq g(\check{y} + h \lambda^-(\rho(\check{y},t),u(\check{y},t)) ,t + h).
\end{split}
\end{align}
Then, it holds that
\begin{equation}\label{AP_31}
\begin{split}
\lim_{h \to 0^+} \frac{F^+(t+h) - F^+(t)}{h} \geq f'(\hat{y},t), \\
\lim_{h \to 0^+} \frac{G^+(t+h) - G^+(t)}{h} \geq g^\backprime(\check{y},t),
\end{split}
\end{equation}
provided that the limit on the LHS of \eqref{AP_31} exists. Using \eqref{RI_6}, \eqref{AP2}, and \eqref{gam-bd}, we get
\begin{align}\label{AP_32}
\begin{split}
f'(\hat{y},t) \geq \alpha f^2(\hat{y},t) - \gamma = \alpha (F^+(t))^2 -\gamma , \\
g^\backprime(\check{y},t) \geq \alpha g^2(\check{y},t) - \gamma = \alpha (G^+(t))^2 -\gamma.
\end{split}
\end{align}
By \eqref{AP_31} and \eqref{AP_32}, it holds that for almost all $t \in[0,T_0]$,
\begin{align}\label{AP4}
\begin{split}
\frac{d}{dt}F^+(t) \geq \alpha (F^+(t))^2 -\gamma, \\
\frac{d}{dt}G^+(t) \geq \alpha (G^+(t))^2 -\gamma.
\end{split}
\end{align}
We assume that $f(x,0) \geq M$ for some $x\in\mathbb{R}$, see \eqref{Thm_Con_4}. The other case can be treated similarly.
Let us define the function
\begin{equation}\label{AP5}
X(t):= F^+(t) - \gamma(T_0 -t), \quad t \in [0, T_0].
\end{equation}
By \eqref{AP4} and \eqref{AP5}, we see that
\begin{equation}\label{AP6}
\frac{dX}{dt} \geq \alpha \left( X(t) + \gamma(T_0-t)\right)^2
\end{equation}
for almost all $t\in[0,T_0]$.
From \eqref{AP5}, \eqref{AP12}, \eqref{Thm_Con_4}, we have
\begin{equation}\label{AP12_1}
X(0) \ge M - \gamma T_0 \geq \frac{1}{\alpha T_0}.
\end{equation}
Since $X'(t) \geq 0$ from \eqref{AP6}, it follows from \eqref{AP12_1} that $X(t)>0$ for all $t\in[0,T_0]$, and hence we obtain from \eqref{AP6} that
\begin{equation}\label{AP7}
\frac{dX}{dt} \geq \alpha X^2(t)
\end{equation}
for almost all $t\in[0,T_0]$. Integrating \eqref{AP7}, we have
\begin{equation}\label{AP7_1}
X(t) \geq \frac{X(0)}{1-\alpha X(0) t} \quad \textrm{for all } t\in[0,T_0].
\end{equation}
From \eqref{AP12_1}, we have $\alpha X(0) T_0 \geq 1$, so the right-hand side of \eqref{AP7_1} diverges as $t$ approaches $1/(\alpha X(0)) \leq T_0$. Hence $X(t)$ blows up at some $t\in[0,T_0]$. This contradicts the hypothesis that $T_0<T_\ast$. Therefore, $T_\ast\leq T_0$.
Next we prove the boundedness of the solution.
Let $x^+(t)$ be the characteristic curve associated with $\lambda^+$ issuing from $x_0$, i.e.,
\[ \dot{x}^+(t) =\lambda^+(r(x^+(t),t), s(x^+(t),t)) , \quad x^+(0) =x_0. \]
Then by \eqref{RI_1}, one has
\[ \frac{d}{dt} r(x^+(t),t) = -\phi_x (x^+(t), t),\]
and upon integration, we have
\[ r(x^+(t), t) = r_0(x_0) - \int_0^t \phi_x(x^+(\tau), \tau ) d\tau.\]
Now using Lemma~\ref{Lemma_P1}, we obtain
\[ \| r(\cdot, t) \|_{L^\infty(\mathbb{R})} \le \| r_0\|_{L^\infty(\mathbb{R})} + C T_\ast \delta^{1/2}.\]
The estimate for $s$,
\[ \| s(\cdot, t) \|_{L^\infty(\mathbb{R})} \le \| s_0\|_{L^\infty(\mathbb{R})} + C T_\ast \delta^{1/2},\]
can be obtained in a similar way from \eqref{RI_1}. These together with \eqref{RI} imply
\begin{equation*}
\sup_{t\in[0, T_\ast)} \| ( \rho , u ) (\cdot,t) \|_{L^\infty(\mathbb{R})} <\infty,
\end{equation*}
and thanks to Lemma~\ref{phi-bd} and Lemma~\ref{Lemma_P1},
\begin{equation*}
\sup_{t\in[0, T_\ast)} \| \partial_x^k \phi (\cdot,t) \|_{L^\infty(\mathbb{R})} <\infty, \quad k=0,1,2.
\end{equation*}
This proves \eqref{rs-l-infty-bd}.
This completes the proof of Theorem~\ref{MainThm_Warm}. \qed
\begin{remark}[Lower bound of the existence time]
Let us define the Lipschitz functions
\begin{equation}
F(t):= \max_{x \in \mathbb{R}}|f(x,t)|, \quad G(t):=\max_{x \in \mathbb{R}}|g(x,t)|
\end{equation}
on $[0,T]$. In a similar fashion as the previous calculation, we obtain that
\begin{equation}
\frac{d}{dt}F(t) \leq \beta F^2(t) + \gamma \leq \beta \left( F(t)+\frac{\sqrt{\gamma}}{\sqrt{\beta}}\right)^2.
\end{equation}
Letting $Y(t)=F(t) + \tfrac{\sqrt{{\gamma}}}{{\sqrt{\beta}}}$, then solving the resulting differential inequality for $Y$, we obtain
\[
Y(t) \leq \frac{Y(0)}{1-\beta Y(0)t}.
\]
Hence, the solution exists at least on $[0,T_m)$, where
\[
T_m:= \min\left\lbrace \left[\beta\left( \max_{x\in \mathbb{R}}|f_0| + \frac{\sqrt{\gamma}}{\sqrt{\beta}}\right)\right]^{-1}, \left[\beta\left( \max_{x\in \mathbb{R}}|g_0| + \frac{\sqrt{\gamma}}{\sqrt{\beta}}\right)\right]^{-1} \right\rbrace.
\]
\end{remark}
\subsection{Proof of Theorem~\ref{MainTheorem}}\label{Sect2.3}
This subsection is devoted to the proof of Theorem~\ref{MainTheorem}.
For $u\in C^1$, the characteristic curves $x(\alpha,t)$ are defined as the solution to the ODE
\begin{equation}\label{CharODE}
\dot{x} = u(x(\alpha,t),t), \quad x(\alpha,0)=\alpha \in \mathbb{R}, \quad t \geq 0,
\end{equation}
where the dot denotes $d/dt$, and the initial position $\alpha$ is considered as a parameter.
Since $x(\alpha,t)$ is differentiable in $\alpha$, we obtain from \eqref{CharODE} that
\begin{equation}\label{var_ODE}
\dot{w} = u_x(x(\alpha,t),t) w, \quad w (\alpha,0)=1, \quad t \geq 0,
\end{equation}
where
\[
w=w(\alpha,t):= \frac{\partial x}{\partial \alpha}(\alpha,t).
\]
We show that $w$ satisfies a certain second-order ordinary differential equation. By integrating \eqref{EP_2} along $x(\alpha,t)$, we obtain that
\begin{equation}\label{CharODE4}
\dot{x} = u(x(\alpha,t),t) = u_0(\alpha) - \int_0^t \phi_x(x(\alpha,s),s) \,ds.
\end{equation}
Differentiating \eqref{CharODE4} in $\alpha$,
\begin{equation}\label{CharODE5}
\dot{w} = \partial_\alpha u_{0}(\alpha) - \int_0^t \phi_{xx}(x(\alpha,s),s) w(\alpha,s) \,ds.
\end{equation}
Since the RHS of \eqref{CharODE5} is differentiable in $t$, so is the LHS. Hence, we get
\begin{equation}\label{CharODE6}
\ddot{w} = - \phi_{xx} w = (\rho- e^\phi) w,
\end{equation}
where we have used \eqref{EP_3}.
On the other hand, using \eqref{EP_1} and \eqref{var_ODE}, we obtain that
\[
\begin{split}
\frac{d}{dt}\left( \rho (x(\alpha,t),t) w(\alpha,t) \right)
& = - \rho u_x w + \rho u_x w = 0,
\end{split}
\]
which yields
\begin{equation}\label{wrho}
\rho (x(\alpha,t),t) w(\alpha,t) = \rho_0(\alpha).
\end{equation}
Finally, combining \eqref{var_ODE}, \eqref{CharODE6} and \eqref{wrho}, we see that $w(\alpha,t)$ satisfies the second-order nonhomogeneous equation
\begin{equation}\label{2ndOrdODE}
\ddot{w} + e^{\phi(x(\alpha,t),t)} w = \rho_0(\alpha), \quad w(\alpha,0)=1 , \quad \dot{w}(\alpha,0) = u_{0x}(\alpha).
\end{equation}
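As a purely illustrative example (freezing $\phi \equiv 0$ is of course not consistent with the Poisson equation, which couples $\phi$ to $\rho$), the frozen version of \eqref{2ndOrdODE}, $\ddot{w} + w = \rho_0(\alpha)$, is explicitly solvable:
\[
w(\alpha,t) = \rho_0(\alpha) + \big(1-\rho_0(\alpha)\big)\cos t + u_{0x}(\alpha)\sin t,
\]
and this $w$ attains a zero in finite time if and only if $(1-\rho_0(\alpha))^2 + u_{0x}(\alpha)^2 \geq \rho_0(\alpha)^2$. This toy computation already indicates how a dip in the initial density or a steep initial velocity gradient drives $w$, and hence $\rho$, toward blow-up.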
From \eqref{wrho}, it is obvious that for each $\alpha \in \mathbb{R}$,
\begin{equation*}
\left.
\begin{array}{l l}
0<w(\alpha,t)<+\infty \quad & \Longleftrightarrow \quad 0<\rho(x(\alpha,t),t)<+\infty, \\
\lim_{t \nearrow T_*} w(\alpha,t)=0 \quad & \Longleftrightarrow \quad \lim_{t \nearrow T_*} \rho(x(\alpha,t),t) = +\infty.
\end{array}
\right.
\end{equation*}
Using Lemma \ref{phi-bd}, we show that $\sup_{x\in\mathbb{R} } |\rho(x,t)|$ and $\sup_{x\in\mathbb{R} } |u_x(x,t)|$ blow up at the same time, if one of them blows up at a finite time $T_\ast$.
\begin{lemma}\label{Lem_Blowup}
Suppose that the classical solution to \eqref{EP} with $K=0$ exists for all $0 \leq t < T_\ast < +\infty$. Then the following statements hold.
\begin{enumerate}
\item For each $\alpha \in \mathbb{R}$, the following holds true:
\begin{equation}\label{Eq_1}
\lim_{t \nearrow T_\ast}w(\alpha,t) = 0
\end{equation}
if and only if
\begin{equation}\label{Eq_2}
\liminf_{t \nearrow T_\ast } u_x\left(x(\alpha,t),t \right) = -\infty.
\end{equation}
\item If one of \eqref{Eq_1}--\eqref{Eq_2} holds for some $\tilde{\alpha}\in\mathbb{R}$, then there are uniform constants $c_0,c_1>0$ such that
\begin{equation}\label{Eq_3}
\frac{c_0}{t-T_\ast} < u_x\left(x(\tilde{\alpha},t),t \right) < \frac{c_1}{t-T_\ast}
\end{equation}
for all $t<T_\ast$ sufficiently close to $T_\ast$.
\end{enumerate}
\end{lemma}
\begin{remark}
\begin{enumerate}
\item By integrating \eqref{var_ODE}, we obtain
\begin{equation}\label{var_ODE1}
w(\alpha,t) =\exp \left( \int_0^t u_x(x(\alpha,s),s) \,ds \right).
\end{equation}
While it is easy to see by \eqref{var_ODE1} that \eqref{Eq_1} implies \eqref{Eq_2}, the converse is not obvious since one cannot exclude the possibility that $u_x$ diverges at some earlier time, say $T_0<T_\ast$, with an order integrable in $t$, for which we would still have $w(\alpha, T_0)>0$.
For the proof of the converse and obtaining the blow-up rate \eqref{Eq_3}, Lemma \ref{phi-bd}, the uniform boundedness of $\phi$, will be crucially used.
\item From \eqref{Taylor_w} and \eqref{Taylor_w3}, we see that if $\dot{w}(T_\ast)<0$, the vanishing (or blow-up) order of $w$ (or $\rho$) is $(t-T_\ast)$ (or $(t-T_\ast)^{-1}$) and if $\dot{w}(T_\ast)=0$, the vanishing (or blow-up) order of $w$ (or $\rho$) is $(t-T_\ast)^2$ (or $(t-T_\ast)^{-2}$).
\end{enumerate}
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lem_Blowup}]
We suppress the parameter $\alpha$ for notational simplicity. We first make a few basic observations. By the assumption, we have that $w(t)>0$ for all $t\in[0,T_\ast)$. From \eqref{2ndOrdODE} and the fact that $e^\phi w(t) >0$, we obtain that
\begin{equation}\label{2ndOrdODE2}
\ddot{w}(t) < \ddot{w}(t) + e^{\phi(x(\alpha, t),t)} w(t) = \rho_0,
\end{equation}
for which we integrate \eqref{2ndOrdODE2} in $t$ twice to deduce that $w(t)$ is bounded above on $[0,T_\ast)$.
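In fact, integrating \eqref{2ndOrdODE2} twice and using the initial data in \eqref{2ndOrdODE}, one obtains the explicit bound
\[
w(t) < w(0) + \dot{w}(0)\,t + \frac{\rho_0}{2}\,t^2 = 1 + u_{0x}(\alpha)\,t + \frac{\rho_0}{2}\,t^2, \quad t\in(0,T_\ast).
\]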
This together with \eqref{2ndOrdODE} and Lemma \ref{phi-bd} implies that $|\ddot{w}(t)|$ is bounded on the interval $[0,T_\ast)$.
Using this for
\[
\dot{w}(t) -\dot{w}(s) = \int_s^t \ddot{w}(\tau)\,d\tau,
\]
we see that $\dot{w}(t)$ is uniformly continuous on $[0,T_\ast)$.
Hence,
we see that the following limit
\[
\dot{w}(T_\ast):=\lim_{t \nearrow T_\ast} \dot{w}(t)\in(-\infty,+\infty)
\]
exists. In a similar fashion, one can check that
\[
w(T_\ast):=\lim_{t \nearrow T_\ast}w(t)\in[0,+\infty).
\]
We prove the first statement. It is obvious from \eqref{var_ODE1} that \eqref{Eq_1} implies \eqref{Eq_2}. To show that \eqref{Eq_2} implies \eqref{Eq_1}, we suppose $\lim_{t \nearrow T_\ast} w(t) >0$. Then, since $w(0)=1$, $w(t)$ has a strictly positive lower bound on $[0,T_\ast)$. From \eqref{Eq_2}, we may choose a sequence $t_k$ such that $u_x(t_k) \to -\infty$ as $t_k \nearrow T_\ast$. Now using \eqref{var_ODE}, we obtain that
\[
u_x(t_k)w(t_k) - u_x(s)w(s) = \dot{w}(t_k)-\dot{w}(s) = \int_s^{t_k} \ddot{w}(\tau)\,d\tau,
\]
which leads to a contradiction by letting $t_k \nearrow T_\ast$. Hence, \eqref{Eq_1} holds.
Now we prove the second statement. Due to the first statement, it is enough to assume that \eqref{Eq_1} holds for some $\tilde{\alpha}\in\mathbb{R}$. From \eqref{2ndOrdODE} and Lemma \ref{phi-bd}, we see that \eqref{Eq_1} implies
\begin{equation}\label{Taylor_w1}
\lim_{t \nearrow T_\ast} \ddot{w}(t) = \rho_0 >0.
\end{equation}
Since $w(t)>0$ on $[0,T_\ast)$, \eqref{Eq_1} also implies that
\begin{equation}\label{Taylor_w2}
\dot{w}(T_\ast)= \lim_{t \nearrow T_\ast}\dot{w}(t) \leq 0.
\end{equation}
By the fundamental theorem of calculus, one has $\dot{w}(t) = \dot{w}(\tau) + \int_{\tau} ^t \ddot{w}(s)\,ds$ for all $t, \tau \in[0,T_\ast)$. Then taking the limit $\tau \nearrow T_\ast$ and integrating once more, we obtain that for $t<T_\ast$,
\begin{equation}\label{Taylor_w}
\left.
\begin{array}{l l}
\dot{w}(t) = \dot{w}(T_\ast) + \displaystyle{ \int_{T_\ast} ^t \ddot{w}(s)\,ds, } \\
w(t) = \dot{w}(T_\ast) (t - T_\ast) + \displaystyle{ \int_{T_\ast}^t \ddot{w}(s)(t-s)\,ds. }
\end{array}
\right.
\end{equation}
Using \eqref{Taylor_w1}, we have that for all $t<T_\ast$ sufficiently close to $T_\ast$,
\begin{equation}\label{Taylor_w3}
\left.
\begin{array}{l l}
\displaystyle{ 2\rho_0 ( t-T_\ast) < \int_{T_\ast} ^t \ddot{w}(s)\,ds < \frac{\rho_0}{2} ( t-T_\ast), } \\
\displaystyle{ \frac{\rho_0}{4}(T_\ast-t)^2 < \int_{T_\ast}^t \ddot{w}(s)(t-s)\,ds < \rho_0(T_\ast-t)^2. }
\end{array}
\right.
\end{equation}
Thanks to \eqref{Taylor_w2}, we note that either $\dot{w}(T_\ast)<0$ or $\dot{w}(T_\ast)=0$ holds. Combining \eqref{Taylor_w}--\eqref{Taylor_w3}, we conclude that if $\dot{w}(T_\ast)<0$, then
\[
1/2 <(t - T_\ast)u_x = (t - T_\ast)\frac{\dot{w}}{w} < 2 ,
\]
and if $\dot{w}(T_\ast)=0$, then
\[
1 <(t - T_\ast)u_x = (t - T_\ast)\frac{\dot{w}}{w} < 8.
\]
This completes the proof of \eqref{Eq_3}.
\end{proof}
Now we are ready to prove Theorem~\ref{MainTheorem}.
\begin{proof}[Proof of Theorem~\ref{MainTheorem}]
We consider the equation \eqref{2ndOrdODE} with $\alpha\in \mathbb{R}$, for which \eqref{ThmCon2} holds. Suppose that the smooth solution to \eqref{EP} with $K=0$ exists for all $t\in[0,+\infty)$. Then, thanks to Lemma~\ref{Lem_Blowup}, we must have
\begin{equation}\label{Assume1}
w(\alpha,t)>0 \quad \text{ for all } t\in[0,+\infty).
\end{equation}
Combining \eqref{2ndOrdODE} and Lemma \ref{phi-bd}, we have that for all $t\in [0,+\infty)$,
\begin{equation}\label{DiffIneq}
\ddot{w}(t) + a w(t) \leq b, \quad w(0)\geq 1,
\end{equation}
where we let
\[
w(t)=w(\alpha,t), \quad a:=\exp\left( V_-^{-1}(H(0)) \right), \quad b:=\rho_0(\alpha)
\]
for notational simplicity. We notice that the inequality $w(0) \geq 1$ is allowed in \eqref{DiffIneq}. In what follows, we show that there exists a finite time $T_*>0$ such that $\lim_{t \nearrow T_*}w(\alpha,t) =0$. This contradicts \eqref{Assume1}, and hence finishes the proof of Theorem \ref{MainTheorem}.
We consider two disjoint cases, \textit{Case A} and \textit{Case B}, corresponding to $\dot{w}(0)\leq 0$ and $\dot{w}(0)>0$, respectively.
\textit{Case A}: We first consider the case $\dot{w}(0)\leq 0$. We claim that $b - a w(t) =0$ for some $t$. Suppose to the contrary that $b -a w(t) \ne 0$ for all $t \geq 0$.
Since $b-aw(0)<0$ from \eqref{ThmCon2} and $w(0)\ge1$, we have
\begin{equation}\label{DiffIneq1}
b -a w(t) < 0 \quad \text{ for all } t \in [ 0,+\infty).
\end{equation}
Combining \eqref{DiffIneq}--\eqref{DiffIneq1}, we see that $\ddot{w}(t)<0$ for all $t$. From this and $\dot{w}(0)\leq 0$, we have that $\dot{w}(t) \to c \in [-\infty,0)$ as $t \to +\infty$, which implies that $w(t) \to -\infty$ as $t \to +\infty$. This is a contradiction to \eqref{DiffIneq1}. This proves the claim.
Then, by the continuity of $w$, we can choose the minimal $T_1>0$ such that
\begin{equation}\label{Ineq4}
b=a w(T_1).
\end{equation}
Hence it holds $\ddot{w}(t) \le b - a w(t)<0$ for all $t \in (0,T_1)$, which in turn implies
\begin{equation*}
\dot{w}(t)=\int_0^t \ddot{w}(s)\,ds + \dot{w}(0) < 0 \quad \text{for all } t \in (0,T_1].
\end{equation*}
Now we split the proof further into two cases:
\begin{subequations}
\begin{align}
& \text{(i)} \quad \dot{w}(t)<0 \text{ on } (0,T_1] \text{ and } \dot{w}(t) \text{ has a zero on } (T_1, +\infty), \\
& \text{(ii)} \quad \dot{w}(t)<0 \text{ for all } t > 0. \label{Ineq12}
\end{align}
\end{subequations}
\textit{Case} (i) : We choose the minimal $T_2>T_1$ satisfying
\begin{equation}\label{wpT2}
\dot{w}(T_2)=0.
\end{equation}
Then, $\dot{w}(t) < 0 $ for $t\in (0,T_2)$. It suffices to show that $w(T_2)\le0$ since this implies that
$w(t)=0$ for some $t\in (0,T_2]$ as desired.
We shall show that $w(T_2) \le 0$ by contradiction. Suppose not, i.e., $w(T_2)>0$. Then since $w$ decreases on $[T_1,T_2]$, we have
\begin{equation}\label{Ineq3}
0 < w(T_2) < w(T_1)= b/a,
\end{equation}
where the equality is from \eqref{Ineq4}. Multiplying \eqref{DiffIneq} by $\dot{w} \leq 0$, and then integrating over $[0,t]$, we obtain that for $t\in[0,T_2],$
\begin{equation}\label{Ineq1}
\frac{|\dot{w}(t)|^2}{2} \geq -a\left(\frac{w(t)^2-|w(0)|^2}{2} \right) + b(w(t)-w(0)) + \frac{|\dot{w}(0)|^2}{2}.
\end{equation}
Here we define a function $\tilde{g}(w) := -a\left(\frac{w^2-|w(0)|^2}{2} \right) + b(w-w(0)) + \frac{|\dot{w}(0)|^2}{2}$.
We see that
\begin{equation}\label{Ineq2-0}
\begin{split}
\tilde{g}(0) = \frac{a|w(0)|^2}{2} -w(0)b+ \frac{|\dot{w}(0)|^2}{2}
\ge \frac{a}{2} - b+ \frac{|\dot{w}(0)|^2}{2} >0,
\end{split}
\end{equation}
where we have used the assumption $w(0)\ge 1$ and \eqref{ThmCon2} for the last two inequalities, respectively.
By inspection, one can check that the function $\tilde{g}(w)$ is strictly increasing on $[0,b/a]$. Using this together with \eqref{Ineq2-0}, we have
\begin{equation}\label{Ineq2}
\begin{split}
\tilde{g}(w)
\geq \tilde{g}(0) >0 \ \ \text{ for all } w \in[0,b/a].
\end{split}
\end{equation}
Combining \eqref{wpT2}--\eqref{Ineq2}, we have
\[
0 = \frac{|\dot{w}(T_2)|^2}{2} \geq \tilde{g}(w(T_2)) > 0,
\]
which is a contradiction.
\textit{Case} (ii) :
We first claim that $\limsup_{t \to \infty} \dot{w}(t) = 0$.
If not, i.e., $\limsup_{t \to \infty} \dot{w}(t) \ne 0$, then thanks to \eqref{Ineq12},
we have $\limsup_{t \to \infty} \dot{w}(t)< 0$. This implies $w(t)=0$ for some $t>0$, which is a contradiction to \eqref{Assume1}.
On the other hand, since $w$ is monotonically decreasing on $(0,\infty)$ thanks to \eqref{Ineq12},
we see that $w_\infty:=\lim_{t\to \infty}w(t)$ exists and $w_\infty \in [0,b/a]$ by \eqref{Ineq4}.
Similarly as in obtaining \eqref{Ineq1}, we multiply \eqref{DiffIneq} by $\dot{w}(t) \leq 0$, $t\in[0,\infty)$, and then integrate the resultant over $[0,t]$ to obtain that \eqref{Ineq1} holds for $t\in[0,\infty)$. Since $0=\limsup_{t \to \infty} \dot{w}(t)=\liminf_{t \to \infty} |\dot{w}(t)|$, we arrive at
\[
0 = \liminf_{t \to \infty} |\dot{w}(t)|^2/2 \geq \liminf_{t \to \infty} \tilde{g}(w(t)) = \tilde{g}(w_\infty)\geq \tilde{g}(0) > 0,
\]
where we have used \eqref{Ineq2} for the last inequality.
This is absurd, which completes the proof for \textit{Case A}.
\textit{Case B}: Now we consider the case $\dot{w}(0)> 0$. We claim that $\dot{w}(t)=0$ for some $t>0$. If not, i.e., $\dot{w}(t)>0$ for all $t \geq 0$, we have
\[
\ddot{w}(t) \leq b - a w(t) \leq b - aw(0) < 0.
\]
This implies that $\dot{w}(t) \to -\infty$ as $t \to +\infty$, which is a contradiction to the assumption that $\dot{w}(t)>0$ for all $t\ge0$.
By the continuity of $\dot{w}(t)$, there is a minimal number $T_0>0$ such that $\dot{w}(T_0) = 0$. Since $\dot{w}(t)>0$ for $t\in[0,T_0)$, we see that $w(T_0)\geq w(0) \geq 1$.
Now one can apply the same argument as \textit{Case A} to conclude that $w(t)$ has a zero on the interval $[T_0,+\infty)$.
This completes the proof of Theorem \ref{MainTheorem}.
\end{proof}
We remark that, following the proof of Theorem \ref{MainTheorem}, one obtains an interesting lemma concerning the existence of zeros of a second-order linear differential inequality (see Appendix \ref{Appen2}).
\section{Numerical experiments and discussions} \label{numerical}
In this section, we present numerical simulations concerning the blow-up results in Theorem~\ref{MainThm_Warm} and Theorem~\ref{MainTheorem}. Based on our numerical observations, we also discuss quantitative and qualitative differences in the behaviors of solutions between the two models, i.e., $K=0$ and $K>0$. Following \cite{LS}, an implicit pseudo-spectral scheme with $\Delta x=10/2^{10}$ is employed to solve \eqref{EP} numerically on periodic domains for numerical convenience. The Crank–Nicolson method with $\Delta t=0.01$ is applied for time marching.
\begin{table}[h]
\begin{tabular}{c|c|c|c}
&(a)&(b)&(c)\\\hline
$\rho_0(x)$&$1-0.7 \text{sech}(3x)$&$1-0.7 \text{sech}(2x)$&$1-0.3 \text{sech}(2x)$\\\hline
$H(0)$ & $0.0875$& $0.1671$& $0.0036$\\\hline
$\exp(V_{-}^{-1}(H(0)))$ &$0.6448$&$0.5390$&$0.7585$\\\hline
Blow-up condition (\ref{ThmCon2}) & Holds & Does not hold & Does not hold \\\hline
Numerical results & Figure~\ref{f1} & Figure \ref{f2} & Figure \ref{f3} \\
\end{tabular}
\caption{The pressureless case. $\rho_0$ is the initial density function. The initial velocity $u_0$ is identically zero in all cases. $H(0)$ is the energy defined in \eqref{EnergyConser} for the initial data.}\label{Table1}
\end{table}
\begin{figure}[tbhp!]
{\includegraphics[width=140mm,height=90mm]{sech307.eps}}
\caption{Numerical solution to the pressureless Euler-Poisson system for the case (a) in Table \ref{Table1}. The initial data are $\rho_0 = 1 - 0.7\text{sech}(3x)$ and $u_0 \equiv 0$. $\rho(0,t)$ and $-u_x(0,t)$ grow as $t$ increases and blow up in finite time.} \label{f1}
\end{figure}
We first numerically solve the pressureless model, i.e., \eqref{EP} with $K=0$, for which we consider three cases (see Table \ref{Table1}). In case (a), where condition (\ref{ThmCon2}) holds, we observe in Figure \ref{f1} that $\rho$ and $u_x$ blow up after $t=2.3$. This supports our result in Theorem~\ref{MainTheorem}. In case (b), we find in Figure \ref{f2} that the solution breaks down after $t=2.7$ even though condition (\ref{ThmCon2}) is not satisfied. This indicates that the blow-up condition (\ref{ThmCon2}) leaves room for improvement. Lastly, in case (c), where condition \eqref{ThmCon2} is not satisfied either, the smooth solution seems to persist for $t\in[0,20]$ in Figure \ref{f3}.
\begin{figure}[h]
\includegraphics[width=140mm,height=60mm]{sech2072.eps}
\caption{Numerical solution to the pressureless Euler-Poisson system for the case (b) in Table \ref{Table1}. The initial data are $\rho_0 = 1 - 0.7\text{sech}(2x)$ and $u_0 \equiv 0$. Although the condition \eqref{ThmCon2} does not hold, $\rho(0,t)$ and $-u_x(0,t)$ appear to blow up in finite time. } \label{f2}
\end{figure}
\begin{figure}[h]
\includegraphics[width=140mm,height=60mm]{sech2032.eps}
\caption{Numerical solution to the pressureless Euler-Poisson system for the case (c) in Table \ref{Table1}, where the initial data do not satisfy \eqref{ThmCon2}. The solution keeps oscillating and decaying as time goes on. } \label{f3}
\end{figure}
Now we consider the isothermal model, i.e., \eqref{EP} with $K>0$. Figure \ref{Fig3} shows the numerical solution to the isothermal ($K=0.5$) Euler-Poisson system. The initial data are the same as those for case (a) of the pressureless model (Figure \ref{f1}). We observe that the solution blows up in a short time, but in a different way from the pressureless case; $\rho$ and $u$ stay bounded while their gradients blow up. In fact, this feature is asserted and proved in Theorem~\ref{MainThm_Warm}. We remark that unlike the pressureless case, the gradient blow-up occurs near the origin, not at the origin. This makes sense since the two models have a different nature in terms of characteristic curves.
In fact, the ion waves propagate very differently, as illustrated in Figure \ref{Fig4}. In the pressureless case, $\rho$ oscillates at $x=0$ and the resulting oscillatory waves propagate. In the isothermal case, in contrast, such oscillatory behavior does not appear around $x=0$. Instead, some localized waves propagate along with some oscillatory waves. This difference is caused by their different ``characteristic curves". Hence, the following question is naturally posed: does the blow-up occur due to the collision of the dominant characteristics associated with $u\pm \sqrt{K}$? For both models, the analytical study of the mechanism of ion-wave propagation and blow-up would be very interesting and challenging due to the coupled electric potential term, which gives rise to a dispersive effect.
\begin{figure}[tbhp!]
{ \includegraphics[width=140mm,height=60mm]{k05rho071.eps}}
\caption{Numerical solution to the isothermal ($K=0.5$) Euler-Poisson system. The initial data are $\rho_0 = 1 - 0.7\text{sech}(3x)$ and $u_0 \equiv 0$. $\|\partial_x(\rho,u)(\cdot,t)\|_{L^\infty}$ blows up in a finite time while $\|(\rho,u)(\cdot,t)\|_{L^\infty}$ is bounded.} \label{Fig3}
\end{figure}
\begin{figure}[tbhp!]
\includegraphics[width=140mm,height=30mm]{k0rho02.eps}\\
\includegraphics[width=140mm,height=30mm]{k05rho02.eps} \\
\caption{Numerical solution to the pressureless (above) and the isothermal (below, $K=0.5$) Euler-Poisson systems. The initial data are taken as $\rho_0 = 1-(0.2)\text{sech}(x)$ and $u_0 \equiv 0$. } \label{Fig4}
\end{figure}
\begin{figure}[tbhp!]
\includegraphics[width=140mm,height=30mm]{k0rho.eps} \\
(a) $K=0$ \\
\includegraphics[width=140mm,height=30mm]{k5rho.eps} \\
(b) $K=0.5$ \caption{Numerical solutions to the pressureless (above) and the isothermal (below) Euler-Poisson systems. The initial data are taken as $\rho_0=1+\text{sech}(x)$ and $u_0=\text{sech}(x)$. The numerical solution of the pressureless case persists for a long time while that of the isothermal case blows up in a short time.} \label{Fig2}
\end{figure}
\begin{table}[h]
\begin{tabular}{c|c|c|c}
&&$K=0$&$K=0.5$\\\hline\hline
\multirow{4}{*}{Comparison 1}&$\rho_0(x)$&\multicolumn{2}{c}{$1-0.7 \text{sech}(3x)$}\\\cline{2-4}
&$u_0(x)$&\multicolumn{2}{c}{$0$}\\\cline{2-4}
&Blow-up & O & O \\\cline{2-4}
&Numerical results & Figure~\ref{f1} & Figure \ref{Fig3} \\\hline\hline
\multirow{4}{*}{Comparison 2}&$\rho_0(x)$&\multicolumn{2}{c}{$1-0.2 \text{sech}(x)$}\\\cline{2-4}
&$u_0(x)$&\multicolumn{2}{c}{$0$}\\\cline{2-4}
&Blow-up & X & X \\\cline{2-4}
&Numerical results & Figure~\ref{Fig4}(a) & Figure \ref{Fig4}(b) \\\hline\hline
\multirow{4}{*}{Comparison 3}&$\rho_0(x)$&\multicolumn{2}{c}{$1+\text{sech}(x)$}\\\cline{2-4}
&$u_0(x)$&\multicolumn{2}{c}{$\text{sech}(x)$}\\\cline{2-4}
&Blow-up & X & O \\\cline{2-4}
&Numerical results & Figure~\ref{Fig2}(a) & Figure \ref{Fig2}(b)
\end{tabular}
\caption{Comparisons between the pressureless model and the isothermal model.}\label{Table2}
\end{table}
We shall discuss how $\partial_x\rho_0$ affects the blow-up. In Figure \ref{Fig2}, we present numerical solutions to the Euler-Poisson system for both the pressureless and the isothermal cases with the same initial data. For the pressureless case (Figure \ref{Fig2}(a)), the initially localized (compressed) wave travels to the right and persists for a long time, while the numerical solution of the isothermal case (Figure \ref{Fig2}(b)) blows up in a short time.
We conjecture for the pressureless case that $\partial_x\rho_0$ itself is not a critical component in making the finite-time blow-up occur.
This is made more plausible by the fact that for any given constant $M>0$, there is a smooth traveling solitary wave solution satisfying $\sup_{x\in \mathbb{R}}|\partial_x\rho(\cdot,t)|>M$ for all $t \geq 0$. In fact, as the speed $c\nearrow c_0$, where $c_0>1$ is some critical speed, the maximum value of the traveling solitary wave $\rho_c$ tends to infinity (see Section 8.2 and Figure 6 of \cite{BK}). For the isothermal case, however, there is an upper bound (depending only on $K$) for $\sup_{c}\|\partial_x \rho_c(\cdot,t)\|_{L^\infty}$. Based on our numerical experiments, we find that the larger $K$ is, the smaller this upper bound becomes; see Figure 6 of \cite{BK}.
We also remark that
the numerical solution in Figure~\ref{Fig4} seems to converge to a background constant state $(\rho, u, \phi)=(1,0,0)$. In the study of long time dynamics, a question of whether smooth solutions globally exist or blow up in finite time arises naturally.
As mentioned in the introduction, the conjecture that the global smooth solutions scatter to the constant state is suggested by the dispersion relation of the linearized Euler-Poisson system.
The questions of global existence of smooth solutions and their long time behavior are intriguing and challenging.
\section{Appendix}
\subsection{Zeros of a second-order differential inequality}\label{Appen2}
Following the proof of Theorem \ref{MainTheorem}, one obtains the following lemma:
\begin{lemma}\label{Lem_DiffIneq}
Let $a$ and $b$ be positive constants. Suppose $w(t)$ satisfies
\[
\ddot{w} + a w \leq b
\]
for all $t \geq T_0$ and $w(T_0)\geq 1$. If $a/2>b$ and
\begin{equation}\label{Con11}
\frac{a|w(T_0)|^2}{2} -w(T_0)b+ \frac{|\dot{w}(T_0)|^2}{2} > 0,
\end{equation}
then $w(t)$ has a zero on the interval $(T_0,+\infty)$.
\end{lemma}
The authors are not aware of any literature addressing the existence of zeros of a second-order linear differential inequality with coefficient $a>0$ and a \textit{constant nonhomogeneous} term $b$. We finish this subsection with some remarks regarding Lemma \ref{Lem_DiffIneq}.
\begin{remark}
\begin{enumerate}
\item For the case of the differential equation $\ddot{w}+aw = b$,
\begin{equation}\label{Con1}
\frac{a|w(0)|^2}{2} -w(0)b+ \frac{|\dot{w}(0)|^2}{2} \geq 0
\end{equation}
is a necessary and sufficient condition in order for $w$ to have a zero on $[0,+\infty)$.
\item One needs the restriction $a/2>b$ (or $a/2\geq b$) in Lemma \ref{Lem_DiffIneq}. If $a/2<b$, then the solution to $\ddot{w} + aw = b$ with $w(0)=1$ and $\dot{w}(0)=0$ has no zero since \eqref{Con1} is not satisfied. For another example, we consider the equation
\begin{equation}\label{DiffIneq0}
\ddot{w} + aw = b - e^{-t}, \quad t \in [0,+\infty),
\end{equation}
where $a,b>0$ are constants. Since the general solution of \eqref{DiffIneq0} is
\[
w(t) = \alpha \cos \sqrt{a} t + \beta \sin \sqrt{a} t + \frac{b}{a} - \frac{e^{-t}}{a+1},
\]
we have
\[
w(0)=\alpha + \frac{b}{a} - \frac{1}{a+1}, \quad \dot{w}(0) = \sqrt{a}\beta + \frac{1}{a+1}.
\]
Since
\[
\begin{split}
\min_{t \geq 0} w(t)
& \geq \min_{t \geq 0} \left(\alpha \cos \sqrt{a} t + \beta \sin \sqrt{a} t \right) + \min_{t \geq 0} \left( \frac{b}{a} - \frac{e^{-t}}{a+1} \right) \\
& = -\sqrt{\alpha^2+\beta^2} + \frac{b}{a} - \frac{1}{a+1},
\end{split}
\]
$w(t)$ has no zero on $[0,+\infty)$ provided that
\begin{equation}\label{Aux00}
-\sqrt{\alpha^2+\beta^2} + \frac{b}{a} - \frac{1}{a+1} > 0.
\end{equation}
We choose $b=1/3$ and $a>0$ sufficiently small such that
\begin{equation}\label{Aux0}
\frac{1}{2(a+1)^2} > b-\frac{a}{2} > \frac{a}{a+1}.
\end{equation}
For $w(0)=1$ and $\dot{w}(0)=\frac{1}{a+1}$, the first inequality of \eqref{Aux0} is equivalent to \eqref{Con11} and the second inequality of \eqref{Aux0} is equivalent to \eqref{Aux00}. On the other hand, $b>a/2$ holds.
\end{enumerate}
\end{remark}
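The example in item (2) can also be explored numerically. The following sketch (ours; the solver and all parameter choices are for illustration only) integrates \eqref{DiffIneq0} with $a=0.1$ and $b=1/3$, which satisfy \eqref{Aux0}, together with the data $w(0)=1$, $\dot w(0)=\frac{1}{a+1}$; the computed trajectory stays positive on $[0,100]$, consistent with the absence of zeros when $b>a/2$.

```python
import math

A, B = 0.1, 1.0 / 3.0          # chosen so that (Aux0) holds: 1/(2(A+1)^2) > B - A/2 > A/(A+1)

def rhs(t, w, v):
    # first-order system for  w'' + A w = B - exp(-t)
    return v, B - math.exp(-t) - A * w

def integrate(w0, v0, t1=100.0, n=20000):
    # classical fourth-order Runge-Kutta integration of the system above
    h = t1 / n
    t, w, v = 0.0, w0, v0
    ws = [w]
    for _ in range(n):
        k1w, k1v = rhs(t, w, v)
        k2w, k2v = rhs(t + h/2, w + h/2*k1w, v + h/2*k1v)
        k3w, k3v = rhs(t + h/2, w + h/2*k2w, v + h/2*k2v)
        k4w, k4v = rhs(t + h, w + h*k3w, v + h*k3v)
        w += h/6 * (k1w + 2*k2w + 2*k3w + k4w)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
        ws.append(w)
    return ws

ws = integrate(1.0, 1.0 / (A + 1.0))
print(min(ws))   # stays positive, so w has no zero on [0, 100]
```

Any standard ODE integrator would serve equally well here; the hand-rolled Runge-Kutta step only keeps the sketch self-contained.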
\subsection{Proof of inequality \eqref{EnergyBd}}\label{Appen1}
Lemma \ref{LemmaAppen} is derived from some elliptic estimates (see \cite{LLS}). The inequality \eqref{EnergyBd} follows from \eqref{PhiH1Bd2} and the definition of $H(t)$.
\begin{lemma}
\label{LemmaAppen}
For $\rho-1 \in L^\infty(\mathbb{R}) \cap L^2(\mathbb{R})$ satisfying $\inf_{x \in \mathbb{R}}\rho>0$ and $\lim_{|x|\to \infty } \rho = 1$, let $\phi$ be the solution to the Poisson equation \eqref{EP_3}. Then, the following hold:
\begin{enumerate}
\item
\begin{equation}\label{MaxPrincp}
\kappa_-:=\inf_{x\in\mathbb{R}}\rho \leq e^\phi \leq \sup_{x\in\mathbb{R}}\rho \quad \text{for all } x \in \mathbb{R},
\end{equation}
\item if $1> \kappa_- > 0$, then
\begin{subequations}
\begin{align}
& \int_{\mathbb{R}} |\phi_x|^2 + \frac{\kappa_0}{2}|\phi|^2\,dx \leq \frac{1}{2 \kappa_0}\int_{\mathbb{R}} |\rho-1|^2\,dx, \quad \label{PhiH1Bd} \\
& \int_{\mathbb{R}} |\phi_x|^2 + (\phi-1)e^\phi +1 \,dx \leq \frac{1}{\kappa_0} \int_{\mathbb{R}} |\rho-1|^2\,dx, \label{PhiH1Bd2}
\end{align}
\end{subequations}
where $\kappa_0:= \frac{1-\kappa_-}{-\log \kappa_-}$.
\item if $\kappa_- \geq 1$, then \eqref{PhiH1Bd} and \eqref{PhiH1Bd2} hold with $\kappa_0 =1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The maximum principle \eqref{MaxPrincp} can be proved by Stampacchia's truncation method. Referring to \cite{LLS}, we omit the details. Multiplying the Poisson equation \eqref{EP_3} by $\phi$ and integrating by parts, we have
\begin{equation}\label{AppendE3}
\int (\rho-1)\phi \,dx = \int |\phi_x|^2 + (e^\phi-1)\phi \,dx.
\end{equation}
We prove the second statement. We first prove \eqref{PhiH1Bd}. Letting $\kappa:= - \log \kappa_->0$,
we claim that
\begin{equation}\label{AppendEq2}
(e^\phi-1)\phi \geq \frac{1-e^{-\kappa}}{\kappa}\phi^2 \ \ \text{ for } \phi \geq -\kappa.
\end{equation}
For $0>\phi \geq -\kappa$, we have
\begin{equation}\label{AppendEq1}
\frac{1-e^\phi}{-\phi} \geq \frac{1-e^{-\kappa}}{\kappa} > 0
\end{equation}
since the mapping $x \mapsto \frac{1-e^{-x}}{x}$ strictly decreases on $x>0$. Multiplying \eqref{AppendEq1} by $\phi^2$, we get \eqref{AppendEq2} for $0>\phi \geq -\kappa$. On the other hand, since $e^\phi - 1 \geq \phi$ and $1>\frac{1-e^{-x}}{x}$ for $x>0$, we obtain \eqref{AppendEq2} for $\phi \geq 0$. This proves the
claim. Then \eqref{PhiH1Bd} follows from \eqref{AppendE3} and \eqref{AppendEq2} by applying Young's inequality.
Next we prove \eqref{PhiH1Bd2}. From \eqref{AppendE3} and the fact that $e^\phi - 1 -\phi \geq 0$, we have
\begin{equation}\label{Equ1}
\begin{split}
\int (\rho-1)\phi \,dx
& = \int |\phi_x|^2 + (\phi-1)e^\phi +1 + (e^\phi-1-\phi) \,dx \\
& \geq \int |\phi_x|^2 + (\phi-1)e^\phi +1 \,dx.
\end{split}
\end{equation}
Using Young's inequality, we obtain from \eqref{PhiH1Bd} that
\begin{equation}\label{Equ2}
\begin{split}
\int (\rho-1)\phi\,dx
& \leq \frac{\kappa_0}{2}\int |\phi|^2 \,dx + \frac{1}{2\kappa_0}\int|\rho-1|^2\,dx \\
& \leq \frac{1}{\kappa_0} \int |\rho-1|^2\,dx.
\end{split}
\end{equation}
Now \eqref{PhiH1Bd2} follows from \eqref{Equ1} and \eqref{Equ2}.
The last statement can be easily checked since $\phi \geq 0$ if $\kappa_- \geq 1$.
\end{proof}
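The elementary inequality \eqref{AppendEq2} used in the proof admits a quick numerical sanity check (ours; a grid check is of course no substitute for the monotonicity argument above). Note that equality holds exactly at $\phi=-\kappa$.

```python
import math

kappa = 2.0                              # any kappa > 0 will do
c = (1.0 - math.exp(-kappa)) / kappa     # the constant in (AppendEq2)

# check (e^phi - 1) phi >= c phi^2 on a grid of phi in [-kappa, 10]
grid = [-kappa + 0.01 * k for k in range(1201)]
ok = all((math.exp(p) - 1.0) * p >= c * p * p - 1e-9 for p in grid)
print(ok)   # True; the small tolerance absorbs rounding at phi = -kappa
```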
\subsection*{Acknowledgments.}
B.K. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2020R1A2C1A01009184).
The authors thank Shih-Hsin Chen and Yung-Hsiang Huang for suggesting the example \eqref{DiffIneq0}.
\subsection*{Conflict of Interest}
The authors declare that they have no conflict of interest.
\subsection*{Availability of Data}
The data supporting the findings of this study are available within the article.
\section{Introduction}
It is an old problem
to determine whether a given irrational value of a transcendental function at an algebraic argument is
algebraic or transcendental; the algebraic values
are particularly remarkable and worthy of thorough investigation, see [Hilbert 1902] \cite{Hil1}, p. 456.
Only a few general results are known, see e.g. [Baker 1975] \cite{B}. We shall mention the famous
Gelfond-Schneider Theorem saying that $e^{\beta\log\alpha}$ is a transcendental
number, whenever $\alpha\not\in \{0, 1\}$ is an algebraic and $\beta$ an irrational
algebraic number. In contrast, Klein's invariant $j(\tau)$ is known to take
algebraic values whenever $\tau\in {\Bbb H}:=\{x+iy\in {\Bbb C}~|~y>0\}$ is
an imaginary quadratic number.
The aim of our note is a result on the algebraic values of the transcendental function
\begin{equation}
{\cal J}(x,y):=\{e^{2\pi i ~x ~+ ~\log\log y} ~|-\infty<x<\infty, ~1<y<\infty\}
\end{equation}
for the arguments $x$ and $y$ in a real quadratic field; the function ${\cal J}(x, y)$
can be viewed as an analog of Klein's invariant $j(\tau)$, hence the notation.
Namely, let ${\goth k}={\Bbb Q}(\sqrt{d})$ be a real quadratic field and ${\goth R}_{\goth f}=
{\Bbb Z}+{\goth f}O_{\goth k}$ be an order of conductor ${\goth f}\ge 1$ in the field ${\goth k}$;
let $h=|Cl~({\goth R}_{\goth f})|$ be the class number of ${\goth R}_{\goth f}$ and denote by
$\{{\Bbb Z}+{\Bbb Z}\theta_i ~|~ 1\le i\le h\}$ the set of pairwise non-isomorphic pseudo-lattices
in ${\goth k}$ having the same endomorphism ring ${\goth R}_{\goth f}$, see [Manin 2004] \cite{Man1},
Lemma 1.1.1.
Finally, let $\varepsilon$ be the fundamental unit of ${\goth R}_{\goth f}$ and let $f\ge 1$ be the
least integer satisfying equation $|Cl~(R_f)|=|Cl~({\goth R}_{\goth f})|$, where $R_f={\Bbb Z}+fO_k$
is an order of conductor $f$ in the imaginary quadratic field $k={\Bbb Q}(\sqrt{-d})$.
Our main result can be formulated as follows.
\begin{thm}\label{thm1}
For each square-free positive integer $d\not\in\{1,2,3,7,11,19,$
\linebreak
$43,67,163\}$
the values $\{{\cal J}(\theta_i,\varepsilon) ~|~ 1\le i\le h\}$ of the transcendental
function ${\cal J}(x,y)$ are algebraically conjugate numbers generating the
Hilbert class field $H(k)$ of the imaginary quadratic field $k={\Bbb Q}(\sqrt{-d})$
modulo conductor $f$.
\end{thm}
\begin{rmk}\label{rk1}
\textnormal{
Since $H(k)\cong k(j(\tau))\cong {\Bbb Q}(f\sqrt{-d}, j(\tau))$ with $\tau\in R_f$, one gets an
inclusion ${\cal J}(\theta_i,\varepsilon)\in {\Bbb Q}(f\sqrt{-d}, j(\tau))$.
}
\end{rmk}
\begin{rmk}
\textnormal{
Note that even though the absolute value $|z|=\sqrt{z\bar z}$ of an algebraic $z$
is an algebraic number, the absolute value of ${\cal J}(\theta_i,\varepsilon)$
is transcendental. It happens because $|z|$ belongs to a quadratic extension
of the real field ${\Bbb Q}(z\bar z)$ which may have no real embeddings at all.
(Compare with the CM-field, i.e. a totally imaginary quadratic extension of the totally
real number field.)
}
\end{rmk}
The structure of the article is as follows. Some preliminary facts
can be found in Section 2. Theorem \ref{thm1} is proved in Section 3
and Section 4 contains an example illustrating the theorem.
\section{Preliminaries}
The reader can find basics of $C^*$-algebras in [Murphy 1990] \cite{M}
and their $K$-theory in [Blackadar 1986] \cite{BL}.
The noncommutative tori are covered in [Rieffel 1990] \cite{Rie1}
and real multiplication in [Manin 2004] \cite{Man1}.
For main ideas of non-commutative algebraic geometry, see the survey
by [Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1}.
\subsection{Noncommutative tori}
By a {\it noncommutative torus} ${\cal A}_{\theta}$ one understands the universal {\it $C^*$-algebra}
generated by the unitary operators $u$ and $v$ acting on a Hilbert space ${\cal H}$
and satisfying the commutation relation $vu=e^{2\pi i\theta}uv$, where $\theta$ is a
real number.
\begin{rmk}\label{rmk1}
\textnormal{
Note that ${\cal A}_{\theta}$ is isomorphic to a free ${\Bbb C}$-algebra on four
generators $u,u^*,v,v^*$ and six quadratic relations:
\begin{equation}\label{eq2}
\left\{
\begin{array}{cc}
vu &= e^{2\pi i\theta} uv,\\
v^*u^* &= e^{2\pi i\theta}u^*v^*,\\
v^*u &= e^{-2\pi i\theta}uv^*,\\
vu^* &= e^{-2\pi i\theta}u^*v,\\
u^*u &= uu^*=e,\\
v^*v &= vv^*=e.
\end{array}
\right.
\end{equation}
Indeed, the first and the last two relations in system (\ref{eq2}) are obvious
from the definition of ${\cal A}_{\theta}$. By way of example,
let us demonstrate that the relations $vu=e^{2\pi i\theta} uv$ and $u^*u=uu^*=v^*v=vv^*=e$
imply the relation $v^*u = e^{-2\pi i\theta}uv^*$ in system (\ref{eq2}). Indeed,
it follows from $uu^*=e$ and $vv^*=e$ that $uu^*vv^*=e$. Since $uu^*=u^*u$, we can bring the last
equation to the form $u^*uvv^*=e$ and multiply both sides by the constant $e^{2\pi i\theta}$;
thus one gets the equation $u^*(e^{2\pi i\theta}uv)v^*=e^{2\pi i\theta}$.
But $e^{2\pi i\theta}uv=vu$, and our main equation takes the form $u^*vuv^*= e^{2\pi i\theta}$.
We can multiply both sides of the equation on the left by the element $u$
and thus get the equation $uu^*vuv^*= e^{2\pi i\theta}u$; since $uu^*=e$,
one arrives at the equation $vuv^*= e^{2\pi i\theta}u$.
Again, one can multiply both sides on the left by the element
$v^*$ and thus get the equation $v^*vuv^*= e^{2\pi i\theta}v^*u$; since $v^*v=e$,
one gets $uv^*= e^{2\pi i\theta}v^*u$ and the required identity $v^*u = e^{-2\pi i\theta}uv^*$.
The remaining two relations in (\ref{eq2}) are proved likewise; we leave it to the reader as an exercise in
non-commutative algebra.
}
\end{rmk}
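For readers who prefer a concrete model, the relations (\ref{eq2}) can be checked in a finite-dimensional representation available when $\theta=1/n$ is rational, namely with the standard clock and shift matrices (this numerical check is ours and only illustrates the relations; for irrational $\theta$ the algebra ${\cal A}_{\theta}$ admits no finite-dimensional representation).

```python
import numpy as np

n = 5
theta = 1.0 / n                      # rational theta admits an n-dimensional model
q = np.exp(2j * np.pi * theta)

u = np.roll(np.eye(n, dtype=complex), 1, axis=0)   # shift: u e_j = e_{j+1 mod n}
v = np.diag([q**k for k in range(n)])              # clock: v e_j = q^j e_j

rel1 = np.allclose(v @ u, q * (u @ v))                                # vu = e^{2 pi i theta} uv
rel2 = np.allclose(v.conj().T @ u, np.conj(q) * (u @ v.conj().T))     # v*u = e^{-2 pi i theta} uv*
unit = (np.allclose(u.conj().T @ u, np.eye(n))
        and np.allclose(v.conj().T @ v, np.eye(n)))                   # unitarity of u and v
print(rel1, rel2, unit)   # True True True
```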
\bigskip
Recall that the algebra ${\cal A}_{\theta}$ is said to be {\it stably isomorphic}
(Morita equivalent) to ${\cal A}_{\theta'}$, whenever ${\cal A}_{\theta}\otimes {\cal K}\cong
{\cal A}_{\theta'}\otimes {\cal K}$, where ${\cal K}$ is the $C^*$-algebra of all compact operators
on ${\cal H}$; the algebra ${\cal A}_{\theta}$ is stably isomorphic to ${\cal A}_{\theta'}$ if and only if
\begin{equation}\label{eq3}
\theta'={a\theta +b\over c\theta+d}\quad
\hbox{for some matrix} \quad \left(\matrix{a & b\cr c & d}\right)\in SL_2({\Bbb Z}).
\end{equation}
The $K$-theory of ${\cal A}_{\theta}$ is two-periodic and
$K_0({\cal A}_{\theta})\cong K_1({\cal A}_{\theta})\cong {\Bbb Z}^2$ so that
the Grothendieck semigroup $K_0^+({\cal A}_{\theta})$ corresponds to positive reals of
the set ${\Bbb Z}+{\Bbb Z}\theta\subset {\Bbb R}$ called a {\it pseudo-lattice}.
The torus ${\cal A}_{\theta}$ is said to have {\it real multiplication}, if $\theta$ is a quadratic
irrationality, i.e. irrational root of a quadratic polynomial with integer coefficients.
Real multiplication means that the endomorphism ring of the pseudo-lattice
${\Bbb Z}+{\Bbb Z}\theta$ is strictly larger than the ring ${\Bbb Z}$ of multiplication-by-$m$
endomorphisms; similar to complex multiplication, it means that the
endomorphism ring is isomorphic to an order ${\goth R}_{\goth f}={\Bbb Z}+{\goth f}O_{\goth k}$
of conductor ${\goth f}\ge 1$ in the real quadratic field ${\goth k}={\Bbb Q}(\theta)$ --
hence the name, see [Manin 2004] \cite{Man1}. If $d>0$ is the discriminant of ${\goth k}$, then
by ${\cal A}_{RM}^{(d, {\goth f})}$ we denote a noncommutative torus with real multiplication
by the order ${\goth R}_{\goth f}$.
\subsection{Elliptic curves}
For the sake of clarity, let us recall some well-known facts. An {\it elliptic curve}
is the subset of the complex projective plane of the form
${\cal E}({\Bbb C})=\{(x,y,z)\in {\Bbb C}P^2 ~|~ y^2z=4x^3+axz^2+bz^3\}$,
where $a$ and $b$ are some constant complex numbers. Recall that one can embed
${\cal E}({\Bbb C})$ into the complex projective space ${\Bbb C}P^3$ as the set
of points of intersection of two {\it quadric surfaces} given by the system of homogeneous
equations
\begin{equation}\label{eq4}
\left\{
\begin{array}{ccc}
u^2+v^2+w^2+z^2 &=& 0,\\
Av^2+Bw^2+z^2 &=& 0,
\end{array}
\right.
\end{equation}
where $A$ and $B$ are some constant complex numbers and
$(u,v,w,z)\in {\Bbb C}P^3$; the system (\ref{eq4}) is called
the {\it Jacobi form} of elliptic curve ${\cal E}({\Bbb C})$.
Denote by ${\Bbb H}=\{x+iy\in {\Bbb C}~|~y>0\}$ the Lobachevsky
half-plane; whenever $\tau\in {\Bbb H}$, one gets a complex torus ${\Bbb C}/({\Bbb Z}+{\Bbb Z}\tau)$.
Each complex torus is isomorphic to a non-singular elliptic curve; the isomorphism is realized by the
Weierstrass $\wp$ function and we shall write ${\cal E}_{\tau}$ to denote the corresponding elliptic curve.
Two elliptic curves ${\cal E}_{\tau}$ and ${\cal E}_{\tau'}$ are isomorphic if and only if
\begin{equation}\label{eq5}
\tau'={a\tau +b\over c\tau+d}\quad
\hbox{for some matrix} \quad \left(\matrix{a & b\cr c & d}\right)\in SL_2({\Bbb Z}).
\end{equation}
If $\tau$ is an imaginary quadratic number, elliptic curve ${\cal E}_{\tau}$ is said to have
{\it complex multiplication}; in this case lattice ${\Bbb Z}+{\Bbb Z}\tau$
admits non-trivial endomorphisms realized as multiplication of points of the lattice by the imaginary
quadratic numbers, hence the name. We shall write ${\cal E}_{CM}^{(-d,f)}$ to denote elliptic curve with complex
multiplication by an order $R_f={\Bbb Z}+fO_k$ of conductor $f\ge 1$ in the imaginary quadratic
field $k={\Bbb Q}(\sqrt{-d})$.
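To make the notion of complex multiplication concrete, the following numerical sketch (ours; it relies only on the standard $q$-expansions of $E_4$ and $\Delta$) evaluates Klein's invariant at the two CM points of discriminant $-15$, where the class number is $2$. The sum and the product of the two values come out close to rational integers, reflecting the fact that they are conjugate roots of a monic integer quadratic (the Hilbert class polynomial).

```python
import cmath, math

def sigma3(m):
    # divisor function sigma_3(m) = sum of d^3 over divisors d of m
    return sum(d**3 for d in range(1, m + 1) if m % d == 0)

def klein_j(tau, N=60):
    # j = E4^3 / Delta with the standard q-expansions
    # E4 = 1 + 240 sum sigma3(m) q^m  and  Delta = q prod (1 - q^m)^24
    q = cmath.exp(2j * math.pi * tau)
    E4 = 1 + 240 * sum(sigma3(m) * q**m for m in range(1, N))
    delta = q
    for m in range(1, N):
        delta *= (1 - q**m)**24
    return E4**3 / delta

# the two CM points of discriminant -15 (reduced forms [1,1,4] and [2,1,2])
j1 = klein_j((-1 + 1j * math.sqrt(15)) / 2).real
j2 = klein_j((-1 + 1j * math.sqrt(15)) / 4).real
s, p = j1 + j2, j1 * j2
print(round(s), round(p))   # the class polynomial is x^2 - s x + p with integer s, p
```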
\subsection{Sklyanin algebras}
By the {\it Sklyanin algebra} $S_{\alpha,\beta,\gamma}({\Bbb C})$ one understands a
free ${\Bbb C}$-algebra on four generators and six relations:
\begin{equation}
\left\{
\begin{array}{ccc}
x_1x_2-x_2x_1 &=& \alpha(x_3x_4+x_4x_3),\\
x_1x_2+x_2x_1 &=& x_3x_4-x_4x_3,\\
x_1x_3-x_3x_1 &=& \beta(x_4x_2+x_2x_4),\\
x_1x_3+x_3x_1 &=& x_4x_2-x_2x_4,\\
x_1x_4-x_4x_1 &=& \gamma(x_2x_3+x_3x_2),\\
x_1x_4+x_4x_1 &=& x_2x_3-x_3x_2,
\end{array}
\right.
\end{equation}
where $\alpha+\beta+\gamma+\alpha\beta\gamma=0$. The algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$
represents a twisted homogeneous {\it coordinate ring} of an elliptic curve ${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$
given in its Jacobi form
\begin{equation}
\left\{
\begin{array}{ccc}
u^2+v^2+w^2+z^2 &=& 0,\\
{1-\alpha\over 1+\beta}v^2+
{1+\alpha\over 1-\gamma}w^2+z^2 &=& 0,
\end{array}
\right.
\end{equation}
see [Smith \& Stafford 1993] \cite{SmiSta1}, p.267 and
[Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1}, Example 8.5.
The latter means that algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$ satisfies an isomorphism
$\hbox{{\bf Mod}}~(S_{\alpha,\beta,\gamma}({\Bbb C}))/
\hbox{{\bf Tors}}\cong \hbox{{\bf Coh}}~({\cal E}_{\alpha,\beta,\gamma}({\Bbb C}))$,
where {\bf Coh} is the category of quasi-coherent sheaves on ${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$,
{\bf Mod} the category of graded left modules over the graded ring $S_{\alpha,\beta,\gamma}({\Bbb C})$
and {\bf Tors} the full sub-category of {\bf Mod} consisting of the
torsion modules, see [Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1}, p.173.
The algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$ defines a natural {\it automorphism}
$\sigma$ of elliptic curve ${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$, {\it ibid.}
\section{Proof of theorem \ref{thm1}}
For the sake of clarity, let us outline the main ideas. The proof is based on a
categorical correspondence (a covariant functor) between elliptic curves ${\cal E}_{\tau}$
and noncommutative tori ${\cal A}_{\theta}$ taken with their ``scaled units''
${1\over\mu}e$. Namely, we prove that for $\sigma^4=Id$ the norm-closure of a self-adjoint
representation of the Sklyanin algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$
by the linear operators $u=x_1,u^*=x_2, v=x_3, v^*=x_4$ on a Hilbert space
${\cal H}$ is isomorphic to the $C^*$-algebra ${\cal A}_{\theta}$ so that its
unit $e$ is scaled by a positive real $\mu$, see lemma \ref{lem2}; because $S_{\alpha,\beta,\gamma}({\Bbb C})$
is a coordinate ring of elliptic curve ${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$
so will be the algebra ${\cal A}_{\theta}$ modulo the unit ${1\over\mu}e$.
Moreover, our construction entails that a coefficient $q$ of elliptic curve
${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$ is linked to the constants
$\theta$ and $\mu$ by the formula $q=\mu e^{2\pi i\theta}$, see lemma \ref{lem1}.
Suppose that our elliptic curve has complex multiplication, i.e. ${\cal E}_{\tau}\cong {\cal E}_{CM}^{(-d,f)}$;
then its coordinate ring $({\cal A}_{\theta}, {1\over\mu}e)$ must have real multiplication, i.e. ${\cal A}_{\theta}\cong {\cal A}_{RM}^{(d, {\goth f})}$
and $\mu=\log\varepsilon$, where $|Cl~(R_f)|=|Cl~({\goth R}_{\goth f})|$ and $\varepsilon$ is the fundamental unit of
order ${\goth R}_{\goth f}$, see lemma \ref{lem3}. But elliptic curve ${\cal E}_{CM}^{(-d,f)}$
has coefficients in the Hilbert class field $H(k)$ over imaginary quadratic field $k={\Bbb Q}(\sqrt{-d})$ modulo conductor $f$;
thus $q\in H(k)$ and therefore one gets an inclusion
\begin{equation}
\mu e^{2\pi i\theta}\in H(k),
\end{equation}
where $\theta\in {\goth k}= {\Bbb Q}(\sqrt{d})$ and $\mu=\log\varepsilon$.
(Of course, our argument is valid only when $q\not\in {\Bbb R}$, i.e. when $|Cl~(R_f)|\ge 2$;
but there are only a finite number of discriminants $d$ with $|Cl~(R_f)|=1$.)
Let us pass to a detailed argument.
\begin{lem}\label{lem1}
If $\sigma^4=Id$, then the Sklyanin algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$
endowed with the involution $x_1^*=x_2$ and $x_3^*=x_4$
is isomorphic to a free algebra ${\Bbb C}\langle x_1,x_2,x_3,x_4\rangle$ modulo an ideal
generated by six quadratic relations
\begin{equation}\label{eq9}
\left\{
\begin{array}{cc}
x_3x_1 &= \mu e^{2\pi i\theta}x_1x_3,\\
x_4x_2 &= {1\over \mu} e^{2\pi i\theta}x_2x_4,\\
x_4x_1 &= \mu e^{-2\pi i\theta}x_1x_4,\\
x_3x_2 &= {1\over \mu} e^{-2\pi i\theta}x_2x_3,\\
x_2x_1 &= x_1x_2,\\
x_4x_3 &= x_3x_4,
\end{array}
\right.
\end{equation}
where $\theta=Arg~(q)$ and $\mu=|q|$ for a complex number $q\in {\Bbb C}\setminus \{0\}$.
\end{lem}
{\it Proof.}
(i) Since $\sigma^4=Id$, the Sklyanin algebra $S_{\alpha, \beta, \gamma}({\Bbb C})$
is isomorphic to a free algebra ${\Bbb C}\langle x_1,x_2,x_3,x_4\rangle$
modulo an ideal generated by the skew-symmetric relations
\begin{equation}\label{eq10}
\left\{
\begin{array}{ccc}
x_3x_1 &=& q_{13} x_1x_3,\\
x_4x_2 &=& q_{24}x_2x_4,\\
x_4x_1 &=& q_{14}x_1x_4,\\
x_3x_2 &=& q_{23}x_2x_3,\\
x_2x_1&=& q_{12}x_1x_2,\\
x_4x_3&=& q_{34}x_3x_4,
\end{array}
\right.
\end{equation}
where $q_{ij}\in {\Bbb C}\setminus\{0\}$, see [Feigin \& Odesskii 1989] \cite{FeOd1}, Remark 1.
\bigskip
(ii) It is verified directly that relations (\ref{eq10}) are invariant under the involution
$x_1^*=x_2$ and $x_3^*=x_4$ if and only if
\begin{equation}\label{eq11}
\left\{
\begin{array}{ccc}
q_{13} &=& (\bar q_{24})^{-1},\\
q_{24} &=& (\bar q_{13})^{-1},\\
q_{14} &= & (\bar q_{23})^{-1},\\
q_{23} &= & (\bar q_{14})^{-1},\\
q_{12} &= & \bar q_{12},\\
q_{34} &= & \bar q_{34},
\end{array}
\right.
\end{equation}
where $\bar q_{ij}$ means the complex conjugate of $q_{ij}\in {\Bbb C}\setminus\{0\}$.
\bigskip
\begin{rmk}
\textnormal{
The invariant relations (\ref{eq11}) define an involution on the Sklyanin algebra;
we shall refer to such as a {\it Sklyanin $\ast$-algebra}.
}
\end{rmk}
\bigskip
(iii)
Consider a one-parameter family $S(q_{13})$ of the Sklyanin $\ast$-algebras
defined by the following additional constraints
\begin{equation}
\left\{
\begin{array}{ccc}
q_{13} &=& \bar q_{14},\\
q_{12} &=& q_{34}=1.
\end{array}
\right.
\end{equation}
It is not hard to see that the $\ast$-algebras $S(q_{13})$
are pairwise non-isomorphic for different values of the complex parameter $q_{13}$;
therefore the family $S(q_{13})$ is a normal form of the Sklyanin $\ast$-algebra
$S_{\alpha, \beta, \gamma}({\Bbb C})$ with $\sigma^4=Id$.
It remains to notice that one can write the complex parameter $q:=q_{13}$
in the polar form $q=\mu e^{2\pi i\theta}$, where $\theta=Arg~(q)$
and $\mu=|q|$. Lemma \ref{lem1} follows.
$\square$
\begin{lem}\label{lem2}
{\bf (basic isomorphism)}
The system of relations (\ref{eq2}) for noncommutative torus ${\cal A}_{\theta}$
with $u=x_1, u^*=x_2, v=x_3, v^*=x_4$, i.e.
\begin{equation}\label{eq13}
\left\{
\begin{array}{cc}
x_3x_1 &= e^{2\pi i\theta}x_1x_3,\\
x_4x_2 &= e^{2\pi i\theta}x_2x_4,\\
x_4x_1 &= e^{-2\pi i\theta}x_1x_4,\\
x_3x_2 &= e^{-2\pi i\theta}x_2x_3,\\
x_2x_1 &= x_1x_2=e,\\
x_4x_3 &= x_3x_4=e,
\end{array}
\right.
\end{equation}
is equivalent to the system of relations (\ref{eq9}) for the Sklyanin $\ast$-algebra, i.e.
\begin{equation}\label{eq14}
\left\{
\begin{array}{cc}
x_3x_1 &= \mu e^{2\pi i\theta}x_1x_3,\\
x_4x_2 &= {1\over \mu} e^{2\pi i\theta}x_2x_4,\\
x_4x_1 &= \mu e^{-2\pi i\theta}x_1x_4,\\
x_3x_2 &= {1\over \mu} e^{-2\pi i\theta}x_2x_3,\\
x_2x_1 &= x_1x_2,\\
x_4x_3 &= x_3x_4,
\end{array}
\right.
\end{equation}
modulo the following ``scaled unit relation''
\begin{equation}\label{eq15}
x_1x_2=x_3x_4={1\over\mu}e.
\end{equation}
\end{lem}
{\it Proof.}
(i) Using the last two relations, one can bring the noncommutative torus
relations (\ref{eq13}) to the form
\begin{equation}\label{eq16}
\left\{
\begin{array}{ccc}
x_3x_1x_4 &=& e^{2\pi i\theta}x_1,\\
x_4 &= & e^{2\pi i\theta}x_2x_4x_1,\\
x_4x_1x_3 &=& e^{-2\pi i\theta}x_1,\\
x_2 &=& e^{-2\pi i\theta}x_4x_2x_3,\\
x_1x_2 &=& x_2x_1 =e,\\
x_3x_4 &=& x_4x_3 =e.
\end{array}
\right.
\end{equation}
\bigskip
(ii) The system of relations (\ref{eq14}) for the Sklyanin $\ast$-algebra complemented
by the scaled unit relation (\ref{eq15}), i.e.
\begin{equation}\label{eq17}
\left\{
\begin{array}{cc}
x_3x_1 &= \mu e^{2\pi i\theta}x_1x_3,\\
x_4x_2 &= {1\over \mu} e^{2\pi i\theta}x_2x_4,\\
x_4x_1 &= \mu e^{-2\pi i\theta}x_1x_4,\\
x_3x_2 &= {1\over \mu} e^{-2\pi i\theta}x_2x_3,\\
x_2x_1 &= x_1x_2={1\over\mu}e,\\
x_4x_3 &= x_3x_4={1\over\mu}e,
\end{array}
\right.
\end{equation}
is equivalent to the system
\begin{equation}\label{eq18}
\left\{
\begin{array}{cc}
x_3x_1x_4 &= e^{2\pi i\theta}x_1,\\
x_4 &= e^{2\pi i\theta}x_2x_4x_1,\\
x_4x_1x_3 &= e^{-2\pi i\theta}x_1,\\
x_2 &= e^{-2\pi i\theta}x_4x_2x_3,\\
x_2x_1 &= x_1x_2={1\over\mu}e,\\
x_4x_3 &= x_3x_4={1\over\mu}e
\end{array}
\right.
\end{equation}
by using multiplication and cancellation involving the last two equations.
\bigskip
(iii) For each $\mu\in (0,\infty)$ consider a {\it scaled unit} $e':={1\over\mu} e$ of
the Sklyanin $\ast$-algebra $S(q)$ and the two-sided ideal $I_{\mu}\subset S(q)$
generated by the relations $x_1x_2=x_3x_4=e'$. Comparing the defining relations (\ref{eq14}) for
$S(q)$ with relation (\ref{eq13}) for the noncommutative torus ${\cal A}_{\theta}$, one gets an
isomorphism
\begin{equation}\label{eq19}
S(q)~/~I_{\mu}\cong {\cal A}_{\theta}.
\end{equation}
The isomorphism maps generators $x_1,\dots,x_4$ of the Sklyanin
$\ast$-algebra $S(q)$ to such of the $C^*$-algebra ${\cal A}_{\theta}$ and
the {\it scaled} unit $e'\in S(q)$ to the {\it ordinary} unit $e\in {\cal A}_{\theta}$.
Lemma \ref{lem2} follows.
$\square$
\begin{rmk}
\textnormal{
It follows from (\ref{eq19}) that noncommutative torus ${\cal A}_{\theta}$ with the
unit ${1\over\mu}e$ is a coordinate ring of elliptic curve ${\cal E}_{\tau}$.
Moreover, such a correspondence is a covariant functor which maps isomorphic
elliptic curves to the stably isomorphic (Morita equivalent) noncommutative tori;
the latter fact follows from an observation that isomorphisms in category {\bf Mod}
correspond to stable isomorphisms in the category of underlying algebras.
Such a functor explains the same (modular) transformation law in formulas
(\ref{eq3}) and (\ref{eq5}).
}
\end{rmk}
\begin{lem}\label{lem3}
The coordinate ring of elliptic curve ${\cal E}_{CM}^{(-d,f)}$ is isomorphic to
the noncommutative torus ${\cal A}_{RM}^{(d, {\goth f})}$ with the unit ${1\over \log\varepsilon}e$,
where ${\goth f}$ is the least integer satisfying equation $|Cl~({\goth R}_{\goth f})|=|Cl~(R_f)|$
and $\varepsilon$ is the fundamental unit of order ${\goth R}_{\goth f}$.
\end{lem}
{\it Proof.}
The fact that ${\cal A}_{RM}^{(d, {\goth f})}$ is a coordinate ring of elliptic curve
${\cal E}_{CM}^{(-d,f)}$ was proved in [Nikolaev 2014] \cite{Nik1}.
We shall focus on the second part of lemma \ref{lem3} saying that the scaling constant
$\mu=\log\varepsilon$.
To express $\mu$ in terms of intrinsic invariants of pseudo-lattice
$K_0^+({\cal A}_{RM}^{(d, {\goth f})})\cong {\Bbb Z}+{\Bbb Z}\theta$,
recall that ${\goth R}_{\goth f}$ is the ring of endomorphisms of
${\Bbb Z}+{\Bbb Z}\theta$; we shall write ${\goth R}_{\goth f}^{\times}$ to denote
the multiplicative group of units (i.e. invertible elements) of ${\goth R}_{\goth f}$.
Since $\mu$ is an additive functional on the pseudo-lattice $\Lambda={\Bbb Z}+{\Bbb Z}\theta$,
for each $\varepsilon, \varepsilon'\in {\goth R}_{\goth f}^{\times} $ it must hold
$\mu(\varepsilon\varepsilon' \Lambda)=\mu(\varepsilon\varepsilon') \Lambda=
\mu(\varepsilon)\Lambda+\mu(\varepsilon')\Lambda$.
Eliminating $\Lambda$ in the last equation, one gets
\begin{equation}
\mu(\varepsilon\varepsilon')=\mu(\varepsilon)+\mu(\varepsilon'),
\qquad \forall \varepsilon, \varepsilon'\in {\goth R}_{\goth f}^{\times}.
\end{equation}
The only real-valued function on ${\goth R}_{\goth f}^{\times}$ with such a property
is the logarithmic function (a regulator of ${\goth R}_{\goth f}^{\times}$); thus $\mu(\varepsilon)=\log\varepsilon$,
where $\varepsilon$ is the fundamental unit of ${\goth R}_{\goth f}$.
Lemma \ref{lem3} is proved.
$\square$
\begin{rmk}
{\bf (Second proof of lemma \ref{lem3})}
\textnormal{
The formula $\mu=\log\varepsilon$ can be derived using a purely measure-theoretic argument.
Indeed, if $h_x: {\Bbb R}\to {\Bbb R}$ is a ``stretch-out'' automorphism
of real line ${\Bbb R}$ given by the formula $t\mapsto tx,~\forall t\in {\Bbb R}$,
then the only $h_x$-invariant measure $\mu$ on ${\Bbb R}$ is the ``scale-back''
measure $d\mu={1\over t} dt$. Taking the antiderivative and integrating
between $t_0=1$ and $t_1=x$, one gets
\begin{equation}
\mu=\log x.
\end{equation}
It remains to notice that for pseudo-lattice
$K_0^+({\cal A}_{RM}^{(d,{\goth f})})\cong {\Bbb Z}+{\Bbb Z}\theta$,
the automorphism $h_x$ corresponds to $x=\varepsilon$, where $\varepsilon>1$
is the fundamental unit of order ${\goth R}_{\goth f}$.
Lemma \ref{lem3} follows. $\square$.
}
\end{rmk}
\bigskip
One can prove theorem \ref{thm1} in the following steps.
\bigskip
(i) Let $d\not\in\{1,2,3,7,11,19,43, 67,163\}$ be a positive square-free integer.
In this case $h=|Cl~(R_f)|\ge 2$ and ${\cal E}_{CM}^{(-d,f)}\not\cong {\cal E}({\Bbb Q})$.
\bigskip
(ii) Let $\{{\cal E}_1,\dots, {\cal E}_h\}$ be pairwise non-isomorphic elliptic curves
having the same endomorphism ring $R_f$. From $|Cl~(R_f)|=|Cl~({\goth R}_{\goth f})|$ and lemma \ref{lem3},
one gets a family $\{{\cal A}_1,\dots, {\cal A}_h\}$ of pairwise stably non-isomorphic noncommutative tori; the corresponding
pseudo-lattices $K_0^+({\cal A}_i)={\Bbb Z}+{\Bbb Z}\theta_i$ will have the same
endomorphism ring ${\goth R}_{\goth f}$. Thus for each $1\le i\le h$ one gets
an inclusion
\begin{equation}
(\log\varepsilon) e^{2\pi i\theta_i}\in H(k),
\end{equation}
where $H(k)$ is the Hilbert class field of quadratic field $k={\Bbb Q}(\sqrt{-d})$ modulo conductor $f$.
Since $(\log\varepsilon)\exp (2\pi i\theta_i)=\exp (2\pi i\theta_i+\log\log\varepsilon):={\cal J}(\theta_i, \varepsilon)$,
one concludes that ${\cal J}(\theta_i, \varepsilon)\in H(k)$.
\bigskip
(iii) Finally, because $Gal~(H(k)|k)\cong Cl~(R_f)\cong Cl~({\goth R}_{\goth f})$, it is easy
to see that the set $\{{\cal J}(\theta_i,\varepsilon) ~|~ 1\le i\le h\}$ is invariant under the action of
the group $Gal~(H(k)|k)$ on $H(k)$; in other words, the numbers ${\cal J}(\theta_i, \varepsilon)$
are algebraically conjugate.
\bigskip
Theorem \ref{thm1} is proved.
$\square$
\section{Example}
In this section we use remark \ref{rk1} to estimate ${\cal J}(\theta, \varepsilon)$ for special
values of the discriminant $d$; the reader is encouraged to construct examples of their own.
\begin{exm}
\textnormal{
Let $d=15$ and $f=1$. It is well known that the class number of order $R_{f=1}\cong O_k$ of
the field $k={\Bbb Q}(\sqrt{-15})$ is equal to 2. Because the class number of the field ${\goth k}={\Bbb Q}(\sqrt{15})$
is also 2, one concludes from equation $|Cl~({\goth R}_{\goth f})|=|Cl~(R_f)|$ that conductor ${\goth f}=1$.
Let $\tau\in O_k$; it is well known that in this case $j(\tau)\in {\Bbb Q}(\sqrt{5})$, see e.g. [Silverman 1994] \cite{S},
Example 6.2.2. In view of remark \ref{rk1}, one gets an inclusion ${\cal J}(\theta_i,\varepsilon)\in {\Bbb Q}(\sqrt{-15}, \sqrt{5})$.
Since one of $\theta_i$ is equal to $\sqrt{15}$ and the fundamental unit $\varepsilon$ of the field ${\goth k}={\Bbb Q}(\sqrt{15})$
is equal to $4+\sqrt{15}$, one gets the following inclusion
\begin{equation}
{\cal J}(\sqrt{15}, ~4+\sqrt{15}):=e^{2\pi i\sqrt{15}+\log\log (4+\sqrt{15})}\in {\Bbb Q}\left(\sqrt{-15}, \sqrt{5}\right).
\end{equation}
}
\end{exm}
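The example admits a quick floating-point illustration (ours; numerics can only illustrate the definition of ${\cal J}$, not detect algebraicity). Note that $\varepsilon=4+\sqrt{15}$ is indeed a unit, since $4^2-15\cdot 1^2=1$.

```python
import cmath, math

d = 15
eps = 4 + math.sqrt(d)        # fundamental unit; Pell equation: 4^2 - 15*1^2 = 1
theta = math.sqrt(d)          # one of the moduli theta_i

# J(x, y) = exp(2 pi i x + log log y) = (log y) * exp(2 pi i x)
J = cmath.exp(2j * math.pi * theta + math.log(math.log(eps)))

print(4**2 - d * 1**2)            # 1, so eps is a unit of Z[sqrt(15)]
print(abs(J) - math.log(eps))     # ~0: |J| = log(eps), transcendental by the remark above
```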
\section{Introduction}
Non-Gaussian probability distributions are frequently observed in a variety of
systems, including physical, chemical, economic, and social systems.
Well-known examples of non-Gaussian probability distributions are the L\'evy $\alpha$-stable distributions, which can be defined via their Fourier transform as
\begin{align}
\mathcal{L}_{\alpha}^C(x) = \frac{1}{2 \pi} \int dk
\exp[ikx-C\vert k \vert^{\alpha}], \quad (0<\alpha \le 2)
\end{align}
and Tsallis' $q$-generalized distributions \cite{Tsallis},
\begin{align}
W_q(x) \propto \left[1-(1-q) \beta x^2 \right]^{\frac{1}{1-q}}, \quad (1 < q \le 3)
\label{q-Gaussian}
\end{align}
in the non-extensive statistical mechanics based on Tsallis' entropy \cite{Tsallis}.
A common key feature of both probability distributions is the presence of an asymptotic power-law tail,
$\mathcal{L}_{\alpha}^C(x) \sim \vert x \vert^{-\alpha-1}$, and
$W_q(x) \sim x^\frac{2}{1-q}$, respectively.
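The quoted tail exponent of the $q$-Gaussian can be checked directly. The short Python sketch below (with the illustrative choice $q=3/2$, $\beta=1$, not values from any experiment) estimates the local log-log slope of $W_q$ at large $x$ and compares it with $2/(1-q)=-4$:

```python
import math

q, beta = 1.5, 1.0                     # illustrative values, 1 < q <= 3

def W_q(x):
    """Unnormalized Tsallis q-Gaussian, eq. (2)."""
    return (1.0 - (1.0 - q) * beta * x**2) ** (1.0 / (1.0 - q))

# Estimate the log-log slope of the tail and compare with 2/(1-q).
x1, x2 = 1e3, 1e4
slope = (math.log(W_q(x2)) - math.log(W_q(x1))) / (math.log(x2) - math.log(x1))
print(slope, 2.0 / (1.0 - q))           # both close to -4
```

A fully analogous check applies to the L\'evy tail, although there the density itself must first be obtained by numerical Fourier inversion.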
There is another type of non-Gaussian distributions
with asymptotic power-law tails, which is called a $\kappa$-generalized
Gaussian,
\begin{align}
W_{\kappa}(x) \propto
\left( -\kappa \beta x^2 + \sqrt{1 + \kappa^2 \beta^2 x^4}
\right)^{\frac{1}{\kappa}},
\quad (\vert \kappa \vert < 2)
\label{k-Gaussian}
\end{align}
It was originally studied in the context of statistical
physics by Kaniadakis \cite{k-entropy}. This $\kappa$-generalized
Gaussian can be derived by maximizing Kaniadakis' $\kappa$-entropy
under appropriate constraints.
This $\kappa$-Gaussian reduces to the standard Gaussian, $\exp(-\beta x^2)$,
in the limit of $\kappa=0$.
For a large value of $x$, the $\kappa$-Gaussian
obeys a power-law as
$W_{\kappa}(x) \sim x^{-\frac{2}{\kappa}}$.\\
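Both limits quoted for $W_\kappa$ (the Gaussian limit $\kappa\to 0$ and the power-law tail $x^{-2/\kappa}$) are easy to confirm numerically. The sketch below uses the numerically stable rewriting $(-t+\sqrt{1+t^2})^{1/\kappa}=\exp(-\mathop{\mathrm{arcsinh}}(t)/\kappa)$ to avoid cancellation at large $x$; parameter values are illustrative.

```python
import math

def W_kappa(x, kappa, beta=1.0):
    """Unnormalized kappa-Gaussian; stable form of eq. (3):
    (-t + sqrt(1 + t^2))^(1/kappa) = exp(-asinh(t)/kappa), t = kappa*beta*x^2."""
    return math.exp(-math.asinh(kappa * beta * x**2) / kappa)

# kappa -> 0: W_kappa approaches the standard Gaussian exp(-beta x^2).
x = 1.3
print(W_kappa(x, 1e-6), math.exp(-x**2))

# Large x: the log-log slope of the tail approaches -2/kappa.
kappa = 0.5
x1, x2 = 1e3, 1e4
slope = (math.log(W_kappa(x2, kappa)) - math.log(W_kappa(x1, kappa))) / math.log(x2 / x1)
print(slope, -2.0 / kappa)
```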
The $\kappa$-generalized distributions have been shown to well explain,
for example, the energy distributions of cosmic rays \cite{k-entropy},
and the size distribution of personal incomes \cite{Clementi}.
In previous works \cite{WS07,WS09}, we studied the asymptotic behavior of
the $\kappa$-generalized \textit{nonlinear} Fokker-Planck (FP) equation,
whose steady-state is a $\kappa$-generalized Gaussian distribution.
Furthermore a $\kappa$-generalized Gaussian is also derived \cite{WS06}
by generalizing the log-likelihood function in Gauss' law of error,
which is an original method developed by Gauss himself to derive
a standard Gaussian.
On the other hand,
Lutz \cite{Lutz} has recently shown an analytic prediction that
the stationary momentum distributions of trapped atoms in an optical lattice
are, in fact, Tsallis' $q$-generalized Gaussian \eqref{q-Gaussian}.
Later, Gaeta \cite{Gaeta} showed its invariance under the asymptotic Lie symmetries.
The prediction was experimentally verified by a London team \cite{Douglas}.
This anomalous transport is described by a \textit{linear} FP equation
with a nonlinear drift coefficient,
\begin{align}
K^{\rm ol}(p) = - \frac{\alpha p}{1+ \left( \frac{p}{p_c} \right)^2 },
\label{K}
\end{align}
which represents a capture force with damping coefficient $\alpha$,
and this force acts only on slow particles whose momentum is smaller than
the capture momentum $p_c$.
A characteristic feature of this nonlinear drift is that
for a small momentum $\vert p \vert <p_c$, the drift is approximately
linear, $K^{\rm ol}(p) \sim -p$, i.e., it reduces to the familiar Ornstein-Uhlenbeck process;
whereas for a large momentum $\vert p \vert >p_c$, it asymptotically
decreases as $K^{\rm ol}(p) \sim -1/p$.\\
In contrast to most systems with power-law distributions which are often
described by nonlinear kinetic equations \cite{Frank}, the above process
is described by an ordinary linear FP equation. Consequently standard
methods can be applied to the analysis of the problem.
It is worth stressing that the Lutz analysis is not restricted to anomalous transport
in an optical lattice, but can be applied to a wide class of systems
described by a FP equation with a drift coefficient decaying
asymptotically as $-1/p$.
In this contribution, we propose another momentum-dependent drift coefficient
$K(p)$ given by equation \eqref{kDrift}, which also asymptotically decreases
as $-1/p$ for a large momentum $\vert p \vert > p_c$.
We consider the process described by the linear FP equation with
this drift coefficient $K(p)$.
The next section provides a brief review of $\kappa$-generalized thermostatistics
and some properties of the $\kappa$-generalized Gaussian.
In section three we consider an ordinary linear FP equation with the
proposed momentum-dependent drift coefficient $K(p)$ and a constant diffusion coefficient $D$.
It is shown that the steady-state of the FP equation
with this nonlinear drift coefficient $K(p)$
is a $\kappa$-generalized Gaussian.
The deformed parameter $\kappa$ can be expressed in terms
of the microscopic parameters.
In section four the asymptotic behavior of the FP equation is studied.
We show the non-increase of the Lyapunov functional associated
with the FP equation. Then we numerically analyze the time evolutions
of numerical solutions against different initial probability distributions,
and show the asymptotic convergence of the numerical solutions to
$\kappa$-Gaussian.
In section five we discuss the relation between $\beta$ and
the average energy in the parameter region that the mean-kinetic
energy diverges.
The final section is a summary.
\section{$\kappa$-generalized thermostatistics}
We first give a brief review of the generalized thermostatistics based on
$\kappa$-entropy defined as
\begin{align}
S_{\kappa} &\equiv -k_{\rm B} \int_{-\infty}^{\infty} dp \; w(p) \ln_{\kappa} w(p),
\end{align}
for a probability distribution $w(p)$ of the momentum $p$.
Here $k_{\rm B}$ denotes the Boltzmann constant, and $ \ln_{\kappa} (x)$ is the $\kappa$-logarithmic function defined by
\begin{align}
\ln_{\kappa} (x) \equiv \frac{x^{\kappa} - x^{-\kappa}}{2 \kappa}.
\end{align}
The $\kappa$-entropy $S_{\kappa}$ is a real-parameter ($\kappa$) extension of
the standard Boltzmann-Gibbs-Shannon (BGS) entropy.
The inverse function of $ \ln_{\kappa} (x)$ is expressed as
\begin{align}
\exp_{\kappa} (x) &\equiv \exp\left[\frac{1}{\kappa} \mathop\mathrm{arcsinh}\nolimits(\kappa x) \right]
\nonumber \\
&= \left( \kappa x + \sqrt{1 + \kappa^2 x^2}
\right)^{\frac{1}{\kappa}},
\label{kexp}
\end{align}
and called $\kappa$-exponential function.
For a small value of $x$, the $\kappa$-exponential function
is well approximated by $\exp(x)$, whereas for a large positive value of $x$,
it asymptotically obeys a power-law $ \exp_{\kappa} (x) \sim x^{1/\kappa}$.
In the limit of $\kappa=0$ both $ \ln_{\kappa} (x)$ and $ \exp_{\kappa} (x)$ reduce to
the standard logarithmic and exponential functions, respectively.
Accordingly the $S_{\kappa}$ reduces to the BGS entropy.
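These defining properties, the equivalence of the two expressions in eq. (\ref{kexp}), the inverse relation between $\ln_\kappa$ and $\exp_\kappa$, and the $\kappa\to 0$ limit, can all be verified in a few lines; the numerical values below are arbitrary test points.

```python
import math

def exp_kappa(x, kappa):
    """kappa-exponential, eq. (kexp): exp(asinh(kappa*x)/kappa)."""
    return math.exp(math.asinh(kappa * x) / kappa)

def ln_kappa(x, kappa):
    """kappa-logarithm: (x^kappa - x^(-kappa)) / (2*kappa)."""
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

kappa, x = 0.5, 1.7

# The two closed forms in eq. (kexp) agree.
power_form = (kappa * x + math.sqrt(1.0 + (kappa * x) ** 2)) ** (1.0 / kappa)
print(exp_kappa(x, kappa), power_form)

# ln_kappa inverts exp_kappa (and vice versa).
print(ln_kappa(exp_kappa(x, kappa), kappa))       # ~ 1.7

# kappa -> 0 recovers the ordinary exponential.
print(exp_kappa(x, 1e-8), math.exp(x))
```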
Maximizing the $\kappa$-entropy $S_{\kappa}$ under the constraints of
the mean kinetic energy and the normalization of probability distribution $w(p)$,
\begin{align}
\frac{\delta}{\delta w} \Big(
S_{\kappa}[w]-\beta \int_{-\infty}^{\infty} dp \, \frac{p^2}{2} w(p)
-\gamma \int_{-\infty}^{\infty} dp w(p) \Big) = 0,
\end{align}
leads to a so-called $\kappa$-generalized Gaussian,
\begin{align}
w^{\rm ME}(p) = \alpha \, \exp_{\kappa}
\left[- \frac{1}{\lambda} \big(\gamma + \beta \, \frac{p^2}{2} \big)\right].
\end{align}
Here $\gamma$ is a constant for the normalization, and depends
on $\beta$, which controls the variance of $w^{\rm ME}(p)$.
The parameters $\alpha$ and $\lambda$ are $\kappa$-dependent constants,
which are given by
\begin{align}
\alpha &= \left(\frac{1-\kappa}{1+\kappa}\right)^{\frac{1}{2\kappa}}, \quad
\lambda = \sqrt{1-\kappa^2},
\end{align}
respectively.
The $\kappa$-generalization of free-energy was studied in \cite{SW06}
and given by
\begin{align}
F_{\kappa} &\equiv -\left( \frac{I_{\kappa} +\gamma}{\beta} \right),
\end{align}
where
\begin{align}
I_{\kappa} &\equiv \int_{-\infty}^{\infty} dp \, \frac{1}{2} \Big[
\left(w^{\rm ME}(p)\right)^{1+\kappa}+ \left(w^{\rm ME}(p) \right)^{1-\kappa} \Big].
\end{align}
The $\kappa$-generalized free-energy $F_{\kappa}$ satisfies the Legendre transformation structures,
\begin{align}
F_{\kappa} = U - \frac{1}{\beta} \, S_{\kappa}, \quad
\frac{d}{d \beta} \, \Big( \beta F_{\kappa} \Big) = U,
\label{Fk}
\end{align}
where
\begin{align}
U = \int_{-\infty}^{\infty} dp \, \frac{p^2}{2} w^{\rm ME}(p).
\end{align}
\section{Proposed nonlinear drift coefficient}
Let us consider the linear FP equation
\begin{align}
\frac{\partial}{\partial t} w(p, t) =
-\frac{\partial}{\partial p} \Big( K(p) \, w(p, t) \Big)
+ D \frac{\partial^2}{\partial p^2} w(p,t),
\label{kFPE}
\end{align}
with a constant diffusion coefficient $D$
and the momentum-dependent drift coefficient,
\begin{align}
K(p) = -\frac{\alpha p}{\sqrt{1+\left(\frac{p}{p_c}\right)^4}},
\label{kDrift}
\end{align}
where $\alpha$ is a damping coefficient and $p_c$ denotes a capture
momentum.
Note that this proposed drift coefficient $K(p)$ also
behaves as $-p$ for a small momentum $\vert p \vert <p_c$,
and asymptotically decreases
as $-1/p$ for a large momentum $\vert p \vert > p_c$
as same as $K^{\rm ol}(p)$ in anomalous diffusions in optical lattice \cite{Lutz}.
We introduce the associated potential,
\begin{align}
V(p) = \frac{p_c^2}{2} \mathop\mathrm{arcsinh}\nolimits \left( \frac{p^2}{p_c^2} \right),
\label{k-potential}
\end{align}
which is related with $K(p)$ by
\begin{align}
K(p) = -\alpha \frac{d}{dp} \, V(p).
\label{K-V}
\end{align}
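The relation $K(p)=-\alpha\,\mathrm{d}V/\mathrm{d}p$ and the two limiting behaviors of $K(p)$ quoted above can be verified numerically; the sketch below uses illustrative values of $\alpha$ and $p_c$, not parameters of any particular experiment.

```python
import math

alpha, p_c = 2.0, 1.5    # illustrative damping coefficient and capture momentum

def K(p):
    """Proposed drift coefficient, eq. (kDrift)."""
    return -alpha * p / math.sqrt(1.0 + (p / p_c) ** 4)

def V(p):
    """Associated potential, eq. (k-potential)."""
    return 0.5 * p_c**2 * math.asinh(p**2 / p_c**2)

# K(p) = -alpha * dV/dp, checked by a central finite difference.
p, h = 0.8, 1e-6
dV = (V(p + h) - V(p - h)) / (2.0 * h)
print(K(p), -alpha * dV)

# Limits: K ~ -alpha*p for |p| << p_c, and K ~ -alpha*p_c^2/p for |p| >> p_c.
print(K(1e-4) / (-alpha * 1e-4))              # ~ 1
print(K(1e4) / (-alpha * p_c**2 / 1e4))       # ~ 1
```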
\subsection{Steady-state}
Next we show that the steady-state $w_s(p)$ of the FP equation with the nonlinear
drift \eqref{kDrift} is a $\kappa$-Gaussian.
To this end, the steady-state condition $\frac{\partial}{\partial t} w_s(p) =0$
in Eq. \eqref{kFPE} leads to
\begin{align}
\frac{d}{d p} \ln w_s(p) &=
\frac{K(p)}{D} = - \frac{d}{dp} \frac{\alpha V(p)}{D}.
\end{align}
In the last step, we used the relation \eqref{K-V}.
Substituting equation \eqref{k-potential} and integrating, we have
\begin{align}
\ln w_s(p) = -\frac{\alpha p_c^2}{2 D} \, \mathop\mathrm{arcsinh}\nolimits\left(\frac{p^2}{p_c^2}\right) + \textrm{const. },
\end{align}
then the steady-state becomes
\begin{align}
w_s(p) &\propto \exp\left[-\frac{\alpha p_c^2}{2 D}
\mathop\mathrm{arcsinh}\nolimits \left(\frac{p^2}{p_c^2}\ \right) \right].
\end{align}
By using the definition \eqref{kexp} of $\kappa$-exponential function, and
introducing the two parameters as
\begin{align}
\kappa = \frac{2 D}{\alpha p_c^2},\quad
\beta = \frac{\alpha}{D},
\label{para}
\end{align}
we can express the steady-state as
\begin{align}
w_s(p) = \frac{1}{Z_{\kappa}} \exp_{\kappa} \left[-\beta \frac{p^2}{2} \right],
\end{align}
where $Z_{\kappa}$ is the normalization factor.
We thus found that the steady-state of the FP equation with the nonlinear
drift coefficient $K(p)$ is nothing but a $\kappa$-generalized Gaussian.
Remarkably, the parameter $\kappa$ can be expressed in terms
of the microscopic parameters, i.e., $\alpha, D, p_c$ in the FP equation.
This fact allows us to give a physical interpretation of
the $\kappa$-generalized distribution, as similar as $q$-generalized distribution in Lutz' analysis \cite{Lutz}.
For example, in the limit of $p_c \to \infty$, the drift coefficient $K(p)$
reduces to $-p$ of the standard Ornstein-Uhlenbeck process, and
the deformed parameter $\kappa$ of equation \eqref{para} reduces to $0$.
This corresponds to the standard case in which the steady-state $w_s(p)$
is a standard Gaussian. \\
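As a numerical cross-check of this derivation (a sketch with illustrative parameter values), one can verify that $w_s(p)\propto\exp_\kappa(-\beta p^2/2)$, with $\kappa$ and $\beta$ from eq. (\ref{para}), indeed satisfies the steady-state condition $\mathrm{d}\ln w_s/\mathrm{d}p = K(p)/D$:

```python
import math

alpha, D, p_c = 4.0, 1.0, 1.0           # illustrative microscopic parameters
kappa = 2.0 * D / (alpha * p_c**2)      # = 0.5, eq. (para)
beta = alpha / D                        # = 4.0, eq. (para)

def K(p):
    """Drift coefficient of eq. (kDrift)."""
    return -alpha * p / math.sqrt(1.0 + (p / p_c) ** 4)

def ln_ws(p):
    """Log of the unnormalized steady state exp_kappa(-beta*p^2/2)."""
    return math.asinh(-kappa * beta * p**2 / 2.0) / kappa

# d/dp ln w_s should equal K(p)/D at every p (central finite differences).
h = 1e-6
for p in (0.3, 1.0, 2.5):
    lhs = (ln_ws(p + h) - ln_ws(p - h)) / (2.0 * h)
    print(lhs, K(p) / D)
```

The agreement rests on the identity $\kappa\beta/2 = 1/p_c^2$, which holds automatically for the parameter choice of eq. (\ref{para}).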
Note also that the parameter $\beta$ is expressed as the ratio of
the friction coefficient $\alpha$ to the diffusion coefficient $D$,
in analogy with the fluctuation-dissipation relation.
We emphasize that the parameter $\beta$ is not equal to an inverse temperature,
because $w_s(p)$ is not an equilibrium state but a steady-state, for which
temperature is not well defined.
\section{Asymptotic behavior}
We here study the asymptotic solutions of the FP equation
with the nonlinear drift coefficient $K(p)$.\\
In previous works \cite{WS07,WS09} we studied the nonlinear FP equation
associated with the $\kappa$-generalized entropy, and showed the existence
of the associated Lyapunov functional, which characterizes a long
time behavior of the process described by the FP equation.
Similarly, the Lyapunov functional,
\begin{align}
{\mathcal L}(t) \equiv U[w] - \frac{D}{\alpha} \, S[w],
\label{L}
\end{align}
is monotonically non-increasing, i.e., $\frac{d}{dt} {\mathcal L}(t) \le 0$, for any time evolution of $w(p,t)$ according to the linear FP
equation \eqref{kFPE}.
In equation \eqref{L},
\begin{align}
S[w] = - k_{\rm B} \int dp \; w(p, t) \ln w(p, t),
\end{align}
is BGS entropy, and
\begin{align}
U[w] \equiv \int dp \; V(p) \, w(p, t)
\end{align}
is the ensemble average of the potential $V(p)$.\\
The proof of the non-increase of ${\mathcal L}(t)$ is as follows:
\begin{align}
\frac{d {\mathcal L}(t)}{d t} =& \int_{-\infty}^{\infty} dp \;
\frac{\partial }{\partial w} \left[ V(p)\, w
+ \frac{D}{\alpha} w \ln w \right] \;
\frac{\partial w(p, t)}{\partial t} \nonumber \\
=& \int_{-\infty}^{\infty} dp \; \left[ V(p)
+ \frac{D}{\alpha}( \ln w + 1 )
\right]
\nonumber \\
&\qquad \times \frac{\partial}{\partial p}\,\left[-K(p) w +
D \frac{\partial}{\partial p}\, w \right]
\nonumber \\
= & -\int_{-\infty}^{\infty} dp \, \frac{w}{\alpha}
\left\{ -K(p) + D \, \frac{\partial}{\partial p} \ln w \right\}^2
\le 0.
\end{align}
From the first line to the second line we used the FP equation \eqref{kFPE},
and in the last step we used integration by parts.\\
Thus ${\mathcal L}(t)$ is non-increasing and
consequently ${\mathcal L}(t)$ is minimized for
the steady-state $w_s(p)$ as
\begin{align}
\min {\mathcal L}(t) = \lim_{t \to \infty} {\mathcal L}(t)
= U[w_s(p)] - \frac{D}{\alpha} \, S[w_s(p)].
\label{freeE}
\end{align}
Note that the last expression is the free-energy associated
with the steady-state $w_s(p)$ of the linear FP equation.
In contrast to the $\kappa$-generalized free-energy \eqref{Fk}
associated with the nonlinear FP equation \cite{WS07,WS09},
$S$ is the standard BGS entropy and
$U$ is the ensemble average of the nonlinear potential $V(p)$
in the relation \eqref{freeE}.
\subsection{Asymptotic convergence to $\kappa$-Gaussian}
In order to study a long time behavior of the FP equation
with the nonlinear drift coefficient $K(p)$,
we performed numerical simulations against different initial probability
distributions.
We used a variant of the numerical method originally developed
by Gosse and Toscani \cite{GT06} for the Cauchy problem of the
evolution equation.
For the details of the numerical scheme, please refer to \cite{GT06}.
A time-evolution of the numerical solution $w(p,t)$ of the
FP equation with $K(p)$ is shown in figure 1.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{evo.eps}
\caption{
A typical time-evolution of $w(p,t)$ from an initial probability density
with triangle shape.
The microscopic parameters are set to $p_c=1, D=1, \alpha=4$,
so that $\kappa=0.5, \beta=4$.
}
\end{center}
\label{evo}
\end{figure}
Note that the numerical solution $w(p,t)$ seems to be asymptotically
approaching the $\kappa$-Gaussian.
In order to confirm this property, we
fitted the numerical solution $w(p,t)$ at each time $t$ with
the $\kappa$-Gaussian
\begin{align}
a(t) \exp_{\kappa} \left[- b(t) p^2 \right],
\label{k-Gau}
\end{align}
where $a(t)$ and $b(t)$ are fitting parameters.
Then the time evolution of the function defined by
\begin{align}
\eta(p,t) \equiv
\ln \left( \frac{w(p, t)}{a(t) \exp_{\kappa} \left[- b(t) p^2 \right]} \right),
\label{ratio}
\end{align}
is studied. If a numerical solution $w(p,t)$ is perfectly fitted
with equation \eqref{k-Gau}, the function $\eta(p,t)$ becomes
identically zero. In figure 2 the time evolutions
of $\eta(p,t)$ and $w(p,t)$ against an initial probability distribution
$w(p,0)$ with a triangle shape are plotted. It is obvious from this figure that the function $\eta(p,t)$
is decreasing to zero as time evolves. This fact shows that
the numerical solutions are approaching the $\kappa$-Gaussian.
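The fitting diagnostic of eq. (\ref{ratio}) is easy to reproduce on synthetic data. In the sketch below (pure Python, with arbitrarily chosen $a$, $b$ and $\kappa=1$), the parameters are recovered by inverting the $\kappa$-exponential at a single grid point, after which $\eta(p)$ vanishes to machine precision:

```python
import math

kappa = 1.0
a_true, b_true = 0.6, 2.0

def w_model(p, a, b):
    """a * exp_kappa(-b p^2), with exp_kappa(x) = exp(asinh(kappa x)/kappa)."""
    return a * math.exp(math.asinh(-kappa * b * p**2) / kappa)

# Synthetic "numerical solution" sampled on a grid.
ps = [0.1 * i for i in range(1, 60)]
data = [w_model(p, a_true, b_true) for p in ps]

# a is the value at p = 0; b follows by inverting the kappa-exponential once.
a_fit = w_model(0.0, a_true, b_true)
p0, w0 = ps[10], data[10]
b_fit = math.sinh(-kappa * math.log(w0 / a_fit)) / (kappa * p0**2)

# eta(p) = ln( w / (a_fit exp_kappa(-b_fit p^2)) ) should vanish identically.
eta_max = max(abs(math.log(w / w_model(p, a_fit, b_fit))) for p, w in zip(ps, data))
print(a_fit, b_fit, eta_max)
```

For actual FP solutions, which are only asymptotically $\kappa$-Gaussian, a least-squares fit over the whole grid is of course preferable to a single-point inversion.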
\begin{figure}
\begin{center}
\includegraphics[width=0.38\textwidth]{lnratio5.eps}
\includegraphics[width=0.38\textwidth]{lnratio8.eps}
\includegraphics[width=0.38\textwidth]{lnratio10.eps}
\includegraphics[width=0.38\textwidth]{lnratio14.eps}
\caption{Long time behavior of the quantity $\eta(p,t)$ given in
equation \eqref{ratio}
and the numerical solutions $w(p,t)$ against
an initial probability distribution with triangle shape.
The microscopic parameters are set to $p_c=1, D=1, \alpha=2$,
hence $\kappa=1, \beta=2$.
The number of calculated points is $101$.
The best fitted parameters $a(t)$ and $b(t)$ in \eqref{k-Gau} are indicated
in each plot.
Insets show $w(p,t)$ at each time $t$.}
\end{center}
\label{eta-evolution}
\end{figure}
\section{The relation between $\beta$ and the average energy}
The $\kappa$-Gaussian of the steady-state $w_s(p)$ is not
normalizable for the parameter region of $ 2 \le \vert \kappa \vert$,
or equivalently, $\alpha p_c^2 \le D$. As in the momentum
distributions in an optical lattice, the physical meaning of this
is that compared with the random momentum fluctuations $D$, the capture force
$\alpha p_c^2$ is too weak to keep the particle around the bottom ($p=0$)
of the potential $V(p)$. In other words, the potential is too shallow
to capture a particle with the large random momentum fluctuations.
Next let us turn our focus to the parameter region of
$2/3 < \vert \kappa \vert < 2 (D < \alpha p_c^2 < 3 D)$,
in which the second moment
\begin{align}
\ave{p^2} \equiv \int_{-\infty}^{\infty} dp \, p^2 \, w_s(p),
\end{align}
of the $\kappa$-Gaussian becomes infinite.
Consequently the mean kinetic energy, $\ave{p^2}/2m$, diverges,
which is a hallmark of anomalous diffusion.
Lutz \cite{Lutz} showed an explicit correspondence
between ergodicity breaking in a system described by power-law tail
distributions and the divergence of the moments of these distributions:
the ensemble average and the time average of the dynamical variable $p^n$
are not equal to each other in the infinite-time limit
whenever the $2n$-th moment of the stationary momentum distribution diverges.
His analysis is also valid for the present study because both momentum-dependent
drift coefficients $K^{\rm ol}(p)$ and $K(p)$ have the same asymptotic
behavior $\sim -1/p$, and consequently both steady-states are non-Gaussian
distributions with power-law tails.
Although the mean kinetic energy diverges in this way,
in the same region, let us consider the following average energy
\begin{align}
\ave{ p \frac{d}{dp} V(p)} &\equiv
\int_{-\infty}^{\infty} dp \, \frac{p^2}{\sqrt{1+\left(\frac{p}{p_c}\right)^4}}
\, w_s(p).
\label{ave}
\end{align}
Since
\begin{align}
- \frac{1}{\beta} \frac{d}{dp} \, \exp_{\kappa} \left(-\beta \frac{p^2}{2} \right)
= \frac{p \exp_{\kappa} \left(-\beta \frac{p^2}{2} \right)}{\sqrt{1+\left(\frac{p}{p_c}\right)^4}},
\end{align}
integrating by parts, the r.h.s. of equation \eqref{ave} becomes
\begin{align}
-\frac{p}{\beta} \exp_{\kappa} \left(-\beta \frac{p^2}{2} \right) \Big\vert_{-\infty}^{\infty}+
\frac{1}{\beta} \int_{-\infty}^{\infty} dp \, w_s(p)
\end{align}
In the region $2/3 < \vert \kappa \vert < 2$, the first term becomes zero
and $w_s(p)$ is normalizable; thus we finally obtain
\begin{align}
\ave{ p \frac{d}{dp} V(p)} = \frac{1}{\beta}.
\label{this}
\end{align}
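This relation can be checked by direct quadrature. The sketch below uses the illustrative parameters $\alpha=2$, $D=1$, $p_c=1$ (so $\kappa=1$, $\beta=2$, inside the region $2/3<|\kappa|<2$); both integrands decay like $p^{-2}$, so the finite cutoff $P$ contributes only a small $O(1/P)$ tail error.

```python
import math

alpha, D, p_c = 2.0, 1.0, 1.0            # gives kappa = 1, beta = 2
kappa = 2.0 * D / (alpha * p_c**2)
beta = alpha / D

def ws_unnorm(p):
    """Unnormalized steady state exp_kappa(-beta p^2 / 2)."""
    return math.exp(math.asinh(-kappa * beta * p**2 / 2.0) / kappa)

# Midpoint rule on [-P, P].
P, n = 500.0, 200_000
h = 2.0 * P / n
grid = [-P + (i + 0.5) * h for i in range(n)]

Z = h * sum(ws_unnorm(p) for p in grid)
avg = h * sum(p**2 / math.sqrt(1.0 + (p / p_c) ** 4) * ws_unnorm(p)
              for p in grid) / Z

print(avg, 1.0 / beta)                   # both close to 0.5
```

Note that the plain second moment $\ave{p^2}$ would diverge for this $\kappa$; it is only the regularized average of eq. (\ref{ave}) that remains finite.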
This relation reminds us of \textit{a general equipartition principle} \cite{Tolman},
\begin{align}
\ave{ p \frac{\partial}{\partial p} E} = k_{\rm B} T,
\label{GER}
\end{align}
where $E$ is the energy of a system in thermal equilibrium
with the temperature $T$. However, as pointed out before,
$\beta$ is not an inverse temperature since the steady-state $w_s(p)$
is, in general, a non-equilibrium state, in which the temperature is not
well defined.
\section{Summary}
We have proposed a momentum-dependent drift coefficient which
asymptotically decreases as $-1/p$ for a large momentum $\vert p \vert > p_c$.
We have studied a system described by the FP equation with this drift
coefficient, and found that
the steady-state is a $\kappa$-generalized probability distribution.
We performed several numerical simulations in order
to study the asymptotic behavior of the numerical solutions against different
initial probability distributions, and found that these numerical solutions
asymptotically approach the $\kappa$-Gaussian functions.
\section*{Acknowledgement}
This research was partially supported by the Ministry of Education,
Science, Sports and Culture (MEXT), Japan, Grant-in-Aid for Scientific
Research (C), 19540391, 2008.
\section{Introduction}
Graphene nanoribbons (GNRs) have most of the outstanding
properties of pristine graphene and a non-null bandgap that is
edge-dependent and inversely proportional to the ribbon
width~\cite{1,2,3}. GNRs have been fabricated by several methods, from
top-down to chemical and bottom-up
methods~\cite{4,5,6,7,8,10,11,12,13}. A particularly interesting
discovery is the possibility of tuning the bandgap and other
electronic and magnetic properties of GNRs by application of twisting
along its axis~\cite{14,15,16,17,18,19,20,22}. Electronic,
mechanical~\cite{24,25,26,27,28} and thermal~\cite{29,30,31}
properties of twisted GNRs (TGNRs) make them promising and versatile
nanostructures for diverse applications.
Fabrication of TGNRs has been reported in the literature. Chamberlain
{\it et al.}~\cite{33} grew TGNRs inside carbon nanotubes from
reaction of small sulfur-containing molecules. Cao {\it et
al.}~\cite{34} developed a method to curl GNRs by thermal annealing
that was used by Jarrari {\it et al.}~\cite{35} to show that curled
GNRs enhance photocurrent responses. Previously developed methods to
bend and twist nanotubes~\cite{36} or induce, by laser, changes in
GNRs~\cite{37}, might be useful to fabricate TGNRs.
A ubiquitous phenomenon in filamentary structures is the so-called
{\it twist-to-writhe transition} (TWT). It consists of releasing the
torsional elastic energy accumulated in an initially straight twisted
rod by spontaneous curling and coiling. The TWT is shown to happen
when the filament twist density reaches a critical
value~\cite{40,goriely2000,fonseca2006JAP,mahadevan2019}. In turn, the
twist density is shown to depend on either the applied tensile stress
or filament end-to-end
distance~\cite{goriely2000,fonseca2006JAP,mahadevan2019}. TWT has been
shown to obey the conservation of a geometric quantity called the {\it
linking number}, $Lk$, of a pair of closed curves or a pair of open
curves with extremities prevented from crossing one with respect to
the other. Defined as a double Gauss integral along the two curves,
$Lk$ is shown to be always an integer number given by half the value
of a certain ``oriented'' way of counting how many times one curve
crosses the other~\cite{writhe2006}. $Lk$, then, satisfies the
C\u{a}lug\u{a}reanu-White-Fuller {\it linking number}
theorem~\cite{40,39,41}:
\begin{equation}
\label{cwf}
Lk = Tw + Wr \, ,
\end{equation}
where $Tw$ ($Wr$) is the {\it total twist} ({\it writhe}) of the
filament (filament centerline). {\it Writhe} is a geometric quantity
related to the non-planarity of a space
curve~\cite{41,writhe2006,kleninwrithe2000}.
TWT is observed in conformations of
DNA~\cite{kleninwrithe2000,dna0,dna1,dna2,dna3}, filamentary
structures of some bacteria~\cite{mendel1,mendel2,prl1998}, in garden
hoses, telephone cords, cables and other engineering
structures~\cite{dna3,c1,c2,c3}. It is also present in a wide range of
correlated phenomena and applications as in dynamics of stiff
polymers~\cite{prl2000}, coiled artificial muscles made of twisted
carbon nanotube yarns~\cite{science2012} or fishing
lines~\cite{science2014}, helicity in solar active
structures~\cite{magnetic2014} and in fluid
vortices~\cite{science2017vortex}, chemical synthesis of twisted
annulene molecules~\cite{natchem2014annulene}, mechanics of knots and
tangles~\cite{science2020}, collagen
fibrils~\cite{acsnano2020collagen}, etc.
In this Work, a new experimental concept designed to promote and
control the interconversion between {\it twist} and {\it writhe} in a
TGNR, without rotation of its extremities, is proposed and
computationally demonstrated. Basically, it consists of laying the
extremities of a TGNR on two separate substrates, and then allowing
the distance between them to vary, within the nanoribbon length
(Fig. \ref{fig1}). As these substrates play an essential role on the
proposed TWT phenomenon, it is here named {\it substrate induced} TWT
(SITWT). Although nanoribbons can be subject to regular
TWT~\cite{twcTGNR2014scirep}, the proposed interconversion method is
innovative because it does not require the TGNR to be neither
tensioned (or tension released) nor additionally twisted to reach or
exceed the critical twist density.
Section \ref{sec2} presents the description of the proposed SITWT
method as well as
the theory and the computational approach used to calculate $Wr$ and
$Tw$ of each configuration of the TGNR, required to demonstrate the
SITWT. It also describes the computational methods employed to
simulate the SITWT experiment. In Sections \ref{sec3} and \ref{sec4},
the results and the conclusions are presented, respectively.
\section{Methodology}
\label{sec2}
\subsection{Description of the proposed SITWT method}
An initial amount of turns or torsional strain has to be applied to a
GNR in order to produce a TGNR with a given value of $Lk$
(Figs. \ref{fig1}a and \ref{fig1}b). $Lk$ will be conserved as long as
the TGNR extremities are prevented from rotation with respect to the
ribbon axis (also called the ribbon centerline).
The experiment itself consists of first suspending the TGNR, by laying
its two extremities on two separated substrates (Fig. \ref{fig1}c). As
the adhesion forces between the nanoribbon and substrates are
relatively large, as in graphene-graphene surface
interactions~\cite{annett2016nature,fonseca2019Carbon}, it is expected
that these forces will themselves prevent the TGNR extremities from
rotating or releasing the initial applied torsional strain. The idea
of the proposed experiment is, then, to allow the distance between the
substrates to vary within the size of the TGNR (Fig. \ref{fig1}d).
Variation of this distance leads to variation of the amount of TGNR
surface that interacts with the substrates. As the flexural rigidity
of nanoribbons are usually low (see for example, that of
graphene~\cite{zerobending2011prl}), van der Waals forces between the
nanoribbon and substrates flattens the TGNR parts in touch with the
substrates, leading to an overall change of the shape of the suspended
part of the TGNR (illustrated in Fig. \ref{fig1}d). The smaller the
distance between the substrates, the larger the difference in the
conformation of the axis (or centerline) of the TGNR from that of a
straight twisted ribbon. As a consequence, the {\it writhe}, $Wr$, of
the TGNR centerline changes with the substrate distance. As long as
the adhesion forces with the substrates keep preventing the TGNR ends
from rotation, the {\it linking number} theorem, eq. (\ref{cwf}), is
expected to be satisfied during the movement of the substrates. The
theorem, then, predicts that if the {\it writhe}, $Wr$, of the TGNR
varies, its {\it total twist}, $Tw$, will vary in order to keep the
TGNR $Lk$ unchanged. That is the basis for the experiment of changing
the twist, $Tw$, without applying or removing any amount of rotation
to the TGNR extremities.
In order to demonstrate the above SITWT, fully atomistic classical
molecular dynamics (MD) simulations of a TGNR suspended on two
substrates will be performed. The AIREBO potential~\cite{old39,old40}
and LAMMPS package~\cite{old41} will be employed to simulate the
proposed experiment of moving substrates with suspended TGNRs.
Graphite substrates will be considered and modeled as fixed graphene
layers. AIREBO is a well-known reactive empirical potential, largely
used to study the structure, mechanical and thermal properties of
carbon
nanostructures~\cite{42,43,44,45,46,47,48,49,50,51,52}. Therefore, the
MD results for the structure and dynamics of the TGNRs on the moving
substrates are expected to faithfully represent real experiments.
\begin{figure}[h!]
\includegraphics[width=8.5cm,keepaspectratio]{fig1.png}
\caption{\label{fig1} Experimental scheme to demonstrate the SITWT.
Panels (a) and (b) show the initial preparation of a TGNR by fixing
one extremity of the straight untwisted GNR (a), then applying a
torsional strain until reaching the desired amount of initial total
twist (b). Panels (c) and (d) show both TGNR extremities being laid
on two substrates without additional constraints, and the distance
between the substrates being allowed to change.}
\end{figure}
From the MD results, the {\it linking number} theorem will be shown to
be always satisfied for suspended TGNRs under the present method. To
show that, the calculation of $Tw$ and $Wr$ for every configuration of
the TGNRs studied here is required. Their sum should be equal to
the $Lk$ initially applied to the TGNR, according to eq. (\ref{cwf}).
In turn, calculations of $Tw$ and $Wr$ require the definition of two
space curves corresponding to the TGNR centerline and an adjacent
line. As described ahead, these space curves will be discretized based
on the positions of two sets of carbon atoms along the TGNR, one at
the middle part of the nanoribbon, and the other at about one graphene
lattice of distance from the first, on the side, respectively. A piece
of the TGNR showing these two sets of atoms is shown in the insets of
Fig. \ref{fig2}. In what follows, the details about how these
quantities are calculated and the definitions of $Tw$ and $Wr$ are
presented.
\subsection{Numerical approach for calculating $Wr$ and $Tw$ of TGNRs}
\label{discretizar}
Let vectors $\bm{\mbox{x}}$ and $\bm{\mbox{y}}$ be identified with the
TGNR centerline and an adjacent line bounded to it, as ilustrated by
red and black atoms drawn in the insets of Fig. \ref{fig2},
respectively. $Tw$ and $Wr$ can be calculated by~\cite{41,writhe2006}:
\begin{equation}
\label{tw}
Tw = \frac{1}{2\pi}\oint \bm{\mbox{t}}_{\bm{\mbox{x}}(s)}\cdot
\left( \bm{\mbox{u}}\times\frac{\mbox{d}\bm{\mbox{u}}}{\mbox{d}s}
\right) \,\mbox{d}s \, ,
\end{equation}
where $s$ and $\bm{\mbox{t}}$ are the arclength and tangent vector of
the TGNR centerline curve $\bm{\mbox{x}}$, respectively, and
$\bm{\mbox{u}}$ is a unit vector orthogonal to $\bm{\mbox{t}}$, and
pointing from $\bm{\mbox{x}}$ to $\bm{\mbox{y}}$. And
\begin{equation}
\label{wr}
Wr = \frac{1}{4\pi}\oint_{\bm{\mbox{x}}}\oint_{\bm{\mbox{x}}}
\frac{(\bm{\mbox{t}}_{\bm{\mbox{x}}(s)}\times\bm{\mbox{t}}_{\bm{\mbox{x}}(s')})
\cdot (\bm{\mbox{x}}(s)-\bm{\mbox{x}}(s'))}
{|\bm{\mbox{x}}(s)-\bm{\mbox{x}}(s')|^3} \mbox{d}s\,\mbox{d}s'
\, .
\end{equation}
While $Lk$ is shown to be always an integer number, $Tw$ and $Wr$ are
real numbers that, for closed or end-constrained rods, can vary as
long as eq. (\ref{cwf}) is satisfied. Eqs. (\ref{tw}) and (\ref{wr})
are defined for closed curves. However, it has been
shown~\cite{writhe2006,vanderheidge2003} that if the tangents at the
endpoints of a finite open centerline are coplanar, an imagined
coplanar closing curve would contribute zero to $Wr$. Similarly,
it is possible to think of closing curves for the centerline,
$\bm{\mbox{x}}$, and its adjacent line, $\bm{\mbox{y}}$, that do not
cross each other, thus contributing zero to the calculation
of $Tw$. In the proposed experiment, the TGNRs are not closed ribbons
but the substrates on which its extremities are laid, are coplanar.
All above quantities are discretized according to the following
definitions:
\begin{subequations}
\label{discreti}
\begin{eqnarray}
&&\bm{\mbox{x}}=\{\bm{\mbox{x}}_{1},\bm{\mbox{x}}_{2},\ldots,\bm{\mbox{x}}_{i},
\ldots,\bm{\mbox{x}}_{N-1},\bm{\mbox{x}}_{N}\} \, , \label{x} \\
&&\bm{\mbox{y}}=\{\bm{\mbox{y}}_{1},\bm{\mbox{y}}_{2},\ldots,\bm{\mbox{y}}_{i},
\ldots,\bm{\mbox{y}}_{N-1},\bm{\mbox{y}}_{N}\} \, , \label{y} \\
&&s_{1}=0\quad\mbox{and}\quad s_{i>1}=
\sum^{i}_{k=2}|\bm{\mbox{x}}_{k}-\bm{\mbox{x}}_{k-1}|\, , \label{s} \\
&&\mbox{d}s_1=0\quad\mbox{and}\quad \mbox{d}s_{i>1}=s_i-s_{i-1} \, \label{ds} \\
&&\bm{\mbox{t}}_{1}=0\quad\mbox{and}\quad\bm{\mbox{t}}_{i>1}=
\frac{\bm{\mbox{x}}_{i}-\bm{\mbox{x}}_{i-1}}
{|\bm{\mbox{x}}_{i}-\bm{\mbox{x}}_{i-1}|} \, , \label{t} \\
&&\bm{\mbox{u}}_i=\frac{\bm{\mbox{y}}_i-\bm{\mbox{x}}_i}
{|\bm{\mbox{y}}_i-\bm{\mbox{x}}_i|} \, , \label{u} \\
&&\mbox{d}\bm{\mbox{u}}_i\equiv
\left.\frac{\mbox{d}\bm{\mbox{u}}}{\mbox{d}s}\right|_{i} ,
\quad\mbox{d}\bm{\mbox{u}}_1=0\quad\mbox{and} \nonumber \\
&&\mbox{d}\bm{\mbox{u}}_{i>1}=
\frac{\bm{\mbox{u}}_{i}-\bm{\mbox{u}}_{i-1}}
{\mbox{d}s_i} \, , \label{du}
\end{eqnarray}
\end{subequations}
where $\bm{\mbox{x}}_{i}$ ($\bm{\mbox{y}}_{i}$) and $N$ are the
position of the $i$-th atom along the centerline (adjacent line) and
the number of atoms along the centerline of the TGNR, respectively.
For the TGNR studied here, $N = 285$. In eqs. (\ref{discreti}) the
indices go from 1 to $N$.
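As an illustration, the discretization in eqs. (\ref{discreti}) can be implemented directly. The following is a minimal NumPy sketch (the function names are mine, and the twisted straight ribbon and planar circle used below are illustrative test curves, not the TGNR data):

```python
import numpy as np

def total_twist(x, y):
    """Discretized Tw of eq. (tw): (1/2pi) * sum_i t_i . (u_i x du_i) ds_i."""
    seg = x[1:] - x[:-1]
    ds = np.linalg.norm(seg, axis=1)                      # eq. (ds); ds_1 = 0
    t = np.vstack([np.zeros(3), seg / ds[:, None]])       # eq. (t);  t_1 = 0
    u = (y - x) / np.linalg.norm(y - x, axis=1)[:, None]  # eq. (u)
    du = np.vstack([np.zeros(3), (u[1:] - u[:-1]) / ds[:, None]])  # eq. (du)
    w = np.concatenate(([0.0], ds))
    return np.sum(np.einsum('ij,ij->i', t, np.cross(u, du)) * w) / (2 * np.pi)

def writhe(x):
    """Discretized Wr of eq. (wr): (1/4pi) double sum over the centerline."""
    seg = x[1:] - x[:-1]
    ds = np.linalg.norm(seg, axis=1)
    t = np.vstack([np.zeros(3), seg / ds[:, None]])
    w = np.concatenate(([0.0], ds))
    total = 0.0
    for i in range(len(x)):
        for j in range(len(x)):
            if i == j:
                continue
            r = x[i] - x[j]
            d = np.linalg.norm(r)
            if d < 1e-12:          # skip coincident points (e.g. closed curves)
                continue
            total += np.dot(np.cross(t[i], t[j]), r) / d ** 3 * w[i] * w[j]
    return total / (4 * np.pi)
```

As sanity checks, a straight ribbon twisted by $4\pi$ should give $Tw \simeq 2$, and any planar centerline gives $Wr = 0$.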
\subsection{Molecular Dynamics simulations and chosen structures of TGNRs}
\begin{figure}[h!]
\includegraphics[width=8.5cm,keepaspectratio]{fig2.png}
\caption{\label{fig2} Upper views of the TGNRs used in the simulated
SITWT processes. Two substrates of 297 \AA\mbox{} by 300 \AA\mbox{}
of size (not shown to scale to save space) are placed at an initial
distance of about 320 \AA. Suspended on these two substrates are
armchair TGNRs with $Lk=2$ and 600 (33) \AA\mbox{} length
(width). Panels (a) and (b) show optimized TGNRs after 8 ns of MD
simulations at 300 K and 1000 K, respectively. Insets show pieces
of two sets of carbon atoms that represent the centerline (red) and
an adjacent line (black) of the TGNR. The positions of these sets
of atoms are used to define and discretize the vectors
$\bm{\mbox{t}}$ and $\bm{\mbox{u}}$ as shown in eqs.
(\ref{discreti}). Substrates and TGNR atoms are shown in grey and
transparent yellow, respectively, while the set of carbon atoms
corresponding to the TGNR centerline (adjacent line) are shown in
red (black).}
\end{figure}
Every structure was optimized by an energy minimization method based
on the conjugate gradient algorithm implemented in LAMMPS, with energy and force
tolerances of $10^{-8}$ and $10^{-8}$ eV/\AA, respectively. Thermal
fluctuations were simulated using a Langevin thermostat, with the timestep
set to 0.5 fs and the thermostat damping factor set to 1 ps.
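For reference, the optimization and thermostat settings above could be expressed in a LAMMPS input along the following lines. This is only a sketch: the units style, group name, iteration limits and random seed are assumptions, not taken from the original scripts.

```
# sketch of the minimization and Langevin settings described above
units           metal               # eV, angstrom, ps
min_style       cg                  # conjugate gradient minimizer
minimize        1.0e-8 1.0e-8 100000 1000000   # etol, ftol [eV/angstrom]
timestep        0.0005              # 0.5 fs, in ps
fix             ens mobile nve      # time integration for non-fixed atoms
fix             th  mobile langevin 300.0 300.0 1.0 12345   # damping = 1 ps
```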
The set-up of the simulated experiments carried out here is shown in
Fig. \ref{fig2}. The nanoribbon chosen to investigate the SITWT
phenomenon is an initially straight hydrogen-passivated armchair GNR
of about 600 \AA\mbox{} (33 \AA) length (width) to which a total
torsional strain of $4\pi$ (two full turns) was previously
applied. Its initial linking number is thus $Lk = 2$. TGNRs in
Fig. \ref{fig2} were drawn in a transparent color in order to
facilitate the observation of the shape of their centerlines,
highlighted in red. The substrates are modeled as fixed graphene
layers.
Fig. \ref{fig2} shows two different configurations of suspended TGNRs
that have the same value of $Lk=2$ but different values of $Tw$ and
$Wr$ (see Table \ref{tab1}). They were obtained from two different
pathways as described below and will be considered for the experiment
of moving substrates. One of them was obtained by bringing the extremities
of the TGNR into contact with the two substrates, followed by optimization.
The structure was then simulated for 4 and 8 ns at 300 and 1000 K,
respectively, in order to verify its thermal stability in the
suspended configuration. Optimization of these structures at the end
of the thermal simulations revealed no difference in their
corresponding configurations. Fig. \ref{fig2}b shows this optimized
structure.
Before proceeding to the dynamical simulations of the moving-substrates
experiment, I looked for other possible equilibrium
configurations of suspended TGNRs with the same $Lk=2$. I then became
aware of the recent work by Savin {\it et al.}~\cite{38}, in which a
particular TGNR, also having $Lk=2$, was fully laid on a
substrate and the final configuration displayed two {\it loop-like}
structures, which they named {\it twistons}. After reproducing the
formation of the same two {\it twistons}, I moved the structure to a
suspended configuration on two separate substrates and simulated it
for 8 ns at 300 K. The configuration shown in Fig. \ref{fig2}a
was then obtained. Further simulation of this structure at 1000 K made it
become similar to that of Fig. \ref{fig2}b, indicating that they might
have similar cohesive energies. In fact, Table \ref{tab1} shows that
the optimized cohesive energies of the structures shown in
Figs. \ref{fig2}a and \ref{fig2}b are very close. The files containing
the coordinates of the atoms of the structures shown in
Fig. \ref{fig2} are provided in Supplemental Material~\cite{sm}.
\section{Results}
\label{sec3}
\subsection{Test of the numerical calculation of $Wr$ and $Tw$}
The centerline and the adjacent line of the TGNRs considered here
each consist of 285 carbon atoms. Therefore, eqs. (\ref{x}) and (\ref{y})
each contain 285 coordinates. Before using the discretization of
eqs. (\ref{tw}) and (\ref{wr}) to calculate $Tw$ and $Wr$ of the
TGNRs, as described and explained in the previous section, the
accuracy of the eqs. (\ref{discreti}) was tested with two discretized
special curves: i) a helical curve closed by straight segments similar
to that shown in Fig. 4 of Fuller's paper~\cite{41}, and ii) a
discretized almost straight TGNR, to which two turns were previously
applied (the same structure used to draw the panels (b) and (c) of
Fig. \ref{fig1}). According to Fuller, the {\it writhe} of a ribbon
having that particular centerline curve can be calculated by the
formula $Wr=n-n\sin\alpha$, where $\alpha$ is the helix pitch angle of
the helical part of the curve, and $n$ is the number of turns. I
generated a list of points following the helical curve with $n=2$,
radius = $1$ and pitch = $4\pi$, which, from Fuller's formula,
provides $Wr = 0.585786$. Using the proposed discretization method,
the result for the numerical calculation of {\it writhe} of the
discretized curve i), with 285 points, is $Wr \simeq 0.5832$. The
second curve (in fact, two curves are needed, the centerline and an
adjacent line) was considered for the calculation of the total twist,
$Tw$, since the {\it writhe} of a straight curve is zero. $Tw$ of the
almost straight $4\pi$-twisted TGNR, whose centerline and adjacent
line also have 285 points, was obtained as $Tw \simeq
1.987$. Therefore, the estimated uncertainty in the calculations of
$Wr$ and $Tw$ using the present method is $\lesssim 0.02$. Wolfram
Mathematica scripts and the data points used to calculate $Tw$ and
$Wr$ of curves i) and ii) are provided in Supplemental
Material~\cite{sm}.
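The quoted reference value follows directly from Fuller's formula, as the short sketch below shows. Note the assumed reading: ``pitch $=4\pi$'' is taken here as the total rise of the two-turn helix, i.e., $2\pi$ per turn, which is the reading consistent with the quoted $Wr = 0.585786$.

```python
import math

n = 2                            # number of helical turns
radius = 1.0
rise_per_turn = 2.0 * math.pi    # total rise of 4*pi over n = 2 turns (assumed)
alpha = math.atan2(rise_per_turn, 2.0 * math.pi * radius)  # helix pitch angle
Wr = n - n * math.sin(alpha)     # Fuller's formula
print(Wr)                        # 0.585786... = 2 - sqrt(2)
```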
\subsection{$Wr$ and $Tw$ of static TGNRs}
Using the above discretization method, $Tw$ and $Wr$ of the structures
shown in Fig. \ref{fig2} were calculated. Table \ref{tab1} lists the
values of $Tw$, $Wr$ and the sum $Tw+Wr$ for these two TGNRs, showing
that although they have different values of $Tw$ and $Wr$, their sum
is $\simeq 2$ within the uncertainties of the calculation method.
These results confirm the validity of the {\it linking number}
theorem, eq. (\ref{cwf}), and the SITWT. The possibility of performing
additional control of the $twist$ and $writhe$ of the TGNR, while
keeping $Lk$ conserved, and the results for the dynamical tests of the
SITWT will be shown in the next subsection.
\begin{table}[h!]
\caption{\label{tab1} Energy per atom, $E$ [eV/atom], $Wr$, $Tw$ and
the sum $Wr+Tw$ corresponding to the TGNR structures shown in
Fig. \ref{fig2}.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& Fig. \ref{fig2}a & Fig. \ref{fig2}b \\
\hline
$E$ [eV/atom] & -7.0849 & -7.0848 \\
$Wr$ & 0.323 & 0.457 \\
$Tw$ & 1.663 & 1.524 \\
$Wr+Tw$ & 1.986 & 1.981
\end{tabular}
\end{ruledtabular}
\end{table}
The results shown in Table \ref{tab1} raise an important issue
regarding the determination of the total amount of twist of a given
TGNR. Although the TGNRs of Fig. \ref{fig2} initially received a
torsional strain of $4\pi$, as soon as the TGNR extremities touched
the substrates, their total amount of twist was no longer $4\pi$
($4\pi$ corresponds to $Tw=2$). Moreover, although both
configurations shown in Fig. \ref{fig2} have $Lk=2$, neither has
$Tw = 2$, nor do they have the same $Tw$. $Tw$ calculated from eq. (\ref{tw})
represents the real values of the total twist of the nanoribbon. As
the electronic properties of GNRs depend on the amount of twist
applied to them~\cite{14,15,16,35,20,22}, it is important to know the
real value of the twist in order to correctly determine the
structure-property relationships in TGNRs. Subsection \ref{tuning}
shows an example of how to find the distance between the
substrates at which a TGNR with $Lk=2$ presents a chosen value of
$Tw$.
\subsection{Dynamical interconversion of $Wr$ and $Tw$ in TGNRs}
In view of the problem mentioned in the previous section and the need
for precise determination of the total twist of a TGNR, the present
experimental proposal of moving substrates with suspended TGNRs might
prove useful, because the amount of twist of a TGNR can be determined
by simply controlling the substrate distance. In order to
demonstrate that, I simulated several cycles of movements of the
substrates using the structures shown in Figs. \ref{fig2}a and
\ref{fig2}b. From these simulations, using the discretization method
described in Subsection \ref{discretizar}, $Tw$, $Wr$ and $Tw+Wr$ were
calculated as function of time. One cycle of the numerical experiment
consists of moving both substrates towards each other until they almost
touch (``closing'' them), then inverting the velocities and
moving them back to the initial distance (``opening'' them). Each
substrate was moved at an absolute speed of 0.2 \AA/ps, so the
effective speed of approach or separation was 0.4 \AA/ps. For an
initial maximum distance of $\sim$ 320 \AA, one cycle takes 1.6 ns. In
the numerical simulations of the experiment, the atoms of the TGNR of
Fig. \ref{fig2}a (Fig. \ref{fig2}b) were thermostated at 300 K (1000
K) in order to verify whether conditions close to realistic situations
influence the results. The atoms of the substrates, however, were not
thermostated.
In order to calculate the time dependence of $Wr$ and $Tw$ of the
TGNRs, one frame of the system was collected every 20 ps, i.e., 50
frames per nanosecond. Every frame provides the sets of
carbon atom positions of the TGNR centerline and adjacent line and,
from them, the quantities given in eq. (\ref{discreti}) were
calculated. The summation of $Wr$ and $Tw$ allows for the verification
of the {\it linking number} theorem and, consequently, once more, the
legitimacy of the SITWT.
\begin{figure}[h!]
\includegraphics[width=8.5cm,keepaspectratio]{fig3.png}
\caption{\label{fig3} Variation of {\it writhe}, $Wr$ (circles), {\it
total twist}, $Tw$ (crosses), and the sum $Wr + Tw$ (triangles)
with time for (a) the TGNR of Fig. \ref{fig2}a, 8 cycles simulated
at 300 K and (b) the TGNR of Fig. \ref{fig2}b, 4 cycles simulated
at 1000 K. }
\end{figure}
Fig. \ref{fig3}a (\ref{fig3}b) shows $Wr$, $Tw$ and $Wr + Tw$ as
function of time, during 8 (4) cycles of closing and opening the
substrates with the TGNRs of Fig. \ref{fig2}a (\ref{fig2}b) simulated
at 300 K (1000 K). Fig. \ref{fig3} shows that $Wr$ and $Tw$ oscillate
between minimum and maximum values during the cycles. The maximum
(minimum) value of $Wr$ occurs when the substrates are closed (opened),
and the opposite holds for $Tw$. The rate of change of $Wr$ and $Tw$ is
not uniform despite the constant speed of the moving substrates. The rate
increases (decreases) as the substrates approach (move away from) each
other, which suggests that the longer the suspended TGNR, the easier it
is to fine-tune its total twist. Movies S1 to S4 in Supplemental
Material~\cite{sm} show upper and lateral views of one cycle of the
experiment with both the TGNRs shown in Fig. \ref{fig2}. The movies
allow one to see how the centerline changes as the substrates approach
and move apart.
Fig. \ref{fig3}, then, demonstrates the possibility of controlling the
amount of total twist of a TGNR by just changing the distance between
the substrates on which its ends are laid. The results for $Wr + Tw$
along the time show that the {\it linking number} theorem,
eq. (\ref{cwf}), is satisfied within thermal fluctuations and
uncertainties that come from the discrete method of calculating $Wr$
and $Tw$.
\begin{figure}[h!]
\includegraphics[width=8.5cm,keepaspectratio]{fig4.png}
\caption{\label{fig4} Cohesive energy of the whole system composed by
substrates $+$ TGNR of Fig. \ref{fig2}a as function of time, during
4 cycles of the movement of the substrates.}
\end{figure}
Fig. \ref{fig4} displays the energy of the whole system during 4
cycles of movement of the substrates with the TGNR of Fig. \ref{fig2}a
simulated at 300 K. The energy decreases as the contact between the
TGNR and the substrates increases (increased adhesion), and increases
again as they separate. The cusps in Fig. \ref{fig4} correspond to the
slight increase in the energy of the system when the simulation allowed
the substrates to come almost into full contact. The rate of variation
of the energy with time, calculated from the slope of the curve in
Fig. \ref{fig4}, is $P\simeq41.3$ nW. It provides an estimate for the
external power necessary to carry out the SITWT process. Assuming the
force, $F$, needed to move the substrates is approximately constant,
using the equation $P=Fv$, with $v=0.4$ \AA/ps, it is found that
$F\simeq1$ nN. This value of force is within the range of actuation of
AFM microscopes~\cite{danielPaul1990}.
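The force estimate follows from a simple unit conversion, sketched below with the numbers taken from the text:

```python
P = 41.3e-9            # W, slope of the energy curve in Fig. 4
v = 0.4e-10 / 1e-12    # 0.4 angstrom/ps expressed in m/s (= 40 m/s)
F = P / v              # N, assuming P = F v with an approximately constant force
print(F)               # ~1.03e-9 N, i.e. about 1 nN
```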
\subsection{Example of determination of the distance between substrates
to reach a chosen $Tw$}
\label{tuning}
From Fig. \ref{fig3} we see that the range of variation of the total
twist of the TGNR, to which 2 turns (or $4\pi$) were initially applied,
is $0.8 \lesssim Tw \lesssim 1.6$. To illustrate the possibility of
choosing and determining the total amount of twist of the TGNR, within
the uncertainties of the method, let us find the distance, $d$,
between the substrates such that $Tw$ has a chosen value. Suppose
the desired value of the {\it total twist} of the TGNR is
$Tw=1$. Based on the present conditions of the MD simulations,
\begin{equation}
\label{dd}
d = 320 - vt\, , \mbox{$d$ in \AA\mbox{ }and $t$ in ps,}
\end{equation}
where $v = 0.4$ \AA/ps is the simulated speed of approach or separation
of the substrates. Taking the value $t \approx 750$ ps, which
corresponds to $Tw \approx 1$ in Fig. \ref{fig3}a, we obtain $d \cong
20$ \AA.
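The same estimate in code form, using eq. (\ref{dd}) with the values quoted above:

```python
v = 0.4      # angstrom/ps, speed of approach of the substrates
t = 750.0    # ps, time at which Tw ~ 1 in Fig. 3a
d = 320.0 - v * t   # eq. (dd)
print(d)     # 20.0 angstrom
```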
Beyond controlling the electronic properties of the TGNR, possible
applications include tuning the thermal transport and mechanical
properties of TGNRs by fixing their amount of {\it twist}. As the {\it
writhe} of a suspended TGNR varies with the distance between the
substrates in the present SITWT method, any physical property that
depends on the ribbon shape can also be controlled through the
substrate distance. These options expand the range of possible
applications of suspended TGNRs.
\section{Conclusions}
\label{sec4}
In summary, a method to adjust and determine the amount of twist of a
previously twisted GNR, without the need to apply additional
rotation, is presented and computationally demonstrated. The method
reveals the concept of a tension-free, ends-rotation-free,
substrate-induced {\textit{twist-to-writhe transition}} in twisted
nanoribbons. The method relies on the adhesion forces between the
extremities of the twisted nanoribbon and the substrates, and on the
relatively low flexural rigidity of the ribbon. The {\it total twist},
$Tw$, {\it writhe}, $Wr$, and the sum $Tw+Wr$ were numerically
calculated for several configurations of suspended TGNRs, obtained
from MD simulations. In particular, the sum $Tw+Wr$ was compared to
the value of $Lk$ initially ascribed to the TGNR ($Lk=2$). The results
were shown to satisfy the {\it linking number} theorem,
eq. (\ref{cwf}), within the uncertainties of the methods and thermal
fluctuations. Estimates for the power and force needed to move the
substrates were presented based on the MD results. An application of
the method to the control of the total amount of twist of a TGNR
was also presented. The advantage of such a method is the possibility
of fine-tuning the total twist of a TGNR by simply moving the
substrates on which its extremities are laid. This method, then, might
be useful for experimentalists to manipulate TGNRs. It was also shown
that temperature does not affect the SITWT phenomenon, so the
experiment can be performed at different temperatures. I hope this
work motivates the development of new experiments and applications of
twisted nanoribbons.
\begin{acknowledgments}
I thank the Brazilian Agency CNPq for grant 311587/2018-6 and S\~{a}o
Paulo Research Foundation (FAPESP) for the grant \#2020/02044-9. This
research also used the computing resources and assistance of the John
David Rogers Computing Center (CCJDR) in the Institute of Physics
``Gleb Wataghin'', University of Campinas.
\end{acknowledgments}
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\paragraph{0pt}{12pt}{12pt}
\usepackage[T1]{fontenc}
\usepackage[font=small,labelfont=bf,tableposition=top]{caption}
\usepackage[font=small,labelfont=bf]{caption}
\AtBeginDocument{%
\providecommand\BibTeX{{%
\normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}}
\copyrightyear{2022}
\acmYear{2022}
\setcopyright{rightsretained}
\acmConference[UbiComp/ISWC '22 Adjunct]{Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing}{September 11--15, 2022}{Cambridge, United Kingdom}
\acmBooktitle{Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp/ISWC '22 Adjunct), September 11--15, 2022, Cambridge, United Kingdom}
\acmDOI{10.1145/3544793.3560333}
\acmISBN{978-1-4503-9423-9/22/09}
\begin{document}
\title{Privacy-Patterns for IoT Application Developers}
\author{Nada Alhirabi}
\affiliation{%
\institution{Cardiff University}
\city{Cardiff}
\country{UK}
}
\affiliation{%
\institution{King Saud University}
\city{Riyadh}
\country{Saudi Arabia}
}
\email{alhirabin@cardiff.ac.uk}
\author{Stephanie Beaumont}
\affiliation{%
\institution{My Data Fix Ltd}
\city{London}
\country{UK}
}
\email{stephaniebeaumont@mydatafix.com}
\author{Omer Rana}
\affiliation{%
\institution{Cardiff University}
\city{Cardiff}
\country{UK}
}
\email{ ranaof@cardiff.ac.uk}
\author{Charith Perera}
\affiliation{%
\institution{Cardiff University}
\city{Cardiff}
\country{UK}
}
\email{pererac@cardiff.ac.uk}
\renewcommand{\shortauthors}{Alhirabi et al.}
\begin{abstract}
Designing Internet of things (IoT) applications (apps) is challenging due to the heterogeneous nature of the systems on which these apps are deployed. Personal data, often classified as sensitive,
may be collected and analysed by IoT apps, where data privacy laws are expected to protect such information.
Various approaches already exist to support privacy-by-design (PbD) schemes, enabling developers to take data privacy into account at the design phase of application development. However, developers are not widely adopting these approaches because of understandability and interpretation challenges. A limited number of tools currently exist to assist developers in this context -- leading to our proposal for ``PARROT" (PrivAcy by design tool foR inteRnet Of Things). PARROT supports a number of techniques to enable PbD techniques to be more widely used.
We present the findings of a controlled study
and discuss how this privacy-preserving tool increases the ability of IoT developers to apply privacy laws (such as GDPR) and privacy patterns.
Our study demonstrates that the PARROT prototype tool increases the awareness of privacy requirements at design time and increases the likelihood that the subsequent design will be more cognisant of data privacy requirements.
\end{abstract}
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10002978.10003029.10011703</concept_id>
<concept_desc>Security and privacy~Usability in security and privacy</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10003120.10003138.10003142</concept_id>
<concept_desc>Human-centered computing~Ubiquitous and mobile computing design and evaluation methods</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[300]{Security and privacy~Usability in security and privacy}
\ccsdesc[100]{Human-centered computing~Ubiquitous and mobile computing design and evaluation methods}
\keywords{Internet of Things; Privacy by Design; Privacy Patterns; Software Design; Software Developers; Data Protection; Privacy Law; GDPR; Usable Privacy; Privacy Practices}
\maketitle
\section{Introduction}
Internet of Things (IoT) applications generate and process a large amount of data that are transmitted between devices.
As the size and frequency of this data increase, an efficient architecture is needed to manage and process this data~\cite{Kumar2019}. Many efforts have been made to support privacy in the early stage of software development, such as Privacy-by-Design (PbD) principles by Cavoukian \cite{cavoukian2009privacy}.
However, many developers are unaware of the potentially significant privacy issues in an online context -- finding it time-consuming and challenging to understand privacy policies and their implications for their work \cite{cranor2006user}.
Moreover, privacy concerns for a specific app design or implementation are rarely discussed by developers \cite{Li2021}.
This indicates a need for a privacy tool to reduce the operational and implementation gap between software developers and privacy requirements \cite{Alhirabi2021}.
The PARROT tool offers intuitive and user-friendly interfaces to assist and educate software developers on how to learn and include privacy in their system design \cite{AlhirabiDemo2022}. Initially, the tool was built for the highly regulated domain of healthcare. Since then, we have added more use cases, such as smart homes and multi-cloud systems, to test different sensors and privacy challenges such as managing advertisements, cookies and payments.
\section{Architecture and Implementation}
PARROT is an interactive prototype tool that was implemented using Sirius (eclipse.org/sirius), a domain-specific modelling tool, to test the effectiveness of privacy by design principles.
We have assessed the gaps and challenges that developers usually face when planning to consider privacy by design. Therefore, this prototype is intended to act as a privacy assistant for software developers. To improve visual support, we used a simple visual notation based on: Size, Shape, and Colour \cite{Moody2010}. We co-designed this software tool collaboratively with a privacy lawyer and other privacy professionals to take their differing perspectives into account. We constructed the tool based on the six Cavoukian PbD principles \cite{cavoukian2009privacy}, which are: \textit{(1) Privacy requirements intrinsic in design and analysis, (2) Privacy embedded in the design, (3) Full functionality, (4) End-to-end security, (5) Visibility and transparency,} and \textit{(6) Respect for user privacy}.
\section{Evaluation}
\label{sec:Methodology}
We conducted a controlled lab study to answer the following research questions: (RQ1) Does the tool enable the design of privacy-aware IoT applications for less regulated domains, in comparison to a highly regulated domain such as healthcare? (RQ2) Does the tool help increase software developers' awareness of privacy-preserving measures such as privacy patterns?
Since software design is typically a collaborative activity, participants worked in pairs.\\
\textbf{\textit{Recruitment:}}
We recruited participants through the University email group targeting computer science students (UG, PG taught and PG Research) who worked on IoT applications for at least a year \cite{host2000using}.
Twelve participants took part in the study; each was given a voucher after completing it.\\
\textbf{\textit{Evaluation sessions:}}
All the study sessions were conducted online; each session lasted between 1.5 and 2 hours.
We performed a between-subjects evaluation to test whether developers using PARROT are able to create more privacy-preserving IoT designs. We also tested the potential increase in participants' privacy awareness.
In the study, each participant was allocated to one of two conditions (using or not using PARROT). The twelve participants were divided into an experimental (E) and a control (C) group of 6 participants each; in both groups, participants worked in pairs.
At the beginning, both groups were given a 20-minute introduction to privacy, followed by a tutorial on Mural for Group C only and on PARROT for Group E only. The participants were then given a list of 20 privacy patterns that were selected based on their applicability to the use case (Figure \ref{table:privacy patterns}).
We asked the control group (C) to use the Mural tool to do the design task for the smart home scenario considering privacy rules and privacy patterns.
The experimental group (E) performed the same task but using the PARROT tool. Both groups had an exit questionnaire for ten minutes at the end of the session.\\
\textbf{\textit{Data scoring:}}
To evaluate the overall privacy-principle score, the lawyer assigned a score
for each principle as follows: 3 if privacy is considered, the issue is identified and the solution is correct; 2 if privacy is considered and the issue is identified; 1 if privacy is considered; and 0 if no privacy requirement is considered. We also assigned a score for each privacy pattern as follows: 0 if no pattern was considered; 1 if a privacy pattern was considered, but not in a reasonable place;
2 if a privacy pattern was considered in a reasonable place. A privacy-patterns expert was consulted to reduce researcher bias. We then summed the scores over all patterns and principles to obtain a total score for each design.
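The rubric can be summarized as a small scoring function. This is only a sketch; the function and argument names are ours, not from the study materials:

```python
def principle_score(considered, issue_identified, solution_correct):
    """Per-principle rubric: 0-3 as described above."""
    if not considered:
        return 0
    if not issue_identified:
        return 1
    return 3 if solution_correct else 2

def pattern_score(considered, reasonable_place):
    """Per-pattern rubric: 0-2 as described above."""
    if not considered:
        return 0
    return 2 if reasonable_place else 1

def total_score(per_item_scores):
    """Total score for a design: sum over all principles (or patterns)."""
    return sum(per_item_scores)
```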
\section{Quantitative and Qualitative Results}
To evaluate the study results, a Kruskal-Wallis test was performed to determine whether there was a statistically significant difference between the groups (E and C). Both privacy principles (p-value = $0.0463<0.05$) and privacy patterns (p-value = $0.04953 < 0.05$) revealed a significant difference. As a post-hoc test, Dunn's test was used to check for a statistical difference between Mural and PARROT. We observed a significant difference for both privacy principles (p-value = 0.04630159) and privacy patterns (p-value = 0.04953461), as shown in Figure~\ref{Fig:Study1ScorsPlotBox}.
\begin{figure}[h]
\includegraphics[scale=.37]{PatternsTable.pdf}
\caption{List of the applicable privacy patterns from sources: (privacypatterns.org) and (privacypatterns.eu)}
\label{table:privacy patterns}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=.28]{SmartPlotBoxPrinPat.pdf}
\caption{(a) Mean rates of privacy principles scores in Mural and PARROT. (b) Mean rates of privacy patterns scores in Mural and PARROT.}
\label{Fig:Study1ScorsPlotBox}
\end{figure}
We also performed a qualitative analysis to gain more insight into participants' thoughts and ideas.
We discussed with the lawyer and the participants how the tool helps integrate privacy principles and patterns.
The privacy lawyer said that PARROT was able to include privacy-specific design components into the IoT application \textit{``from the beginning rather than retrospectively"} from a privacy compliance perspective. In addition, several participants expressed their preference for the visual representation of PARROT.
For example, Pair 2 said, \textit{``the generated colours are helpful to flag any privacy issue immediately... I think it helps to rethink the question again"}.
Pairs 1, 4 and 5 believed PARROT could help people who do not have any privacy background to understand it in a short period. Pair 4 said \textit{``I definitely struggle to understand and apply privacy and privacy patterns because there are many different documents, laws and IoT devices... PARROT will tell you already what privacy needs to be fulfilled for that node which is super useful, in my opinion...you don't have to start researching about it"}. \\
Pair 1, 4, 5 and 6 said that the questions led them to think about things they had not considered previously. For example, Pair 1 said, \textit{"the questions and visual presentation make me aware of little things... presenting privacy when you are setting up is very helpful."}. Pair 4 stated that \textit{``the variety of questions you got asked makes you think of how you can make this correctly"}. Pair 5 said, \textit{``the questions help me to think more about the data subject perspective, not the problem owner only"}.
\section{Conclusion and Future Plan}
This paper presents PARROT, an interactive prototype tool that assists developers with privacy, and discusses the findings of our study. Our participants demonstrated how the PARROT prototype helps to embed privacy principles and increases their awareness of privacy patterns. We plan to add more use cases and features, such as showing an overall privacy score for a design and adding menus that list all the applicable privacy patterns in each part of the design.
\bibliographystyle{plain}
\section{\textbf{INTRODUCTION}}
Quantum mechanics was originated in two principally different forms: the
matrix mechanics of Heisenberg and the wave mechanics of Schr\"{o}dinger.
Both forms are equivalent in general and are the complement of one another.
Later, Feynman \cite{F} proposed a third formulation in terms of a path
integral, which is equivalent to the previous two. The usefulness of any alternative
formulation is obvious: it opens new possibilities in the development of the
theory \cite{F1}. Following the logic of the development of quantum theory, one
can suppose that not all alternative formulations have been found. In this paper
we propose a new formulation of non-relativistic quantum mechanics with
a quantum version of the action principle at its foundation. We shall arrive at this
formulation starting from the Schr\"{o}dinger wave theory. The canonical
foundation of the quantum action principle will then be given.
\section{\textbf{WAVE FUNCTIONAL}}
For simplicity we shall consider here the dynamics of a particle of mass $m$
in the one-dimensional space. In the Schr\"{o}dinger theory a quantum state
of a particle is described by a wave function $\psi \left( x,t\right) $.
This wave function has the meaning of the probability density to observe the
particle nearby a point with a coordinate $x$ at a moment of time $t$ if the
following normalization condition is fulfilled: \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \
\begin{equation}
\overset{+\infty }{\underset{-\infty }{\dint }}\left| \psi \left( x,t\right)
\right| ^{2}dx=1. \label{1}
\end{equation}
The evolution of a quantum
state is described by the Schr\"{o}dinger equation:
\begin{equation}
i\hbar \overset{\cdot }{\psi }=-\frac{\hbar ^{2}}{2m}\psi ^{\prime \prime
}+U\left( x,t\right) \psi , \label{2}
\end{equation}
where the dot denotes the partial derivative with respect to time and the prime
denotes the partial derivative with respect to the $x$-coordinate. The
normalization condition (\ref%
{1}) is conserved during the Schr\"{o}dinger evolution. The Schr\"{o}dinger
equation (\ref{2}) may be obtained as the Euler-Lagrange equation for the
action:
\begin{eqnarray}
I_{S}\left[ \psi \right] &\equiv &\overset{T}{\underset{0}{\dint }}dt\overset%
{+\infty }{\underset{-\infty }{\dint }}dx\left[ \frac{1}{2}i\hbar \left(
\overline{\psi }\overset{\cdot }{\psi }-\overset{\cdot }{\overline{\psi }}%
\psi \right) \right. \notag \\
&&\left. -\frac{\hbar ^{2}}{2m}\overline{\psi }^{\prime }\psi ^{\prime }-U%
\overline{\psi }\psi \right] . \label{3}
\end{eqnarray}
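Indeed, the variation of (\ref{3}) with respect to $\overline{\psi }$ gives (a short check of this statement):
\begin{equation*}
\frac{\delta I_{S}}{\delta \overline{\psi }}=i\hbar \overset{\cdot }{\psi }+\frac{\hbar ^{2}}{2m}\psi ^{\prime \prime }-U\psi =0,
\end{equation*}
which coincides with the Schr\"{o}dinger equation (\ref{2}).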
The wave function describes a quantum state of the particle at each moment
of time. Let us introduce a new object -- a function of a trajectory of the
particle, i.e., a functional $\Psi \left[ x\left( t\right) \right] $ which
describes the particle dynamics on a whole time interval $\left[ 0,T\right] $%
. It will be called \emph{the wave functional}. For this purpose, let us
divide the interval $\left[ 0,T\right] $ by points $t_{n}$ into $N$ small
parts of equal length $\varepsilon =T/N$. Let us approximate a
trajectory $x=x\left( t\right) $\ of the particle by a broken line with
vertices $x_{n}=x\left( t_{n}\right) $. We define the wave functional as a
product:
\begin{equation}
\Psi \left[ x\left( t\right) \right] \equiv \underset{n}{\dprod }\psi \left(
x_{n},t_{n}\right) . \label{4}
\end{equation}
The normalization condition (\ref{1}) may be rewritten as:
\begin{equation}
\left( \Psi ,\Psi \right) \equiv \dint \underset{n}{\dprod }dx_{n}\left\vert
\Psi \left[ x\left( t\right) \right] \right\vert ^{2}=1. \label{5}
\end{equation}
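The condition (\ref{5}) follows directly from (\ref{1}): owing to the product structure (\ref{4}), the multiple integral factorizes,
\begin{equation*}
\left( \Psi ,\Psi \right) =\underset{n}{\dprod }\overset{+\infty }{\underset{-\infty }{\dint }}dx_{n}\left\vert \psi \left( x_{n},t_{n}\right) \right\vert ^{2}=1.
\end{equation*}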
In fact, the wave functional is a function of $N+1$\ variables:
\begin{equation}
\Psi \left[ x\left( t\right) \right] \equiv F\left(
x_{0},...,x_{n},...,x_{N}\right) , \label{6}
\end{equation}
but the limit $N\rightarrow \infty $\ is assumed. Taking this limit into
account, let us define the variational derivative of the functional (\ref{6})
as follows:
\begin{equation}
\frac{\delta \Psi \left[ x\left( t\right) \right] }{\delta x\left(
t_{n}\right) }\equiv \frac{1}{\varepsilon }\frac{\partial F}{\partial x_{n}}=%
\frac{1}{\varepsilon }\frac{\partial \psi \left( x_{n},t_{n}\right) }{%
\partial x_{n}}\underset{n^{\prime }\neq n}{\dprod }\psi \left( x_{n^{\prime
}},t_{n^{\prime }}\right) . \label{7}
\end{equation}
Let us consider the integral defined by the corresponding integral sum:
\begin{equation}
\underset{0}{\overset{T}{\dint }}dt\overset{\cdot }{x}\left( t\right) \frac{%
\delta \Psi \left[ x\left( t\right) \right] }{\delta x\left( t\right) }\cong
\underset{n}{\dsum }\Delta x_{n}\frac{1}{\varepsilon }\frac{\partial F}{%
\partial x_{n}}, \label{8}
\end{equation}
where $\Delta x_{n}\equiv x_{n+1}-x_{n}$. The integral (\ref{8}) has the
meaning of a variation of the wave functional (\ref{4}) which originates
from an infinitesimal shift of the broken line ``back''\ in time. This shift
is represented as the successive replacement of vertices: $x_{n}\rightarrow
x_{n+1}$. On the other hand, this variation may be considered as the result
of a time shift by one step back: $t\rightarrow t-\varepsilon $. Then
one obtains:
\begin{equation}
\underset{0}{\overset{T}{\dint }}dt\overset{\cdot }{x}\left( t\right) \frac{%
\delta \Psi \left[ x\left( t\right) \right] }{\delta x\left( t\right) }\cong
\underset{n}{-\dsum }\frac{\partial \psi \left( x_{n},t_{n}\right) }{%
\partial t}\underset{n^{\prime }\neq n}{\dprod }\psi \left( x_{n^{\prime
}},t_{n^{\prime }}\right) . \label{9}
\end{equation}
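The step from (\ref{8}) to (\ref{9}) may be sketched as follows. To first order in $\varepsilon $,
\begin{equation*}
\Delta x_{n}\frac{\partial \psi \left( x_{n},t_{n}\right) }{\partial x_{n}}\cong \psi \left( x_{n+1},t_{n}\right) -\psi \left( x_{n},t_{n}\right) ,
\end{equation*}
and, since $t_{n}=t_{n+1}-\varepsilon $,
\begin{equation*}
\psi \left( x_{n+1},t_{n}\right) \cong \psi \left( x_{n+1},t_{n+1}\right) -\varepsilon \overset{\cdot }{\psi }\left( x_{n+1},t_{n+1}\right) .
\end{equation*}
Thus each replacement $x_{n}\rightarrow x_{n+1}$ in the product (\ref{4}) differs from the relabelling of the vertex $\left( x_{n+1},t_{n+1}\right) $ only by the term $-\varepsilon \overset{\cdot }{\psi }$; summing over $n$ and dividing by $\varepsilon $ yields (\ref{9}).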
\section{\textbf{QUANTUM ACTION PRINCIPLE}}
The action (\ref{3}), which defines the Schr\"{o}dinger dynamics in terms of
the wave function $\psi \left( x,t\right) $, can be transformed into a
functional of the wave functional $\Psi \left[ x\left( t\right) \right] $.
The following formula,
\begin{eqnarray}
I_{S}\left[ \psi \right] &=&-\underset{0}{\overset{T}{\dint }}dt\dint
\underset{t}{\dprod }dx\left\{ \frac{1}{2}i\widetilde{\hbar }\overset{\cdot }%
{x}\left( t\right) \left[ \overline{\Psi }\frac{\delta \Psi }{\delta x\left(
t\right) }-\frac{\delta \overline{\Psi }}{\delta x\left( t\right) }\Psi %
\right] \right. \notag \\
&&\left. +\frac{\hbar ^{2}}{2m}\frac{\delta \overline{\Psi }}{\delta x\left(
t\right) }\frac{\delta \Psi }{\delta x\left( t\right) }+U\left( x\left(
t\right) ,t\right) \overline{\Psi }\Psi \right\} , \label{10}
\end{eqnarray}
is a ``bridge''\ between the Schr\"{o}dinger and our representations. For
the multiplicative functional (\ref{4}), which is related to the $%
\varepsilon $-division of the interval of time $\left[ 0,T\right] $, the
constant $\widetilde{\hbar }$\ is
\begin{equation}
\widetilde{\hbar }\equiv \varepsilon \hbar . \label{11}
\end{equation}
The proof of the formula (\ref{10}) is based on the definition (\ref{7}) of
the variational derivative of the wave functional, the equation (\ref{9}) and
the normalization condition for the wave function (\ref{1}) which is
conserved in time.
The new representation of the Schr\"{o}dinger action in terms of the wave
functional gives us the possibility of an alternative formulation of quantum
dynamics. The new formulation is based on the following equation:
\begin{eqnarray}
\widehat{I}\Psi &\equiv &\overset{T}{\underset{0}{\dint }}dt\left[ \frac{%
\widetilde{\hbar }}{i}\overset{\cdot }{x}\left( t\right) \frac{\delta \Psi }{%
\delta x\left( t\right) }+\frac{\widetilde{\hbar }^{2}}{2m}\frac{\delta
^{2}\Psi }{\delta x^{2}\left( t\right) }-U\left( x\left( t\right) ,t\right)
\Psi \right] \notag \\
&=&\lambda \Psi . \label{12}
\end{eqnarray}
Here $\lambda $\ is an eigenvalue of the operator $\widehat{I}$ which is a
quantum version of the classical canonical action:
\begin{equation}
I\left[ x\left( t\right) ,p\left( t\right) \right] \equiv \overset{T}{%
\underset{0}{\dint }}dt\left[ p\overset{\cdot }{x}-\frac{p^{2}}{2m}-U\left(
x\left( t\right) ,t\right) \right] . \label{13}
\end{equation}
The \textquotedblleft quantization\textquotedblright\ of the classical
action (\ref{13}) is performed by the replacement of the canonical momentum $%
p\left( t\right) $\ by the functional-differential operator:
\begin{equation}
\widehat{p}\left( t\right) \equiv \frac{\widetilde{\hbar }}{i}\frac{\delta }{%
\delta x\left( t\right) }. \label{14}
\end{equation}
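Note that on the multiplicative functionals (\ref{4}) the operator (\ref{14}) acts, according to (\ref{7}) and (\ref{11}), as the ordinary momentum operator of wave mechanics applied to the $n$-th factor:
\begin{equation*}
\widehat{p}\left( t_{n}\right) \Psi =\frac{\widetilde{\hbar }}{i\varepsilon }\frac{\partial \psi \left( x_{n},t_{n}\right) }{\partial x_{n}}\underset{n^{\prime }\neq n}{\dprod }\psi \left( x_{n^{\prime }},t_{n^{\prime }}\right) =\frac{\hbar }{i}\frac{\partial \psi \left( x_{n},t_{n}\right) }{\partial x_{n}}\underset{n^{\prime }\neq n}{\dprod }\psi \left( x_{n^{\prime }},t_{n^{\prime }}\right) ,
\end{equation*}
which motivates the identification (\ref{11}).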
Let us formulate the quantum action principle as a search for an extremum in
the set of eigenvalues $\lambda $, assuming that $\lambda $\ depends on a
certain set of continuous parameters. According to (\ref{10}), on the set of
the multiplicative functionals (\ref{4}) we have:
\begin{equation}
\lambda =\left( \Psi ,\widehat{I}\Psi \right) =I_{S}\left[ \psi \right] .
\label{15}
\end{equation}
Therefore, the quantum action principle reduces to the well-known action
principle of the Schr\"{o}dinger wave mechanics if only multiplicative wave
functionals are considered.
\section{\textbf{NEW FORM OF CANONICAL QUANTIZATION}}
The canonical foundation of the new form of quantum dynamics consists in the
definition of new rules of transition from classical to quantum mechanics.
The old rules are a ``quantum deformation''\ of classical dynamics which is
formulated in the canonical form by means of the Poisson brackets (PB) \cite%
{FD}. These brackets are defined as the Lie brackets on the set of functions
of canonical variables which obey the canonical relation (in the case of a
one-dimensional space):
\begin{equation}
\left\{ x,p\right\} =1, \label{16}
\end{equation}
see, for example, Ref. \cite{M}. The relation (\ref{16}) is defined at a
certain moment of time, but the classical dynamics conserves the PB-relations
in time.
A modification of the classical dynamics proposed here consists in the
replacement of the ordinary PB by new ones which obey a non-simultaneous
canonical relation (all other brackets are equal to zero):
\begin{equation}
\left\{ x\left( t\right) ,p\left( t^{\prime }\right) \right\} =\delta \left(
t-t^{\prime }\right) . \label{17}
\end{equation}
The new definition of PB permits us to formulate classical equations of
motion as the PB-relations:
\begin{equation}
\left\{ x\left( t\right) ,I\right\} =\left\{ p\left( t\right) ,I\right\} =0,
\label{18}
\end{equation}
where $I$\ is the classical action (\ref{13}). The equations (\ref{18}) are
conditions of extremum of the classical action.
The canonical quantization of this theory consists in the replacement of
classical canonical variables by operators and the replacement of canonical
PB-relations by the corresponding commutation relations \cite{FD}. In our
case the commutator
\begin{equation}
\left[ \widehat{x}\left( t\right) ,\widehat{p}\left( t^{\prime }\right) %
\right] =i\widetilde{\hbar }\delta \left( t-t^{\prime }\right) \label{19}
\end{equation}
defines a quantum algebra of the canonical variables, where $\widetilde{%
\hbar }$ is a new ``Planck''\ constant with the dimensionality $\mathrm{J}\cdot \mathrm{s}^{2}$%
. The canonical commutation relation (\ref{19}) is valid if we consider, as
usually, $\widehat{x}\left( t\right) $ as the operator of product by $%
x\left( t\right) $\ and $\ \widehat{p}\left( t\right) $ as the operator of
the functional differentiation (\ref{14}) in the space of wave functionals $%
\Psi \left[ x\left( t\right) \right] $. After that, the quantum version of
the action principle formulated in this work arises naturally.
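The relation (\ref{19}) may be checked directly: using $\delta x\left( t\right) /\delta x\left( t^{\prime }\right) =\delta \left( t-t^{\prime }\right) $, one finds for an arbitrary wave functional $\Psi $
\begin{equation*}
\left[ \widehat{x}\left( t\right) ,\widehat{p}\left( t^{\prime }\right) \right] \Psi =\frac{\widetilde{\hbar }}{i}\left[ x\left( t\right) \frac{\delta \Psi }{\delta x\left( t^{\prime }\right) }-\frac{\delta \left( x\left( t\right) \Psi \right) }{\delta x\left( t^{\prime }\right) }\right] =i\widetilde{\hbar }\delta \left( t-t^{\prime }\right) \Psi .
\end{equation*}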
\section{\textbf{CONCLUSIONS}}
The equation (\ref{12}) is the main result of our work. It is an analog of
the Schr\"{o}dinger equation in the new formulation of quantum mechanics.
The new formulation is equivalent to the Schr\"{o}dinger theory, but it
opens new possibilities for the development of quantum theory. The
description of particle creation and annihilation processes without the
use of the second-quantization formalism will be one of these new
possibilities.
We thank V. A. Franke and A. V. Goltsev for useful discussions.
The interest in antiferromagnetic (AF) spintronics is stimulated by an increasing number of reports on different scenarios of manipulation of the N\'eel vector. Current-driven methods include spin-orbit torque commensurate with spin directions \cite{Zelezny:2014_PRL,Wadley:2016_Science,Grzybowski:2017_PRL,Wadley:2018_NN,Meinert:2018_PRA,Bodnar:2018_NC} or antidamping torque \cite{Chen:2018_PRL,Moriyama:2018_SR,Baldrati:2018_arXiv}. Although they provide means to control reversibly the N\'eel vector direction, they require a high current density. On the other hand, switching by an electric \emph{field} is considered promising for low-power spintronics. The electric field has been shown to modify a magnetic behavior of numerous ferromagnetic (FM) materials \cite{Ohno:2000_Nature,Boukari:2002_PRL,Chiba:2008_Nature,Sawicki:2010_NP,Chiba:2011_NM,Matsukura:2015_NN}, surprisingly including also rather conductive metal films \cite{Weisheit:2007_Science,Maruyama:2009_NN,Matsukura:2015_NN}, presumably because of an important role played by interfacial magnetic anisotropy \cite{Nozaki:2017_AM}.
It was also shown that an electric field can decrease the switching current in FM tunneling junctions \cite{Wang:2012_NM}. As for AFs, the electric field was proven to change a domain structure of multiferroic BiFeO$_3$ \cite{Zhao:2006_NM} or switch between AF and FM interactions in EuTiO$_3$ \cite{Ryan:2013_NC}. The magnetoelectric effect was also utilized to construct a memory device in $\alpha$-Cr$_2$O$_3$ film\cite{Kosub:2017_NC}. Metallic AFs were observed to exhibit modulation of the exchange spring effect \cite{Wang:2015_AM, Zhang:2016_SCPMA} or to change the magnetoresistance in AF-FM heterostructures \cite{Goto:2016_JJAP} due to the electric field. Finally, it has been recently demonstrated that it is possible to influence the spin-orbit torque by applying an electric field to a piezoelectric substrate \cite{Chen:2019_NM}. Furthermore, theoretical studies of CuMnAs show the coexistence of massless Dirac fermions and the AF order \cite{Smejkal:2017_PRL, Tang:2016_NP}. The reorientation of the N\'eel vector can induce the topological metal-insulator transition \cite{Smejkal:2017_PRL, Tang:2016_NP}. However, to observe these phenomena experimentally, tuning the Fermi level to the band gap with an electric field would be highly desirable. In this context, exploring the influence of an electric field on CuMnAs is important from the point of view of low-power spintronics, topological aspects of AFs, and fundamental research on the role of electric fields in AFs.
In this paper, we demonstrate experimentally that the resistivity of highly conducting antiferromagnetic tetragonal CuMnAs thin films passivated by AlO$_x$ is reversibly modulated at room temperature by an electric field applied across an ionic liquid. The sign and the magnitude of the effect allow us to evaluate the carrier type, concentration, and mobility. By comparing the field and Hall effect data we assess a possible magnitude of the anomalous Hall resistance. Conversely, under an assumption that the anomalous Hall resistance is negligible in collinear AFs, the consistency of the field and Hall effect results demonstrates that phenomena associated with surface charge trapping states \cite{Bauer:2012_NL}, electromigration \cite{Bauer:2015_NM}, and piezoelectricity \cite{Sztenkiel:2016_NC} are weak in CuMnAs, so that the main effect of gating is the formation of depletion and accumulation layers for positive and negative gate voltages, respectively. The study yields also an upper limit for the dependence of the resistivity modulation on the direction of the current with respect to crystal axes.
\begin{table*}
\caption{\label{tab:hall}Properties of two CuMnAs films at room temperature determined from Hall measurements: the Hall coefficient $R_{\text{H}}$, hole concentration $p$ and mobility $\mu_{\text{H}}$ evaluated neglecting a possible contribution from the anomalous Hall effect.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
CuMnAs layer thickness [nm] & $T$ [K] & $R_{\text{H}}\times10^8 [\Omega\text{cm}/\text{T}]$ & $p \times 10^{-21} [\text{cm}^{-3}]$ & $\mu_{\text{H}} [\text{cm}^2/\text{Vs}]$\\
\hline
45 & 300 & $4.1 \pm 0.1$ & $15.0 \pm 0.2 $ & $3.9 \pm 0.1$\\
10 & 283 & $13 \pm 4$ & $5 \pm1$ & $3.4\pm0.7$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The field and Hall effect data have been obtained for a 10~nm CuMnAs tetragonal film grown coherently on a (001) GaAs substrate by molecular beam epitaxy, and capped with a 2.5~nm Al layer that undergoes oxidation in the air. The thickness of the capping layer corresponds to the thickness of native Al oxide \cite{Evertsson:2015_ASS}. Additionally, Hall measurements have been carried out for a 45~nm film of CuMnAs grown on a (001) GaP substrate, and also capped with a 2.5~nm Al layer. Two devices have been prepared from the 10~nm film. The first one (device A) has been obtained from an elongated piece of the epilayer by fixing gold wires with silver paint and by depositing a droplet of the ionic liquid DEME-TFSI between them. As shown schematically in Fig.~\ref{fig:1}, about a half of the sample is covered by the gate. Another gold wire is dipped in the ionic liquid so that it does not touch the studied layer and forms a gate electrode. The microdevice B has been fabricated by means of multilevel electron beam lithography, wet etching, and lift-off to pattern four different current paths and eight gold contact pads, and by employing atomic layer deposition for growing an Al$_2$O$_3$ film serving to protect the etched trenches from undesired oxidation or chemical reaction with the ionic liquid, as shown in Fig.~\ref{fig:2}. A wire bonder is used to fix wire probes to the contact pads. Similarly to device A, the ionic liquid drop, deposited on the device top, and the gate electrode wire complete the field-effect structure. In the case of the microdevice B, the gate area is much larger than the central probed region.
The capacitance per area unit $C/S=(4.4 \pm 0.8)\times10^{-7}\,\text{F/cm}^{2}$ has been estimated for our ionic liquid by $C$--$V$ profiling employing a frequency of 1\,kHz and a modulation voltage of 30\,mV superimposed on the d.c. voltage between 0 and 1\,V. This means that we can change the interfacial charge density by about $3\times10^{12}$\,cm$^{-2}$ with a gate voltage of $V_{\text{G}}= 1$\,V.
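Explicitly, the induced areal charge density per volt is estimated as
\begin{equation*}
\Delta n=\frac{\left( C/S\right) V_{\text{G}}}{e}=\frac{4.4\times 10^{-7}\,\text{F/cm}^{2}\times 1\,\text{V}}{1.6\times 10^{-19}\,\text{C}}\approx 2.7\times 10^{12}\,\text{cm}^{-2},
\end{equation*}
consistent with the value of about $3\times10^{12}$\,cm$^{-2}$ quoted above.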
The Hall resistance measured for our films is linear in the magnetic field in the studied range up to 9\,T and reveals a positive sign of the Hall coefficient, in agreement with earlier studies \cite{Wadley:2013_NC}. Since magnetization of collinear AFs and, thus, spin polarization of band carriers vary also linearly with the magnetic field, the Hall resistance may {\em a priori} contain an anomalous component. Neglecting it, and adopting a single band transport model, our measurements lead to the values of hole concentrations $p$ and mobilities $\mu_{\text{H}}$, collected in Table~\ref{tab:hall}. The value $p = 1.1\times10^{22}$\,cm$^{-3}$ determined previously \cite{Wadley:2013_NC}, lies between $p = (1.50 \pm 0.02)\times10^{22}$ and $(0.5\pm0.1)\times10^{22}$\,cm$^{-3}$ obtained here for the 45 and 10\,nm thick films, respectively. A relatively small hole density in the thinnest layer, corresponding to the areal density of $5\times10^{15}$\,cm$^{-2}$, may point to interfacial or surface depletion. At the same time, the magnitudes of the Hall mobilities are within $3.6\pm0.3\,\text{cm}^2/\text{Vs}$ for the three films in question. A comparison of these values to the field mobilities will tell about the role of surface states as well as to what extent the Hall data are affected by multiband transport and the anomalous Hall effect in the semimetallic and antiferromagnetic CuMnAs.
The key experiment of this work is the field effect, i.e., how the four-probe longitudinal resistance changes under the influence of the gate voltage. The main challenge is the high value of the carrier concentration in CuMnAs, making the magnitude of the field effect small and comparable to resistance changes caused by temperature fluctuations under ambient conditions. Therefore, a strong electric field has to be used and a good temperature stabilization implemented. Accordingly, the studied devices are mounted on a sample holder in a vacuum chamber of a cryostat with a temperature controller, an arrangement that also prevents contact with water vapor in the air. The gate voltage is applied to the gate electrode in the form of a square wave with a period of 200 or 300\,s. In the case of the device A, the current source supplies a probing current $I$ in the range $1-100\,\mu\text{A}$ of alternating polarity (20~s period) to eliminate thermal forces. In the case of the device B, resistance changes generated by the gate voltage are probed by an ac lock-in method with an excitation current of $10\,\mu \text{A}$ and frequency $11\,\text{Hz}$. The device design allows probing the resistance along four different crystal axes, [100], [110], [010], and [1$\bar{1}$0].
\begin{figure}
\includegraphics{il_top_view3.eps}
\caption{\label{fig:1} Experimental setup for the determination of the resistivity changes under the influence of the gate voltage ($V_{\text{G}}$) for the device A. The blue circle denotes the area covered with the ionic liquid.}
\end{figure}
\begin{figure}
\includegraphics{rc133_e.eps}
\caption{\label{fig:2} Microdevice (device B) with eight contacts (clear bluish areas) for studies of the field effect for current along different crystalline axes. The darkest regions are trenches etched down to the substrate defining current paths, covered additionally by an Al$_2$O$_3$ film (brown areas) with atomic layer deposition to prevent chemical reactions between the layer and the ionic liquid. The Al$_2$O$_3$ film extends beyond the etched trenches covering partially the CuMnAs layer to ensure there is no contact with side walls and the ionic liquid. The whole device is covered by the ionic liquid in which the gate electrode is dipped. The current and voltage connections are shown for resistance measurements along the [100] crystal direction.}
\end{figure}
As shown in Figs.~\ref{fig:3} and \ref{fig:devicebj100}, clear variations of the resistance with the same periodicity as the gate voltage are observed for both devices. Assuming that neither electrochemical nor piezoelectric effects operate, an increase of resistance for positive values of the gate voltage means that hole carriers are involved. The field-effect data are presented in the form of relative resistance changes $\Delta R_\text{xx}/R_\text{xx}$,
\begin{equation}
\frac{\Delta R_{\text{xx}}}{R_{\text{xx}}}=\frac{R_{\text{xx}}\left(V_{\text{G}}\right)-R_{\text{xx}}\left(V_{\text{G}}=0\,\text{V}\right)}{R_{\text{xx}}\left(V_{\text{G}}=0\,\text{V}\right)}, \label{eq:1}
\end{equation}
that is, as a difference between the resistance when a specific gate voltage is on and when the gate voltage is zero, normalized by the resistance value at $V_{\text{G}}=0$. In the case of the device A, a small resistance drift linear in time is observed and subtracted from the data. It probably originates from a chemical reaction between the ionic liquid and the edges of the sample. Its rate is $9 \times 10^{-5}\, \Omega/\text{s}$ for $V_{\text{G}}\!=\!1\,\text{V}$, but for most gate voltages used in the experiment it does not exceed $6 \times 10^{-5}\,\Omega/\text{s}$.
The resistance changes depend on the magnitude of the gate voltage (Fig.~\ref{fig:5}), whereas they do not show any clear dependence on the probing current (Fig.~\ref{fig:6}). The current flowing through the gate $I_{\text{G}}$ (Fig.~\ref{fig:3}) decays with time. This dependence is presumably a long-time tail of two phenomena: (i) the capacitance charging effect via a non-zero resistance of the sample; (ii) a reorganization process of the charge distribution within the ionic liquid \cite{Reichert:2018_FD,Jitvisate:2018_JPCL}. For the smaller device B the total current through the gate does not exceed $10\,\text{nA}$, which means that its magnitude in the probed region is much below 1\,nA. The gate voltage $V_{\text{G}}$ has been applied in the range between $-1$\,V and $1$\,V. A significant increase in $I_{\text{G}}$ and $R_{\text{xx}}$ for $V_{\text{G}}$ beyond this range suggests the onset of chemical reactions and electrical breakdown.
\begin{figure}
\includegraphics{fig3.eps}
\caption{\label{fig:3} Time dependence of the gate voltage $V_{\text{G}}$, relative resistance changes $\Delta R_{\text{xx}}/R_{\text{xx}}$ and current flowing through the gate $I_{\text{G}}$ for device A at room temperature for $10\,\mu\text{A}$ probing current in the experimental setup presented in Fig.~\ref{fig:1}. A clear correlation between changes of the gate voltage and longitudinal resistance can be seen, whereas the residual gate current $I_{\text{G}}$ shows a different time dependence.}
\end{figure}
\begin{figure}
\includegraphics{deviceB_J100.eps}
\caption{\label{fig:devicebj100} Relative resistance changes $\Delta R_{\text{xx}}/R_{\text{xx}}$ for the device B under the influence of the gate voltage $V_{\text{G}}$ of the magnitude showed in the upper panel. The current is flowing along the $[100]$ crystal direction.}
\end{figure}
\begin{figure}
\includegraphics{il_RC133_C_UGdepRxx.eps}
\caption{\label{fig:5} Dependence of the relative resistance changes $\Delta R_{\text{xx}}/R_{\text{xx}}$ on the gate voltage $V_{\text{G}}$ for the two studied structures. The lines represent a linear fit to the experimental data for the device A (dashed, blue line) and device B (solid, red line).}
\end{figure}
\begin{figure}
\includegraphics{il_RC133_C_Idep.eps}
\caption{\label{fig:6} Relative resistance changes $\Delta R_{\text{xx}}/R_{\text{xx}}$ of the device A recorded for different values of probing currents $I$ at $V_{\text{G}}\!=\!1\,$V. No apparent dependence is found in the studied current range, as expected.}
\end{figure}
A linear fit to the experimental data for the device A presented in Fig.~\ref{fig:5} indicates that the relative resistivity change $\Delta R_{\text{xx}}/R_{\text{xx}}\!=\!(3.1\pm0.5)\times10^{-4}$ per 1\,V in the studied gate voltage range. The corresponding values for the device B are also shown for measurements carried out for different current directions with respect to crystal axes, and a linear fit to these data gives $\Delta R_{\text{xx}}/R_{\text{xx}}\!=\!(5\pm1)\times10^{-4}$ per 1\,V. Within the estimated experimental uncertainty there is no dependence of $\Delta R_{\text{xx}}/R_{\text{xx}}$ on the current direction. The lower value of $\Delta R_{\text{xx}}/R_{\text{xx}}$ in the case of the device A is assigned to only partial covering of the region between the voltage probes by the ionic liquid, as shown in Fig.~\ref{fig:1}. Actually, the data for these two samples are in accord if corrected by a factor $f$ describing a relative coverage of the probed area by the ionic liquid, where $f = 0.5\pm0.1$ and $f = 1$ for the device A and B, as shown in Fig.~\ref{fig:1} and Fig.~\ref{fig:2}, respectively.
We compare the experimental values of $\Delta R_{\text{xx}}/R_{\text{xx}}$ to theoretical estimations under the assumption that the only effect of the gate electric field is a depletion or accumulation of hole carriers at the layer surface. Under this assumption, a change in the areal hole density $\Delta p$ in the gated region is given by,
\begin{equation}
\Delta p= -\frac{CV_{\text{G}}}{Sq}, \label{eq:deltap}
\end{equation}
where $C/S = (4.4 \pm 0.8) \times 10^{-7}\,\text{F/cm}^{2}$ and $q =e$ for holes. On the other hand, assuming that the hole mobility $\mu$ is independent of the local carrier density as well as noting that in our case $\Delta p \ll pt$, where $t$ is the film thickness,
\begin{equation}
\frac{\Delta R_{\text{xx}}}{R_{\text{xx}}}= -f\frac{\Delta p}{pt}.
\label{eq:deltar}
\end{equation}
For the areal hole concentration determined from the Hall measurements, $(5 \pm 1) \times 10^{15}\,\text{cm}^{-2}$, we arrive at $\Delta R_{\text{xx}}/R_{\text{xx}}= f\cdot(5\pm1)\times10^{-4}$ per 1$\,$V, which is in good agreement with the experimentally observed values presented in Fig.~\ref{fig:5} for both devices, taking into account the values of $f$ quoted above.
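Explicitly, with $\Delta p\approx 2.7\times 10^{12}\,\text{cm}^{-2}$ per volt from Eq.~(\ref{eq:deltap}) and the areal hole density $pt\approx 5\times 10^{15}\,\text{cm}^{-2}$, Eq.~(\ref{eq:deltar}) gives
\begin{equation*}
\left\vert \frac{\Delta R_{\text{xx}}}{R_{\text{xx}}}\right\vert \approx f\,\frac{2.7\times 10^{12}}{5\times 10^{15}}\approx f\times 5\times 10^{-4}\ \text{per}\ 1\,\text{V}.
\end{equation*}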
The Hall effect and the resistivity changes generated by gating allow us to compare the values of carrier mobility, namely the Hall mobility, $\mu_{\text{H}}\!=\!\sigma_{xx}/pq$, and the field mobility $\mu_{\text{E}}$ defined by
\begin{equation}
\mu_{\text{E}} = -\frac{1}{C/S}\frac{\partial \sigma_{\Box}}{\partial V_{\text{G}}},
\end{equation}
where $\sigma_{\Box}$ is the sheet conductivity in the gated region. In terms of the device longitudinal resistance $R_{xx}$, $\mu_{\text{E}}$ assumes the form,
\begin{equation}
\mu_{\text{E}} = \frac{L}{fWC/S}\frac{1}{R_{xx}^2}\frac{\partial R_{xx}}{\partial V_{\text{G}}},
\label{eq:mu}
\end{equation}
where $L$ and $W$ are the length and the width of the probed region, respectively; $L/W= 1.7\pm0.3$ and $1\pm0.1$ for devices A and B, respectively. The mobility values determined from the data in Fig.~\ref{fig:5} and Eq.\,\ref{eq:mu} for the studied structure are presented in Tab.~\ref{tab:mob}.
The numbers quoted there imply that hole concentrations determined from the Hall resistance on the one hand, and from the field mobility and sample conductance on the other, are in accord.
\begin{table}
\caption{\label{tab:mob} Comparison of the Hall ($\mu_\text{H}$) and field mobility ($\mu_\text{E}$) for studied devices.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& $\mu_{\text{H}}\,[\text{cm}^2/\text{Vs}]$ & $\mu_{\text{E}}\,[\text{cm}^2/\text{Vs}]$\\
\hline
device A & - & $3.7 \pm 1$\\
\hline
device B & $3.4 \pm 0.7$ & $3.7 \pm 1$\\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
In summary, the electric field can reversibly modify the resistivity of the CuMnAs structure capped with AlO$_{\text{x}}$. A quantitative agreement between the values of the Hall and field mobilities proves that the modulation of the itinerant hole concentration in the layer is a mechanism accounting for the observed field effect. This indicates that, within the studied range of electric fields, electrochemical and piezoelectric phenomena as well as charging of surface states do not contribute significantly to the field-induced resistance changes, at least at room temperature. At the same time, there are no indications of a sizable contribution of the anomalous Hall effect. Similarly, there is no evidence for a breakdown of the single band approximation, as would be the case for multiband transport. In that situation the Hall effect would provide information on the highest-mobility carriers, whereas the field effect would probe the band with the largest density of states at the Fermi energy. The presented approach opens a way to manipulate the Fermi level and to explore the influence of the electric field on the magnetic order and the N\'eel vector switching in conducting antiferromagnetic systems. \\
The work was supported by the Polish National Science Centre (grants No. DEC-2016/21/N/ST3/03380 and DEC-2012/06/A/ST3/00247) and by the Foundation for Polish Science through the IRA Programme financed by the EU within the SG OP Programme.
\nocite{*}
\section{Introduction}
As of 1st January 2021, there are nearly three million Android apps available on the official Google Play app store.
The majority of them (over 95\%) are made freely accessible to Android users and cover every aspect of users' daily life, such as supporting social networking, online shopping, banking, etc.
Many of these functionalities are supported by application programming interfaces provided by the Android framework, a number of which are fulfilled by a set of hardware-based sensors~\cite{developerandroid}.
For example, Android apps often leverage accelerometer sensors to detect the orientation of a given smartphone and user movement, and the temperature sensor to detect the device's temperature.
Despite being needed to support the implementation of many diverse Android apps, mobile phone sensors can also be abused to achieve malicious behaviors.
There have been many reports of apps that exploit sensors in Android devices to conduct malicious activities.
For example, Aviv et al.~\cite{aviv2012practicality} have experimentally shown that the accelerometer sensor could be leveraged as a side-channel to infer mobile users' tap and gesture-based input.
Xu et al.~\cite{xu2012taplogger} have also demonstrated the possibility of this attack by presenting to the community a Trojan application named \emph{TapLogger} to silently infer user's tap inputs based on the device's embedded motion sensors.
Similarly, Schlegel et al.~\cite{schlegel2011soundcomber} have provided another Trojan application called \emph{Soundcomber} that leverages the smartphone's audio sensor to steal users' private information.
These studies have experimentally shown that the leaks of Android sensor data can cause severe app security issues.
We argue that there is thus a strong need for automated approaches that detect such sensor leaks in Android apps before the apps are published on app markets.
To the best of our knowledge, existing works focus on detecting certain types of sensor usage and their corresponding suspicious behaviors. None of them is designed as a generic approach for systematically revealing data leaks in all types of Android sensors. Moreover, these works mainly concentrate on discovering and understanding the usage patterns of Android embedded sensors and do not involve complete data-flow analysis to pinpoint sensitive data leaks caused by sensors.
Although many generic approaches to detect privacy leaks in Android apps have been proposed, none can be directly applied to achieve our purpose, i.e., detecting generic sensor leaks in Android apps.
Indeed, the famous FlowDroid tool has been demonstrated to be effective in detecting method-based privacy leaks in Android apps.
It performs static taint analysis on Android apps' bytecode and attempts to locate data-flow paths connecting two methods, i.e., from a \emph{source} to a \emph{sink} method.
Here, \emph{source} refers to such methods that obtain and return sensitive information from the Android framework (e.g., get device id), while \emph{sink} refers to such methods that perform dangerous operations such as sending data to remote servers.
FlowDroid has been designed as a generic approach.
It has provided a means for users to pre-define the needed \emph{source} and \emph{sink} methods.
Unfortunately, FlowDroid does not allow users to configure fields as \emph{sources}, and hence cannot support the detection of privacy leaks flowing from \emph{fields} to sensitive operations (i.e., \emph{sinks}).
Since sensor data in Android is mostly provided via fields, FlowDroid cannot be directly applied to detect sensor leaks in Android apps.
To address this research gap, we designed and implemented a prototype tool, \tool{}, to automatically detect sensor data leaks in Android apps.
We extend the open-source tool FlowDroid to support field-triggered sensitive data-flow analyses.
\tool{} further performs a detailed static code analysis to infer the sensor types involved in the sensitive data-flows, as the leaked sensor data is not directly associated with a sensor type (we detail this challenge in Section~\ref{subsec:type}).
We then apply \tool{} to detect and characterize sensor leaks in real-world Android apps.
Based on 40,000 randomly selected Android apps, including 20,000 benign apps and 20,000 malicious apps, our experimental results show that \tool{} is effective in detecting sensor leaks in Android apps.
We also find that malware is more interested in obtaining and leaking sensor data than benign apps, and that the accelerometer and magnetic field sensors are among the most targeted by those malicious apps.
We make the following main contributions in this work:
\begin{itemize}[leftmargin=*]
\item We have designed and implemented a prototype tool, \tool{} (\underline{Se}nsor l\underline{e}a\underline{k} find\underline{er}), that leverages static analysis to automatically detect privacy leaks originated from Android sensors.
\item We apply \tool{} to analyze both malware and benign apps at a large scale. Our results show many sensor leaks that are overlooked by the state-of-the-art static analysis tool.
\item We have demonstrated the effectiveness of our tool by evaluating the sensor leaks it highlights.
\end{itemize}
\section{Background and Motivation}
\subsection{How sensors work in Android platforms}
\begin{table*}[!h]
\small
\centering
\caption{Sensor types supported by the Android platform.}
\vspace{-2mm}
\label{tab:sensor_types}
\resizebox{\linewidth}{!}{
\begin{tabular}{l l l }
\hline
Sensor Type & Sensor Category & Description \\
\hline
Gravity & Motion sensor & Provides a three dimensional vector indicating the direction and magnitude of gravity\\
Linear acceleration & Motion sensor & Provides a three-dimensional vector representing acceleration along each device axis \\
Rotation vector & Motion sensor & Provides the orientation of the device \\
Significant motion & Motion sensor & Triggers an event each time significant motion is detected and then it disables itself \\
Step counter & Motion sensor & Provides the number of steps taken by the user since the last reboot\\
Step detector & Motion sensor & Triggers an event each time the user takes a step \\
Accelerometer & Motion sensor & Measures the acceleration applied to the device, including the force of gravity \\
Gyroscope & Motion sensor & Measures the rate of rotation in rad/s around a device's x, y, and z axes \\
Game rotation & Position sensor & Identical to the rotation vector sensor, except it does not use the geomagnetic field \\
Geomagnetic rotation & Position sensor & Similar to the rotation vector sensor, but it does not use the gyroscope \\
Geomagnetic field & Position sensor & Monitors changes in the earth's magnetic field \\
Uncalibrated magnetometer & Position sensor & Similar to the geomagnetic field sensor, except that no hard iron calibration is applied\\
Proximity sensor & Position sensor & Determines how far away an object is from a device \\
Light & Environment sensor & Provides Illuminance \\
Pressure & Environment sensor & Provides ambient air pressure \\
Temperature & Environment sensor & Provides device temperature \\
Ambient temperature & Environment sensor & Provides ambient air temperature \\
Humidity & Environment sensor & Provides ambient relative humidity \\
\hline
\end{tabular}
}
\vspace{-4mm}
\end{table*}
Figure~\ref{fig:layers_sensor_stack} depicts the Android sensor stack. Sensors are microelectromechanical systems (MEMS) chips that detect events or changes in the surrounding environment. After the sensors capture events, the data is optionally passed on to the Sensors Hub, which performs low-level computation in support of the sensors, such as step counting and sensor fusion. The Drivers and Hardware Abstraction Layer (HAL) then handles the interaction between the hardware and the Android framework. Finally, Android apps access the sensor data through APIs provided by the Android Software Development Kit (SDK).
\begin{figure}[h!]
\centering
\vspace{-5mm}
\subfigure[]{\label{fig:layers_sensor_stack}\includegraphics[width=0.48\linewidth]{figures/layers_sensor_stack.pdf}}
\subfigure[]{\label{fig:coordinate}\includegraphics[width=0.48\linewidth]{figures/coordinate.pdf}}
\caption{Layers and Coordinate system of the Android sensor stack. Source: https://developer.android.com/guide/topics/sensors/sensors\_overview}
\label{fig:Android_Sensor}
\vspace{-1mm}
\end{figure}
In general, the Android platform provides three broad categories of sensors for measuring the motion, orientation, and various environmental conditions of the device:
\begin{itemize}
\item \textbf{Motion sensors}: These are used to monitor device movement, such as tilt, shake, rotation, or swing. The movement usually reflects direct user input or the physical environment around the device. Motion sensors include the accelerometer, the gyroscope, the step counter, etc.
\item \textbf{Position sensors}: These determine the physical position of a device in the world's frame of reference or the orientation of a device. Position sensors include the geomagnetic field sensor, the proximity sensor, etc.
\item \textbf{Environmental sensors}: These monitor various environmental properties, such as relative ambient humidity, illuminance, ambient pressure, and ambient temperature near the device. Examples of environmental sensors include the light sensor, the pressure sensor, etc.
\end{itemize}
Android uses a standard 3-axis coordinate system to represent data values, as shown in Figure~\ref{fig:coordinate}. Relative to the screen in its default orientation, the X-axis is horizontal, the Y-axis is vertical, and the Z-axis points towards the outside of the screen face. This coordinate system does not change when the device's screen orientation changes, meaning the sensor's coordinate system remains the same even while the device is in motion.
Table~\ref{tab:sensor_types} summarises the main embedded sensors supported by Android with their categories, types, and descriptions. The Android sensor framework provides both hardware-based and software-based sensors. Hardware-based sensors derive their data directly from physical components built into the device, measuring properties such as acceleration, geomagnetic field strength, or angular change. Software-based sensors derive their data from one or more of the hardware-based sensors; examples include the linear acceleration sensor and the gravity sensor.
\begin{lstlisting}[
caption={Example of demonstrating how to access the device's sensors.},
label=code:example_sensor_usage,
firstnumber=1]
public class SensorActivity extends Activity implements SensorEventListener {
private SensorManager sensorManager;
private Sensor pressure;
private List<Sensor> deviceSensors;
@Override
public final void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
// Get an instance of the sensor service, and use that to get an instance of a particular sensor.
sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
deviceSensors = sensorManager.getSensorList(Sensor.TYPE_ALL);
pressure = sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
}
@Override
public final void onAccuracyChanged(Sensor sensor, int accuracy) {
// Do something here if sensor accuracy changes.
}
@Override
public final void onSensorChanged(SensorEvent event) {
float millibarsOfPressure = event.values[0];
// Do something with this sensor data.
}
@Override
protected void onResume() {
//Register a listener for the sensor.
super.onResume();
sensorManager.registerListener(this, pressure, SensorManager.SENSOR_DELAY_NORMAL);
}
@Override
protected void onPause() {
//Unregister the sensor when the activity pauses.
super.onPause();
sensorManager.unregisterListener(this);}}
\end{lstlisting}
The Android sensor framework provides several APIs for developers to access its sensors and acquire raw data. We present an example in Listing~\ref{code:example_sensor_usage} to elaborate on how one identifies sensors and determines their capabilities. First, to identify the sensors on a device, developers need to obtain the sensor service by calling the \texttt{getSystemService()} method with the constant \texttt{Context.SENSOR\_SERVICE} as an argument (line 10). After that, developers can get a list of all sensors on the device by invoking \texttt{getSensorList(int type)} (line 11). To access a specific sensor, the method \texttt{getDefaultSensor(int type)} can be called with a specific type constant (line 12).
To monitor sensor events, the developer should implement two callback methods exposed through the \texttt{SensorEventListener} interface, namely \texttt{onAccuracyChanged()} and \texttt{onSensorChanged()} (lines 15-17 and 19-22, respectively). Whenever a sensor detects a change, the Android system calls these two methods to report the following details:
\textbf{Sensor accuracy changes}
When the sensor's accuracy changes, \texttt{onAccuracyChanged()} provides a reference to the \texttt{Sensor} object and the new accuracy status of this sensor.
\textbf{Sensor value changes}
When a sensor obtains a new value, \texttt{onSensorChanged()} provides a \texttt{SensorEvent} object, which contains the accuracy of the data, the sensor object, the timestamp at which the data was generated, and the new data that the sensor recorded.
Lastly, the \texttt{onResume()} (lines 24-28) and \texttt{onPause()} (lines 30-34) callback methods are used to register and unregister the listener for the sensor. When an activity is paused, the related sensors should be disabled to avoid draining the battery.
\begin{table}[!h]
\small
\centering
\caption{Examples of Sensor-based Cybersecurity attacks.}
\vspace{-2mm}
\label{tab:sensor_attacks}
\resizebox{\linewidth}{!}{
\begin{tabular}{l l l }
\hline
Sensor Category & Sensor Type & Attack Description \\
\hline
\multirow{23}{*}{Motion sensor} &
Accelerometer & sniffing smartwatch passwords \cite{lu2018snoopy} \\
& Accelerometer, Gyroscope & Text Inference \cite{hodges2018reconstructing}\\
& Accelerometer, Gyroscope & Motion-based keystroke inference \cite{cai2012practicality} \\
& Accelerometer, Gyroscope & Keystroke inference on Android \cite{al2013keystrokes} \\
& Accelerometer & Accelerometer side channel attack \cite{aviv2012practicality} \\
& Accelerometer & Touchscreen area identification \cite{owusu2012accessory} \\
& Accelerometer & Decoding vibrations from nearby keyboards \cite{marquardt2011sp} \\
& Gyroscope & Single-stroke language-agnostic keylogging \cite{narain2014single} \\
& Accelerometer, Gyroscope & Inferring Keystrokes on Touch Screen \cite{cai2011touchlogger} \\
& Accelerometer, Gyroscope & Inferring user inputs on smartphone touchscreens \cite{xu2012taplogger}\\
& Accelerometer, Gyroscope & Keystroke Inference \cite{bo2019know} \\
& Accelerometer & keystrokes Inference in a virtual environment. \cite{ling2019know}\\
& Accelerometer, Gyroscope & Risk Assessment of motion sensor \cite{huang2019risk}\\
& Accelerometer, Gyroscope & Infer tapped and traced user input \cite{nguyen2015using} \\
& Accelerometer, Gyroscope & Motion-based side-channel attack \cite{lin2019motion} \\
& Accelerometer & Keystroke inference with smartwatch \cite{liu2015good}\\
& Accelerometer & Motion leaks through smartwatch sensors \cite{wang2015mole}\\
& Accelerometer & Side-channel inference attacks \cite{maiti2018side} \cite{maiti2015smart}\\
& Accelerometer & Smartphone PINs prediction \cite{sarkisyan2015wristsnoop} \\
& Gyroscope & Inferring Mechanical Lock Combinations \cite{maiti2018towards} \\
& Accelerometer, Gyroscope & Inference of private information \cite{maiti2018towards} \\
& Accelerometer, Gyroscope & Typing privacy leaks via side-Channel from smart watch \cite{liu2019aleak}\\
& Accelerometer, Magnetometer & Input extraction via motion sensor \cite{shen2015input} \\
& Gyroscope & Recognizing speech \cite{michalevsky2014gyrophone} \\
\hline
\multirow{2}{*}{Position sensor}
& Magnetic & Compromising electromagnetic emanations \cite{vuagnoux2009compromising}\\
& Magnetic & My Smartphone Knows What You Print \cite{song2016my} \\
& Magnetic & Location detection \cite{block2018my} \\
\hline
\multirow{1}{*}{Environment sensor}
&Light Sensor & Optical eavesdropping on displays \cite{chakraborty2017lightspy}\\
\hline
\end{tabular}
}
\vspace{-4mm}
\end{table}
\subsection{Motivation}
\label{subsec:motivation}
Sensors have been widely adopted for launching side-channel attacks against smart devices \cite{sikder2021survey}.
Table~\ref{tab:sensor_attacks} summarizes a diverse set of sensor-based attacks targeting smartphones and smartwatches. Since accessing sensitive sensor data does not require any security checks (e.g., permission checks), attackers can easily trigger malicious behaviors by making use of such data. As revealed in the table, sensor leaks are generally exploited for (1) keystroke inference, (2) task inference (a type of attack that reveals information about an on-going task or application on a smart device), (3) location inference, and (4) eavesdropping. For example, motion and position sensors can be exploited for keystroke inference, leading to severe privacy leaks such as passwords, credit card information, etc.
The light sensor has been used for optical eavesdropping on nearby displays, causing private information leaks, and magnetic sensors can be exploited to capture compromising electromagnetic emanations, affecting the confidentiality of the devices.
As a concrete example, Lu et al.~\cite{lu2018snoopy} revealed that sensitive password information could be intercepted through motion data from a smartwatch's onboard sensors. They proposed \emph{Snoopy}, a password extraction and inference approach that mounts a PIN attack on sensor data and can affect smartwatch users in a non-invasive way.
\emph{Snoopy} extracts the segments of motion data recorded while users enter passwords and then applies deep learning techniques to infer the actual passwords. Figure~\ref{fig:snoopy_example} gives two examples of how the motion sensor data changes when the user swipes or taps a password on a smartwatch.
\emph{Snoopy} demonstrates the feasibility of sensor data leaks by intercepting password information entered on smartwatches.
Such real-world sensor-enabled attacks motivated us to provide automated tooling for characterizing sensor leaks in Android apps, a threat that has long been overlooked.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.7\linewidth]{figures/methodology.pdf}
\vspace{-3mm}
\caption{The working process of our approach.}
\label{fig:methodology}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/snoopy.jpg}
\vspace{-1mm}
\caption{The Snoopy example of sniffing smartwatch passwords via motion sensor data \cite{lu2018snoopy}.}
\vspace{-3mm}
\label{fig:snoopy_example}
\end{figure}
\vspace{-1mm}
\section{Approach}
This work aims to automatically detect information leaks of onboard sensors in Android apps. To this end, we design and implement a prototype tool called \tool{}. Figure~\ref{fig:methodology} depicts the overall working process of \tool{}, which consists of three modules, namely Sensitive Sensor Source Identification, Sensor-triggered Static Taint Analysis, and Sensor Type Inference.
\vspace{-1mm}
\subsection{Sensitive Sensor Source Identification}
\label{subsec:source}
The first module, \emph{Sensitive Sensor Source Identification}, aims to identify sensor-related sources that access and obtain sensitive information related to the device's sensors. As reported by Liu et al.~\cite{liu2018discovering}, Android sensor data can be obtained by invoking sensor-related APIs or by directly accessing local fields in which the sensor data is stored.
In this work, we take both of these types into consideration, aiming to pinpoint all possible sensor-triggered privacy leaks.
To do this, we need to identify all sensor-related sources, including both Android methods and fields.
For Android methods, we use the well-known SUSI tool~\cite{arzt2013susi} to obtain sensor-related source methods.
SUSI is a machine-learning-guided approach that scans the Android API's source code to predict \emph{source} and \emph{sink} methods, based on a training set of hand-annotated sources and sinks.
In this work, we launch SUSI on the latest Android Open Source Project (i.e., AOSP version 11.0) and manually filter out non-sensor related source methods.
Since there is no existing approach for identifying sensor-related fields (as sources), we resort to a manual process of going through the Android Developers' documentation to identify source fields storing sensitive sensor information. The identified fields are then discussed and confirmed among the authors by assessing whether leaking such information would potentially expand the attack surface on users' privacy. In total, we identified 79 fields and 20 methods as sources. Table~\ref{tab:sensor_sources} lists the selected sources that indeed introduce leaks in our experimental dataset. A full list of field and method sources can be found in the \emph{SourcesAndSinks.txt} file of our open-source project\footnote{https://github.com/MobileSE/SEEKER}.
\begin{table}[!h]
\small
\centering
\caption{The list of sensitive sensor sources.}\vspace{-1mm}
\label{tab:sensor_sources}
{
\begin{tabular}{l c}
\hline
Sensor-related Source & Source Type \\
\hline
SensorEvent\#values & Field \\
SensorEvent\#timestamp & Field \\
Sensor\#getName() & Method \\
Sensor\#getVendor() & Method \\
Sensor\#getVersion() & Method \\
SensorManager\#getDefaultSensor(int) & Method \\
Sensor\#getMaximumRange() & Method \\
SensorManager\#getSensorList(int) & Method \\
Sensor\#getType() & Method \\
Sensor\#getResolution() & Method \\
Sensor\#getPower() & Method \\
\hline
\end{tabular}
}
\vspace{-2mm}
\end{table}
\vspace{-1mm}
\subsection{Sensor-triggered Static Taint Analysis}
The ultimate goal of \tool{} is to detect sensor-related data leaks. To this end, we implement the \emph{Sensor-triggered Static Taint Analysis} module, which extends the state-of-the-art tool FlowDroid~\cite{arzt2014flowdroid} to facilitate sensor-related data leak detection. FlowDroid detects data leaks by computing data flows between sources and sinks: a sensitive data flow happens when suspicious ``tainted'' information passes from a source API (e.g., \texttt{getDeviceId}) to a sink API (e.g., \texttt{sendTextMessage}).
FlowDroid provides a highly precise static taint-analysis model for Android applications. However, it only takes API statements as sources or sinks, leading to false negatives because of the lack of field-triggered sources. Thus, in this work, we extend FlowDroid to support field statements as sources, so as to pinpoint data leaks originating from specific field sources of interest.
Our preliminary study discovered that certain sensor-related data leaks originate from data stored in class fields (e.g., \texttt{android.hardware.SensorEvent\#values}). We therefore implemented our own class implementing FlowDroid's \texttt{ISourceSinkDefinitionProvider} interface to support the declaration of fields as sources. In addition, to model class fields, we defined a new model named AndroidField that extends \texttt{SootFieldAndMethod}. After loading a field statement from the \emph{source\&sink.txt} file, we apply a regular expression (a field pattern) to convert it into the AndroidField model.
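As an illustration, the field-pattern conversion can be sketched in plain Java as follows. This is a minimal sketch, not \tool{}'s actual implementation: the signature format and the \texttt{AndroidField} shape are simplified assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FieldSourceParser {

    // Simplified stand-in for the AndroidField model described above.
    public static class AndroidField {
        public final String declaringClass, fieldType, fieldName;
        AndroidField(String c, String t, String n) {
            declaringClass = c; fieldType = t; fieldName = n;
        }
    }

    // Assumed Jimple-style field signature, e.g.:
    //   <android.hardware.SensorEvent: float[] values> -> _SOURCE_
    private static final Pattern FIELD_PATTERN =
            Pattern.compile("<([\\w.$]+):\\s+([\\w.$\\[\\]]+)\\s+([\\w$]+)>");

    public static AndroidField parse(String line) {
        Matcher m = FIELD_PATTERN.matcher(line.trim());
        if (!m.find()) {
            return null; // not a field source entry
        }
        return new AndroidField(m.group(1), m.group(2), m.group(3));
    }

    public static void main(String[] args) {
        AndroidField f =
                parse("<android.hardware.SensorEvent: float[] values> -> _SOURCE_");
        System.out.println(f.declaringClass + "#" + f.fieldName);
        // prints: android.hardware.SensorEvent#values
    }
}
```

A non-matching line (e.g., a method signature) simply yields \texttt{null}, so field and method source entries can share one configuration file.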
FlowDroid computes data-flow connections between all possible statements. In its implementation, the \texttt{ISourceSinkManager} interface marks all statements as possible sources and then records all taint abstractions that are passed into \texttt{getSourceInfo()}. We therefore pass the constructed field model as a source statement into the subsequent taint analysis process. In this way, sensitive data flows can be detected starting at the given field source statements.
\subsection{Sensor Type Inference}
\label{subsec:type}
The primary goal of \tool{} is to detect data leaks from Android platform sensors.
With the help of FlowDroid's taint analysis, \tool{}'s second module can detect sensor-triggered sensitive data flows.
Unfortunately, for the field-triggered ones, the identified data flows only show that sensor data is leaked but do not tell from which sensor the data is collected.
The sensor type information is important for helping security analysts understand the sensor leaks.
Therefore, in our last module, we identify the types of sensors that are leaking information.
To identify which sensors exist on a specific Android device, an app first obtains a reference to the sensor service by creating an instance of the \texttt{SensorManager} class, i.e., by calling the \texttt{getSystemService()} method with the \emph{SENSOR\_SERVICE} argument. The available sensors on the device can then be determined by calling the \texttt{getSensorList()} method.
The \texttt{getSensorList()} method returns a list of all available sensors on the device when the constant \emph{TYPE\_ALL} is specified as its parameter. A list of all sensors of a given type can also be retrieved by instead passing one of the constants defined for the corresponding sensor types, such as \emph{TYPE\_GYROSCOPE} or \emph{TYPE\_LINEAR\_ACCELERATION}. We can also determine whether a specific type of sensor exists by calling the \texttt{getDefaultSensor()} method with the target type constant (the same constants passed to the \texttt{getSensorList()} method). If the device has that type of sensor, an object for that sensor is returned; otherwise, null is returned.
We use a rule-based strategy to identify the sensor type of a leak in the case where only one sensor is registered in the given app. To do this, \tool{} obtains the sensor type by looking at the type constant in the \texttt{getDefaultSensor()} statement. For instance, \texttt{getDefaultSensor(Sensor.TYPE\_ACCELEROMETER)} indicates that the Accelerometer sensor is obtained. We can then reasonably assume that all sensor-related data leaks in the class are associated with the identified sensor (because only this sensor is registered).
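The single-sensor rule can be illustrated with the following plain-Java sketch. The integer constants mirror the Android SDK's public \texttt{Sensor} type values; the method shape is our simplification of the rule above, not \tool{}'s actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class SensorTypeResolver {

    // Values copied from android.hardware.Sensor's public type constants.
    public static final int TYPE_ACCELEROMETER = 1;
    public static final int TYPE_GYROSCOPE = 4;
    public static final int TYPE_PRESSURE = 6;

    private static final Map<Integer, String> TYPE_NAMES = new HashMap<>();
    static {
        TYPE_NAMES.put(TYPE_ACCELEROMETER, "Accelerometer");
        TYPE_NAMES.put(TYPE_GYROSCOPE, "Gyroscope");
        TYPE_NAMES.put(TYPE_PRESSURE, "Pressure");
    }

    // Rule: if exactly one getDefaultSensor(type) registration is found in the
    // class, attribute every sensor leak in that class to this sensor type;
    // otherwise signal that context-aware branch analysis is required.
    public static String resolve(int[] registeredTypeConstants) {
        if (registeredTypeConstants.length != 1) {
            return null;
        }
        return TYPE_NAMES.getOrDefault(registeredTypeConstants[0], "Unknown");
    }
}
```

For example, \texttt{resolve(new int[]\{TYPE\_ACCELEROMETER\})} yields \texttt{"Accelerometer"}, whereas a class registering two sensors yields \texttt{null} and falls through to the multi-sensor handling.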
\begin{lstlisting}[
caption={An example of sensor type usage with switch branch.},
label=code:example_sensor_type_usage,
firstnumber=1]
public class MainActivity extends AppCompatActivity implements SensorEventListener{
@Override
public void onSensorChanged(SensorEvent sensorEvent) {
switch(sensorEvent.sensor.getType()) {
case Sensor.TYPE_ACCELEROMETER:
accX = sensorEvent.values[0];
accY = sensorEvent.values[1];
accZ = sensorEvent.values[2];
...
case Sensor.TYPE_GYROSCOPE:
gyroX = sensorEvent.values[0] * 5;
gyroY = sensorEvent.values[1] * 5;
gyroZ = sensorEvent.values[2] * 5;
...
case Sensor.TYPE_ROTATION_VECTOR:
rvX = sensorEvent.values[0];
rvY = sensorEvent.values[1];
rvZ = sensorEvent.values[2];
...
}}}
\end{lstlisting}
In the case of multiple sensors registered in the given app, we further leverage context-aware static code analysis to find the connection between sensor types and the leaked field data. First, we locate the invocation statement of the API \texttt{android.hardware.SensorManager\#getDefaultSensor(int)} in the \texttt{onSensorChanged()} method. In the multiple-sensor scenario, each sensor's behavior is handled in a conditional branch (e.g., an if-then-else or switch statement). We then apply context-aware static code analysis to detect the code branch that contains the tainted sensor source statement, based on which we resolve the sensor type from the branch condition.
We further elaborate on the context-aware static code analysis with the example presented in Listing~\ref{code:example_sensor_type_usage}. The code snippet shows how multiple sensors are handled within the \texttt{onSensorChanged(android.hardware.SensorEvent)} method. Android determines the activated sensor by matching the result of the \texttt{sensorEvent.sensor.getType()} method (line 4). For example, if \texttt{getType()} returns \texttt{Sensor.TYPE\_ACCELEROMETER} (line 5), the data obtained through \texttt{sensorEvent.values} is associated with the Accelerometer sensor (lines 6-8); if \texttt{getType()} returns \texttt{Sensor.TYPE\_GYROSCOPE} (line 10), the data contained in \texttt{sensorEvent.values} is accordingly associated with the currently activated sensor, i.e., the Gyroscope (lines 11-13).
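The branch resolution step can be approximated with the following plain-Java sketch, where each switch case is modeled as a mapping from its type constant to the statements it contains. This is a deliberate simplification of the actual Jimple-level analysis.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BranchTypeResolver {

    // Given the branches of a switch over sensorEvent.sensor.getType() and a
    // tainted source statement, return the sensor type of the enclosing branch.
    public static String typeOfTaintedStatement(
            Map<String, List<String>> branches, String taintedStmt) {
        for (Map.Entry<String, List<String>> e : branches.entrySet()) {
            if (e.getValue().contains(taintedStmt)) {
                return e.getKey();
            }
        }
        return null; // tainted statement lies outside every sensor branch
    }

    // Branches mirroring the switch statement in the listing above.
    public static Map<String, List<String>> exampleBranches() {
        Map<String, List<String>> branches = new LinkedHashMap<>();
        branches.put("TYPE_ACCELEROMETER", List.of("accX = sensorEvent.values[0]"));
        branches.put("TYPE_GYROSCOPE", List.of("gyroX = sensorEvent.values[0] * 5"));
        return branches;
    }

    public static void main(String[] args) {
        System.out.println(typeOfTaintedStatement(
                exampleBranches(), "gyroX = sensorEvent.values[0] * 5"));
        // prints: TYPE_GYROSCOPE
    }
}
```

In the real analysis the "statements" are Jimple units and branch membership is decided on the control-flow graph, but the lookup logic is analogous.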
\section{Experimental Setup and Results}
\tool{} is designed to expose the data leak issues of sensors in Android apps.
We investigate the feasibility and effectiveness of detecting sensor leaks in Android apps with the following three research questions:
\begin{itemize}
\item{\bf RQ1:} {\em Can \tool{} effectively detect sensor leaks in Android apps?} This research question aims to investigate the feasibility of detecting sensor leaks in Android apps with \tool{}.
\item {\bf RQ2:} {\em To what extent can diverse sensor leaks be identified by \tool{}?} With this research question, we explore the sensor types related to the identified sensitive data leaks and investigate to what extent such sensor leaks are targeted by attackers.
\item {\bf RQ3:} {\em Is \tool{} efficient in detecting sensor leaks in Android apps?} In this study, we leverage the time cost of detecting sensor leaks to assess the efficiency of \tool{}.
\end{itemize}
\subsection{Experimental Setup}
To answer the aforementioned research questions, we build an experimental dataset with a \textit{malware} set and a \textit{benign} set. The \textit{malware} set contains 20,000 Android malware apps downloaded from the VirusShare repository \cite{Virusshare}, collected between 2012 and 2020. The 20,000 Android apps in the \textit{benign} set are crawled from the official Google Play store. All 40,000 apps are submitted to VirusTotal \cite{Virustotal}, an online service aggregating over 70 anti-virus scanners (including the well-known Kaspersky, McAfee, and Kingsoft engines), to check whether they contain viruses.
For the \textit{malware} set, we select the apps that have been flagged by at least five anti-virus engines to ensure their maliciousness, while for the \textit{benign} set, we select the apps that are not flagged by any anti-virus engine.
Since \tool{} is designed to detect sensor leaks, we filter out the Android apps that do not use any onboard sensors by checking whether their code contains the string ``\texttt{android.hardware.sensor}''.
The final experimental dataset used in this study consists of 6,724 malware apps and 12,939 benign apps (cf. the 3rd column of Table~\ref{tab:sensor_leak_results}).
Our experiment runs on a Linux server with an Intel(R) Core(TM) i9-9920X CPU @ 3.50GHz and 128GB RAM. The timeout for analyzing each app with \tool{} is set to 20 minutes.
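The sensor-usage filtering step above can be sketched as follows. This is a minimal illustration: the paper only specifies the string check, so the method shape and input representation are assumptions.

```java
public class SensorAppFilter {

    // An app is retained for analysis only if its code references the Android
    // sensor framework, approximated here by the string check described above.
    public static boolean usesSensors(String appCode) {
        return appCode.contains("android.hardware.sensor");
    }
}
```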
\subsection{RQ1 -- Feasibility of Detecting Sensor Data Leaks}
Our first research question evaluates the feasibility of \tool{} in detecting sensor leaks in Android apps; the results are presented in Table~\ref{tab:sensor_leak_results}.
Quantitatively, 9,905 potential sensor leaks are identified by \tool{} in 1,596 apps; on average, each affected app contains about six sensor leaks.
This indicates that sensor leaks do exist in Android apps and might have been overlooked by security analysts.
From the malicious aspect, 14.4\% (967 out of 6,724) of the malware apps are identified with sensor leaks, compared with 4.9\% (629 out of 12,939) of the benign apps.
Figure~\ref{fig:Android_method_field_Sensor} further presents the number of sensor leaks detected in each Android app, which shows that malware apps tend to contain more sensor leaks than benign apps.
This difference is statistically significant according to a Mann-Whitney-Wilcoxon (MWW) test \cite{fay2010wilcoxon}, whose resulting \emph{p-value} is less than $\alpha = 0.001$.
These results imply that malware apps are more likely to contain sensor leaks than benign apps.
{\bf Note that:} there is a lack of ground-truth datasets for sensor data leaks in Android apps.
To address this limitation, we consider that a sensor leak exists in an Android app if there is a data flow from a sensor-related source (i.e., a class field or method) to a sink.
With this criterion, we manually checked the 229 sensor leaks detected by \tool{} in 20 randomly selected apps (10 malware and 10 benign).
Only 4 of the 229 identified sensor leaks are false positives, which are caused by inaccurate data-flow analysis results of FlowDroid (we detail this limitation in Section~\ref{subsec:limitations}).
These results show that \tool{} is capable of identifying sensor leaks in Android apps.
At the same time, they raise an alarm for security analysts: sensor leaks in Android apps are not protected by the Android permission mechanism and deserve close attention.
\begin{table}[!t]
\centering
\caption{Experimental results of the detected sensor leaks.}
\label{tab:sensor_leak_results}
{
\begin{tabular}{c c c |c c}
\toprule
\textbf{Dataset} &
\textbf{\# apps} &
\makecell[c]{\textbf{\# selected}\\\textbf{apps}} &
\makecell[c]{\textbf{\# apps identified}\\\textbf{with sensor leaks}} &
\makecell[c]{\textbf{\# identified}\\\textbf{sensor leaks}} \\
\hline
Malware & 20,000 & 6,724 & 967 & 6,103 \\
\hline
Benign & 20,000 & 12,939 & 629 & 3,802 \\
\hline
Total & 40,000 & 19,663 & 1,596 & 9,905\\
\bottomrule
\end{tabular} }
\end{table}
\begin{figure}[!t]
\centering
\vspace{-1mm}
\includegraphics[width=0.85\linewidth]{figures/distribution_sensor_leak.pdf}
\vspace{-2mm}
\caption{Distribution of sensor leaks in each app.}
\label{fig:Android_method_field_Sensor}
\vspace{-2mm}
\end{figure}
\begin{tcolorbox}[title=\textbf{RQ1 \ding{43} Feasibility and Effectiveness}, left=2pt, right=2pt,top=2pt,bottom=2pt]
\tool{} is capable of automatically detecting sensor leaks in Android apps.
Malware apps are more likely to contain sensor leaks than benign apps, and such leaks might be overlooked by security analysts.
\end{tcolorbox}
\subsection{RQ2 -- Characterization of Sensor Leaks}
\paragraph*{Sources Triggering Sensor Leaks}
Sensor data leaks in Android apps are triggered by two kinds of sources: fields and methods (cf. Section~\ref{subsec:source}).
As presented in Table~\ref{tab:sensor_leak_results2}, $\sim$80\% (= 7,941/9,905) of the identified sensor leaks are triggered by method sources. For the benign Android apps, $\sim$85.8\% of the sensor data leaks are sourced from methods, while in the malware Android apps, $\sim$76.6\% of the leaks originate from methods.
Table~\ref{tab:sensor_leak_types_method} lists the top-10 most frequent sources triggering sensor leaks identified by \tool{}, which are 8 {\tt getter} methods and 2 public fields from {\tt Sensor}-related classes.
We observe that the most frequent leaking source is the method {\tt android.hardware.SensorManager\# getDefaultSensor(int)} that is used to get the specific sensor of a given type, which is followed by the field \texttt{values} of class \texttt{SensorEvent}.
The leaking source {\tt android.hardware.SensorManager\#getDefaultSensor(int)} occupies $\sim$89.1\% (7,074 out of 7,941) of the method-triggered sensor leaks,
and the field {\tt SensorEvent\#values} occupies $\sim$95.6\% (1,877 out of 1,964) of the field-triggered sensor leaks.
The sensor leaks triggered by the two sources occupy $\sim$90\% of all identified sensor leaks.
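To make the two source kinds concrete, the following minimal sketch contrasts them. The {\tt Sensor} and {\tt SensorEvent} classes here are simplified stand-ins for the {\tt android.hardware} API (an assumption made only to keep the sketch self-contained), not the real framework classes:

```java
// Illustrative sketch only: Sensor and SensorEvent below are simplified
// stand-ins for the android.hardware classes, so this example is
// self-contained and not tied to the Android SDK.
class Sensor {
    private final String name;
    Sensor(String name) { this.name = name; }
    String getName() { return name; }   // a method source, cf. Sensor#getName()
}

class SensorEvent {
    final float[] values;               // a field source, cf. SensorEvent#values
    SensorEvent(float[] values) { this.values = values; }
}

class SourceKinds {
    // Method-triggered leak source: tainted data enters via a getter's return value.
    static String methodTriggered(Sensor s) {
        return s.getName();
    }

    // Field-triggered leak source: tainted data enters via a plain field read,
    // which method-only source lists overlook.
    static float fieldTriggered(SensorEvent e) {
        return e.values[0];
    }
}
```

Standard method-source lists cover the first pattern; the second pattern motivates the field-triggered analysis of Section~\ref{subsec:source}.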
\begin{table}[!ht]
\centering
\caption{Number of identified method/field-triggered sensor leaks.}
\label{tab:sensor_leak_results2}
{
\begin{tabular}{l c c}
\toprule
&
\textbf{\# identified method leaks} &
\textbf{\# identified field leaks} \\
\hline
Malware & 4,677 & 1,426\\
\hline
Benign & 3,264 & 538\\
\hline
Total & 7,941 & 1,964\\
\bottomrule
\end{tabular} }
\end{table}
\begin{table}[!ht]
\centering
\vspace{-3mm}
\caption{Top-10 frequent leaking sources.}
\label{tab:sensor_leak_types_method}
\resizebox{\linewidth}{!}{
\begin{tabular}{l c c c}
\hline
\multirow{1}{*}{\textbf{Sensor Sources}} &
\multirow{1}{*}{\textbf{Malware}} &
\multirow{1}{*}{\textbf{Benign}} &
\multirow{1}{*}{\textbf{Total}} \\
\hline
SensorManager\#getDefaultSensor(int) & 4,326 & 2,748 & 7,074\\
\hline
SensorEvent\#values & 1,358 & 519 & 1,877\\
\hline
SensorManager\#getSensorList(int) & 114 & 121 & 235 \\
\hline
Sensor\#getType() & 123 & 6 & 129 \\
\hline
Sensor\#getName() & 29 & 82 & 111 \\
\hline
Sensor\#getMaximumRange() & 19 & 73 & 92 \\
\hline
SensorEvent\#timestamp & 68 & 19 & 87 \\
\hline
Sensor\#getVendor() & 12 & 74 & 86 \\
\hline
Sensor\#getVersion() & 11 & 69 & 80 \\
\hline
Sensor\#getResolution() & 13 & 64 & 77 \\
\hline
\end{tabular} }
\end{table}
\paragraph*{Sensor Types of Field-triggered Sensor Leaks}
The sensor type is essential for understanding sensor data leaks, i.e., knowing from which sensor the data was originally collected; by default, this information is not given in field-triggered sensor leaks (e.g., those sourced from the field {\tt values} of class {\tt SensorEvent}).
The last module of \tool{} is hence dedicated to inferring the sensor types of such leaks.
Overall, in the 1,964 identified field-triggered sensor leaks, \tool{} successfully infers the corresponding sensor types for 1,923 (97.9\%) of them.
After manually checking the unsuccessful cases, we find that the 41 failures are mainly caused by mistaken sensor usage that leads to unexpected functional behavior, such as missing sensor registration information.
This high success rate demonstrates the effectiveness of \tool{} in pinpointing the sensor types associated with sensor data leaks.
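The inference idea can be sketched as follows. This is a simplified stand-alone model, not SEEKER's actual implementation: the type constant passed to {\tt getDefaultSensor} at registration time is propagated to the listener class whose callback reads the field source (class names and constants below are stand-ins):

```java
// Simplified stand-alone model of the sensor type inference step (not the
// actual SEEKER implementation): a SensorEvent delivered to a listener does
// not carry its sensor type, so the type constant passed at registration
// time is propagated to the listener class instead.
import java.util.HashMap;
import java.util.Map;

class TypeInference {
    // Stand-ins for the android.hardware.Sensor.TYPE_* constants.
    static final int TYPE_ACCELEROMETER = 1;
    static final int TYPE_PROXIMITY = 8;

    private final Map<String, Integer> registeredType = new HashMap<>();

    // Models SensorManager#registerListener(listener, getDefaultSensor(type), ...).
    void registerListener(String listenerClass, int sensorType) {
        registeredType.put(listenerClass, sensorType);
    }

    // A field-triggered leak inside onSensorChanged of `listenerClass` must
    // originate from the sensor registered for that listener; returns null
    // when the registration information is missing.
    Integer inferType(String listenerClass) {
        return registeredType.get(listenerClass);
    }
}
```

When no registration site is found, the inference fails, mirroring the 41 unsuccessful cases discussed above.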
\begin{table}[!t]
\centering
\caption{Top-10 frequent sensor types of field-triggered sensor leaks.}
\label{tab:sensor_leak_types_field}
\resizebox{\linewidth}{!}{
\begin{tabular}{l c c c}
\hline
\multirow{1}{*}{\textbf{Sensor Type}} &
\multirow{1}{*}{\textbf{Malware}} &
\multirow{1}{*}{\textbf{Benign}} &
\multirow{1}{*}{\textbf{Total}} \\
\hline
ACCELEROMETER & 1,068 & 304 & 1,372 \\
\hline
MAGNETIC\_FIELD & 131 & 50 & 181\\
\hline
ORIENTATION & 92 & 84 & 176\\
\hline
PROXIMITY & 12 & 32 & 44\\
\hline
LINEAR\_ACCELERATION & 40 & 4 & 44 \\
\hline
STEP\_COUNTER & 14 & 9 & 23\\
\hline
TEMPERATURE & 12 & 8 & 20 \\
\hline
GYROSCOPE & 7 & 8 & 15 \\
\hline
PRESSURE & 1 & 13 & 14 \\
\hline
LIGHT & 6 & 5 & 11 \\
\hline
\end{tabular} }
\end{table}
We further investigate the true-positive rate of the successfully inferred sensor types. Due to the lack of a ground-truth dataset of sensor types for sensor leaks, we resort to a manual inspection of the source code of 20 randomly selected apps (10 malware apps and 10 benign apps), each of which is identified with at least one field-triggered leak (86 field-triggered sensor leaks in total).
All leaks are confirmed with true-positive inferred sensor types, which implies that \tool{} is effective in inferring the sensor types of field-triggered leaks.
Table~\ref{tab:sensor_leak_types_field} presents the top 10 leaking sensor types of the identified field-triggered sensor leaks.
The type ``Accelerometer'' is the sensor type of 74.9\% and 56.5\% of identified field-triggered sensor leaks in the \textit{malware} apps and the \textit{benign} apps, respectively.
Android apps widely use the Accelerometer to monitor device motion states by measuring the acceleration applied to a device on three physical axes (i.e., x, y, and z axes). The motion data captured by the Accelerometer can be further processed or analyzed. For example, \emph{Smart-Its Friends} \cite{holmquist2001smart} pairs two devices by acquiring Accelerometer data in a shared wireless medium. Pirttikangas et al. \cite{pirttikangas2006feature} reported that the Accelerometer in smartphones can be used to track the accurate activity of users, such as brushing teeth and sitting while reading newspapers. Such information can also be utilized to steal the PIN of a device through side-channel attacks (such as \cite{lu2018snoopy} and \cite{giallanza2019keyboard}).
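As a concrete, hypothetical illustration of why leaked raw accelerometer values are sensitive, even the three-axis readings alone allow a crude motion-state guess (this sketch is ours, not taken from any of the cited attacks):

```java
// Hypothetical attacker-side sketch: leaked raw accelerometer values
// (x, y, z in m/s^2) allow a crude motion-state guess, because a resting
// device measures a magnitude close to gravity (about 9.81 m/s^2).
class MotionGuess {
    static final double GRAVITY = 9.81;

    static double magnitude(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }

    // A naive threshold on the deviation from gravity; real attacks extract
    // far richer features, but even this leaks coarse user behaviour.
    static boolean looksStationary(double x, double y, double z) {
        return Math.abs(magnitude(x, y, z) - GRAVITY) < 0.5;
    }
}
```

Side-channel attacks such as \cite{lu2018snoopy} build on exactly this kind of signal, with far more elaborate feature extraction.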
Apart from the Accelerometer, the other frequent sensor types of field-triggered sensor leaks include MAGNETIC\_FIELD, ORIENTATION, PROXIMITY, LINEAR\_ACCELERATION, STEP\_COUNTER, TEMPERATURE, GYROSCOPE, PRESSURE and LIGHT. These sensors are also likely to be used to harm the user's privacy.
Biedermann et al. \cite{biedermann2015hard} stated that the magnetic field sensor can be exploited to detect what type of operating system is booting up and what application is being started. The orientation sensor can wiretap the device's orientation without requesting any permission, which can be used by attackers to infer the user's PIN. The proximity sensor data can be a trigger to automatically start a phone call recording when users hold the smartphone against their face to make a call. The individual step details can be stored by collecting data from the step counter sensor when the app runs in the background. Temperature, pressure and light sensors are also widely used in IoT devices to monitor environmental conditions, while the gyroscope sensor is utilized to verify the user's identity \cite{sikder2021survey}.
\paragraph*{Case Study}
Here we show two real-world apps that leak sensor data, which could be leveraged by attackers to achieve malicious goals.
\begin{lstlisting}[style=JAVA, escapechar=\%,
caption={Example of a sensor leak in {\em com.n3vgames.android.driver}.},
label=code:Case_Study_time_stamp,
firstnumber=1]
final class a.b.b implements SensorEventListener{
public void onSensorChanged(SensorEvent var1){
float var5 = var1.values[0];
float var6 = var1.values[1];
float var7 = var1.values[2];
Log.v("WindowOrientationListenerN3V", "Raw acceleration vector: x=" + var5 + ", y=" + var6 + ", z=" + var7);
}}
\end{lstlisting}
Listing \ref{code:Case_Study_time_stamp} showcases a typical sensor leak case in real-world apps. The code snippet is excerpted from a malicious app {\em com.n3vgames.android.driver}.
This app collects raw accelerometer data from the class field \texttt{SensorEvent\#values[]} (lines 3-5), and then leaks it by invoking the \emph{android.util.Log} API (line 6). The app is flagged as a Trojan that downloads additional executable content from a remote server. While leaking such information may not directly link to its malicious behaviour, it expands the attack surface for attackers. For example, the sensor information can be used to predict the device's motion state, which may lead to a stealthier attack (e.g., downloading malicious content when the device is not in use). It is worth noting that Zhang et al. \cite{zhang2019using} have demonstrated the possibility of using the sensor information to launch a stealthy attack for taking control of an Android phone via Google's voice assistant.
Figure \ref{fig:case_study_2} shows another example, derived from a phone book app \emph{com.tencent.pb}. It collects the Proximity sensor data (line 8) and eventually sends it out through the {\tt sendMessage} method (line 41) in an asynchronous thread. The Proximity sensor data is passed as a parameter of method {\tt dlg.a(dlg, float)} (line 8) and stored in the class field {\tt i} of object \texttt{dlg}. The data then flows through the method \emph{Log.d(String, Object...)} (line 9), which obtains the field variable \texttt{i} of object \texttt{dlg} as its second parameter via \texttt{dlg.a(dlg)} (line 23). Finally, the tainted parameter is passed on to the method \texttt{Log.saveLogToSdCard(String, String, int)}, which creates a new thread (lines 29-33) and sends the sensor data out (lines 35-42).
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{figures/code.pdf}
\caption{Code snippet of sensor value leak excerpted from \textit{com.tencent.pb}.}
\label{fig:case_study_2}
\end{figure}
\begin{tcolorbox}[title=\textbf{RQ2 \ding{43} Characterizing Sensor Leaks}, left=2pt, right=2pt,top=2pt,bottom=2pt]
\tool{} is capable of inferring sensor types and pinpointing the corresponding source sensors for data leaks. Our results show that the Accelerometer leaks the most sensor data, both in malware samples and benign apps. The most frequent leaking sources are the method \textit{SensorManager\#getDefaultSensor(int)} and the field \textit{SensorEvent\#values}, the latter of which has been frequently leveraged in malicious behaviours such as inferring the user's PIN.
\end{tcolorbox}
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\linewidth]{figures/performance.pdf}
\vspace{-2mm}
\caption{Distribution of time spent to analyze an app by FlowDroid and \tool{}, respectively.}
\label{fig:time_performance}
\vspace{-3mm}
\end{figure}
\subsection{RQ3 -- Runtime Overhead}
\tool{} extends FlowDroid to detect sensor-related data leaks and to infer the sensor types involved in the leaks. We evaluate the runtime overhead of \tool{} and compare it with the original FlowDroid. Figure \ref{fig:time_performance} shows the time consumed by FlowDroid and \tool{}, respectively.
On average, it takes 177.09 seconds for \tool{} to process an app in our dataset, which is comparable to that of the original FlowDroid (i.e., on average, 132.74 seconds to process an app).
As experimentally demonstrated by Avdiienko et al.~\cite{avdiienko2015mining}, by increasing the capacity of the execution server, the performance of FlowDroid could be further improved.
This improvement should also be applicable to \tool{}, making it also possible to analyze real-world apps in practice.
The relatively small time difference between \tool{} and FlowDroid suggests that \tool{} can likewise be applied to analyze (in parallel) large-scale sets of Android apps, as has been experimentally demonstrated for FlowDroid.
\begin{tcolorbox}[title=\textbf{RQ3 \ding{43} Efficiency}, left=2pt, right=2pt,top=2pt,bottom=2pt]
The time consumption of \tool{} is acceptable for sensor leak detection: on average 177.09 seconds per app, with no significant increase compared with FlowDroid, making \tool{} suitable for real-world app analysis.
\end{tcolorbox}
\section{Discussion}
We now discuss the potential implications and limitations of this work.
\subsection{Implications}
\textbf{Beyond smartphone apps.}
The motivating example presented in Section~\ref{subsec:motivation} is extracted from an attack targeting smartwatches, which also provide sensors that client apps use to implement advanced features.
These sensors could be abused by smartwatch app developers, especially malicious attackers.
We argue there is also a strong need to characterize sensor leaks in smartwatch apps, not just smartphone apps.
Our preliminary experiment has shown that \tool{} can be directly applied to correctly pinpoint the sensor leaks in the Android-based smartwatch apps that sniff passwords~\cite{chen2021comparative}.
Android has been used on more and more devices, such as TVs, home appliances, fitness machines and cars.
The apps on these devices could likewise be compromised to leak end-users' sensitive data and hence should also be carefully analyzed before being released to the public.
\tool{} could also be useful to characterize data leaks for such Android devices and we will examine some of these in our future work.
\textbf{Beyond sensor leaks.}
As argued by Zhang et al.~\cite{zhangcondysta}, tainted values of string type could be organized as fields in objects, which cannot be detected by state-of-the-art static taint analysis tools such as FlowDroid. This is because FlowDroid only supports methods as sources. Thus, sensitive field sources are overlooked by FlowDroid, giving rise to many false negatives.
Our \tool{} extends FlowDroid to mitigate this research gap by introducing field-triggered static taint analysis.
It is worth highlighting that \tool{} is capable of not only detecting sensor leaks but also pinpointing general privacy leaks, either triggered by source methods or fields.
To help users experience this feature, we have committed a pull request to the original FlowDroid on GitHub so that users can easily access Field-triggered Static Taint Analysis by simply configuring their interested field sources in \emph{SourcesAndSinks.txt} file.
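For illustration, such a configuration could look roughly as follows. The first two lines follow FlowDroid's usual method-source format; the last line declares a field source as a Soot field signature (the exact syntax accepted by the extension should be checked against the pull request):

```
<android.hardware.SensorManager: android.hardware.Sensor getDefaultSensor(int)> -> _SOURCE_
<android.hardware.Sensor: java.lang.String getName()> -> _SOURCE_
<android.hardware.SensorEvent: float[] values> -> _SOURCE_
```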
\textbf{Automated approaches for discovering sensitive source fields.}
In this work, the sensor-related sensitive source fields are identified through manual effort, which is well known to be time-intensive and error-prone.
Hence, our current \tool{} approach is not directly applicable for detecting general field-triggered privacy leaks.
To achieve this, we need to go through all the fields defined in the Android framework to identify sensitive ones. This is non-trivial as Android is now one of the largest community software projects and contains nearly 10K classes.
There is a need to invent new automated approaches to discover sensitive source fields.
One possible solution would be to extend the machine learning approach applied in the SUSI tool to support the prediction of sensitive source fields.
\subsection{Limitations of \tool{}}
\label{subsec:limitations}
\textbf{Limitation of static analysis.}
One major limitation of our tool lies in the intrinsic weakness of static code analysis when encountering code obfuscation, reflection, native code, etc., which leads to unsoundness of our approach. However, these challenges are well-known and non-trivial issues in our research community. In future work, we want to integrate other useful tools developed by our fellow researchers to overcome these shortcomings. For example, we plan to leverage DroidRA \cite{sun2021taming, li2016droidra} to reduce the impact of reflective calls on our static analysis approach.
As explained in Section \ref{subsec:type}, our sensor type inference approach cannot trace the sensor type in method-triggered leaks when multiple sensors are available on a device. This is because the actual calling object of a method can only be obtained at run-time. We plan to overcome this limitation in future work by incorporating dynamic analysis approaches to obtain the required run-time values.
\textbf{Limitations inherited from FlowDroid.}
Since our \tool{} approach directly extends FlowDroid to detect sensor-triggered privacy leaks, it also has all of the limitations of FlowDroid.
For example, FlowDroid may yield unsound results because it may have overlooked certain callback methods involved in the Android lifecycle or incorrectly modelled native methods accessed by the app.
FlowDroid is also oblivious to multi-threading and it assumes threads to execute in an arbitrary but sequential order, which may also lead to false results.
\textbf{Limitations inherited from SUSI.}
The sensor-related sensitive source methods are collected based on the results of the state-of-the-art tool SUSI, which is also the tool leveraged by FlowDroid to identify source and sink methods.
However, the results of SUSI may not be completely correct: some of its identified sources may not be truly sensitive.
Nevertheless, this threat has no impact on our approach itself, but only on our experimental results.
This limitation could be mitigated if a better set of source and sink methods are configured.
\textbf{Threats to Validity.}
Apart from these technical limitations, our work also involves some manual efforts. For example, the sensor-related sensitive source fields are summarized manually by reading the Android developers' documentation.
Such manual processes may also introduce errors of their own.
To mitigate this threat, the authors of this paper have cross-validated the results, and we release our tool\footnote{https://github.com/MobileSE/SEEKER} and dataset\footnote{https://zenodo.org/record/4764311\#.YJ91jJMzadZ} for public access.
\section{RELATED WORK}
\textbf{Android sensor usage.} Android sensor usage has long been analyzed in software security mechanisms. Related works \cite{zhu2013sensec, ba2020learning,xu2012taplogger,miluzzo2012tapprints,liu2015exploring,aviv2012practicality,cai2011touchlogger,owusu2012accessory,lee2015multi} have indicated that embedded sensors can be intentionally misused by malicious apps to compromise privacy. Ba et al. \cite{ba2020learning} proposed a side-channel attack that adopts accelerometer data to eavesdrop on the speaker in smartphones. Xu et al. \cite{xu2012taplogger} have shown that it is feasible to infer a user's tap inputs using the device's integrated motion sensors. Cai et al.~\cite{cai2011touchlogger} revealed that confidential data can be leaked when motion sensors, such as accelerometers and gyroscopes, are used to infer keystrokes.
Also, Lin et al.~\cite{lin2012new} demonstrated that the orientation sensor of a smartphone can be utilized to detect a user's unique gesture of holding and operating the phone.
Android sensor misuse is one of the major causes of privacy leaks and security issues on the Android platform. Zhu et al. \cite{zhu2013sensec} collected sensor data from accelerometers, gyroscopes and magnetometers and constructed users' gestures based on these data. Their work indicates that it is feasible to access sensory data for personalized usage. Liu et al. \cite{liu2015exploring} identified the most frequently used sensors in Android devices and revealed their usage patterns through backward tracking analysis. They further investigated sensor data propagation paths to accurately characterize sensor usage \cite{liu2018discovering}. Their findings suggest that the accelerometer is the most frequently used sensor and that sensor data is mostly used in local code.
\textbf{Software side-channels attacks.}
Many previous studies \cite{chang2009inferring, lester2004you, liu2009uwave, ravi2005activity, allen2006classification, schlegel2011soundcomber} explored password inference through specific sensors on smartphones. Owusu et al. \cite{owusu2012accessory} showed that accelerometer values can be used as a powerful side channel to figure out the password on a touchscreen keyboard. Cai et al.~\cite{cai2011touchlogger} provided insights into how motion sensors, such as accelerometers and gyroscopes, can be used to infer keystrokes. Cai et al. \cite{cai2009defending} found that mobile phone sensors are inadequately protected by the permission system, which raises serious privacy concerns. Enck et al. \cite{enck2014taintdroid} developed TaintDroid, which takes sensor information (i.e., location and accelerometer) as sources to detect privacy leaks. Mehrnezhad et al. \cite{mehrnezhad2018stealing} showed that the orientation sensor can be stealthily monitored without requesting any permission, helping attackers infer the user's PIN. However, these works either emphasize the challenges facing the detection of sensor-sniffing apps or only present specific attacks using sensor data. None of them systematically characterizes data leaks in all kinds of sensors.
\textbf{Static analysis on Android apps.}
Android users have long suffered from privacy leaks \cite{li2017static, kong2018automated, samhi2021raicc, octeau2016combining}. Several solutions have been proposed for detecting such data leaks through static taint analysis~\cite{gao2020borrowing, li2015apkcombiner, yang2017characterizing}. For example, Arzt et al. \cite{arzt2014flowdroid} developed FlowDroid, a context-, flow-, field- and object-sensitive static analysis tool for detecting potential data leaks in Android apps. Based on Soot \cite{vallee2010soot}, FlowDroid relies on pre-defined knowledge to pinpoint taint flows between source and sink APIs. Zhang et al. \cite{zhangcondysta} developed ConDySTA, a dynamic taint analysis approach that supplements static taint analysis by introducing inaccessible code and sources, which helps reduce false negatives. Further, Li et al. \cite{li2015iccta} presented IccTA, which can precisely perform data-flow analysis across multiple components of Android apps. Klieber et al. \cite{klieber2014android} augmented the FlowDroid and Epicc \cite{octeau2013effective} analyses by tracking both inter-component and intra-component data flows in Android apps.
However, none of these tools considers leaks that originate from sensors. Apart from that, our tool only takes sensor-related code into account, which costs less time by pruning the control-flow graph.
The most similar work to ours is SDFDroid \cite{liu2018discovering}, which reveals sensor usage patterns through data-flow analysis. As a static approach, however, it pursues a different research objective than \tool{}: SDFDroid reveals sensor usage patterns, while our work explores how and where sensor data is leaked. SDFDroid statically extracts sensor data propagation paths to construct sensor usage patterns through clustering analysis. In contrast, \tool{} reports detailed privacy leaks caused by the misuse of sensor data, which have not been found by SDFDroid.
\section{CONCLUSION}
We have presented a novel tool, \tool{}, for characterizing sensor leaks in Android apps. Our experimental results on a large set of real-world Android apps indicate that our tool is effective in identifying all types of potential sensor leaks in Android apps. Our tool is not only capable of detecting sensor leaks, but also of pinpointing general privacy leaks that are triggered by class fields.
Although there are related works on sensor usage analysis, to the best of our knowledge, there is no other work that thoroughly analyses Android sensor leakage. Unlike previous works, our tool is the first to characterize all kinds of sensor leaks in Android apps. We extend FlowDroid to support field sources (merged into FlowDroid via pull request \#385 on GitHub~\cite{FlowDroidMerge}), which we believe could be adapted to analyze other sensitive field-triggered leaks.
To benefit fellow researchers and practitioners towards achieving this, we have made our approach open source at the following GitHub site.
\begin{center}
\url{https://github.com/MobileSE/SEEKER}
\end{center}
\section*{Acknowledgements}
{
This work was supported by the Australian Research Council (ARC) under a Laureate Fellowship project FL190100035, a Discovery Early Career Researcher Award (DECRA) project DE200100016, and a Discovery project DP200100020.
This research was also supported by the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing (No. 2020A06) and the Open Project Program of the Key Laboratory of Safety-Critical Software (NUAA), Ministry of Industry and Information Technology (No. XCA20026).
}
\balance
\bibliographystyle{IEEEtran}
\section{Derivation of the update matrices}\label{section_derivation}
Analogously to the DQNN update rule presented in \cite{Beer2020}, the unitaries will be updated through
\begin{equation*}
U_j^l(t+\epsilon)=e^{i\epsilon K_{j}^l(t)} U_j^l(t).
\end{equation*}
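For a Hermitian matrix $K_j^l(t)$, this update remains unitary; expanding to first order in $\epsilon$ yields the commutator form that appears throughout the following derivations (for any state $\rho$ and unitary $U$):
\begin{align*}
e^{i\epsilon K} U \rho\, U^\dagger e^{-i\epsilon K} = U \rho\, U^\dagger + i\epsilon \left[K, U \rho\, U^\dagger\right] + \mathcal{O}(\epsilon^2).
\end{align*}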
We will derive the update matrices in general in \cref{prop:QGAN_K}. To understand the basic idea, we first discuss the update of a DQGAN consisting of three unitaries, see \cref{fig:QGAN_qgan}. These perceptron unitaries have the following update rules:
\begin{align*}
U_D(t+\epsilon)&=e^{i\epsilon K_{D}(t)} U_D(t)\\
U_{G1}(t+\epsilon)&=e^{i\epsilon K_{G1}(t)} U_{G1}(t)\\
U_{G2}(t+\epsilon)&=e^{i\epsilon K_{G2}(t)} U_{G2}(t).
\end{align*}
Note that the unitaries act on the current layers, e.g.\ $U_{G1}$ denotes $U_{G1}\otimes \mathbbm{1}$ and $U_{G2}$ denotes $\mathbbm{1} \otimes U_{G2}$.
In the first part of the algorithm the generator is fixed and only the discriminator is updated. When the training data is the discriminator's input we get the output state
\begin{align*}
\rho_{\mathrm{out}}^{D}(t+\epsilon)
=&\tr_{\{2,3\}}\Big(e^{i\epsilon K_{D}}U_D \ \ket{\phi^T}\bra{\phi^T} \otimes \ket{0}\bra{0} \ U_D^\dagger e^{-i\epsilon K_{D}}\Big)\\
=&\tr_{\{2,3\}}\Big(U_D\ \ket{\phi^T}\bra{\phi^T} \otimes \ket{0}\bra{0}\ U_D^\dagger +i\epsilon\ \left[K_{D}, U_D\ \ket{\phi^T}\bra{\phi^T} \otimes \ket{0}\bra{0}U_D^\dagger\right] \\
&+\mathcal{O}(\epsilon^2)\Big)\\
=&\rho_{\mathrm{out}}^{D}(t)+i\epsilon\ \tr_{\{2,3\}}\Big(\left[K_{D}, U_D\ \ket{\phi^T}\bra{\phi^T} \otimes \ket{0}\bra{0} \ U_D^\dagger\right] \Big)+\mathcal{O}(\epsilon^2).
\end{align*}
If the discriminator gets the generator's output as input we get the output state
\begin{align*}
\rho_{\mathrm{out}}^{G+D}(t+\epsilon)
=&\tr_{\{1,2,3\}}\Big(e^{i\epsilon K_{D}}U_D U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger U_D^\dagger e^{-i\epsilon K_{D}}\Big)\\
=&\tr_{\{1,2,3\}}\Big(U_D U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger U_D^\dagger \\
& +i\epsilon\ \left[K_{D}, U_D U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger U_D^\dagger\right] +\mathcal{O}(\epsilon^2)\Big)\\
=&\rho_{\mathrm{out}}^{G+D}(t)\\
&+i\epsilon\ \tr_{\{1,2,3\}}\Big(\left[K_{D}, U_D U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger U_D^\dagger\right] \Big)\\
& +\mathcal{O}(\epsilon^2).
\end{align*}
The update of the generator, assuming the discriminator is fixed, can be written as
\begin{align*}
\rho_{\mathrm{out 2}}^{G+D}(t+\epsilon)
=&\tr_{\{1,2,3\}}\Big(U_D e^{i\epsilon K_{G2}}U_{G2} e^{i\epsilon K_{G1}}U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000})\\
& U_{G1}^\dagger e^{-i\epsilon K_{G1}} U_{G2}^\dagger e^{-i\epsilon K_{G2}} U_D^\dagger \Big)\\
=&\tr_{\{1,2,3\}}\Big(U_D \Big( U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger \\
&+ i\epsilon \ U_{G2} \left[K_{G1}, U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger \right] U_{G2}^\dagger \\
&+ i\epsilon \left[K_{G2}, U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger \right]
\Big) U_D^\dagger +\mathcal{O}(\epsilon^2) \Big) \\
=&\rho_{\mathrm{out 2}}^{G+D}(t)\\
&+i\epsilon\ \tr_{\{1,2,3\}}\Big(U_D \Big(
U_{G2} \left[K_{G1}, U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger \right] U_{G2}^\dagger \\
&+ \left[K_{G2}, U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger \right]
\Big) U_D^\dagger \Big)+\mathcal{O}(\epsilon^2).
\end{align*}
To derive the update matrices in general, we assume in the following a generator consisting of unitaries $U_1^1 \dots U_{m_g}^g$ and a discriminator built of unitaries $U_1^{g+1}\dots U_{m_{L+1}}^{L+1}$. The update matrices $K_j^l$ update the generator if $l\le g$, where $g$ is the number of perceptron layers of the generator. Otherwise, the matrices $K_j^l$ describe discriminator updates.
\begin{prop}
\label{prop:QGAN_K}
The update matrix for a QGAN trained with pure states $\ket{\phi^\text{T}_x}$ has to be of the form
\begin{equation*}
K^l_j(t) = \frac{\eta 2^{m_{l-1}}i}{S}\sum_x\tr_\text{rest}\big(M^l_{j}(x,t)\big),
\end{equation*}
where
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{j}^{l \dagger}, \\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{j+1}^{l}\Big]
\end{align*}
for $l\le g$ and
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{g+1} \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} \\
&- U_{j}^{l} \dots U_{1}^{g+1} U_{m_g}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g\dagger} U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} ,\\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{j+1}^{l}\Big]
\end{align*}
Here, $U_j^l$ is assigned to the $j$th perceptron acting on layers $l-1$ and $l$, $g$ is the number of perceptron layers of the generator, and $\eta$ is the learning rate.
\end{prop}
\begin{proof}
First, we compute the output state of the discriminator after an update with $K_D$. Note that in the following the unitaries act on the current layers, e.g.\ $U_1^l$ actually denotes $U_1^l\otimes \mathbbm{1}^l_{2,3,\dots,m_l}$. We fix the generator. To derive the update for the discriminator, we need the state when it is fed with the training data, i.e.\
\begin{align*}
\rho_{\mathrm{out}}^{D}(t+\epsilon)
=&\tr_\mathrm{in(D)+hid}\Big(e^{i\epsilon K_{m_{L+1}}^{L+1}}U_{m_{L+1}}^{L+1} \dots e^{i\epsilon K_{1}^{g+1}}U_{1}^{g+1} \ \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) \\
& U_{1}^{g+1 \dagger}e^{-i\epsilon K_{1}^{g+1}} \dots U_{m_{L+1}}^{L+1 \dagger}e^{-i\epsilon K_{m_{L+1}}^{L+1}}\Big)\\
=&\rho_{\mathrm{out}}^{D}(t)+i\epsilon\ \tr_\mathrm{in(D)+hid}\Big(
\big[K_{m_{L+1}}^{L+1},U_{m_{L+1}}^{L+1} \dots U_{1}^{g+1} \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right)\\
& U_{1}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \big]+\dots + U_{m_{L+1}}^{L+1} \dots U_{2}^{g+1} \big[K_{1}^{g+1}, U_{1}^{g+1} \ \ket{\phi^T}\bra{\phi^T} \\
&\otimes \ket{0...0}\bra{0...0} \ U_{1}^{g+1 \dagger} \big] U_{2}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger}\Big)+\mathcal{O}(\epsilon^2),
\end{align*}
and for the case it gets an input state from the generator, that is
\begin{align*}
\rho_{\mathrm{out}}^{G+D}(t+\epsilon)
=&\tr_\mathrm{in(G)+hid}\Big(e^{i\epsilon K_{m_{L+1}}^{L+1}}U_{m_{L+1}}^{L+1} \dots e^{i\epsilon K_{1}^{g+1}}U_{1}^{g+1 } U_{m_g}^{g}\dots U_1^1 ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \\
&\otimes \ket{0...0}\bra{0...0}) U_1^{1\dagger} \dots U_{m_g}^{g\dagger} U_{1}^{g+1\dagger}e^{-i\epsilon K_{1}^{g+1}} \dots U_{m_{L+1}}^{L+1\dagger}e^{-i\epsilon K_{m_{L+1}}^{L+1}}\Big)\\
=&\rho_{\mathrm{out}}^{D}(t)+i\epsilon\ \tr_\mathrm{in(G)+hid}\Big(
\big[K_{m_{L+1}}^{L+1},U_{m_{L+1}}^{L+1} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}}\\
& \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \big]+\dots \\
&+ U_{m_{L+1}}^{L+1} \dots U_{2}^{g+1} \big[K_{1}^{g+1}, U_{1}^{g+1} \ U_{m_g}^{g}\dots U_1^1 ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) \\
& U_1^{1\dagger} \dots U_{m_g}^{g\dagger} \ U_{1}^{g+1 \dagger} \big] U_{2}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \Big) \\
& +\mathcal{O}(\epsilon^2).
\end{align*}
The derivative of the discriminator loss function has the following form:
\begin{align*}
\frac{d\mathcal{L}_D}{dt}=&\lim_{\epsilon\rightarrow 0}\frac{\mathcal{L}_D(t)+i\epsilon\frac{1}{S} \sum_{x=1}^S\bra{1}\tr_\mathrm{in+hid}(\dots)\ket{1}-\mathcal{L}_D(t)}{\epsilon}\\
=&\frac{i}{S}\ \sum_{x=1}^S\tr_\mathrm{in+hid}\Big(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\Big(\Big(
\big[K_{m_{L+1}}^{L+1},U_{m_{L+1}}^{L+1} \dots U_{1}^{g+1} \ket{\phi^T}\bra{\phi^T} \\
&\otimes \ket{0...0}\bra{0...0} U_{1}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \big] +\dots \\
& + U_{m_{L+1}}^{L+1} \dots U_{2}^{g+1} \left[K_{1}^{g+1}, U_{1}^{g+1} \ \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) \ U_{1}^{g+1 \dagger} \right]\\
& U_{2}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger}\Big) \\
&- \Big(
\big[K_{m_{L+1}}^{L+1},U_{m_{L+1}}^{L+1} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \\
& \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \big]+\dots \\
& + U_{m_{L+1}}^{L+1} \dots U_{2}^{g+1} \big[K_{1}^{g+1}, U_{1}^{g+1} \ U_{m_g}^{g}\dots U_1^1 ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) \\
& U_1^{1\dagger} \dots U_{m_g}^{g\dagger} \ U_{1}^{g+1 \dagger} \big] U_{2}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \Big)\Big)\Big) \\
=&\frac{i}{S}\ \sum_{x=1}^S\tr_\mathrm{in+hid}\Big(\\
&\Big[ U_{m_{L+1}}^{L+1} \dots U_{1}^{g+1} \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) U_{1}^{g+1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger}\\
&-U_{m_{L+1}}^{L+1} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} ,\\
&\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\Big]K_{m_{L+1}}^{L+1}+\dots +\Big[ U_{1}^{g+1} \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) U_{1}^{g+1 \dagger}\\
& - U_{1}^{g+1} U_{m_g}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) \\
& U_{1}^{1 \dagger} \dots U_{m_{g}}^{g\dagger} U_{1}^{g+1\dagger} , U_{m_{L+1}}^{L+1\dagger}\dots U_{2}^{g+1\dagger} \mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}U_{2}^{g+1} \dots U_{m_{L+1}}^{L+1}\Big]K_{1}^{g+1}\Big)\\
=&\frac{i}{S}\ \sum_{x=1}^S\tr_\mathrm{in+hid}\left(M_{m_{L+1}}^{L+1}K_{m_{L+1}}^{L+1}+\dots+M_{1}^{g+1}K_{1}^{g+1}\right).
\end{align*}
Note that at this point $ \ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}$ denotes $\mathbbm{1}_\mathrm{in(G)+hid(G)} \otimes \ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0} \otimes \mathbbm{1}_G \otimes \dots \otimes \mathbbm{1}_G $, to match the dimension of the other summand.
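The first-order expansion used repeatedly in the two derivations above, $e^{i\epsilon K}\sigma e^{-i\epsilon K} = \sigma + i\epsilon[K,\sigma] + \mathcal{O}(\epsilon^2)$ with $\sigma = U\rho U^\dagger$, can be checked numerically. The following is a small illustrative sketch with random matrices, not the actual network unitaries:

```python
# Illustrative sketch (random matrices, not the actual network unitaries) of
# the first-order expansion used in the derivation above:
#   e^{i eps K} sigma e^{-i eps K} = sigma + i*eps*[K, sigma] + O(eps^2),
# where sigma = U rho U^dagger.
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random Hermitian parameter matrix K and a random unitary U (via QR).
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
K = (A + A.conj().T) / 2
U, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))

# Random pure input state rho = |psi><psi|.
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

eps = 1e-4
# exp(i*eps*K) via the eigendecomposition of the Hermitian K.
w, V = np.linalg.eigh(K)
expK = V @ np.diag(np.exp(1j * eps * w)) @ V.conj().T

sigma = U @ rho @ U.conj().T
exact = expK @ sigma @ expK.conj().T
first_order = sigma + 1j * eps * (K @ sigma - sigma @ K)

residual = np.linalg.norm(exact - first_order)
print(residual)  # small: scales as eps^2
```

Halving `eps` reduces the residual by roughly a factor of four, confirming the quadratic error term.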
Up to this point the generator was fixed. Now we study the second part of the algorithm, where the discriminator is fixed instead. Using the state
\begin{align*}
\rho_{\mathrm{out 2}}^{G+D}(s+\epsilon)
=&\tr_\mathrm{in(G)+hid}\Big(U_{m_{L+1}}^{L+1 } \dots U_{1}^{g+1} \ e^{i\epsilon K_{m_{g}}^{g}}U_{m_{g}}^{g} \dots e^{i\epsilon K_{1}^{1}}U_{1}^{1 } \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \\
&\otimes \ket{0...0}\bra{0...0}) U_{1}^{1\dagger} e^{-i\epsilon K_{1}^{1}}\dots U_{m_{g}}^{g\dagger} e^{-i\epsilon K_{m_{g}}^{g}}U_{1}^{g+1\dagger}\dots U_{m_{L+1}}^{L+1 \dagger}\Big)\\
=&\rho_{\mathrm{out 2}}^{G+D}(s)+i\epsilon\ \tr_\mathrm{in(G)+hid}\Big(
U_{m_{L+1}}^{L+1 } \dots U_{1}^{g+1}\big[K_{m_{g}}^{g},U_{m_{g}}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \\
& \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g \dagger} \big] U_{1}^{g+1\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} +\dots \\
&+ U_{m_{L+1}}^{L+1} \dots U_{2}^{1} \left[K_{1}^{1}, U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) \ U_{1}^{1 \dagger} \right] U_{2}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \Big) \\
& +\mathcal{O}(\epsilon^2)
\end{align*}
the derivative of the loss function for training the generator becomes
\begin{align*}
\frac{d\mathcal{L}_G}{dt}=&\lim_{\epsilon\rightarrow 0}\frac{\mathcal{L}_G(t)+i\epsilon\frac{1}{S} \sum_x\bra{1}\tr_\mathrm{in+hid}(\dots)\ket{1}-\mathcal{L}_G(t)}{\epsilon}\\
=&\frac{i}{S}\ \sum_{x=1}^S\tr\Big(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\Big(\Big(
U_{m_{L+1}}^{L+1 } \dots U_{1}^{g+1}\big[K_{m_{g}}^{g},U_{m_{g}}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \\
&\otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g \dagger} \big] U_{1}^{g+1\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} +\dots \\
&+ U_{m_{L+1}}^{L+1} \dots U_{2}^{1} \left[K_{1}^{1}, U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) \ U_{1}^{1 \dagger} \right] U_{2}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \Big) \Big)\Big) \\
%
=&\frac{i}{S}\ \sum_{x=1}^S\tr\Big(\\
&\Big[ U_{m_{g}}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g \dagger}, \\
&U_{1}^{g+1\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right) U_{m_{L+1}}^{L+1 } \dots U_{1}^{g+1}\Big]K_{m_{g}}^{g}+\dots \\
%
&+\Big[ U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} , \\
&U_{2}^{1 \dagger} \dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right) U_{m_{L+1}}^{L+1} \dots U_{2}^{1}\Big]K_{1}^{1}\Big)\\
\equiv&\frac{i}{S}\ \sum_{x=1}^S\tr\left(M_{m_{g}}^{g}K_{m_{g}}^{g}+\dots+M_{1}^{1}K_{1}^{1}\right).
\end{align*}
In both updates, we parametrise the parameter matrices analogously as
\begin{equation*}
K_j^l(t)=\sum_{\alpha_1,\alpha_2,\dots,\alpha_{m_{l-1}},\beta}K^l_{j,\alpha_1,\dots,\alpha_{m_{l-1}},\beta}(t)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^{\alpha_{m_{l-1}}}\otimes\sigma^\beta\right),
\end{equation*}
where the $\alpha_i$ denote the qubits in the previous layer and $\beta$ denotes the current qubit in layer $l$. To achieve the maximum of the loss function as a function of the parameters \emph{fastest}, we maximise $\frac{d\mathcal{L}}{dt}$. Since this is a linear function, the extrema are at $\pm\infty$. To ensure that we get a finite solution, we introduce a Lagrange multiplier $\lambda\in\mathbbm{R}$. Hence, to find $K_j^l$ we have to solve the following maximisation problem (here for the discriminator update, the update for the generator is analogous):
\begin{align*}
\max_{K^l_{j,\alpha_1,\dots,\beta}}&\left(\frac{d\mathcal{L}(t)}{dt}-\lambda\sum_{\alpha_i,\beta}{K^l_{j,\alpha_1,\dots,\beta}}(t)^2\right)\\
&=\max_{K^l_{j,\alpha_1,\dots,\beta}}\left(\frac{i}{S}\sum_{x=1}^S \tr\left(M_{m_{L+1}}^{L+1}K_{m_{L+1}}^{L+1}+\dots+M_{1}^{g+1}K_{1}^{g+1}\right)-\lambda\sum_{\alpha_1,\dots,\beta}{K^l_{j,\alpha_1,\dots,\beta}}(t)^2\right)\\
&=\max_{K^l_{j,\alpha_1,\dots,\beta}}\Big(\frac{i}{S}\sum_{x=1}^S\tr_{\alpha_1,\dots,\beta}\left(\tr_\mathrm{rest}\left(M_{m_{L+1}}^{L+1}K_{m_{L+1}}^{L+1}+\dots+M_{1}^{g+1}K_{1}^{g+1}\right)\right)\\
&-\lambda\sum_{\alpha_1,\dots,\beta}{K^l_{j,\alpha_1,\dots,\beta}}(t)^2\Big).
\end{align*}
Taking the derivative with respect to $K^l_{j,\alpha_1,\dots,\beta}$ yields
\begin{align*}
\frac{i}{S}\sum_{x=1}^S\tr_{\alpha_1,\dots,\beta}\left(\tr_\mathrm{rest}\left(M_j^l(t)\right)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^\beta\right)\right)-2\lambda K^l_{j,\alpha_1,\dots,\beta}(t)=0,
\end{align*}
hence,
\begin{align*}
K^l_{j,\alpha_1,\dots,\beta}(t)=\frac{i}{2S\lambda}\sum_{x=1}^S\tr_{\alpha_1,\dots,\beta}\left(\tr_\mathrm{rest}\left(M_j^l(t)\right)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^\beta\right)\right).
\end{align*}
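The stationary point just derived is, coefficient-wise, the elementary fact that $f(k)=ck-\lambda k^2$ is maximised at $k^\ast=c/(2\lambda)$. A toy numerical check with hypothetical values, not from the source:

```python
# Toy check (hypothetical values c and lam, not from the source) that
# f(k) = c*k - lam*k^2 attains its maximum at k* = c / (2*lam),
# which is the coefficient-wise content of the stationary point above.
import numpy as np

c, lam = 0.8, 1.5
ks = np.linspace(-2, 2, 100001)
f = c * ks - lam * ks**2

k_numeric = ks[np.argmax(f)]   # grid-based maximiser
k_analytic = c / (2 * lam)     # stationary point from d f / d k = 0
print(abs(k_numeric - k_analytic) < 1e-3)  # True
```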
This yields the matrix
\begin{align*}
K_j^l(t)&=\sum_{\alpha_1,\dots,\beta}K^l_{j,\alpha_1,\dots,\beta}(t)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^\beta\right)\\
&=\frac{i}{2S\lambda}\sum_{\alpha_1,\dots,\beta}\sum_{x=1}^S\tr_{\alpha_1,\dots,\beta}\left(\tr_\mathrm{rest}\left(M_j^l(t)\right)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^\beta\right)\right)\left(\sigma^{\alpha_1}\otimes\ \dots\ \otimes\sigma^\beta\right)\\
&=\frac{\eta2^{m_{l-1}}i}{2S}\sum_{x=1}^S\tr_\mathrm{rest}\left(M_j^l(t)\right),
\end{align*}
where $\eta=1/\lambda$ is the learning rate and $\tr_\text{rest}$ traces out all qubits that the perceptron unitary $U_j^l$ does not act on.
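The final step above relies on the completeness of the Pauli basis, $\sum_\alpha \tr(A\,\sigma^\alpha)\,\sigma^\alpha = 2^n A$ for an operator $A$ on $n$ qubits, which is where the exponential prefactor in $K_j^l$ originates. A short numerical sketch of this identity (illustrative, not from the source):

```python
# Illustrative sketch (not from the source) of the Pauli-basis completeness
# relation behind the final step above:
#   sum_alpha tr(A sigma_alpha) sigma_alpha = 2^n A
# for any operator A on n qubits.
import numpy as np
from itertools import product
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def pauli_string(idx):
    """Tensor product of single-qubit Paulis selected by the index tuple."""
    return reduce(np.kron, (paulis[i] for i in idx))

n = 2  # e.g. one input qubit and one output qubit of a perceptron
rng = np.random.default_rng(1)
A = rng.standard_normal((2**n, 2**n)) + 1j * rng.standard_normal((2**n, 2**n))

recon = sum(np.trace(A @ pauli_string(idx)) * pauli_string(idx)
            for idx in product(range(4), repeat=n))
print(np.allclose(recon, 2**n * A))  # True
```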
Notice again that $K_j^l$ updates the generator if $l\le g$, where $g$ is the number of layers of the generator. The definition of $M_j^l$ is
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{j}^{l \dagger}, \\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{j+1}^{l}\Big]
\end{align*}
for $l\le g$ and
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{g+1} \left(\ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0}\right) U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} \\
&- U_{j}^{l} \dots U_{1}^{g+1} U_{m_g}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g\dagger} U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} ,\\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{j+1}^{l}\Big]
\end{align*}
else.
\end{proof}
\section{Implementation of the DQNN\textsubscript{Q} as a PQC\label{section:dqnn_q_implementation_details}}
The DQNN\textsubscript{Q} intends to realise each neuron as a separate qubit. Thus, implementing the DQNN\textsubscript{Q} as a quantum circuit requires $M=\sum_{l=0}^{L+1} m_l$ qubits. This results in a $2^M$-dimensional Hilbert space $\mathcal{H}^{\otimes M}$, which is the tensor product of $M$ single-qubit Hilbert spaces.
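As a quick illustration, the qubit count $M$ and the resulting Hilbert-space dimension for an exemplary 2-3-2 network can be computed as follows (a hypothetical helper, not from the source):

```python
# Hypothetical helper (not from the source): qubit count M and Hilbert-space
# dimension of a DQNN_Q whose layer widths are given as [m_0, ..., m_{L+1}].
def dqnn_qubit_count(widths):
    """Total number of qubits M = sum_l m_l."""
    return sum(widths)

M = dqnn_qubit_count([2, 3, 2])  # an exemplary 2-3-2 network
print(M, 2**M)  # 7 qubits, i.e. a 128-dimensional Hilbert space
```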
The main task in implementing the DQNN described by \cref{eq:DQNN_rhoOut} is to find an appropriate realisation of the quantum perceptron $U^l_j$, which is a general unitary acting on $m_{l-1}+1$ qubits. For the simulation on a classical computer it is sufficient to define the unitary matrix abstractly and update its entries during the training. However, to execute the DQNN on a quantum computer, a concrete realisation in the form of parameterised quantum gates has to be constructed. Once the parameterised quantum gates representing the quantum perceptron are chosen, the full PQC can be built by composing the respective quantum perceptrons from all layers. When considering possible candidates for parameterised quantum gates, two objectives have to be balanced: on the one hand, the final realisation of the quantum perceptron should be as universal as possible, while on the other hand, the number of quantum gates and parameters should be kept as small as possible. If either one of these objectives is neglected, the DQNN\textsubscript{Q} will not perform as well as its classically simulated model.
Any arbitrary two-qubit unitary can be expressed by a two-qubit canonical gate and twelve single-qubit gates \cite{Crooks2019}. The two-qubit canonical gate is defined via three parameters as:
\begin{align}
\begin{split}
\text{CAN}(t_x,t_y,t_z) &= e^{-i\frac{\pi}{2}t_x X \otimes X}e^{-i\frac{\pi}{2}t_y Y \otimes Y}e^{-i\frac{\pi}{2}t_z Z \otimes Z} \\
&= \text{RXX}(t_x\pi)\;\text{RYY}(t_y\pi)\;\text{RZZ}(t_z\pi)
\end{split}
\end{align}
where $X = \begin{psmallmatrix} 0&1\\ 1&0 \end{psmallmatrix}$, $Y = \begin{psmallmatrix} 0&-i\\ i&0 \end{psmallmatrix}$, $Z = \begin{psmallmatrix} 1&0\\ 0&-1 \end{psmallmatrix}$ are the Pauli matrices, the RXX/RYY/RZZ gates are parameterised two qubit gates commonly available in quantum computing libraries, and $t_{x,y,z}\in\real$ are the parameters. The necessary single qubit gates are parameterised Pauli-$Y$ and Pauli-$Z$ operators. These are equivalent to the following rotations around the $y$- and the $z$-axis:
\begin{align}\label{eq:single_qubit_rotations}
\begin{split}
Y^t \simeq R_Y(\pi t) = e^{-i\frac{\pi}{2}tY} \\
Z^t \simeq R_Z(\pi t) = e^{-i\frac{\pi}{2}tZ}
\end{split}
\end{align}
up to a phase factor, which is indicated by $\simeq$. By combining the two-qubit canonical gate with three single-qubit gates prepended and appended to each qubit, in the following form:
\begin{equation}\label{eq:universal_two_qubit_gate}
\begin{tikzpicture}[xscale=1.2]
\draw (0,1)-- (10,1);
\draw (0,0)-- (10,0);
\node[operator0,minimum height=0.5cm] at (1,1){$Z^{t_1}$};
\node[operator0,minimum height=0.5cm] at (2,1){$Y^{t_2}$};
\node[operator0,minimum height=0.5cm] at (3,1){$Z^{t_3}$};
\node[operator0,minimum height=0.5cm] at (1,0){$Z^{t_4}$};
\node[operator0,minimum height=0.5cm] at (2,0){$Y^{t_5}$};
\node[operator0,minimum height=0.5cm] at (3,0){$Z^{t_6}$};
\node[operator1,minimum height=1.5cm] at (5,0.5){$\can(t_7,t_8,t_9)$};
\node[operator0,minimum height=0.5cm] at (7,1){$Z^{t_{10}}$};
\node[operator0,minimum height=0.5cm] at (8,1){$Y^{t_{11}}$};
\node[operator0,minimum height=0.5cm] at (9,1){$Z^{t_{12}}$};
\node[operator0,minimum height=0.5cm] at (7,0){$Z^{t_{13}}$};
\node[operator0,minimum height=0.5cm] at (8,0){$Y^{t_{14}}$};
\node[operator0,minimum height=0.5cm] at (9,0){$Z^{t_{15}}$};
\end{tikzpicture}
\end{equation}
any arbitrary two-qubit gate can be performed. As a graphical simplification, the $Z$-$Y$-$Z$ sequence of single-qubit gates used here can be expressed as the commonly used single-qubit gate $u(t_1,t_2,t_3)$:
\begin{align}\label{eq:single_qubit_sequence}
u(t_1,t_2,t_3) = R_Z(t_2)R_Y(t_1)R_Z(t_3) =
\begin{pmatrix}
\cos (t_1/2) & -e^{it_3}\sin (t_1/2) \\
e^{it_2}\sin (t_1/2) & e^{i(t_2+t_3)}\cos (t_1/2)
\end{pmatrix}
\end{align}
where the different parameterisation compared to \Cref{eq:single_qubit_rotations} should be noted.
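Both gate definitions can be verified numerically. The sketch below (illustrative plain-numpy code, not the authors' implementation) checks that $\text{CAN}(t_x,t_y,t_z)$ is unitary and that $R_Z(t_2)R_Y(t_1)R_Z(t_3)$ reproduces the stated matrix of $u(t_1,t_2,t_3)$ up to a global phase:

```python
# Illustrative numerical check (plain numpy, not the authors' code) that the
# canonical gate CAN(t_x, t_y, t_z) defined above is unitary and that
# R_Z(t2) R_Y(t1) R_Z(t3) reproduces the matrix of u(t1, t2, t3) up to a
# global phase.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(P, theta):
    """exp(-i*theta*P) for any operator with P^2 = identity."""
    return np.cos(theta) * np.eye(len(P)) - 1j * np.sin(theta) * P

def can(tx, ty, tz):
    """Two-qubit canonical gate RXX(tx*pi) RYY(ty*pi) RZZ(tz*pi)."""
    return (rot(np.kron(X, X), np.pi / 2 * tx)
            @ rot(np.kron(Y, Y), np.pi / 2 * ty)
            @ rot(np.kron(Z, Z), np.pi / 2 * tz))

def u(t1, t2, t3):
    """The single-qubit gate u(t1, t2, t3) as stated above."""
    return np.array([[np.cos(t1 / 2), -np.exp(1j * t3) * np.sin(t1 / 2)],
                     [np.exp(1j * t2) * np.sin(t1 / 2),
                      np.exp(1j * (t2 + t3)) * np.cos(t1 / 2)]])

C = can(0.3, 0.5, 0.7)
print(np.allclose(C.conj().T @ C, np.eye(4)))  # True: CAN is unitary

t1, t2, t3 = 0.3, 0.5, 0.7
prod = rot(Z, t2 / 2) @ rot(Y, t1 / 2) @ rot(Z, t3 / 2)  # R_Z(t2) R_Y(t1) R_Z(t3)
print(np.allclose(np.exp(1j * (t2 + t3) / 2) * prod, u(t1, t2, t3)))  # True
```

The global phase $e^{i(t_2+t_3)/2}$ is exactly the phase factor hidden in the $\simeq$ of the rotation definitions above.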
The quantum perceptron is not, in general, a two-qubit unitary. Therefore, the universal two-qubit gate from \cref{eq:universal_two_qubit_gate} cannot be used directly. It is helpful to recall the task fulfilled by the quantum perceptron, which is to process the states of its input qubits and change the output qubit's state accordingly. This motivates the application of separate two-qubit gates on each input-output qubit pair. However, numerical studies have shown that it is sufficient, and often advantageous, to refrain from using the single-qubit sequence from \Cref{eq:single_qubit_sequence} and to use only the two-qubit canonical gate as the direct realisation of the quantum perceptrons. In addition to realising the entire layer unitary $U^l$, i.e., all quantum perceptrons corresponding to layer $l$, the three-parameter single-qubit gate $u$ is prepended to all input qubits and appended to all output qubits. Appending single-qubit gates to a layer's input qubits would be pointless, as these qubits are no longer used afterwards. Prepending single-qubit gates to the output qubits has proven unnecessary in numerical studies.
The interpretation of the DQNN as a quantum circuit employing the previously discussed methods looks as follows. The first $m_0$ qubits are initialised in a given, possibly mixed state $\rho_\text{in}$, while all remaining qubits are initialised in the computational basis state $\ket{0}$. The quantum circuit and the general DQNN architecture are structured layer-wise and will therefore be described accordingly. The $u$ gates are applied first, layer by layer ($l=1,\dots,L+1$), to the respective $m_{l-1}$ input qubits. After that, the layer unitary $U^l = \prod _{j=m_l}^1U_j^l$ is applied to all input and output qubits. Here, $U^l_j$ is a sequence of $m_{l-1}$ CAN gates where the $i^\text{th}$ CAN gate acts on the $i^\text{th}$ input and the $j^\text{th}$ output qubit. After each layer $l$, the $m_{l-1}$ input qubits are neglected, i.e., they are simply ignored for the rest of the quantum circuit. This layer's $m_l$ output qubits serve as the input qubits for the next layer $l+1$. By this, the partial trace of \Cref{eq:DQNN_rhoOut} is realised. After the output layer $L+1$, again, $u$ gates are applied to the remaining $m_{L+1}$ output qubits. Thus, the quantum circuit consists of $N_p = 3m_{L+1} + 3\sum ^{L+1}_{l=1} m_{l-1} (1+m_{l})$ parameters.
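The parameter count can be evaluated with a small helper (a hypothetical sketch, not from the source; `widths` lists the layer sizes $m_0,\dots,m_{L+1}$):

```python
# Hypothetical helper (not from the source) evaluating the circuit's parameter
# count: 3 parameters per u gate on every layer's input qubits, 3 parameters
# per CAN gate (m_{l-1}*m_l of them per layer), and 3 per final u gate.
def dqnn_parameter_count(widths):
    """N_p = 3*m_{L+1} + 3 * sum_{l=1}^{L+1} m_{l-1} * (1 + m_l)."""
    n_p = 3 * widths[-1]  # u gates appended to the output qubits
    for m_prev, m_next in zip(widths, widths[1:]):
        n_p += 3 * m_prev * (1 + m_next)  # prepended u gates + CAN gates
    return n_p

print(dqnn_parameter_count([2, 3, 2]))  # 57 parameters for a 2-3-2 network
```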
\begin{figure}
\centering
\begin{subfigure}[t]{0.35\linewidth}
\centering
\begin{tikzpicture}[scale=1]
\foreach \x in {-.5,.5} {
\draw[line0] (0,\x) -- (2,-1);
\draw[line1] (0,\x) -- (2,-1);
\draw[line0] (0,\x) -- (2,0);
\draw[line1] (0,\x) -- (2,0);
\draw[line0] (0,\x) -- (2,1);
}
\draw[line1,densely dotted] (0,-.5) -- (2,1);
\draw[line1, dash pattern=on 6pt off 2pt] (0,.5) -- (2,1);
\foreach \x in {-1,0,1} {
\draw[line0] (2,\x) -- (4,-0.5);
\draw[line2] (2,\x) -- (4,-0.5);
\draw[line0] (2,\x) -- (4,0.5);
\draw[line2] (2,\x) -- (4,0.5);
}
\node[perceptron0] at (0,-0.5) {};
\node[perceptron0] at (0,0.5) {};
\node[perceptron0] at (2,-1) {};
\node[perceptron0] at (2,0) {};
\node[perceptron0] at (2,1) {};
\node[perceptron0] at (4,-0.5) {};
\node[perceptron0] at (4,0.5) {};
\end{tikzpicture}
\end{subfigure}\begin{subfigure}[t]{0.59\linewidth}
\centering
\begin{tikzpicture}[]
\matrix[row sep=0.25cm, column sep=0.4cm] (circuit) {
\node(start3){$\ket{\phi^\text{in}}$};
& \node[halfcross,label={\small 2}] (c13){};
& \node[operator0] (c23){$u^{\otimes 2}$};
& \node[]{};
& \node[dcross](end3){};
& \node[]{};
& \node[]{};
& \node[]{}; \\
\node(start2){$\ket{000}$};
& \node[halfcross,label={\small 3}] (c12){};
& \node[]{};
& \node[]{};
& \node[operator0] (c32){$u^{\otimes 3}$};
& \node[]{};
& \node[dcross](end2){};
& \node[]{}; \\
\node(start1){$\ket{00}$};
& \node[halfcross,label={\small 2}] (c11){};
& \node[]{};
& \node[]{};
& \node[]{};
& \node[]{};
& \node[operator0] (c41){$u^{\otimes 3}$};
& \node (end1){$\rho ^\text{out}$}; \\
};
\begin{pgfonlayer}{background}
\draw[] (start1) -- (end1)
(start2) -- (end2)
(start3) -- (end3);
\node[operator1, minimum height=2cm] at (-.48,0.5) {$U^1$};
\node[operator2,minimum height=2cm] at (1.49,-.7) {$U^2$};
\end{pgfonlayer}
\end{tikzpicture}
\end{subfigure}
\vspace{0.5 cm}
\begin{subfigure}[t]{0.35\linewidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[white] (0,2.2)-- (1,2.2);
\draw (0.2,1)-- (1.8,1);
\draw (0.2,0)-- (1.8,0);
\node[operator1,minimum height=1.5cm] at (1,0.5){$U^1$};
\node[] at (2.1,0.5){$=$};
\begin{scope}[xshift=2.5cm]
\node[rounded corners=.35cm,draw=color3,line width=1pt,dashed,minimum height=2.0cm,minimum width=2.1cm, fill=color3L] at (2.6,0.5){};
\node[color3] at (2.6,2.3){2-3-2$^+$};
\draw (0,1)-- (4,1);
\draw (0,0)-- (4,0);
\node[operator1,minimum height=1.5cm] at (0.7,0.5){$U^1_a$};
\node[operator0]at (2,1) {$u^{\otimes 2}$};
\node[operator0]at (2,0) {$u^{\otimes 3}$};
\node[operator1,minimum height=1.5cm] at (3.3,0.5){$U^1_b$};
\end{scope}
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}[t]{0.59\linewidth}
\centering
\begin{tikzpicture}[yscale=.4,xscale=0.8]
\draw (0,4)-- (2,4);
\draw (0,3)-- (2,3);
\draw (0,2)-- (2,2);
\draw (0,1)-- (2,1);
\draw (0,0)-- (2,0);
\node[operator1,minimum height=2.0cm] at (1,2){$U^1_{a,b}$};
\node[] at (2.5,2){$=$};
\begin{scope}[xshift=3cm]
\draw[white] (0,4.5)-- (7,4.5);
\draw (0,4)-- (7,4);
\draw (0,3)-- (7,3);
\draw (0,2)-- (7,2);
\draw (0,1)-- (7,1);
\draw (0,0)-- (7,0);
\node[operator1,minimum height=1.2cm,line width=1pt, dash pattern=on 6pt off 2pt] at (1,3){};
\node[operator1,minimum height=0.8cm, line width=1pt, densely dotted] at (2,2.5){};
\node[operator1,minimum height=1.6cm] at (3,2.5){};
\node[operator1,minimum height=1.2cm] at (4,2){};
\node[operator1,minimum height=2.0cm] at (5,2){};
\node[operator1,minimum height=1.6cm] at (6,1.5){};
\draw[line0] (0,3)-- (1.5,3);
\draw (0,3)-- (1.5,3);
\draw[line0] (2.5,3)-- (3.5,3);
\draw (2.5,3)-- (3.5,3);
\draw[line0] (4.5,3)-- (5.5,3);
\draw (4.5,3)-- (5.5,3);
\draw[line0] (2.5,2)-- (7,2);
\draw (2.5,2)-- (7,2);
\draw[line0] (4.5,1)-- (7,1);
\draw (4.5,1)-- (7,1);
\end{scope}
\end{tikzpicture}
\end{subfigure}
\caption{An exemplary DQNN\textsubscript{Q} implementation as a parameterised quantum circuit suitable for the execution on NISQ devices. The unitaries $U^l$ implement the layer-to-layer transition from the layer $l-1$ to $l$. In the standard 2-3-2 network, $U^l$ consists of $m_{l-1}\cdot m_l$ two-qubit CAN gates. In the computationally more powerful 2-3-2$^+$ network, $U^l$ features additional gates as shown in the pink dashed box.}
\label{fig:dqnn_circuit_implementation}
\end{figure}
Due to the limitations of current NISQ devices, one is often interested in increasing the computational power of the DQNN\textsubscript{Q} without using additional qubits. In this case, the quantum perceptron can be modified such that the DQNN\textsubscript{Q}'s layer-to-layer transition becomes computationally more powerful. This modification is denoted with a $^+$ as in 2-3-2$^+$. The corresponding DQNN\textsubscript{Q} is defined with the additional parameterised quantum gates shown in the pink dashed box in \cref{fig:dqnn_circuit_implementation}. The layer unitaries $U^1_a$ and $U^1_b$ share the same structure but are parameterised independently.
\section{Further numerical results}\label{apnx:numerics}
In \cref{section_results} we discussed the classical simulation of the DQGAN algorithm. In the following we extend the numerical examples of this section.
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\linewidth}
\input{numerics/T}
\subcaption{$\text{data}_\text{line}$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/0}
\subcaption{$r_T=0$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/100}
\subcaption{$r_T=100$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/200}
\subcaption{$r_T=200$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/300}
\subcaption{$r_T=300$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/400}
\subcaption{$r_T=400$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/500}
\subcaption{$r_T=500$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/600}
\subcaption{$r_T=600$}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\input{numerics/700}
\subcaption{$r_T=700$}
\end{subfigure}
\caption{\textbf{Output of the generator.} To compare the output of the generator (b-i) during the training of a \protect\oneoneone DQGAN to the data set $\text{data}_\text{line}$ (a), we plot the states in Bloch spheres.}
\label{fig:apdx_bloch}
\end{figure}
First of all, \cref{fig:apdx_bloch} gives an overview of the generator's output at different stages of the training depicted in \cref{fig:GAN_line}. At each of these training steps, we build a set of $100$ states produced by the generator and plot them on the Bloch sphere.
Secondly, in \cref{fig:apdx_line} we train a 1-3-1 DQGAN with the training data
\begin{equation*}
\text{data}_\text{line'}=\left\{\frac{(N-x)\ket{000}+(x-1)\ket{001}}{||(N-x)\ket{000}+(x-1)\ket{001}||}\right\}_{x=1}^{N},
\end{equation*} for $N=50$.
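The data set $\text{data}_\text{line'}$ can be generated in a few lines of numpy. This is an illustrative sketch, assuming the standard computational-basis ordering in which $\ket{000}$ and $\ket{001}$ correspond to indices $0$ and $1$ of an $8$-dimensional state vector:

```python
# Illustrative sketch (not the authors' code) generating data_line'.
# Assumption: standard computational-basis ordering, so |000> and |001>
# are entries 0 and 1 of the 8-dimensional three-qubit state vector.
import numpy as np

def data_line_prime(N=50):
    states = []
    for x in range(1, N + 1):
        v = np.zeros(8)   # three-qubit state vector
        v[0] = N - x      # coefficient of |000>
        v[1] = x - 1      # coefficient of |001>
        states.append(v / np.linalg.norm(v))
    return states

states = data_line_prime()
print(len(states))                                            # 50
print(all(np.isclose(np.linalg.norm(s), 1) for s in states))  # True
```

The set interpolates from $\ket{000}$ (at $x=1$) to $\ket{001}$ (at $x=N$), all states normalised.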
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[
xmin=0, xmax=20,
ymin=0, ymax=2,
width=0.8\linewidth,
height=0.5\linewidth,
grid=major,grid style={color0M},
xlabel= Training epochs $r_T$,
xticklabels={-100,0,100,200,300,400,500,600,700,800,900,1000},
ylabel=$\mathcal{L}(t)$,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\coordinate (0,0) ;
\addplot[mark size=1.5 pt,color=color2] table [x=step times epsilon, y=costFunctionDis, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-3networkGen_3-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_connectedLine_training.csv};
\addlegendentry[mark size=10 pt,scale=1]{Training loss $\mathcal{L}_\text{D}$}
\addplot[mark size=1.5 pt,color=color1] table [x=step times epsilon, y=costFunctionGen, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-3networkGen_3-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_connectedLine_training.csv};
\addlegendentry[scale=1]{Training loss $\mathcal{L}_\text{G}$}
\addplot[mark size=1.5 pt,color=color3] table [x=step times epsilon, y=costFunctionTest, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-3networkGen_3-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_connectedLine_training.csv};
\addlegendentry[scale=1]{Validation loss $\mathcal{L}_\text{V}$}
\end{axis}
\end{tikzpicture}
\caption{\textbf{Training a DQGAN.} The evolution of the training losses and the validation loss during the training of a \protect\oneothreeone DQGAN in $r_T=1000$ epochs with $\eta=1$ and $\epsilon=0.01$, using $50$ states of the data set $\text{data}_\text{line'}$, of which $10$ are used as training states.}
\label{fig:apdx_line}
\end{figure}
For a more comprehensive study, we averaged the histogram resulting after $200$ training rounds over ten independent training attempts, each with $10$ randomly chosen training states of $\text{data}_\text{line}$. \cref{fig:GAN_lineComp} shows that the diversity of the generator's output is good, since all elements in $\text{data}_\text{line}$ are produced approximately equally often.
Moreover, we build an equivalent plot, with the difference that the $10$ training states are now randomly chosen from $\text{data}_\text{cl}$, where
\begin{equation*}
\text{data}_\text{cl}= \left\{\frac{(2N-x)\ket{0}+(x-1)\ket{1}}{||(2N-x)\ket{0}+(x-1)\ket{1}||}\right\}_{x=1}^{\tfrac{N}{2}}\cup\left\{\frac{(2N-x)\ket{0}+(x-1)\ket{1}}{||(2N-x)\ket{0}+(x-1)\ket{1}||}\right\}_{x=\tfrac{3N}{2}}^{2N}.
\end{equation*}
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=51,
ymin=0, ymax=9,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left}]
\addplot[color=color3, fill=color3] table [x=index,y=countTTMean, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds200_roundsGen1_roundsDis1_line_statMean.csv};
\end{axis}
\end{tikzpicture}
\caption{Line trained with DQNN.} \label{fig:GAN_lineComp}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=51,
ymin=0, ymax=30,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left}]
\addplot[color=color3, fill=color3] table [x=index,y=countTTMean, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds200_roundsGen1_roundsDis1_CvsLi_statMean.csv};
\end{axis}
\end{tikzpicture}
\caption{Two clusters trained with DQNN.} \label{fig:GAN_ClusComp}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=51,
ymin=0, ymax=12,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left}]
\addplot[color=color3, fill=color3] table [x=indexDataTest,y=countOutTest, col sep=comma] {numerics/dqnn_q_eq_cluster_epoch_200_vs.csv};
\end{axis}
\end{tikzpicture}
\caption{Two clusters trained with DQNN\textsubscript{Q}.} \label{fig:qgan_q_cluster}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=51,
ymin=0, ymax=15,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left}]
\addplot[color=color3, fill=color3] table [x=index,y=countTTMean, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds200_roundsGen1_roundsDis1_conCvsLi_statMean.csv};
\end{axis}
\end{tikzpicture}
\caption{Two clusters plus $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ trained with DQNN.} \label{fig:GAN_Clus+Comp}
\end{subfigure}
\caption{\textbf{Diversity analysis of a DQGAN.} This plot describes the output diversity of a \protect\oneoneone DQGAN (DQGAN\textsubscript{Q}) trained in 200 epochs with $\eta=1$ ($\eta_D=0.5,\eta_G=0.1$) and $\epsilon=0.01$ ($\epsilon = 0.25$) using $10$ quantum states of the data sets $\text{data}_\text{line}$ (a), $\text{data}_\text{cl}$ (b,c) and $\text{data}_\text{cl+}$ (d), and compares the generator's output to the data set $\text{data}_\text{line}$.}
\label{fig:apnx_Div}
\end{figure}
\cref{fig:GAN_ClusComp} depicts the distribution of the generator's output after 200 training epochs, averaged over ten training attempts with $S=10$ randomly chosen training states. The generator does not produce all elements in $\text{data}_\text{line}$ equally often. Due to the averaging over ten independent training attempts, the states $\ket{0}$ and $\ket{1}$ are very prominent in this plot. Since the state $\ket{0}$ is produced more often, we assume that the $S=10$ training states randomly chosen in each training attempt belonged more often to the first part of the cluster.
Further, by removing one state from the data set $\text{data}_\text{cl}$ and replacing it by $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ we obtain the data set $\text{data}_\text{cl+}$. \cref{fig:GAN_Clus+Comp} shows the diversity of a generator resulting from training a DQGAN with $\text{data}_\text{cl+}$. We can see that some states in the middle of the $x$-range are generated more often compared to the plot in \cref{fig:GAN_ClusComp}. However, the generator does not produce the state $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ ($x=25$) very often, and the resulting peak in the histogram is shifted towards the $\ket{1}$ state ($x=50$).
Additionally, we trained a DQGAN\textsubscript{Q} using the clustered data set $\text{data}_\text{cl}$ and tested the generator's diversity after $r_T=200$ training epochs for a single execution on $\text{data}_\text{line}$. The results are depicted in \cref{fig:qgan_q_cluster} and show the generator's ability to extend the clustered training data while keeping its main characteristics. However, as opposed to the DQGAN simulated on a classical computer, the DQGAN\textsubscript{Q} does not manage to produce the full range of the training data.
\section{Quantum neural networks\label{section_QNN}}
Many attempts at building a QNN, the quantum analogue of the popular classical neural network, have been made \cite{
Andrecut2002,
Oliveira2008,
Panella2011,
Silva2016,
Cao2017,
Wan2017,
Alvarez2017,
Farhi2018,
Killoran2019,
Steinbrecher2019,
Torrontegui2019,
Sentis2019,
Tacchino2020,
Beer2020,
Skolik2020,
Zhang2020,
Schuld2020,
Sharma2020,
Zhang2021
}. In the following we describe the architecture of so-called \emph{dissipative quantum neural networks} (DQNNs) \cite{Beer2020, Beer2021, Beer2021a} as we will exploit this ansatz to form the DQGANs. We explain how their training algorithm can be simulated on a classical computer and how the DQNN can be implemented on a quantum computer \cite{Beer2021}.
\subsection{Dissipative quantum neural network\label{section_21}}
DQNNs are built of layers of qubits, which are connected via building blocks. Such a building block, called a perceptron, is implemented as an arbitrary unitary operation.
We can express the propagation of a state $\rho^{\text{in}}$ through the network as a composition of layer-to-layer transition maps, namely
\begin{equation}\label{eq:DQNN_rhoOut}
\rho^\text{out}=\mathcal{E}\left(\rho^{\text{in}}\right)= \mathcal{E}^{L+1}\left(\mathcal{E}^{L}\left(\dots \mathcal{E}^{2}\left(\mathcal{E}^{1}\left(\rho^{\text{in}}\right)\right)\dots\right)\right),
\end{equation}
where the transition maps are defined as
\begin{equation*}\label{eq:DQNN_E}
\mathcal{E}^{l}(X^{l-1}) \equiv \tr_{l-1}\big(\prod_{j=m_l}^{1} U^l_j (X^{l-1}\otimes \ket{0...0}_l\bra{ 0...0})\prod_{j=1}^{m_l} {U_j^l}^\dag\big),
\end{equation*}
where $U_j^l$ refers to the $j$th perceptron acting on layers $l-1$ and $l$, and $m_l$ is the total number of perceptrons connecting layers $l-1$ and $l$, see \cref{fig:DQNN_qnncircuitA}. These maps tensor the state of the current layer to the state of the next layer's qubits and apply the perceptron unitaries. Since the qubits of the first of the two layers are additionally traced out, these QNNs are called \emph{dissipative}.
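To make the action of these maps concrete, the following NumPy sketch implements a single transition map $\mathcal{E}^l$ for qubit layers. This is an illustrative sketch only; the function names are ours, and the reference simulation code is the one at \cite{Github}.

```python
import numpy as np

def partial_trace_first(rho, dim_a, dim_b):
    """Trace out the first tensor factor (dimension dim_a) of a bipartite state."""
    rho = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    return np.einsum('ijik->jk', rho)

def transition_map(rho_in, layer_unitaries, m_out):
    """Layer-to-layer map E^l: tensor |0...0><0...0| of the next layer's m_out
    qubits to the current state, apply the perceptron unitaries U_1, ..., U_m
    in order, and trace out the previous layer."""
    dim_in = rho_in.shape[0]
    dim_out = 2 ** m_out
    ancilla = np.zeros((dim_out, dim_out), dtype=complex)
    ancilla[0, 0] = 1.0          # the fresh qubits of the next layer start in |0...0>
    state = np.kron(rho_in, ancilla)
    for U in layer_unitaries:    # applying U_1 first yields U_m ... U_1 (rho) U_1^† ... U_m^†
        state = U @ state @ U.conj().T
    return partial_trace_first(state, dim_in, dim_out)
```

Composing such maps for $l=1,\dots,L+1$ reproduces the output state of \cref{eq:DQNN_rhoOut}.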
\begin{figure}
\centering
\begin{subfigure}[t]{0.35\linewidth}
\centering
\begin{tikzpicture}[scale=1.1]
\begin{scope}[xshift=0.9cm,yshift=1.45cm]
\draw[brace0]
(-1.25,0) -- node[above=1ex] {$U^1=U_3^1U_2^1U_1^1$}
(1.5,0);
\end{scope}
\draw[line0] (0,-.5) -- (2,1);
\draw[line1,densely dotted] (0,-.5) -- (2,1);
\draw[line0] (0,.5) -- (2,1);
\draw[line1,densely dotted] (0,.5) -- (2,1);
\draw[line0] (0,-.5) -- (2,0);
\draw[line1, dash pattern=on 6pt off 2pt] (0,-.5) -- (2,0);
\draw[line0] (0,.5) -- (2,0);
\draw[line1, dash pattern=on 6pt off 2pt] (0,.5) -- (2,0);
\draw[line0] (0,-.5) -- (2,-1);
\draw[line1] (0,-.5) -- (2,-1);
\draw[line0] (0,.5) -- (2,-1);
\draw[line1] (0,.5) -- (2,-1);
\foreach \x in {-1,0,1} {
\draw[line0] (2,\x) -- (4,-0.5);
\draw[line2] (2,\x) -- (4,-0.5);
\draw[line0] (2,\x) -- (4,0.5);
\draw[line2] (2,\x) -- (4,0.5);
}
\node[perceptron0] at (0,-0.5) {};
\node[perceptron0] at (0,0.5) {};
\node[perceptron0] at (2,-1) {};
\node[perceptron0] at (2,0) {};
\node[perceptron0] at (2,1) {};
\node[perceptron0] at (4,-0.5) {};
\node[perceptron0] at (4,0.5) {};
\end{tikzpicture}
\subcaption{Network. }
\label{fig:DQNN_qnncircuitA}
\end{subfigure}
\begin{subfigure}[t]{0.64\linewidth}
\centering
\begin{tikzpicture}[scale=1.3]
\matrix[row sep=0.3cm, column sep=0.4cm] (circuit) {
\node(start3){$\ket{\phi^\text{in}}$};
& \node[halfcross,label={\small 2}] (c13){};
& \node[operator0] (c23){$u^{\otimes 2}$};
& \node[]{};
& \node[dcross](end3){};
& \node[]{};
& \node[]{};
& \node[]{}; \\
\node(start2){$\ket{000}$};
& \node[halfcross,label={\small 3}] (c12){};
& \node[]{};
& \node[]{};
& \node[operator0] (c32){$u^{\otimes 3}$};
& \node[]{};
& \node[dcross](end2){};
& \node[]{}; \\
\node(start1){$\ket{00}$};
& \node[halfcross,label={\small 2}] (c11){};
& \node[]{};
& \node[]{};
& \node[]{};
& \node[]{};
& \node[operator0] (c41){$u^{\otimes 3}$};
& \node (end1){$\rho ^\text{out}$}; \\
};
\begin{pgfonlayer}{background}
\draw[] (start1) -- (end1)
(start2) -- (end2)
(start3) -- (end3);
\node[operator1, minimum height=2cm] at (-.35,0.5) {$U^1$};
\node[operator2,minimum height=2cm] at (1.15,-.7) {$U^2$};
\end{pgfonlayer}
\end{tikzpicture}
\subcaption{Implementation. }
\label{fig:DQNN_qnncircuitB}
\end{subfigure}
\vspace*{10mm}
\caption{\textbf{DQNN} An exemplary DQNN consisting of two layers of quantum perceptrons (a) can be implemented as a quantum circuit (b). The $u$-gates represent layers of single qubit operations. $U^l=U_{m_l}^l \dots U_1^l$ denote the layer unitaries, where every unitary $U^l_k$ is expressed through two-qubit unitaries, see \cite{Beer2020}.}
\label{fig:DQNN_qnncircuit}
\end{figure}
The training of such a QNN architecture is done with respect to a data set containing $S$ input and desired output states, namely
$\{\ket{\phi^{\text{in}}_x}, \ket{\phi^{\text{SV}}_x}\} $.
For example, in \cite{Beer2020} it is shown that the DQNN algorithm can successfully characterize an unknown unitary $Y$, using desired output training states of the form $\ket{\phi^\text{SV}_x} = Y\ket{\phi^\text{in}_x}$.
Generally, the training is done via maximising a training loss function based on the fidelity $F$ of two states, e.g., of the form
\begin{equation}
\label{eq:DQNN_trainingloss}
\mathcal{L}_\text{SV}=\frac{1}{S}\sum_{x=1}^S F(\ket{\phi^{\text{SV}}_x}\bra{\phi^{\text{SV}}_x},\rho_x^{\text{out}}) = \frac{1}{S}\sum_{x=1}^S \braket{\phi^{\text{SV}}_x|\rho_x^{\text{out}}|\phi^{\text{SV}}_x}.
\end{equation}
The general aim is to optimise such a loss function by updating the variable parts of the DQNN.
In the following we explain the training process in two cases, the simulation on a classical computer and the quantum circuit implementation suitable for NISQ devices.
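As a minimal illustration, the loss of \cref{eq:DQNN_trainingloss} can be evaluated as follows, assuming the network outputs $\rho_x^\text{out}$ are given as density matrices and the desired outputs as normalised state vectors (the helper name is ours):

```python
import numpy as np

def training_loss_sv(target_states, output_states):
    """L_SV = (1/S) sum_x <phi_x^SV| rho_x^out |phi_x^SV>, i.e. the average
    fidelity between the (pure) targets and the network outputs."""
    fids = [np.real(np.vdot(phi, rho @ phi))
            for phi, rho in zip(target_states, output_states)]
    return float(np.mean(fids))
```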
\subsection{Classical simulation implementation\label{section_QNN_cl}}
We can implement the algorithm using the quantum perceptron unitaries $U_j^l$ directly. Hence, every perceptron is described via $(2^{m_l+1})^2-1$ parameters.
Via feed-forward propagation of the input state through the DQNN and back-propagation of the desired output state, we can gain information on how every unitary $U_j^l$ has to be updated to maximise the training loss, e.g., the one defined in \cref{eq:DQNN_trainingloss}. We can formulate the update, using an update matrix $K_j^l(t)$, as
\begin{equation*}
\label{eq:DQNN_updateU}
U_j^l(t+\epsilon)=e^{i\epsilon K_j^l(t)} U_j^l(t),
\end{equation*}
where $\epsilon$ is the training step size and $t$ is the step parameter. The concrete formula of the update matrix is derived in \cite{Beer2020}. Remarkably, to evaluate the matrix $K_j^l(t)$, which updates a perceptron connecting layers $l-1$ and $l$, only two quantum states are needed: the output state of layer $l$ obtained by feed-forward propagation through the network, and the state of layer $l+1$ obtained by back-propagation of the desired output. For more details on the classical simulation of the DQNN algorithm we point to \cite{Beer2020}. The code can be found at \cite{Github}.
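The update step itself can be sketched as follows (ours, not the reference implementation of \cite{Github}): since $K_j^l(t)$ is Hermitian, the exponential can be evaluated by eigendecomposition, and the updated matrix remains unitary.

```python
import numpy as np

def herm_expm(K, s):
    """exp(i*s*K) for Hermitian K via eigendecomposition K = V diag(w) V^†."""
    w, V = np.linalg.eigh(K)
    return (V * np.exp(1j * s * w)) @ V.conj().T

def update_perceptron(U, K, eps):
    """U_j^l(t+eps) = exp(i*eps*K_j^l(t)) U_j^l(t); unitarity is preserved
    because K is Hermitian."""
    return herm_expm(K, eps) @ U
```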
\subsection{Quantum circuit implementation\label{section_QNN_q}}
To implement the quantum perceptrons on a quantum computer, we have to abstract the perceptron unitaries into parameterised quantum gates. In \cite{Beer2021a}, the fact is used that any two-qubit unitary can be implemented with a two-qubit canonical gate and single-qubit gates, see \cite{Zhang2003,Zhang2004,Blaauboer2008,Watts2013,Crooks2019, Peterson2020}. This yields the implementation of each perceptron via $m_{l-1}$ two-qubit unitaries connecting one qubit of the output layer $l$ to all qubits in the input layer $l-1$, respectively.
Rephrasing the sequence of single-qubit gates in the form of the gate $u$ and summarising the two-qubit canonical gates in $U^l$ leads to the neat representation in \cref{fig:DQNN_qnncircuitB}. For the DQNN\textsubscript{Q}, $n = \sum_{l=1}^{L} m_l$ qubits are needed, where $L$ is the number of layers. The overall PQC consists of $3m + 3\sum_{l=1}^{L+1} m_{l-1}(1+m_l)$ parameters.
The DQNN\textsubscript{Q} implementation can be trained with gradient descent. At the beginning, the parameters of the quantum circuit are initialised as $\vec{\omega}_0$. All parameters are updated by $\vec{\omega}_{t+1} = \vec{\omega}_{t} + \vec{d \omega}_{t}$ in every training epoch, where $\vec{d\omega}_{t} = \eta {\nabla} \mathcal{L}_\text{SV} \left(\vec{\omega}_t\right)$ with the learning rate $\eta$ and the gradient is of the form
\begin{equation*}
\nabla _k \mathcal{L}_\text{SV} \left(\vec{\omega}_t\right) = \frac{\mathcal{L}_\text{SV}\left(\vec{\omega}_t + \epsilon\vec{e}_k\right) - \mathcal{L}_\text{SV}\left(\vec{\omega}_t - \epsilon\vec{e}_k\right)}{2\epsilon} + \mathcal{O}\left(\epsilon^2\right).
\end{equation*}
For a thorough explanation of the DQNN\textsubscript{Q} suitable for the execution on NISQ devices we refer to \cref{section:dqnn_q_implementation_details} and \cite{Beer2021a}.
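The central-difference gradient ascent step above can be sketched as follows. This is an illustration only: in practice each loss evaluation requires executing and measuring the parameterised quantum circuit, and the function names are ours.

```python
import numpy as np

def finite_diff_gradient(loss, omega, eps=1e-4):
    """Central-difference estimate of grad L(omega): one pair of loss
    evaluations per parameter, accurate to O(eps^2)."""
    grad = np.zeros_like(omega)
    for k in range(omega.size):
        e_k = np.zeros_like(omega)
        e_k[k] = eps
        grad[k] = (loss(omega + e_k) - loss(omega - e_k)) / (2 * eps)
    return grad

def ascent_step(loss, omega, eta, eps=1e-4):
    """omega_{t+1} = omega_t + eta * grad L(omega_t); the loss is maximised."""
    return omega + eta * finite_diff_gradient(loss, omega, eps)
```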
\section{Dissipative quantum adversarial neural networks\label{section_QGAN}}
In the field of machine learning we can generally distinguish \emph{discriminative} and \emph{generative} models. For instance, classification problems such as classifying handwritten digits \cite{Nielsen2015} are common discriminative tasks. On the contrary, generative models produce data. Staying with the example of handwritten digits, we would train a generative model to produce different \enquote{handwritten} digits from random input.
In the following, we describe \emph{generative adversarial networks} (GANs). These are built of two models, where one has a generative and the other a discriminative task. Generative models are much harder to train than discriminative models. The proposal of GANs offered new possibilities and has since found many applications \cite{Creswell2018}, ranging from classification or regression tasks \cite{Creswell2018, Zhu2016, Salimans2016} to the generation \cite{Reed2016} and improvement \cite{Ledig2017} of images.
\subsection{General concept\label{section_31}}
GANs were first introduced in \cite{Goodfellow2014}. The generative and discriminative parts of their GAN are each implemented as a multi-layer perceptron. On the one hand, the generative model gets random noise samples as input and produces synthetic samples. On the other hand, the discriminator has access to both the generator's output and samples from the training data. In the original proposal, this data cannot be accessed by the generator.
The training aim of the discriminator is to distinguish correctly between the training data and the synthetic data. Since the generator's goal is to trick the discriminative model, the problem is called a \emph{minimax problem}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=1.4]
\node[] (pz) at (-1.5,0) {$\ket{\psi^\text{in}}$};
\node[perceptron1] (a) at (0,0) {1};
\node[perceptron1] (b) at (1,-.5) {3};
\node[perceptron1] (c) at (1,.5) {2};
\draw (a) -- (b);
\draw (a) -- (c);
\node[color1] (Ug) at (.5,0) {$\mathcal{E}_G$};
\node[] (Rg) at (1.5,0) {$\rho_G$};
\draw[-stealth,shorten <=4pt, shorten >=4pt,color0] (pz) -- (-.25,0);
\draw[brace0](1.8,0.7)-- (1.8,-2.7) ;
\begin{scope}[shift={(0,-2)}]
\node[] (Rt) at (1.5,0) {$\ket{\phi^T}$};
\node[perceptron0] (b) at (1,-.5) {3};
\node[perceptron0] (c) at (1,.5) {2};
\end{scope}
\begin{scope}[shift={(2.5,-1)}]
\node[] (pd) at (2.5,0) {$\rho^\text{out}$};
\node[perceptron2] (d) at (0,-.5) {3};
\node[perceptron2] (e) at (0,.5) {2};
\node[perceptron2] (f) at (1,0) {4};
\draw (d) -- (f);
\draw (e) -- (f);
\node[color2] (Ud) at (.5,0) {$\mathcal{E}_D$};
\draw[-stealth,shorten <=4pt, shorten >=4pt,color0] (f) -- (pd);
\end{scope}
\end{tikzpicture}
\caption{\textbf{DQGAN.} The depicted DQGAN consists of four qubits. Qubits $2$ and $3$ are shared by the generative and the discriminative QNN. The state of these qubits is either the generator's output state $\rho_G=\mathcal{E}_G (\ket{\psi^\text{in}}\bra{\psi^\text{in}})$ for the input state $\ket{\psi^\text{in}}$, or a given training state $\ket{\phi^T}$. }
\label{fig:QGAN_qgan}
\end{figure}
Following the above-described ansatz, the DQGAN is constructed of two DQNNs, the generative model, and the discriminative model, described through the completely positive maps $\mathcal{E}_G$ and $\mathcal{E}_D$, respectively. The number of qubits in the generator's last layer equals the number of qubits in the discriminator's first layer. Hence, the generator's output can be used as input for the discriminator.
For the training, a set of training states $\{\ket{\phi_x^T}\}_{x=1}^N$ and a set of random states $\{\ket{\psi ^\text{in}_x}\}$ are used. We assume the states of both sets to be pure. The overall goal is to adversarially train both DQNNs, so that the generator produces states with characteristics similar to the training data.
We can describe the output of the discriminator DQNN as
\begin{singlespace}
\begin{equation*}
\rho^\text{out}=
\begin{cases}
\mathcal{E}_D (\mathcal{E}_G (\ket{\psi^\text{in}}\bra{\psi^\text{in}})) &\mbox{for generated data}\\
\mathcal{E}_D (\ket{\phi^T}\bra{\phi^T}) & \mbox{for training data.}
\end{cases}
\end{equation*}
\end{singlespace}
To be more precise, we briefly discuss the DQGAN depicted in \cref{fig:QGAN_qgan}, which consists of four qubits. Please consider \cref{fig:DQNN_qnncircuit} for a better understanding of the following description. The generative model consists of two two-qubit unitaries $U_{G1}$ and $U_{G2}$, acting on qubits $1$ and $2$, and qubits $1$ and $3$, respectively. The discriminator is described by a single three-qubit unitary $U_D$. If the discriminative model gets a training data state $\ket{\phi^T}$ as input, the resulting discriminator output state can be described as
\begin{align*}
\rho_{\mathrm{out}}^{D}
=&\tr_{\{2,3\}}\Big(U_D \left( \ket{\phi^T}\bra{\phi^T} \otimes \ket{0}\bra{0} \right) U_D^\dagger \Big).
\end{align*}
For the generator's output as input, the discriminator produces the output
\begin{align*}
\rho_{\mathrm{out}}^{G+D}
=&\tr_{\{1,2,3\}}\Big(U_D U_{G2} U_{G1} ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{000}\bra{000}) U_{G1}^\dagger U_{G2}^\dagger U_D^\dagger \Big).
\end{align*}
The general form of these output states is used in the proof of \cref{prop:QGAN_K}.
The original DQNN approach focuses on characterising a relation between input and output data pairs. However, we try to characterise a data set of single quantum states instead. We aim to train a generative model in a way that it is able to produce quantum states with similar properties compared to the training data set. Such extended quantum data sets can be, for example, useful for experiments or training other QNN architectures.
\subsection{Training algorithm\label{section_32}}
In analogy to the classical case described in \cite{Goodfellow2014} we can describe the training process through
\begin{equation}\label{eq:dqgan_minimax}
\mathcolorbox{\min_G
\max_D \left(\frac{1}{S}\sum_{x=1}^S \bra{0} \mathcal{E}_D (\mathcal{E}_G (\ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}}))\ket{0} + \frac{1}{S}\sum_{x=1}^S \bra{1} \mathcal{E}_D (\ket{\phi_x^T}\bra{\phi_x^T})\ket{1} \right).
}
\end{equation}
The updates of the discriminator and the generator take place alternately. For updating the discriminator we maximise the loss function
\begin{equation*}
\mathcal{L}_{D}(\mathcal{E}_D,\mathcal{E}_G)=\frac{1}{S}\sum_{x=1}^S \bra{0} \mathcal{E}_D (\mathcal{E}_G (\ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}}))\ket{0} + \frac{1}{S}\sum_{x=1}^S \bra{1} \mathcal{E}_D (\ket{\phi_x^T}\bra{\phi_x^T})\ket{1}
\end{equation*}
for $r_D$ rounds, whereas the generator is trained through maximising
\begin{equation*}
\mathcal{L}_{G}(\mathcal{E}_D,\mathcal{E}_G)=\frac{1}{S}\sum_{x=1}^S \bra{1} \mathcal{E}_D (\mathcal{E}_G (\ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}}))\ket{1}
\end{equation*}
for $r_G$ rounds. Note that $\mathcal{L}_G$ differs from the corresponding term in \cref{eq:dqgan_minimax} in that the fidelity is calculated with respect to $\ket{1}$ instead of $\ket{0}$. Therefore, the generator is trained by maximising $\mathcal{L}_G$ rather than minimising it. These procedures are repeated for $r_T$ epochs. The overall training algorithm is described in \cref{alg:QGAN_algorithmQ}.
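Treating the channels $\mathcal{E}_G$ and $\mathcal{E}_D$ as black-box functions on density matrices, the two training losses can be sketched as follows (a sketch assuming a single-qubit discriminator output; helper names are ours):

```python
import numpy as np

def loss_discriminator(E_D, E_G, inputs, train_states):
    """L_D: reward |0> ("fake") on generated states and |1> ("real") on
    training states, each averaged over the batch of size S."""
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    fake = np.mean([np.real(np.vdot(ket0, E_D(E_G(np.outer(psi, psi.conj()))) @ ket0))
                    for psi in inputs])
    real = np.mean([np.real(np.vdot(ket1, E_D(np.outer(phi, phi.conj())) @ ket1))
                    for phi in train_states])
    return float(fake + real)

def loss_generator(E_D, E_G, inputs):
    """L_G: the generator is rewarded when the discriminator answers |1> ("real")."""
    ket1 = np.array([0.0, 1.0])
    return float(np.mean([np.real(np.vdot(ket1, E_D(E_G(np.outer(psi, psi.conj()))) @ ket1))
                          for psi in inputs]))
```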
\begin{algorithm}[H]
\caption{Training of the DQGAN.}
\label{alg:QGAN_algorithmQ}
\begin{algorithmic}
\State initialize network unitaries
\For{$r_T$ epochs}
\State make a list of $S$ randomly chosen states of the training data list $\{\ket{\phi_x^T}\}_{x=1}^N$
\For{$r_D$ epochs}
\State make a list of $S$ random states $\ket{\psi _x^\text{in}}$
\State update the discriminator unitaries by maximizing $\mathcal{L}_{D}$
\EndFor
\For{$r_G$ epochs}
\State make a list of $S$ random states $\ket{\psi_x^\text{in}}$
\State update the generator unitaries by maximising $\mathcal{L}_{G}$
\EndFor
\EndFor
\State make a list of $V$ random states $\ket{\psi_x^\text{in}}$
\State propagate each $\ket{\psi_x^\text{in}}$ through the generator to produce $V$ new states
\State calculate $\mathcal{L}_{V}$
\end{algorithmic}
\end{algorithm}
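The control flow of \cref{alg:QGAN_algorithmQ} can be summarised in the following sketch. This is an illustration only; the update routines, which contain all the quantum-specific work, are left abstract, and the names are ours.

```python
def train_dqgan(init, update_D, update_G, sample_training, sample_random,
                r_T, r_D, r_G, S):
    """Alternating DQGAN training: per epoch, r_D discriminator updates
    followed by r_G generator updates, each on fresh random input states."""
    gen, dis = init()
    for _ in range(r_T):
        train_batch = sample_training(S)        # S randomly chosen training states
        for _ in range(r_D):
            dis = update_D(dis, gen, sample_random(S), train_batch)
        for _ in range(r_G):
            gen = update_G(dis, gen, sample_random(S))
    return gen, dis
```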
\begin{repprop}{prop:QGAN_K}
The update matrix for a QGAN trained with pure states $\ket{\phi^T_x}$ has to be of the form
\begin{equation*}
K^l_j(t) = \frac{\eta 2^{m_{l-1}}i}{S}\sum_x\tr_\text{rest}\big(M^l_{j}(x,t)\big),
\end{equation*}
where
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{j}^{l \dagger}, \\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{l+1}^{l}\Big]
\end{align*}
for $l\le g$ and
\begin{align*}
M_j^l =& \Big[ U_{j}^{l} \dots U_{1}^{g+1} \ket{\phi^T}\bra{\phi^T} \otimes \ket{0...0}\bra{0...0} U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} \\
&- U_{j}^{l} \dots U_{1}^{g+1} U_{m_g}^{g} \dots U_{1}^{1} \ ( \ket{\psi_x^\text{in}}\bra{\psi_x^\text{in}} \otimes \ket{0...0}\bra{0...0}) U_{1}^{1 \dagger} \dots U_{m_{g}}^{g\dagger} U_{1}^{g+1 \dagger}\dots U_{j}^{l \dagger} ,\\
&U_{j+1}^{l\dagger}\dots U_{m_{L+1}}^{L+1 \dagger} \left(\mathbbm{1}_\mathrm{in+hid}\otimes \ket{1}\bra{1}\right)U_{m_{L+1}}^{L+1 } \dots U_{l+1}^{l}\Big]
\end{align*}
else. Here, $U_j^l$ is assigned to the $j$th perceptron acting on layers $l-1$ and $l$, $g$ is the number of perceptron layers of the generator, and $\eta$ is the learning rate.
\end{repprop}
The proof can be found in \cref{section_derivation}. Note that only DQGANs of three layers are used in the following, i.e.\ both DQNNs are built of two layers of qubits connected by one perceptron layer, respectively. Hence, we assume $g=1$.
In analogy to training the DQNN\textsubscript{Q}, the implementation on a quantum computer is done via parameterised quantum gates, which are updated using gradient descent. The training losses $\mathcal{L}_G$ and $\mathcal{L}_D$ are evaluated via measurement of the discriminator's output qubit.
At the end of the training the goal is that every generator output is close to at least one of the given states $\{\ket{\phi_x^T}\}_{x=1}^N$. To test this we additionally generate $V$ random states $\ket{\psi^\text{in}}$ as input states of the generator. We refer to the corresponding generated states as validation states. For each validation state, we search for the closest state of the data set via $\max_{x=1}^N \left( \bra{\phi_x^T} \mathcal{E}_G (\ket{\psi_i^\text{in}}\bra{\psi_i^\text{in}})\ket{\phi_x^T}\right)$. Using all validation states we define the \emph{validation loss}
\begin{equation*}
\mathcal{L}_{V}(\mathcal{E}_G)=\frac{1}{V}\sum_{i=1}^V \max_{x=1}^N \left( \bra{\phi_x^T} \mathcal{E}_G (\ket{\psi_i^\text{in}}\bra{\psi_i^\text{in}})\ket{\phi_x^T}\right).
\end{equation*}
Note that the above-defined validation loss would indeed be optimised if the generator produced only a small variety of states, or even exactly one state. As long as these are close to at least one of the training states, the validation loss is high. Therefore, it is important to check the diversity of the generator's output, which will be described in \cref{section_discussion}.
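A direct sketch of this validation loss, treating the trained generator as a channel on density matrices (names ours):

```python
import numpy as np

def validation_loss(generate, inputs, train_states):
    """L_V: for each of the V generated states, take the fidelity with the
    closest training state, then average over the generator inputs."""
    vals = []
    for psi in inputs:
        rho = generate(np.outer(psi, psi.conj()))
        vals.append(max(np.real(np.vdot(phi, rho @ phi)) for phi in train_states))
    return float(np.mean(vals))
```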
\section{Results\label{section_results}}
In the following we test the training algorithm including the two training functions $\mathcal{L}_{G}$ and $\mathcal{L}_{D}$. Here, we use the simulation on a classical computer. The code can be found at \cite{Github}. As the training data we prepare a set of pure one-qubit states which build a line on the Bloch sphere, namely
\begin{equation*}
\text{data}_\text{line}=\left\{\frac{(N-x)\ket{0}+(x-1)\ket{1}}{||(N-x)\ket{0}+(x-1)\ket{1}||}\right\}_{x=1}^{N},
\end{equation*}
for $N=50$.
Next, we randomly shuffle this set of states. The first $S$ of the resulting set $\{\ket{\psi^T_x}\}_{x=1}^{S}$ will be used for the training process. The full data set $\{\ket{\psi^T_x}\}_{x=1}^{N}$ is used for computing the validation loss.
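For reference, this data set can be generated as follows (a small sketch mirroring the definition of $\text{data}_\text{line}$ above):

```python
import numpy as np

def data_line(N=50):
    """Pure one-qubit states interpolating from |0> (x=1) to |1> (x=N),
    forming a line on the Bloch sphere."""
    states = []
    for x in range(1, N + 1):
        v = (N - x) * np.array([1.0, 0.0]) + (x - 1) * np.array([0.0, 1.0])
        states.append(v / np.linalg.norm(v))
    return states
```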
In \cref{fig:GAN_line} the evolution of the discriminator's and generator's training losses and the validation loss is plotted. The latter reaches values over $0.95$ at $t=9.5$, i.e., after training round $r_T=475$. Moreover, we can observe that in the first training epochs the training loss of the generator shrinks while the discriminator training loss increases. This behaviour inverts at $t\approx2$. For the remaining training process, this switch between an increasing generator training loss and an increasing discriminator training loss happens repeatedly. We explain this behaviour with the opposing goals of the generator and the discriminator and a changing dominance of one of the two networks.
\afterpage{%
\thispagestyle{empty}
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}
\begin{axis}[
xmin=0, xmax=20,
ymin=0.2, ymax=1.5,
width=0.8\linewidth,
height=0.5\linewidth,
grid=major,grid style={color0M},
xlabel= Training epochs $r_T$,
xticklabels={-100,0,100,200,300,400,500,600,700,800,900,1000},
ylabel=$\mathcal{L}(t)$,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\coordinate (0,0) ;
\addplot[mark size=1.5 pt, color=color2] table [x=step times epsilon, y=costFunctionDis, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_line_plot1_training.csv};
\addlegendentry[scale=1]{Training loss $\mathcal{L}_\text{D}$}
\addplot[mark size=1.5 pt, color=color1] table [x=step times epsilon, y=costFunctionGen, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_line_plot1_training.csv};
\addlegendentry[scale=1]{Training loss $\mathcal{L}_\text{G}$}
\addplot[mark size=1.5 pt, color=color3] table [x=step times epsilon, y=costFunctionTest, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds1000_roundsGen1_roundsDis1_line_plot1_training.csv};
\addlegendentry[scale=1]{Validation loss $\mathcal{L}_\text{V}$}
\draw [line width=0.5mm,dashed] (60,0) -- (60,200);
\node at (70,10) {(b)};
\draw [line width=0.5mm,dashed] (100,0) -- (100,200);
\node at (110,10) {(c)};
\draw [line width=0.5mm,dashed] (160,0) -- (160,200);
\node at (170,10) {(d)};
\end{axis}
\end{tikzpicture}
\subcaption{Loss functions.}
\label{fig:GAN_line}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=50,
ymin=0, ymax=12,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\addplot[color=color2, fill=color2] table [x=indexDataTest, y=countOutTest, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds300_roundsGen1_roundsDis1_line_plot1_statisticsUSV.csv};
\addlegendentry[scale=1]{Validation states}
\addplot[color=color1,fill=color1] table [x=indexDataTrain, y=countOutTrain, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds300_roundsGen1_roundsDis1_line_plot1_statisticsSV.csv};
\addlegendentry[scale=1]{Training states}
\end{axis}
\end{tikzpicture}
\subcaption{Diversity of the generator's output after $r_T=300$ training epochs.} \label{fig:GAN_line300}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=50,
ymin=0, ymax=25,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\addplot[color=color2, fill=color2] table [x=indexDataTest, y=countOutTest, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds500_roundsGen1_roundsDis1_line_plot1_statisticsUSV.csv};
\addlegendentry[scale=1]{Validation states}
\addplot[color=color1,fill=color1] table [x=indexDataTrain, y=countOutTrain, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds500_roundsGen1_roundsDis1_line_plot1_statisticsSV.csv};
\addlegendentry[scale=1]{Training states}
\end{axis}
\end{tikzpicture}
\subcaption{Diversity of the generator's output after $r_T=500$ training epochs.} \label{fig:GAN_line500}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=50,
ymin=0, ymax=110,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\addplot[color=color2, fill=color2] table [x=indexDataTest, y=countOutTest, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds800_roundsGen1_roundsDis1_line_plot1_statisticsUSV.csv};
\addlegendentry[scale=1]{Validation states}
\addplot[color=color1,fill=color1] table [x=indexDataTrain, y=countOutTrain, col sep=comma] {numerics/QGAN_50data10sv_100statData_100statData_1-1networkGen_1-1networkDis_lda1_ep0i01_rounds800_roundsGen1_roundsDis1_line_plot1_statisticsSV.csv};
\addlegendentry[scale=1]{Training states}
\end{axis}
\end{tikzpicture}
\subcaption{Diversity of the generator's output after $r_T=800$ training epochs.} \label{fig:GAN_line800}
\end{subfigure}
\caption{\textbf{Training a DQGAN.} (a) depicts the evolution of the loss functions during the training of a \protect\oneoneone DQGAN in $r_T=1000$ epochs with $\eta=1$ and $\epsilon=0.01$ using $50$ data states, where $10$ are used as training states. The dashed lines mark the diversity checks 300 (b), 500 (c) and 800 (d) for the generator's output.}
\end{figure}}
The saturating validation loss in \cref{fig:GAN_line} gives the impression that the longer we train the DQGAN, the better the results. In the original proposal of the DQNN \cite{Beer2020} this was the case. However, the validation loss would be maximal if the generator permanently produced the exact same state, provided this state is one of the training states $\{\ket{\psi^T_x}\}_{x=1}^{N}$. This would not fit our aim of training the generator to extend the training set. Hence, we explain in the following how we check the \emph{diversity} of the generator's output.
After training for $r_T$ rounds, we use the generator to produce a set of $100$ states. Using the fidelity, we find for each of these states the closest element of $\text{data}_\text{line}$. In this way, we obtain a number for every index $x$ of this set describing how often the output of the generator was closest to the $x$th element of $\text{data}_\text{line}$. In \cref{fig:GAN_line300,fig:GAN_line500,fig:GAN_line800} we plot these numbers in the form of a histogram. The different colours describe whether an element of $\text{data}_\text{line}$ was used as a training state or not. We find that the diversity is good after $300$ training epochs. However, it decreases in the ongoing training. We point to \cref{apnx:numerics} for more numerical results.
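The diversity check just described can be sketched as follows: each generated state is assigned the index of its closest (maximum-fidelity) element of $\text{data}_\text{line}$, and the counts per index form the histogram (names ours):

```python
import numpy as np

def diversity_histogram(generated, reference):
    """Count, for each index x of the reference set, how often a generated
    density matrix is closest (in fidelity) to the x-th reference state."""
    counts = np.zeros(len(reference), dtype=int)
    for rho in generated:
        fids = [np.real(np.vdot(phi, rho @ phi)) for phi in reference]
        counts[int(np.argmax(fids))] += 1
    return counts
```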
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=50,
ymin=0, ymax=20,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\addplot[color=color2, fill=color2] table [x=indexDataTest, y=countOutTest, col sep=comma] {numerics/dqnn_q_eq_line_v2_epoch_100_vs.csv};
\addlegendentry[scale=1]{Validation states}
\addplot[color=color1,fill=color1] table [x=indexDataTrain, y=countOutTrain, col sep=comma] {numerics/dqnn_q_eq_line_v2_epoch_100_ts.csv};
\addlegendentry[scale=1]{Training states}
\end{axis}
\end{tikzpicture}
\subcaption{Diversity of the generator's output after $r_T=100$ training epochs.} \label{fig:dqnn_q_eq_line_a}
\end{subfigure}
\begin{subfigure}{\textwidth}\centering
\begin{tikzpicture}[scale=1]
\begin{axis}[
ybar,
bar width=1.5pt,
xmin=0, xmax=50,
ymin=0, ymax=20,
width=.8\linewidth,
height=.28\linewidth,
grid=major,
grid style={color0M},
xlabel= State index $x$,
ylabel=Counts,legend pos=north east,legend cell align={left},legend style={draw=none,legend image code/.code={\filldraw[##1] (-.5ex,-.5ex) rectangle (0.5ex,0.5ex);}}]
\addplot[color=color2, fill=color2] table [x=indexDataTest, y=countOutTest, col sep=comma] {numerics/dqnn_q_eq_line_v2_epoch_440_vs.csv};
\addlegendentry[scale=1]{Validation states}
\addplot[color=color1,fill=color1] table [x=indexDataTrain, y=countOutTrain, col sep=comma] {numerics/dqnn_q_eq_line_v2_epoch_440_ts.csv};
\addlegendentry[scale=1]{Training states}
\end{axis}
\end{tikzpicture}
\subcaption{Diversity of the generator's output after $r_T=440$ training epochs.} \label{fig:dqnn_q_eq_line_b}
\end{subfigure}
\caption{\textbf{Training a DQGAN\textsubscript{Q}.} The training set features $S=10$ equally spaced quantum states from $\text{data}_\text{line}$. The remaining states from $\text{data}_\text{line}$ are used as validation states. The DQGAN\textsubscript{Q} features a 1-1$^+$ generator DQNN\textsubscript{Q} and a 1-1$^+$ discriminator DQNN\textsubscript{Q}, and employs the hyper-parameters $r_D=4$, $\eta_D=0.5$, $r_G=1$ and $\eta_G=0.1$.}
\label{fig:dqnn_q_eq_line}
\end{figure}
In addition to the DQGAN simulation on a classical computer, we also simulate the DQGAN\textsubscript{Q} under noiseless circumstances. Here, the same training loss functions $\mathcal{L}_G, \mathcal{L}_D$ are used as well as the same training data, $\text{data}_\text{line}$. In this case, the training states are not picked randomly but $S=10$ equally spaced training states are chosen from $\text{data}_\text{line}$. The hyper-parameters of the training are chosen such that for each of the $r_T$ epochs, a 1-1$^+$ discriminator DQNN\textsubscript{Q} is trained $r_D=4$ times with a learning rate $\eta_D=0.5$ and a 1-1$^+$ generator is trained $r_G=1$ times with a learning rate $\eta_G=0.1$. The $+$ denotes a slightly different implementation of the DQGAN\textsubscript{Q} compared to the implementation discussed in \cite{Beer2021a}. It uses additional gates and is explained in \cref{section:dqnn_q_implementation_details}.
The results of training the DQGAN\textsubscript{Q} are shown in \cref{fig:dqnn_q_eq_line}. The generator's diversity after training for $r_T=100$ epochs is depicted in \cref{fig:dqnn_q_eq_line_a}. Here, the generator manages to produce states in a little more than half of the training data range. After $r_T=440$ training epochs, the generator's diversity improves to two-thirds of the training data range, as depicted in \cref{fig:dqnn_q_eq_line_b}. Please note that in both cases the majority of the generator's produced states are closer to a validation state than to a training state. This can be seen as a training success, as the generator does not only learn to reproduce the training states but instead learns to extend the given training data.
For more numerical results using DQGAN\textsubscript{Q} we point to \cref{apnx:numerics} and \cite{Mueller2021}.
\section{Discussion\label{section_discussion}}
In this work, we introduced DQGANs, generative adversarial models based on the DQNN proposed in \cite{Beer2020}. A DQGAN features two DQNNs trained in an opposing manner: the discriminator's goal is to distinguish between synthetic quantum states produced by the generator and elements of the training data set. The generator, on the contrary, aims to produce data with similar properties as the states included in the training data set.
We aimed to extend a given data set with states with similar characteristics. Our examples have shown that this goal can be reached when training a DQGAN.
However, due to limitations in computational power, we could only train small DQGAN architectures, and therefore leave questions open for future research. It would be interesting to investigate whether using larger DQNNs for the generator or the discriminator leads to better validation loss values or more diversity in the generator's output.
Further, the study of other data sets is of interest. One example could be a set of states with similar degrees of entanglement (with respect to a chosen entanglement measure) \cite{Schatzki2021}.
Since in classical machine learning, the output of GANs is often used to train other neural network architectures, a similar application for DQGANs and DQNNs is conceivable.
\paragraph{Acknowledgements}
The authors would like to thank Tobias J. Osborne and Christian Struckmann for valuable discussions. Moreover, helpful correspondence with Dmytro Bondarenko, Terry Farrelly, Polina Feldmann, Daniel List, Jan Hendrik Pfau, Robert Salzmann, Daniel Scheiermann, Viktoria Schmiesing, Marvin Schwiering, and Ramona Wolf is gratefully acknowledged. This work was supported, in part, by the DFG through SFB 1227 (DQ-mat), Quantum Valley Lower Saxony, and funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2123 QuantumFrontiers 390837967.
\section{Introduction}
In recent years, accelerating electromagnetic fields, i.e., solutions of Maxwell's equations which propagate along curved trajectories in free space without being subject to an external force, have been the subject of rather intensive research. The archetype of accelerating beams is surely the Airy beam, first introduced in quantum mechanics by Berry and Balasz in 1974 \cite{ref1}, and then brought to optics by Siviloglou and co-workers in 2007 \cite{ref2,ref3}. Due to their exotic nature and novel features, Airy beams were studied within the context of nonlinear optics \cite{ref4} and particle manipulation \cite{ref4bis}, and gave rise to very interesting and innovative applications, such as the generation of curved plasma channels \cite{ref5}. Since 2007, accelerating beams have been studied in different coordinate systems \cite{ref6,ref7}, their trajectory has been suitably engineered to match different forms of curved \cite{ref8,ref9,ref10} and arbitrary \cite{ref11} paths, and new schemes of acceleration have been found, such as radial \cite{nostroPRL,ref13} and angular \cite{ref14, ref15} accelerating beams. The former, in particular, are often referred to as radially self-accelerating beams (RSABs), and propagate along spiralling trajectories around their optical axis, due to radial acceleration.
RSABs are typically described, in the monochromatic regime, in terms of superpositions of Bessel beams, with an angular velocity proportional to the amount of orbital angular momentum they carry \cite{nostroPRL}. The distinguishing characteristic of RSABs, however, is a transverse intensity distribution that rotates around the propagation axis without exhibiting diffraction, a consequence of RSABs being represented as a sum of nondiffracting beams. RSABs, moreover, have potential applications in different areas of physics, such as sensing \cite{ref5}, material processing \cite{ref16,ref17}, and particle manipulation \cite{ref18,ref19}. Despite this broad interest, however, RSABs have so far only been studied within the monochromatic regime, and the possibility of extending their properties to the domain of optical pulses has not been investigated yet. Having radially self-accelerating pulses at hand could drastically benefit their applications in, for example, material processing and particle manipulation.
In this work, we focus our attention on the generalisation of the concept of self-acceleration to the pulsed domain. In doing so, we will show how it is possible to create radially self-accelerating pulses (RSAPs) using superpositions of X-waves, rather than Bessel beams. This simple extension of the definition of RSABs given in Ref. \cite{nostroPRL}, however, has some important consequences on the nature of the self-accelerating character of such pulses.
This work is organised as follows: in Sect. II, we briefly recall the definition and properties of RSABs. Then, in Sect. III, we show that RSAPs can be constructed by suitably generalising their definition in the monochromatic domain, as superpositions of X-waves, rather than Bessel beams, for both the cases of field rotating and intensity rotating RSAPs. For the latter case, we show that the only possible analytical form of intensity rotating RSAPs can be obtained by assigning a different propagation constant to each of the monochromatic beams composing the pulse. Finally, conclusions are drawn in Sect. IV.
\section{Radially Self-Accelerating Beams}
As a starting point of our analysis, let us consider a scalar, monochromatic beam, solution of the free space Helmholtz equation
\begin{equation}\label{eq1}
\left(\nabla^2+k^2\right)\psi(\vett{r};k)=0,
\end{equation}
where $k=2\pi/\lambda$ is the vacuum wave vector of the beam, and $\lambda$ its wavelength. In cylindrical coordinates, the most general solution to the above equation can be written in terms of Bessel beams as follows:
\begin{equation}\label{eq2}
\psi(\vett{r};k)=\sum_m\,\int\,d\kappa\,A_m(\kappa)\,\text{J}_m\left(R\sqrt{k^2-\kappa^2}\right)e^{im\theta+i\kappa z},
\end{equation}
where $\text{J}_m(x)$ is the Bessel function of the first kind \cite{nist}, and the integration variable $\kappa\propto\cos\vartheta_0$ is related to the characteristic Bessel cone angle $\vartheta_0$ \cite{durnin}. Following the prescriptions of Ref. \cite{nostroPRL}, it is possible to extract RSABs from the above equation by choosing $A_m(\kappa)=C_m\delta(\kappa-(m\Omega+\beta))$, where $\Omega$ is the actual angular velocity of the RSAB, and $\beta$ is a free parameter with the dimension of a propagation constant. This choice ensures the possibility of defining a co-rotating reference frame $\Phi=\theta+\Omega z$, in which the RSAB appears propagation invariant, namely $\partial\psi_{RSAB}(\vett{r};k)/\partial z=0$. The explicit form of a RSAB thus reads
\begin{equation}\label{eq3}
\psi_{RSAB}(R,\Phi;k)=e^{i\beta z}\sum_{m\in\mathcal{M}}C_m\,\text{J}_m(\alpha_m R)e^{im\Phi},
\end{equation}
where $\alpha_m=\sqrt{k^2-(m\Omega+\beta)^2}$ represents the transverse wave vector of the single Bessel component of the RSAB, and $\mathcal{M}=\{m\in\mathbb{N}: \alpha_m>0\}$. For $\beta=0$, the above equation represents the so-called field rotating RSABs, for which both amplitude and phase spiral around the propagation direction synchronously. For $\beta\neq 0$, instead, Eq. \eqref{eq3} describes the intensity rotating RSABs, where the amplitude and phase distributions are no longer synchronised during their rotation along the propagation direction, although the intensity distribution remains propagation invariant. Examples of field rotating and intensity rotating RSABs are given in Figs. \ref{figure1} and \ref{figure2}, respectively.
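As an illustration, the RSAB of Eq. \eqref{eq3} can be evaluated numerically as a finite sum of Bessel beams. The following minimal Python/SciPy sketch (with illustrative parameter values, not exactly those of the figures) verifies the defining invariance property in the co-rotating frame $\Phi=\theta+\Omega z$ for the field rotating case $\beta=0$:

```python
import numpy as np
from scipy.special import jv

# Minimal numerical sketch of the field rotating RSAB of Eq. (3): a finite
# sum of phase-locked Bessel beams. Parameter values are illustrative.
lam = 800e-9                 # wavelength
k = 2 * np.pi / lam          # vacuum wave vector
Omega = 75.0                 # angular velocity (per unit propagation length)
modes = [1, 2, 3, 4]         # C_m = 1 for these m, 0 otherwise

def psi_rsab(R, theta, z, beta=0.0):
    val = 0j
    for m in modes:
        alpha_m = np.sqrt(k**2 - (m * Omega + beta)**2)  # transverse wave vector
        val += jv(m, alpha_m * R) * np.exp(1j * m * (theta + Omega * z))
    return np.exp(1j * beta * z) * val

# Invariance in the co-rotating frame Phi = theta + Omega*z: the field at
# (R, theta - Omega*z, z) equals the field at (R, theta, 0).
R, theta, z = 3e-6, 0.7, 0.02
err = abs(psi_rsab(R, theta - Omega * z, z) - psi_rsab(R, theta, 0.0))
print(err)
```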
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.55\textwidth]{figure1.eps}
\caption{Intensity and phase distribution for field rotating RSABs. Panels (a) and (c) correspond to the intensity distributions at $z=0$, and $z=\pi/2\Omega$, respectively, while panels (b) and (d) depict the corresponding phase profiles. The intensity and phase distributions have been plotted in the region $0<R<12$ $\mu m$. For these figures, $\Omega=75$ $rad/s$, $\lambda=800$ $nm$, $\beta=0$ and $C_m=1$ for $0<m\leq 4$, and $C_m=0$, otherwise, have been used. The white arrow in the intensity distribution shows the direction of rotation of the RSAB.}
\label{figure1}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.55\textwidth]{figure2.eps}
\caption{Intensity and phase distribution for intensity rotating RSABs. Panels (a) and (c) correspond to the intensity distributions at $z=0$, and $z=\pi/2\Omega$, respectively, while panels (b) and (d) depict the corresponding phase profiles. The intensity and phase distributions have been plotted in the region $0<R<1.5$ $mm$. The difference in plotting range with respect to Fig. \ref{figure1} reflects the paraxial character of intensity rotating RSABs, in contrast to the nonparaxial character of their field rotating counterparts. For these figures, $\Omega=75$ $rad/s$, $\lambda=800$ $nm$, $\beta=7.8$ $\mu m^{-1}$, and $C_m=1$ for $0<m\leq 4$, and $C_m=0$, otherwise, have been used. The white arrow in the intensity distribution shows the direction of rotation of the RSAB.}
\label{figure2}
\end{center}
\end{figure}
\section{Extension to Pulse Domain}
To extend the concept of RSABs to the polychromatic domain, we first notice that given a solution $\psi(\vett{r};k)$ of the Helmholtz equation \eqref{eq1}, it is possible to construct an exact solution of the wave equation
\begin{equation}\label{eq4}
\left(\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)F(\vett{r},t)=0,
\end{equation}
as follows:
\begin{equation}\label{eq5}
F(\vett{r},t)=\int\,dk\,g(k)\,e^{-ickt}\,\psi(\vett{r};k),
\end{equation}
where $g(k)$ is an arbitrary spectral function. If we then substitute to $\psi(\vett{r};k)$ the expression of a RSABs, as given by Eq. \eqref{eq3}, we obtain the general expression for a radially self-accelerating pulse (RSAP), namely
\begin{eqnarray}\label{eq6}
F_{RSAP}(\vett{r},t)&=&\sum_{m\in\mathcal{M}}C_m\,e^{im\Phi}\,\int\,dk\,g(k)\,e^{i(\beta z-ckt)}\nonumber\\
&\times&\text{J}_m(R\sqrt{k^2-(m\Omega+\beta)^2}).
\end{eqnarray}
Before proceeding any further, it is worth spending a few words on the general structure of the above integral. First of all, we can distinguish two different cases, namely $\beta=0$, corresponding to field rotating RSAPs, and $\beta\neq 0$, corresponding to intensity rotating RSAPs. The latter case, however, can be further divided into two sub-classes, namely the case $\beta=\beta(k)$ (meaning that each monochromatic component of the RSAP defined in Eq. \eqref{eq6} has its own global propagation constant), and the case $\beta=\text{const}\neq 0$. In the latter case, discussed below in Sect. III.B, the spectrum of the RSAP is $m$-dependent, meaning that each component in the sum in Eq. \eqref{eq6} has to first be transformed into a polychromatic signal with its own spectrum, and then summed to form the RSAP. We will show that the case $\beta=\beta(k)$ results in a pseudo self-accelerating pulse, where self-acceleration is restored only asymptotically, while the case $\beta=\text{const}$ instead gives rise to a rigorous, self-accelerating pulse.
\subsection{Intensity Rotating RSAPs with $\beta=\beta(k)$}
Let us first consider the case $\beta=\beta(k)\neq 0$. First, we observe that, typically, $\Omega\ll k$, meaning that the rotation rate of the RSAB is much smaller than its actual wave vector. We can therefore Taylor expand the square root appearing as the argument of the Bessel function in Eq. \eqref{eq6} with respect to the small parameter $\Omega/k$, thus obtaining
\begin{eqnarray}\label{eq7}
&&\sqrt{k^2-(m\Omega+\beta)^2}\simeq k\Bigg[\sqrt{1-\frac{\beta^2}{k^2}}\nonumber\\
&-&\frac{\beta}{\sqrt{k^2-\beta^2}}\left(\frac{m\Omega}{k}\right)-\frac{km^2\Omega^2}{2(k^2-\beta^2)^{3/2}}\nonumber\\
&+&\mathcal{O}\left(\frac{\Omega^3}{k^3}\right)\Bigg].
\end{eqnarray}
Since $\beta=\beta(k)$ can be chosen arbitrarily, we can assume, without loss of generality, that it can be written as $\beta(k)=k\cos\xi$, where $0<\xi<\pi/2$. If we do so, we can simplify the expansion above as follows:
\begin{equation}\label{eq7bis}
\sqrt{k^2-(m\Omega+\beta)^2}\simeq k\sin\xi-m\Omega\cot\xi+\mathcal{O}\left(\frac{\Omega^2}{k^2}\right),
\end{equation}
or, by defining $\Lambda=\Omega(\cos\xi/\sin^2\xi)$ as the new angular velocity of the RSAP, we obtain
\begin{equation}\label{eq7ter}
\sqrt{k^2-(m\Omega+\beta)^2}\simeq\sin\xi(k-m\Lambda).
\end{equation}
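The accuracy of the expansion above can be checked numerically. The following Python sketch (with illustrative parameter values) compares the exact transverse wave vector with the approximation of Eq. \eqref{eq7ter}:

```python
import numpy as np

# Numerical sanity check of the expansion: with beta(k) = k*cos(xi) and
# Lambda = Omega*cos(xi)/sin(xi)**2, the exact transverse wave vector
# sqrt(k**2 - (m*Omega + beta)**2) is well approximated by
# sin(xi)*(k - m*Lambda) whenever m*Omega/k << 1. Parameters are illustrative.
lam = 800e-9
k = 2 * np.pi / lam
Omega = 75.0
xi = 0.3
beta = k * np.cos(xi)
Lam = Omega * np.cos(xi) / np.sin(xi)**2

max_rel_err = 0.0
for m in range(1, 5):
    exact = np.sqrt(k**2 - (m * Omega + beta)**2)
    approx = np.sin(xi) * (k - m * Lam)
    max_rel_err = max(max_rel_err, abs(exact - approx) / exact)
print(max_rel_err)
```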
This approximation is valid provided that $m\Omega/k\ll1$. Since the number of components of RSABs can be chosen almost arbitrarily, however, it is possible to define a new set $\mathcal{M}'=\{m\in\mathbb{N}_0: m\ll k/\Omega\}$, and therefore restrict the summation in Eq. \eqref{eq6} to the subset $\mathcal{M}'\subset\mathcal{M}$. If we do so, and introduce the change of variables $k'=k-m\Lambda$, we obtain a rather simple form for RSAPs, namely
\begin{equation}\label{eq8}
F_{RSAP}^{(1)}(\vett{r},t)=\sum_{m\in\mathcal{M}'}\,C_me^{im\Theta_0}X_m^{(1)}(R,\zeta),
\end{equation}
where $\Theta_0=\theta+\Lambda\zeta$ is the co-rotating coordinate, and
\begin{equation}\label{eq12}
X_m^{(1)}(R,\zeta)=\int\,dk\,g(k)\,e^{ik\zeta}\text{J}_m(kR\sin\xi),
\end{equation}
represents the general expression of an X-wave \cite{localisedWaves, XwavesPRL}, with $\zeta=z\cos\xi-ct$ being its corresponding co-moving coordinate. This is the first result of our work: in the polychromatic domain, radially self-accelerating fields can be constructed by taking superpositions of X-waves, rather than Bessel beams.
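For the fundamental ($m=0$) X-wave with the exponentially decaying spectrum $g(k)=\text{H}(k)e^{-\alpha k}$ (introduced below), the integral in Eq. \eqref{eq12} reduces to a known Laplace transform of $\text{J}_0$, which can be verified numerically. The following Python/SciPy sketch (with illustrative, dimensionless parameters) performs this check:

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

# Numerical check of the m = 0 fundamental X-wave: for the spectrum
# g(k) = H(k) exp(-alpha*k), the spectral integral reduces to the known
# Laplace transform of J_0,
#   int_0^inf exp(-alpha*k) exp(i*k*zeta) J_0(k*b) dk
#       = 1 / sqrt((alpha - i*zeta)**2 + b**2),   with b = R*sin(xi).
# Dimensionless, illustrative parameter values; the upper limit 60 truncates
# a tail of order exp(-60), which is negligible.
alpha, zeta, b = 1.0, 0.7, 2.3

re, _ = quad(lambda k: np.exp(-alpha * k) * np.cos(zeta * k) * jv(0, b * k),
             0.0, 60.0, limit=200)
im, _ = quad(lambda k: np.exp(-alpha * k) * np.sin(zeta * k) * jv(0, b * k),
             0.0, 60.0, limit=200)
numeric = re + 1j * im
closed = 1.0 / np.sqrt((alpha - 1j * zeta)**2 + b**2)
print(abs(numeric - closed))
```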
However, as can be seen from Eq. \eqref{eq8}, intensity rotating RSAPs intrinsically contain a $\zeta$-dependence in both their co-rotating coordinate $\Theta_0$ and their transverse distribution $X_m^{(1)}(R,\zeta)$. This fact, which will be discussed in detail in the next section, is ultimately the reason why RSAPs only possess a pseudo self-accelerating character.
At first glance, Eq. \eqref{eq8} has the same form as its monochromatic counterpart, namely Eq. \eqref{eq3}, and could be interpreted as its straightforward generalisation. One, in fact, could naively substitute the Bessel beams, which are used in the monochromatic case to generate RSABs, with X-waves (i.e., polychromatic Bessel beams), thus realising RSAPs.
A closer analysis of Eq. \eqref{eq8}, however, reveals an important difference between the two cases, namely that while RSABs describe spiralling trajectories of constant transverse dimension \cite{nostroPRL}, RSAPs describe spiralling trajectories of growing transverse dimension. Moreover, while the transverse structure of RSABs rigidly rotates around the propagation axis, this is not the case for RSAPs, which instead show a progressive self-adaptation of the transverse intensity distribution to a ring centered on the propagation axis.
To better understand this, let us consider explicitly the case of fundamental X-waves. These are characterised by an exponentially decaying spectrum, i.e., $g(k)=\text{H}(k)\exp{[-\alpha k]}$, where $\alpha$ accounts for the width of the spectrum, and has the dimensions of a length, and $\text{H}(x)$ is the Heaviside step function. If we substitute this exponentially decaying spectrum into Eq. \eqref{eq8}, and use Eq. 6.621.1 in Ref. \cite{gradsteyn}, we get, after some simple algebraic manipulation, the following result:
\begin{widetext}
\begin{equation}\label{eq9}
F_{RSAP}^{(1)}(\vett{r},t)=e^{i\arctan\left(\frac{\zeta}{\alpha}\right)}\sum_{m\in\mathcal{M}'}A_me^{im\Theta}\frac{\rho^m}{\sqrt{\alpha^2+\zeta^2}}\,_2F_1\left(\frac{m+1}{2},\frac{m+2}{2};m+1;-\rho^2e^{2i\arctan\left(\frac{\zeta}{\alpha}\right)}\right),
\end{equation}
\end{widetext}
where
\begin{equation}
\rho\equiv\rho(\zeta)=\frac{R\sin\xi}{\sqrt{\alpha^2+\zeta^2}},
\end{equation}
is an expanding, normalised, radial coordinate,
\begin{equation}
\Theta=\theta+\Lambda\zeta+\arctan\left(\frac{\zeta}{\alpha}\right),
\end{equation}
is the co-rotating, accelerating reference frame, $A_m=C_m/2^m$, and $\,_2F_1(a,b;c;x)$ is the Gauss hypergeometric function \cite{nist}. Notice that, although in general the hypergeometric function gives an extra $m$- and $\zeta$-dependent phase contribution, which modifies the definition of $\Theta$, if we limit ourselves to the case $\xi\ll 1$, this phase contribution can, at the leading order in $\xi$, be neglected.
We can now compare the two co-rotating coordinates in the monochromatic ($\Phi$) and polychromatic ($\Theta$) case: while $\Phi$ essentially describes a helix centered around the $z$-axis, whose transverse width remains constant, since the angular velocity is constant, this is not the case for the polychromatic co-rotating coordinate $\Theta$, as it represents an accelerating coordinate, with velocity
\begin{equation}\label{velocity}
\frac{\partial\Theta}{\partial\zeta}=\Lambda+\frac{\alpha}{\alpha^2+\zeta^2},
\end{equation}
and acceleration
\begin{equation}\label{acceleration}
\frac{\partial^2\Theta}{\partial\zeta^2}=-\frac{2\alpha\zeta}{(\alpha^2+\zeta^2)^2}.
\end{equation}
The above expressions for the angular velocity and acceleration of the RSAP reveal that for large enough propagation distances, $\partial\Theta/\partial\zeta\rightarrow\Lambda$, and $\partial^2\Theta/\partial\zeta^2\rightarrow 0$, and the standard values of velocity and acceleration for RSABs are restored. This means that the self-accelerating state represents an asymptotic equilibrium for the RSAP. For small propagation distances, on the other hand, the behaviour of RSAPs changes significantly from that of traditional self-accelerating beams, as can be seen from Eqs. \eqref{velocity} and \eqref{acceleration}. Since the co-rotating coordinate is now accelerating, and the acceleration is directed towards the center of the pulse, the transverse field distribution needs to adapt to this attractive force, which tends (asymptotically) to transform the intensity distribution into a ring-shaped pulse around the co-moving propagation direction $\zeta$. For these reasons, one cannot formally speak of self-accelerating pulses anymore, as there exists no reference frame in which Eq. \eqref{eq9} appears propagation invariant, or, in other terms, there exists no reference frame in which the motion of the pulse around the $\zeta$-axis can be described by a helix. However, since the self-accelerating behaviour is an asymptotic equilibrium of the system, one could refer to such pulses as pseudo self-accelerating.
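The closed forms of Eqs. \eqref{velocity} and \eqref{acceleration} can be checked against finite differences of $\Theta(\zeta)$, together with the asymptotic approach of the velocity to $\Lambda$. A minimal Python sketch (with illustrative values of $\alpha$ and $\Lambda$) reads:

```python
import numpy as np

# Finite-difference check of the angular velocity and acceleration of the
# co-rotating coordinate Theta(zeta) = theta_0 + Lambda*zeta + arctan(zeta/alpha).
# Parameter values are illustrative.
alpha, Lam = 1.0, 75.0

def Theta(zeta, theta0=0.0):
    return theta0 + Lam * zeta + np.arctan(zeta / alpha)

def velocity(zeta):       # closed form of dTheta/dzeta
    return Lam + alpha / (alpha**2 + zeta**2)

def acceleration(zeta):   # closed form of d^2Theta/dzeta^2
    return -2.0 * alpha * zeta / (alpha**2 + zeta**2)**2

h, z = 1e-4, 0.4
v_fd = (Theta(z + h) - Theta(z - h)) / (2 * h)
a_fd = (Theta(z + h) - 2 * Theta(z) + Theta(z - h)) / h**2
print(v_fd - velocity(z), a_fd - acceleration(z))
# For large zeta the velocity tends to Lambda (asymptotic self-acceleration):
print(velocity(1e6) - Lam)
```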
The intensity and phase distributions of intensity rotating RSAPs are shown in Fig. \ref{figure3}. For small propagation distances [Figs. \ref{figure3}(a)-(d)], the intensity distribution gets progressively distorted while propagating along $\zeta$, up to the point at which the RSAP reaches its equilibrium form of a ring [Figs. \ref{figure3}(e), and (f)]. From this point on, the transverse intensity distribution does not change in shape anymore, but only becomes bigger, due to the expanding nature of the radial co-moving coordinate $\rho$, as can be seen by comparing panels (e) and (g) of Fig. \ref{figure3}. If we compare the behaviour of RSAPs at large $\zeta$ with that of RSABs, we can notice that while the transverse dimension of the spiral described by RSABs remains constant (essentially, because $\Phi$ describes a helix, rather than a spiral), this is not the case for RSAPs, since $\Theta$ describes a spiral whose transverse dimension grows with $\zeta$.
To estimate this, let us calculate the average transverse size of the spiral, by considering the position of the center of mass of the RSAP intensity distribution, as follows:
\begin{eqnarray}
\langle R(\zeta)\rangle&=&\int\,d^2R\,R\,\left|F_{RSAP}(\vett{r},t)\right|^2\nonumber\\
&\equiv&R_0\sqrt{\alpha^2+\zeta^2},
\end{eqnarray}
where, at the leading order in $\xi$ \cite{note2},
\begin{equation}\label{erre0}
R_0=\frac{2\pi}{\sin^3\xi}\sum_{m\in\mathcal{M'}}|A_m|^2\int_0^{\infty}d\rho\,\rho^{2(m+1)}.
\end{equation}
Thus, in the co-moving, expanding, reference frame, the transverse dimension of the spiral grows as $\sqrt{\alpha^2+\zeta^2}$.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.55\textwidth]{figure3.eps}
\caption{Intensity (left) and phase (right) distribution for intensity rotating RSAPs, as defined by Eq. \eqref{eq9}. The plots are made at different values of the normalised propagation length $\zeta\equiv\zeta/\alpha$, namely $\zeta=0$ [panels (a), and (b)], $\zeta=2\pi/\Lambda$ [panels (c), and (d)], $\zeta=10\,(2\pi/\Lambda)$ [panels (e), and (f)], and $\zeta=50\,(2\pi/\Lambda)$ [panels (g), and (h)]. As it can be seen, the transverse profile of intensity rotating RSAPs gets progressively distorted, up to the point, at which it stabilises in a ring-shaped form [panel (e)]. The intensity and phase distributions have been plotted in the region $0<\rho<1250$, for the panels (a)-(f), and $0<\rho<7500$, for panels (g), and (h). For these figures, $\Omega=75$ rad/s, $\lambda=800$ nm, $\xi=0.01$ (correspondent to $\beta=7.853\times 10^6$ $m^{-1}$), and $C_m=1$ for $1<m\le 4$, and $C_m=0$, otherwise, have been used. The white arrow in the intensity distribution shows the direction of rotation of the RSAP.}
\label{figure3}
\end{center}
\end{figure}
\subsection{Intensity Rotating RSAPs with $\beta=\text{const}$}
Another possibility for intensity rotating RSAPs is to choose $\beta=\text{const}\neq 0$. If we use this assumption, introduce the change of variables $k'=\sqrt{k^2-(m\Omega+\beta)^2}$ in Eq. \eqref{eq6}, allow the spectral function $g(k)$ to be $m$-dependent, and redefine it as $g_m(k)=\left(2kG_m(k)/\sqrt{k^2-(m\Omega+\beta)^2}\right)\text{H}(k)$, where $\text{H}(k)$ is the Heaviside step function \cite{nist}, we get the following result
\begin{equation}\label{eqNew}
F_{RSAP}^{(2)}(\vett{r},t)=e^{i\beta z}\sum_{m\in\mathcal{M}'}C_me^{im\Phi}\, X_m^{(2)}(R,t;\beta),
\end{equation}
where
\begin{equation}\label{effeDue}
X_m^{(2)}(R,t;\beta)=\int_0^{\infty}dk\,G_m(k)e^{-ict A(k)}\text{J}_m(kR),
\end{equation}
with $A(k)=\sqrt{k^2+(m\Omega+\beta)^2}$. If we choose the spectral function $G_m(k)$ as
\begin{equation}
G_m(k)=\frac{1}{\sqrt{k^2+(m\Omega+\beta)^2}},
\end{equation}
Eq. \eqref{effeDue} admits the following, closed form analytical solution \cite{gradsteyn}
\begin{eqnarray}
X_m^{(2)}(R,t;\beta)&=&I_{m/2}\left(\frac{\alpha}{2}\left(\sqrt{R^2-c^2t^2}-ict\right)\right)\nonumber\\
&\times&K_{m/2}\left(\frac{\alpha}{2}\left(\sqrt{R^2-c^2t^2}+ict\right)\right),
\end{eqnarray}
where $I_m(x)$, and $K_m(x)$ are the modified Bessel function of the first, and second kind, respectively \cite{nist}, and $\alpha=m\Omega+\beta$.
As can be seen, no $z$-dependence is contained in the expression of the transverse field distribution $X_m^{(2)}(R,t;\beta)$, and Eq. \eqref{eqNew} has the same form as Eq. \eqref{eq3}. In this case, therefore, we can define the co-rotating reference frame $\{R,\Phi,t\}$, in which $F_{RSAP}^{(2)}(\vett{r},t)$ is manifestly propagation invariant. However, due to the presence of the modified Bessel function of the second kind $K_{m/2}(x)$, this field distribution is divergent at the origin, and therefore it cannot represent a physically meaningful solution. Analytical forms of intensity rotating RSAPs, therefore, only exist for $\beta=\beta(k)$. This is the second result of our work: intensity rotating RSAPs can only be constructed by assigning a proper $\beta(k)$ to each of the monochromatic components that contribute to the pulse. If $\beta$ is constant, no physically meaningful analytical solution can be found.
Notice, however, that a trivial possibility for describing intensity rotating RSAPs with constant $\beta$ would be to consider the case $\beta=\text{const}\ll 1$. This case, however, can be treated (at least at the leading order in $\beta$) as a first order correction to the case $\beta=0$, which will be discussed in the next section. Since $\beta=0$ corresponds to field rotating RSAPs, one could then conclude that intensity rotating RSAPs with a constant, small $\beta$ can be well described by field rotating RSAPs.
\subsection{Field Rotating RSAPs}
For the case $\beta=0$, the argument of the Bessel function in Eq. \eqref{eq6} becomes $\sqrt{k^2-m^2\Omega^2}$. If we then assume that $\Omega\ll k$, and use the expansion in Eq. \eqref{eq7} with $\beta=0$, up to $\mathcal{O}(\Omega^2/k^2)$, we can write $\sqrt{k^2-m^2\Omega^2}\simeq k$. This approximation, however, holds as long as $m$ is chosen in such a way that $m\Omega/k\ll1$. As stated before, since we are free to choose the set in which $m$ is defined, we can restrict the original set $\mathcal{M}$ to the new subset $\mathcal{M}''\equiv\{m\in\mathbb{N}_0: m\ll \sqrt{2}k/\Omega\}$. In this case, the explicit expression of field rotating RSAPs is given as follows:
\begin{equation}\label{fieldRSAP}
F_{RSAP}^{(3)}(\vett{r},t)=\sum_{m\in\mathcal{M}''}C_me^{im\Phi}X_m^{(3)}(R,\zeta_0),
\end{equation}
where $\zeta_0=-ct$, and
\begin{equation}
X_m^{(3)}(R,\zeta_0)=\int\,dk\,g(k)\,e^{ik\zeta_0}\text{J}_m(kR).
\end{equation}
Notice that, unlike the case $\beta\neq 0$, no $z$-dependence is present in the transverse form of the pulse $X_m^{(3)}(R,\zeta_0)$. This means that field rotating RSAPs are truly self-accelerating fields, as one can define a reference frame, namely $\{R,\Phi,\zeta_0\}\equiv\{R,\Phi,t\}$, in which the RSAP appears propagation invariant and fulfils all the required conditions for self-acceleration \cite{nostroPRL}. The intensity and phase distributions for field rotating RSAPs at different propagation lengths are shown in Fig. \ref{figure4}. As can be seen, the transverse intensity profile remains unchanged as the pulse propagates along $z$. Notice, moreover, that field rotating RSAPs rotate in the opposite direction with respect to intensity rotating RSAPs.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.55\textwidth]{figure4.eps}
\caption{Intensity (left) and phase (right) distribution for field rotating RSAPs, at different values of the propagation length, namely $\zeta=0$ [panels (a), and (b)], $\zeta=0.25(2\pi/\Lambda)$ [panels (c), and (d)], $\zeta=0.5\,(2\pi/\Lambda)$ [panels (e), and (f)], and $\zeta=0.75\,(2\pi/\Lambda)$ [panels (g), and (h)]. As it can be seen, the transverse profile of field rotating RSAPs remains propagation invariant, and rotates synchronously with its phase profile. The intensity and phase distributions have been plotted in the region $0<R<10$ $\mu m$. For these figures, $\Omega=75$ rad/s, $\lambda=800$ nm, and $C_m=1$ for $1<m\le 4$, and $C_m=0$, otherwise, have been used. The white arrow in the intensity distribution shows the direction of rotation of the RSAP.}
\label{figure4}
\end{center}
\end{figure}
\section{Conclusions}
In this work, we have generalised the concept of radially self-accelerating fields to the domain of optical pulses. We have shown how it is possible to define RSAPs as superpositions of OAM-carrying X-waves, rather than Bessel beams. For the case of fundamental X-waves, we have calculated the explicit expression for field rotating, as well as intensity rotating, RSAPs, and we have shown that while the former retain their self-accelerating character, the latter possess pseudo self-acceleration, and admit pure self-acceleration only asymptotically. We have also investigated intensity rotating RSAPs with constant $\beta$, and shown that although in this case it is possible to recover pure self-acceleration, such fields bear no physical meaning, as they are divergent at the origin. Our work represents the first attempt to generalise the concept of self-acceleration to the domain of optical pulses, and discusses the advantages and limitations of this process. Moreover, it represents a guideline, which will be useful for the experimental realisation of radially self-accelerating optical pulses.
\section*{Acknowledgements}
The authors wish to thank the Deutsche Forschungsgemeinschaft (grant SZ 276/17-1) for financial support.
\section*{Introduction}
To each integer $n$ we assign a positive random integer
$a_n$. Then, $n$ is mapped $a_n$ units to the right. Given a probability space $(\Omega,\mathcal{F},\mathbb P)$ supporting the random sequence $\{a_n\}_{n\in \mathbb Z}$, we consider the function $f: \Omega\times \mathbb Z\to \mathbb Z$ defined by $f(\omega,n)=n+a_n(\omega)$. We assume the sequence $\{a_n\}_{n\in \mathbb Z}$ is $\hbox{i.i.d.}$ We study two objects arising from the random map $f$: a population dynamics
and a directed random graph. When there is no chance of ambiguity, we omit $\omega$ in our notation.
In the population dynamics,
individual $n$ is {\em born} at time $n$ and {\em dies} at time $f(n)$.
More precisely, the lifespan of individual $n$ is assumed to be $[n,n+a_n)$.
Let $\textbf{N}:=\{N_n\}_{n\in \mathbb Z}$ be the discrete time random process of the number of individuals
alive at time $n$, or simply the {\em population process}.
The process $\textbf{N}$ can be seen as the number of customers at arrival epochs in a D/GI/$\infty$
queue, namely, a queuing system with one arrival at every integer time, and $\hbox{i.i.d.}$ integer-valued service times, all distributed like $a_0$. As a new arrival takes place at each integer, the population
never goes extinct or, equivalently, the queue is never empty. Nonetheless, as we shall see, when $\e[a_0]<\infty$, the stationary population process is regenerative. To visualize the number of individuals
alive at time 0, count the edges that cross over 0, {\em and} count the
individual born at time 0 (Figure 1).
\begin{figure}
\center
\includegraphics[width=1\textwidth]{neppfig1.jpg}
\caption{\textbf{The population process.} Here, $a_{-6}=10,~a_{-5}=3,a_{-4}=8,~a_{-3}=3,~a_{-2}=3,~\hbox{and}~a_{-1}=6$. Assuming $a_{-i}<i$ for all $i>6$, there are four edges crossing $0$. Individuals -6,-4,-2, and -1 are still alive at time 0. Hence, $N_0=5$, as it includes the individual born at 0.}
\end{figure}
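The counting rule illustrated in Figure 1 can be checked with a short Monte Carlo simulation. The following Python sketch (the geometric lifetime law is an illustrative choice, not prescribed by the model) estimates $\e[N_0]$, which, for $\hbox{i.i.d.}$ lifetimes with finite mean, equals $\sum_{j\geq 0}\mathbb P[a_0>j]=\e[a_0]$:

```python
import random

# Monte Carlo sketch of the stationary population size: individual n is born
# at time n with lifespan [n, n+a_n), so it is alive at time 0 iff a_n > -n.
# Hence N_0 = #{n <= 0 : a_n > -n} (the n = 0 term always counts, since
# a_0 >= 1), and E[N_0] = sum_{j>=0} P[a_0 > j] = E[a_0].
random.seed(0)
p = 0.5  # a_0 geometric on {1,2,...}: P[a_0 = j] = p*(1-p)**(j-1), E[a_0] = 1/p

def sample_a():
    j = 1
    while random.random() > p:
        j += 1
    return j

def N0(horizon=100):
    # Truncate the past at -horizon; P[a_0 > 100] = 0.5**100 is negligible.
    return sum(1 for j in range(0, horizon + 1) if sample_a() > j)

trials = 10000
est = sum(N0() for _ in range(trials)) / trials
print(est)  # should be close to E[a_0] = 1/p = 2
```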
By letting $V^f=\mathbb Z$ and $E^f=\{(n,f(n)):n\in \mathbb Z\}$,
the random map $f$ also induces a random directed graph $T^f$. Further assuming $\mathbb P[a_0=1]>0$, we will show $T^f$ is a directed tree.
We interpret $T^f$ as a family tree and connect it to the population process $\textbf{N}$.
In order to gain insight, we draw a parallel with the classical age-dependent branching process. Such a process is
built upon a lifetime distribution $L$ on $\mathbb R^+$ and an offspring distribution $O$ on $\mathbb N$.
In this classical model, the first individual is born at time 0 and has a lifetime $l$ sampled from $L$.
When the first individual dies at $l$, it is replaced by its offspring, whose cardinality is sampled from $O$.
From then on, each individual statistically behaves as the first one, with all individuals having independent
lifetimes and offspring cardinalities (see, e.g., \cite{waugh1955age}).
Our family tree, in which individuals are indexed by $\mathbb Z$, is obtained by declaring that the individual
$f(n)$ is the parent of $n$, $f^2(n)$ is its grandparent, and so on. In terms of interpretation, this requires that we look at the
population dynamics in reverse time. In ``reverse time'', individual $n$ ``dies'' at time $n$
(note that it is the only one to die at that time) and was ``born earlier'', namely, at time $n+a_n$.
Since individual $m$ is born at time $n$ if
$f(n)=m$, the set of children of $n$ is $f^{-1}(n)$, the
set of its grandchildren is $f^{-2}(n)$, and so on. As in the age-dependent branching process, each individual has exactly one parent, but may have no children,
and the death time of an individual coincides with the birth time of its children.
Also notice $f^{-1}(n)\subset (-\infty,n-1]$. That is, as in the natural enumerations
used in branching processes, the children of individual $n$ have ``larger'' indices than that of individual $n$
(recall that individuals are enumerated using their ``death times''). Hence, each individual is born at the death of its parent and dies ``after'' its parent. Figure 2 illustrates the relation between the population process and $T^f$.
\begin{figure}
\center
\includegraphics[width=1\textwidth]{nefig4.pdf}
\caption{\textbf{From the population process to the family tree.} The top figure depicts the $f$ dynamics running on $\mathbb Z$. The curved black lines represent individual lifespans. For example, individual $7$ lives three units of time. The bottom figure depicts the tree representation of the dynamics on $\mathbb Z$, with the red edges representing parenthood. There, $7$ is the child of 10, and has four children: 1,4, 5, and 6. Individual $8$, for example, has no children.}
\end{figure}
However, our age-dependent family tree is far from being that of the age-dependent branching process discussed above.
In particular, there is no first individual: our family tree is {\em eternal} \cite{BOAunimodular}. More importantly, it lacks the independence
properties of branching processes. In particular, the offspring cardinalities of different individuals are dependent.
In the age-dependent branching process described above, if we set $L=1$,
we recover the Bienaym\'e-Galton-Watson process. Despite the fact that the building
block of both our model and the Bienaym\'e-Galton-Watson model is just a sequence of
$\hbox{i.i.d.}$ random variables, the two models are quite different.
In the former, the $\hbox{i.i.d.}$ random variables define the offspring cardinalities. In the latter,
they define the lifespans of individuals. Moreover, as we shall see, our model is
always {\em critical}: the mean offspring cardinality is one for all individuals.
The fact that, when $\mathbb P[a_0=1]>0$, $T^f$ is a unimodular directed tree \cite{BOAunimodular} allows us
to complement the classical queuing and regenerative analysis
by structural observations that we believe to be new:
\begin{enumerate}
\item this tree is two ended;
\item an integer (individual) is either successful or ephemeral depending on whether the number of
its descendants (pre-images by $f$) of all orders is infinite or finite a.s.;
\item the set of successful (resp. ephemeral) integers forms a stationary point process;
\item there is a stationary thinning of the successful point process consisting of individuals
such that all individuals born after one of them are its descendants; these are
called original ancestors;
\item each individual has a finite number of cousins of all orders.
\end{enumerate}
These structural observations are completed by closed form expressions for the
intensities of the point processes in question.
The last interpretation of the family tree pertains to renewal theory.
One can see $n,f(n),f^2(n),\cdots$ as a renewal process on the integers with interarrivals distributed like $a_0$ and starting from time $n$. The graph $T^f=(V^f,E^f)$ can hence be seen as the {\em mesh} of all
such renewal processes.
By mesh, we mean that the renewal process of $n$ merges with that of $m$ at the first time when
the orbits $\{f^p(n)\}_{p\in \Bbb{N}}$ and $\{f^q(m)\}_{q\in \Bbb{N}}$ meet.
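This mesh interpretation can be illustrated numerically. The following is a minimal sketch (an illustration only; the window size and the choice of $a_0$ uniform on $\{1,2,3\}$ are arbitrary): it samples $\hbox{i.i.d.}$ lifespans on a finite window of $\mathbb Z$, defines $f(n)=n+a_n$, and follows the orbits of two starting points until they coalesce.

```python
import random

random.seed(0)

# Toy stand-in for the bi-infinite i.i.d. sequence of lifespans:
# a_n uniform on {1, 2, 3} on a finite window of Z (arbitrary choices).
N = 10_000
a = {n: random.randint(1, 3) for n in range(N)}

def f(n):
    """The dynamics f(n) = n + a_n."""
    return n + a[n]

def merge_point(n, m):
    """First common point of the orbits {f^p(n)}_{p>=0} and {f^q(m)}_{q>=0}."""
    # Advance whichever current point lags behind until the orbits meet;
    # this happens a.s. in finite time since P[a_0 = 1] > 0.
    while n != m:
        if n < m:
            n = f(n)
        else:
            m = f(m)
    return n

print(merge_point(0, 7))  # some integer >= 7 where the two orbits coalesce
```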
\section{Population and queue dynamics}
\label{populationdynamics}
In this section, we study the population process
\begin{align}
\label{individualsalive}
N_n=\#\{\hbox{all $m\in \mathbb Z$~such that } m<n~\hbox{and}~f(m)>n\}+1,\quad n\in \mathbb Z.
\end{align}
Section \ref{generationgeneralcase} introduces definitions and notation. In Section \ref{generalcasemainresults}, we show that the population process is regenerative with independent cycles. In Section \ref{analyticsofN}, we give an explicit formula for the moment generating function of $N_0$ and show that this random variable is always light-tailed, regardless of the distribution of $a_0$. Finally, in Section \ref{geometricmarks}, we work out the case in which $a_0$ follows a geometric distribution and, consequently, the population process is Markovian. The notions and proof techniques in this section are classical, and details are kept to a minimum.
\subsection{Definitions and assumptions}
\label{generationgeneralcase}
Let $(\Omega,\mathcal{F}, \mathbb P,\{\theta_n\}_{n\in\mathbb Z})$ be the underlying probability space supporting all random elements discussed in this paper, endowed with discrete flow. We assume $\mathbb P$ is preserved by $\theta_n$, i.e., for all $n$,
\begin{align}
\label{stationaryPP}
\mathbb P\circ \theta_n^{-1}=\mathbb P.
\end{align}
A random integer-valued discrete sequence $\textbf{W}=\{W_n\}_{n\in \mathbb Z}$ defined on $\Omega$ is compatible with the flow $\{\theta_n\}_{n\in \mathbb Z}$ if $W_n(\omega)=W_0(\theta_n \omega)$ for all $n\in \mathbb Z$. Notice that, given (\ref{stationaryPP}), if a process $\textbf{W}$ is compatible with $\{\theta_n\}_{n\in \mathbb Z}$, then it is strictly stationary. All integer-valued discrete sequences considered here are $\{\theta_n\}_{n\in \mathbb Z}-$compatible (or, for short, stationary).
In particular, since $\textbf{a}:=\{a_n\}_{n\in \mathbb Z}$ is stationary, so is $\textbf{N}:=\{N_n\}_{n\in \mathbb Z}$, assuming the population process starts at $-\infty$.
Consider a stationary integer-valued discrete sequence $\{U_n\}_{n\in \mathbb Z}$
in which $U_n$ equals $0$ or $1$. Let $\{k_n\}_{n\in \mathbb Z}$, with
\begin{align}
\label{ordering}
\cdots<k_{-1}<k_0\leq 0 < k_1 <k_2 <\cdots,
\end{align}
be the sequence of times at which $U_n=1$. A simple stationary point process (henceforth ${s.s.p.p.}$)
on $\mathbb Z$ is then a random counting measure $\Phi(\cdot)=\sum_{n\in \mathbb Z}\delta_{k_n}(\cdot)$,
where $\delta_{k_n}(\cdot)$ is the Dirac measure at $k_n$.
Throughout the document, all ${s.s.p.p.}$s will be assumed to be such that the associated $\Phi$ is a.s. not equal to the
empty measure.
We often identify $\Phi$ with the sequence $\{k_n\}_{n\in \mathbb Z}$, writing $k_n\in \Phi$, whenever $\Phi(\{k_n\})=1$.
The intensity of a ${s.s.p.p.}$, denoted by $\lambda_{\Phi}$, is given by $\mathbb P[0\in \Phi]$.
Let $\Phi$ be a ${s.s.p.p.}$ on $\mathbb Z$ such that $\lambda_{\Phi}>0$ and let $\mathbb P_{\Phi}[\cdot]:=\mathbb P[\cdot|0\in \Phi]$. Then $\mathbb P_{\Phi}$ is the {\em Palm probability} of $\Phi$. We denote by $\e_{\Phi}$ the expectation operator of $\mathbb P_{\Phi}$. Consider the operator $\theta_{k_1}$, i.e.,
\begin{align}
\label{invariantshift}
\Phi\circ \theta_{k_1}:=\{k_{n+1}\}_{n\in \mathbb Z}.
\end{align}
Since $\theta_{k_1}$ is a bijective map on $\Omega_0=\{k_0(\omega)=0\}$, the following holds \cite{heveling2005characterization}.
\begin{lem}
\label{palmpreservation}
The operator $\theta_{k_1}$ preserves the Palm probability $\mathbb P_{\Phi}$.
\end{lem}
A ${s.s.p.p.}$ $\Psi$ on $\mathbb Z$ is an {\em integer-valued renewal process} if $\{k_n-k_{n-1}\}_{n\in \mathbb Z}$ is $\hbox{i.i.d.}$ under $\mathbb P_{\Psi}$.
Finally, we say that a stationary process $\textbf{R}:=\{R_n\}_{n\in \mathbb Z}$ is {\em regenerative} if there exists an integer-valued renewal process $\Psi=\{k_n\}_{n\in \mathbb Z}$, such that, under $\mathbb P_{\Psi}$, $(\{k_n-k_{n-1}\}_{n > j},\{R_n\}_{n\geq k_j})$ is independent of $\{k_n-k_{n-1}\}_{n\leq j}$ for all $j\in \mathbb Z$ and its distribution does not depend on $j$. Moreover, $\textbf{R}$ is {\em regenerative with independent cycles} if $\textbf{R}$ is regenerative and $\{R_{n}\}_{n< 0}$ is independent of $\{R_n\}_{n\geq 0}$ under $\mathbb P_{\Psi}$. We call $\{k_n\}_{n\in \mathbb Z}$ the regeneration points of $\textbf{R}$.
For most of this work, unless otherwise stated, we assume: $\textbf{a}:=\{a_n\}_{n\in \mathbb Z}$ is $\hbox{i.i.d.}$, $\e[a_0]<\infty$, and $\mathbb P[a_0=1]>0$.
\begin{rem}
\label{aismixing}
Since randomness in this model comes from the sequence $\{a_n\}_{n\in \mathbb Z}$ and $\theta_1$ preserves $\mathbb P$, by assuming $\{a_n\}_{n\in \mathbb Z}$ is $\hbox{i.i.d.}$, there is no loss of generality in assuming $(\Omega,\mathcal{F},\mathbb P,\{\theta_n\}_{n\in \mathbb Z})$ is strongly mixing, i.e.,
$$\lim_{n\to \infty} \mathbb P[A\cap \theta_{\pm n} B]=\mathbb P[A]\mathbb P[B]~\forall~A,B\in \mathcal{F}.$$
\end{rem}
\subsection{General case: main results}
\label{generalcasemainresults}
\begin{thm}
\label{originalancestorsthm}
Let $\Psi^o:=\{m\in \mathbb Z: N_m=1\}$. Then, $\Psi^o$ is an integer-valued renewal process with intensity $\lambda^o:=\lambda_{\Psi^o}=\prod_{i=1}^{\infty} \mathbb P[a_0\le i]>0$.
\end{thm}
The atoms of $\Psi^o$ are called \emph{original ancestors} as all individuals born after any of them are necessarily its descendants in the family tree $T^f$ studied in Section \ref{netree}.
\begin{proof}[Proof of Theorem \ref{originalancestorsthm}]
First we prove $\mathbb P[N_0=1]>0$. By (\ref{individualsalive}),
$\mathbb P[N_0=1]=\mathbb P[\cap_{j=1}^{\infty} \{a_{-j}\le j\}]$.
Then, as the sequence $\{a_n\}_{n\in \mathbb Z}$ is \hbox{i.i.d.},
\begin{align*}
\mathbb P[N_0=1]=\prod_{j=1}^{\infty}\mathbb P[a_0\le j].
\end{align*}
Since $\mathbb P[a_0=1]>0$, none of the elements of the above product equals $0$. Hence, we can take logs on both sides to get:
\begin{align*}
\ln(\mathbb P[N_0=1])&=\sum_{j=1}^{\infty}\ln(1-\mathbb P[a_0>j])\geq \sum_{j=1}^{\infty}\ln(1-\mathbb P[a_0\ge j])\\
&\ge
\sum_{j=1}^{j^*-1}
\ln(1-\mathbb P[a_0>j])
-2\sum_{j=j^*}^{\infty} \mathbb P[a_0\ge j]\geq C-2\e[a_0]>-\infty,
\end{align*}
where $C= \sum_{j=1}^{j^*-1} (\ln(1-\mathbb P[a_0>j]) +2 \mathbb P[a_0>j])$.
Here we used the fact that $\mathbb P[a_0\ge j]> \frac 1 2$ for finitely many $j$ to define $j^*$,
the first $j$ such that $\mathbb P[a_0\ge j]\le \frac 1 2$, and
the fact that $-x\leq \ln\left(1-\frac{x}{2}\right)$ if $x\in [0,1]$.
Consequently, $\mathbb P[N_0=1]>0$. Stationarity of $\textbf{N}$ implies, for all $n\in \mathbb Z$, $\mathbb P[N_n=1]=\mathbb P[N_0=1]>0$.
Since $(\Omega,\mathcal{F},\mathbb P,\{\theta_n\}_{n\in \mathbb Z})$ is strongly mixing (Remark \ref{aismixing}), it is ergodic. Therefore, by Birkhoff's pointwise ergodic theorem, for all measurable functions $g:\Omega\to \R^+$ such that $\e[g]<\infty$,
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^{n}g\circ \theta_{\pm i}=\e[g]~~\mathbb P-\hbox{a.s.}
\end{align*}
Let $g=\textbf{1}\{N_0=1\}$. Then,
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^{n}\textbf{1}\{N_0=1\}\circ \theta_{\pm i}=\mathbb P[N_0=1]>0~~\mathbb P-\hbox{a.s.}
\end{align*}
Hence, there exists a subsequence of distinct integers $\Psi^o:=\{k^o_n\}_{n\in \mathbb Z}$, satisfying (\ref{ordering}) such that $N_{k^o_n}=1$ for all $n\in \mathbb Z$. That $\Psi^o$ is a renewal process is proved in Appendix \ref{someproofsa}, Proposition \ref{propositionregardingrenewal}.
\end{proof}
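The product formula for $\lambda^o$ can be checked by simulation. Below is a minimal sketch (an illustration only; the choice of $a_0$ uniform on $\{1,2,3\}$ is arbitrary), which estimates $\mathbb P[N_0=1]=\mathbb P[\cap_{j\ge 1}\{a_{-j}\le j\}]$ by Monte Carlo and compares it with $\prod_{i\ge 1}\mathbb P[a_0\le i]$.

```python
import random

random.seed(1)

# Arbitrary illustrative choice: a_0 uniform on {1, 2, 3}, so that
# P[a_0 <= 1] = 1/3, P[a_0 <= 2] = 2/3 and P[a_0 <= j] = 1 for j >= 3.
# The product prod_{i>=1} P[a_0 <= i] then equals (1/3)*(2/3) = 2/9.
product_formula = (1 / 3) * (2 / 3)

def zero_has_population_one():
    """Sample the event {N_0 = 1} = {a_{-j} <= j for all j >= 1}.

    Since a_0 <= 3 a.s., only j = 1, 2 can violate the constraint.
    """
    return all(random.randint(1, 3) <= j for j in (1, 2))

trials = 200_000
estimate = sum(zero_has_population_one() for _ in range(trials)) / trials
print(product_formula, estimate)  # the two values should be close
```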
In order to show that $\textbf{N}$ is a stationary regenerative process with respect to $\Psi^o$ with independent cycles, we rely on the following lemma.
\begin{lem}
\label{independencefrompastfuture}
Under $\mathbb P_{\Psi^o}$, $\{a_n\}_{n<0}$ is independent of $\{a_n\}_{n\geq 0}$.
\end{lem}
\begin{proof}
Let $g,h:\Omega\to \R^+$ be two measurable and bounded functions. Using the fact that the event $\{k^o_0=0\}$ is equal, by definition, to $\{N_0=1\}=\cap_{i> 0} \{a_{-i}\leq i\}$, we have
\begin{align*}
\e_{\Psi^o}[g(\{a_n\}_{n< 0})h(\{a_n\}_{n\geq 0})]&=\e[g(\{a_n\}_{n< 0})h(\{a_n\}_{n\geq 0})|k^o_0=0]\\
&=\e[g(\{a_n\}_{n< 0})h(\{a_n\}_{n\geq 0})|\cap_{i> 0} \{a_{-i}\leq i\}].
\end{align*}
Now, as $\{a_n\}_{n\geq 0}$ is independent of $\{a_n\}_{n< 0}$ under $\mathbb P$,
\begin{align*}
& \e[g(\{a_n\}_{n< 0})h(\{a_n\}_{n\geq 0})|\cap_{i> 0} \{a_{-i}\leq i\}]\\
&=\e[g(\{a_n\}_{n< 0})|\cap_{i> 0} \{a_{-i}\leq i\}]\e[h(\{a_n\}_{n\geq 0})|\cap_{i> 0} \{a_{-i}\leq i\}]\\
&=\e_{\Psi^o}[g(\{a_n\}_{n< 0})]\e_{\Psi^o}[h(\{a_n\}_{n\geq 0})],
\end{align*}
completing the proof.
\end{proof}
\begin{cor}
The population process $\textbf{N}$ is a stationary regenerative process with respect to $\Psi^o$ with independent cycles.
\end{cor}
\begin{proof}
Given $k^o_0=0$ (i.e., under $\mathbb P_{\Psi^o}$), $N_0=1$ is a constant. Then \break $(\{k^o_n-k^o_{n-1}\}_{n>0},\{N_n\}_{n\geq 0})$ is a function of $\{a_n\}_{n\geq 0}$, while $\{k^o_n\}_{n\leq 0}$ is a function of $\{a_n\}_{n<0}$. Then, by Lemma \ref{independencefrompastfuture},
$(\{k^o_n-k^o_{n-1}\}_{n > j},\{N_n\}_{n\geq k^o_j})$ is independent of $\{k^o_n-k^o_{n-1}\}_{n\leq j}$ for $j=0$. Following the same reasoning as in Lemma \ref{independencefrompastfuture}, $\{a_n\}_{n\geq k^o_j}$ is independent of $\{a_n\}_{n<k^o_j}$ for all $j\in \mathbb Z$. It follows that $(\{k^o_n-k^o_{n-1}\}_{n > j},\{N_n\}_{n\geq k^o_j})$ is independent of $\{k^o_n-k^o_{n-1}\}_{n\leq j}$ for all $j$. We conclude $\{N_n\}_{n\in \mathbb Z}$ is a regenerative process.
Independence of the random vectors $\{a_n\}_{k^o_j<n\leq k^o_{j+1}}$, also a consequence of Lemma \ref{independencefrompastfuture}, implies $\{N_n\}_{n\in \mathbb Z}$ is regenerative with independent cycles.
\end{proof}
\begin{rem}
When we do not assume that $\mathbb P[a_0=1]>0$, let $\underline{m}>1$ be the smallest integer such that $\mathbb P[a_0=\underline{m}]>0$. Then, $\mathbb P[N_0<\underline{m}]=0$, while $\mathbb P[N_0=\underline{m}]=\prod_{j=\underline{m}}^{\infty}\mathbb P[a_0\le j]>0$. Proceeding as in the proof of Theorem \ref{originalancestorsthm}, we conclude there exists an integer-valued renewal process $\tilde{\Phi}$ such that $k_n\in \tilde{\Phi}$ if and only if $N_{k_n}=\underline{m}.$
\end{rem}
\subsection{Analytical properties of $N_n$}
\label{analyticsofN}
We now turn to some analytical properties of $\textbf{N}$. Some of our results are adapted from the literature on GI/GI/$\infty$ queues, i.e., queues with an infinite number of servers and $\hbox{i.i.d.}$ interarrival and service times. Proofs can be found in Appendix \ref{someproofsa}. The results in this subsection do not depend on the assumption $\mathbb P[a_0=1]>0$.
The following proposition can be extended to the case in which $\{a_n\}_{n\in \mathbb Z}$ is stationary rather than $\hbox{i.i.d.}$.
\begin{prop}
\label{proppopulation1}
For all $n\in \mathbb Z$, $\e[a_0]=\e[N_n]$.
\end{prop}
Proposition \ref{proppopulation1} is Little's law for the infinite-server queue: the expected number of individuals in service equals the arrival rate (here, one arrival per unit of time a.s.) times the expected service time (see, e.g., \cite{asmussen2003applied}). This shows the mean number of customers in steady state is finite if and only if the expected service time is finite.
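Since, by (\ref{individualsalive}), $N_0=1+\sum_{j\geq 1}\textbf{1}\{a_{-j}>j\}$ with independent summands, Proposition \ref{proppopulation1} is easy to check numerically. The sketch below is an illustration only; the choice of $a_0$ uniform on $\{1,\ldots,5\}$, so that $\e[a_0]=3$, is arbitrary.

```python
import random

random.seed(2)

MAX_LIFE = 5  # a_0 uniform on {1, ..., 5}, so E[a_0] = 3 (arbitrary choice)

def sample_N0():
    """Sample N_0 = 1 + sum_{j>=1} 1{a_{-j} > j}.

    Since a_0 <= MAX_LIFE a.s., only j < MAX_LIFE can contribute.
    """
    return 1 + sum(random.randint(1, MAX_LIFE) > j for j in range(1, MAX_LIFE))

trials = 200_000
mean_N0 = sum(sample_N0() for _ in range(trials)) / trials
print(mean_N0)  # should be close to E[a_0] = 3
```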
A general formula for the moment generating function of the number of customers in steady state in a GI/GI/$\infty$ queue can be found in \cite{yucesanrare}. We limit ourselves to showing that $N_0$ is a light-tailed random variable, regardless of the distribution of $a_0$. This property has the following intuitive basis: for the value of $N_0$ to be large, several realizations of $\{a_n\}_{n<1}$ must be large as well. Another way to build intuition for this result is to let the interarrival times be exponentially distributed with parameter $\lambda$. In that case, one can show that $N_0$ has a Poisson distribution with parameter $\lambda \e[a_0]$, which is light-tailed regardless of the tail of $a_0$.
\begin{prop}
\label{propmgfMn}
The moment generating function of $N_n$ is given by
\begin{align}
\label{mgfpopulationlookatit}
\e[e^{t N_0}]=e^{t}\prod_{i=1}^{\infty}\left(e^{t}\mathbb P[a_0>i]+\mathbb P[a_0\leq i]\right)~~\forall~t\in \R.
\end{align}
Moreover, $\e[e^{t N_0}]<\infty$ for all $t\in \R$.
\end{prop}
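Formula (\ref{mgfpopulationlookatit}) can likewise be verified numerically, using the representation $N_0=1+\sum_{j\geq 1}\textbf{1}\{a_{-j}>j\}$ implied by (\ref{individualsalive}). The sketch below is an illustration only; $t=0.3$ and $a_0$ uniform on $\{1,\ldots,5\}$ are arbitrary choices.

```python
import math
import random

random.seed(3)

# Arbitrary illustrative choices: t = 0.3 and a_0 uniform on {1,...,5},
# so that P[a_0 > i] = (5 - i)/5 for 0 <= i <= 5.
t = 0.3

# Product formula: E[e^{tN_0}] = e^t * prod_{i>=1} (e^t P[a_0>i] + P[a_0<=i]);
# the factors with i >= 5 equal 1 here.
mgf_formula = math.exp(t)
for i in range(1, 5):
    p_gt = (5 - i) / 5
    mgf_formula *= math.exp(t) * p_gt + (1 - p_gt)

def sample_N0():
    # N_0 = 1 + sum_{j>=1} 1{a_{-j} > j}; only j <= 4 can contribute.
    return 1 + sum(random.randint(1, 5) > j for j in range(1, 5))

trials = 200_000
mgf_mc = sum(math.exp(t * sample_N0()) for _ in range(trials)) / trials
print(mgf_formula, mgf_mc)  # the two values should be close
```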
\subsection{Geometric marks}
\label{geometricmarks}
In general, the process $\textbf{N}$ is not Markovian. However, it is when the marks are geometrically distributed. Let us assume $a_0$ is geometrically distributed supported on $\Bbb{N}_+$ with parameter $s$. Let $r=1-s$. Then, due to the memoryless property, $\textbf{N}$ is a time-homogeneous, aperiodic, and irreducible Markov chain with state space $\{1,2,3,\ldots\}$, whose transition probabilities are given by
\begin{align}
\label{populationmarkovchain}
\mathbb P[N_m=n|N_{m-1}=k]&=\left(\begin{array}{c} k \\ n-1\end{array}\right) r^{n-1} s^{k-n+1},~1\leq n\leq k+1.
\end{align}
\begin{prop}
In the geometric case, the probability generating function $G$ of $N_0$ in steady state obeys the following functional relation
\begin{align}
\label{mgfpopulation}
G(z)=zG(rz+s).
\end{align}
\end{prop}
\begin{proof}
Let $\{\tilde{N}_m\}_{m\geq 0}$ be the population process starting at $0$ and let $G_m(z)$, $z\in [0,1]$, be the probability generating function of $\tilde{N}_m$. Using (\ref{populationmarkovchain}),
\begin{align*}
\mathbb{E}[z^{\tilde{N}_m}|\tilde{N}_{m-1}=k]&=\sum_{n=1}^{k+1}\left(\begin{array}{c} k \\ n-1\end{array}\right) r^{n-1} s^{k-n+1}z^n=z\sum_{n'=0}^{k}\left(\begin{array}{c} k \\ n'\end{array}\right) r^{n'} s^{k-n'}z^{n'},
\end{align*}
where $n'=n-1$.
So,
\begin{align*}
\mathbb{E}[z^{\tilde{N}_m}|\tilde{N}_{m-1}=k]&=z(rz+s)^k.
\end{align*}
Hence,
\begin{align*}
G_m(z)&=\sum_{k=1}^{m} \mathbb P[\tilde{N}_{m-1}=k]z(rz+s)^k=zG_{m-1}(rz+s).
\end{align*}
Letting $m\to \infty$ we get (\ref{mgfpopulation}).
\end{proof}
Using (\ref{mgfpopulation}) we can easily compute the moments of $N_0$. For example, by differentiating both sides and setting $z=1$, we get
\begin{align}
\e[N_0]&=\frac{1}{s},
\end{align}
which is the mean of $a_0$, as expected. Proceeding the same way, we get $G''(1)=\frac{2r}{s(1-r^2)}$, so that the second moment is given by
\begin{align}
\e[N_0^2]&=G''(1)+G'(1)=\frac{2r}{s(1-r^2)}+\frac{1}{s}.
\end{align}
\begin{rem}
By (\ref{mgfpopulationlookatit}), and noticing $\mathbb P[a_0>i]=r^i$, we get
\begin{align}
G(z)&=z\prod_{i=1}^{\infty}(zr^i+1-r^i).
\end{align}
By Lemma \ref{populatioauxlem1} in Appendix \ref{someproofsa}, $\prod_{i=1}^{\infty}(zr^i+1-r^i)$ converges for all $z\in \R$ as $\sum_{i=1}^{\infty} r^i<\infty$.
\end{rem}
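The product above is precisely the probability generating function of $1+\sum_{i\geq 1}B_i$, where the $B_i$ are independent Bernoulli$(r^i)$ random variables; this gives a direct way to sample $N_0$ and check the moment computations. Below is a minimal sketch (an illustration; $s=0.4$ is an arbitrary choice), which compares the empirical first and second moments with the values obtained by differentiating $G$ (note $\e[N_0^2]=G''(1)+G'(1)$).

```python
import random

random.seed(4)

# Geometric lifespans on {1, 2, ...} with parameter s: P[a_0 > j] = r**j.
s = 0.4  # arbitrary illustrative choice
r = 1 - s

def sample_N0(cutoff=60):
    """Sample N_0 = 1 + sum_{j>=1} B_j with independent B_j ~ Bernoulli(r**j).

    The tail beyond the cutoff is negligible (sum_{j>=cutoff} r**j ~ 1e-13).
    """
    return 1 + sum(random.random() < r**j for j in range(1, cutoff))

trials = 300_000
samples = [sample_N0() for _ in range(trials)]
mean = sum(samples) / trials
second = sum(x * x for x in samples) / trials

# Values obtained by differentiating G(z) = zG(rz + s) at z = 1:
# G'(1) = 1/s and E[N_0^2] = G''(1) + G'(1) = 2r/(s(1 - r^2)) + 1/s.
print(mean, 1 / s)
print(second, 2 * r / (s * (1 - r**2)) + 1 / s)
```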
\section{The Eternal Family Tree}
\label{netree}
In this section we study the directed graph $T^f=(V^f,E^f)$, where $V^f=\mathbb Z$ and $E^f=\{(n,f(n)):n\in \mathbb Z\}$. In Section \ref{globalpropertiesoftf}, we show $T^f$ is an infinite tree containing a unique bi-infinite path. The indices of the nodes on this bi-infinite path form a ${s.s.p.p.}$ on $\mathbb Z$ with positive intensity. We derive this result by exploiting the fact that $T^f$ is an \textit{Eternal Family Tree}, i.e., the out-degree of every vertex is exactly one \cite{BOAunimodular}.
In the following two sections, we delve deeper into the genealogy of $T^f$.
In Section \ref{genealogyofTf}, we give the basic properties of certain ${s.s.p.p.}$s derived from $T^f$. First, the process of integers forming the bi-infinite path: each integer in it is called \emph{successful}, since its lineage (the set of its descendants) has infinite cardinality a.s. Second, we consider the process coming from the complement of the bi-infinite path: each integer in it is called {\em ephemeral}, since its lineage is finite a.s. Third, we consider the process of original ancestors, defined in Theorem \ref{originalancestorsthm}, which is a subprocess of the first.
In Section \ref{directephemeralsubsubsection}, we look at the set of direct ephemeral descendants of a successful integer $n$, whose path to $n$ on $T^f$ consists only of ephemerals. The conservation law that defines unimodular networks allows us to establish probabilistic properties of the set of direct ephemeral descendants and the set of cousins of a typical successful node.
\subsection{The global properties of $T^f$}
\label{globalpropertiesoftf}
\begin{thm}
\label{genealogythm}
The directed random graph $T^f$ is a tree with a unique bi-infinite path for which the corresponding nodes, when mapped to $\mathbb Z$, form a ${s.s.p.p.}$ with positive intensity.
\end{thm}
In order to prove Theorem \ref{genealogythm}, we resort to recent results on dynamics on unimodular networks \cite{BOAunimodular}. In Appendix \ref{Unimodulardynamics}, we present a brief review of definition and properties of unimodular networks that we use. First, we notice the directed graph $G=(V,E)$, where $V=\mathbb Z$ and $E=\{(n,n+1):n\in \mathbb Z\}$ in which each node $n$ has mark $a_n$, rooted at $0$, is a locally finite unimodular random network. Second, we notice $f$ is a translation-invariant dynamics on this network, more precisely, a \emph{covariant vertex-shift} (see Appendix \ref{Unimodulardynamics}, Definition \ref{covariantvertexshift}).
Define the \emph{connected component} of $n$ as
\begin{align}
\label{connectedcomponent}
C(n)=\{m\in \mathbb Z~\hbox{s.t.}~\exists~i,j\in \Bbb{N}~\hbox{with}~f^i(n)=f^j(m)\}.
\end{align}
\begin{prop}
\label{uniquecomponentproposition}
The directed random graph $T^f$ has only one connected component.
\end{prop}
\begin{proof}
Consider the process of original ancestors, $\Psi^o$, as defined in Theorem \ref{originalancestorsthm}, and let $m\in \Psi^o$. Then, for every $n<m$, $n\in C(m)$. Hence, the result follows from the fact that $\Psi^o$ is a ${s.s.p.p.}$ consisting of an infinite number of points $\hbox{a.s.}$
\end{proof}
Let
\begin{align}
\label{descendants}
D(n)=\{m\in \mathbb Z~\hbox{s.t.}~\exists~j\in \Bbb{N}~\hbox{with}~f^j(m)=n\}
\end{align}
denote the set of {\em descendants} of $n$. Also let,
\begin{align}
\label{foilofn}
L(n)=\{m\in \mathbb Z~\hbox{s.t.}~\exists~j\in \Bbb{N}~\hbox{with}~f^j(m)=f^j(n)\}
\end{align}
denote the set of {\em cousins} of $n$ of all degrees (this set is referred to as the {\em foil} of $n$ in \cite{BOAunimodular}).
We further subdivide $D(n)$ and $L(n)$ into the sets
\begin{align}
D_i(n)=\{m\in \mathbb Z~\hbox{s.t.}~f^i(m)=n\},~i\geq 0
\end{align}
and
\begin{align}
\label{foilofol}
L_i(n)=\{m\in \mathbb Z~\hbox{s.t.}~f^i(m)=f^i(n)\},~i\geq 0.
\end{align}
So $D_i(n)$ is the set of descendants of degree $i$ of $n$ and $L_i(n)$ the set of cousins of degree $i$ of $n$.
Lower case letters denote the cardinalities of the above sets. So $c(n)$ is the cardinality of $C(n)$, $d_i(n)$ is the cardinality of $D_i(n)$, and so on. Moreover, $d_{\infty}(n)$ denotes the weak limit of $d_i(n)$ (if such a limit exists).
In \cite{BOAunimodular}, it is shown that each connected component of a graph generated by the action of a covariant vertex-shift on a unimodular network, $C(n)$, falls within one of the following three categories:
\begin{enumerate}
\item Class \textbf{F}/\textbf{F}: $c(n)<\infty$ and for all $v\in C(n)$, $l(v)<\infty$. In this case $C(n)$ has a unique cycle.
\item Class \textbf{I}/\textbf{F}: $c(n)=\infty$ and for all $v\in C(n)$, $l(v)<\infty$. In this case, $C(n)$ is a tree containing a unique bi-infinite path. Moreover, the bi-infinite path forms a ${s.s.p.p.}$ on $\mathbb Z$ with positive intensity.
\item Class \textbf{I}/\textbf{I}: $c(n)=\infty$ and for all $v\in C(n)$, $l(v)=\infty$. In this case, $C(n)$ is also a tree such that $d_{\infty}(v)=0$ for all $v\in C(n)$.
\end{enumerate}
Notice that the dynamics $f$ precludes the connected component $\mathbb Z$ from being of class \textbf{F}/\textbf{F}.
Theorem \ref{genealogythm} follows from proving that, in our case, $C(0)$ is of class \textbf{I}/\textbf{F}. We rely on the following lemma derived from the results found in \cite{BOAunimodular}.
\begin{lem}
\label{ClassificationLemma}
A connected component $C(m)$ is of class \textbf{I}/\textbf{I} if and only if for all $n\in C(m)$, $\mathbb P[d(n)=\infty]=0$.
\end{lem}
\begin{proof}[Proof of Theorem \ref{genealogythm}]
For $k\in \Psi^o$, we have $d(k)=\infty$ a.s., and consequently, $C(k)$ is of class \textbf{I}/\textbf{F}. Since there is a unique component, the result follows.
\end{proof}
\begin{rem}
Coming back to the case where it is not assumed that $\mathbb P[a_0=1]>0$, let $\underline{m}>1$ be the smallest integer such that $\mathbb P[a_0=\underline{m}]>0$.
Define the sets
\begin{align*}
Y(i)&=\{m\in \mathbb Z:~\mathbb P[m\in C(i)]>0\}~\hbox{for $i\in \{0,\ldots,\underline{m}-1\}$},
\end{align*}
and notice $\{Y(i)\}_{i=0}^{\underline{m}-1}$ forms a partition of $\mathbb Z$. Let
\begin{align*}
N^i_n=\#\{m<n,~m\in Y(i):~f(m)>n\}+1.
\end{align*}
Then, using the same arguments as in the proof of Theorem \ref{originalancestorsthm}, one can show that, for each $i\in \{0,\ldots,\underline{m}-1\}$, there exists a ${s.s.p.p}$, $\Psi^o_i$, such that $k \in \Psi^o_i$ if and only if $N^i_k=1$. It follows that $Y(i)=C(i)$. Hence, $T^f$ has $\underline{m}$ connected components. By translation invariance, all connected components are of class \textbf{I}/\textbf{F}. In this case, $T^f$ is a forest.
\end{rem}
\begin{defi}[Diagonally invariant functions]
\label{translationinvariant}
A measurable function $g: \Omega \times \mathbb Z\times \mathbb Z\to \R$ is said to be diagonally invariant if
$g(\theta_k(\omega),m,n)=g(\omega,m+k,n+k)$ for all $k,m,n\in \mathbb Z$.
\end{defi}
Finally, $T^f$ itself is a unimodular network \cite{BOAunimodular}. Unimodularity is characterized by the fact that such a network obeys {\em the mass transport principle} (see Appendix \ref{Unimodulardynamics} and \cite{BOAunimodular}). In our setting, the mass transport principle takes the following form, recalling $0$ is by convention the root of $T^f$: for all diagonally invariant functions $h$ (Definition \ref{translationinvariant}),
\begin{align}
\label{themtp}
\e\left[\sum_{n\in \mathbb Z} h(n,0)\right]=\e\left[\sum_{n\in \mathbb Z} h(0,n)\right].
\end{align}
\subsection{Successful and ephemeral individuals, and original ancestors}
\label{genealogyofTf}
From the analysis of the population process and the shape of $T^f$, we learned that $f$ defines three ${s.s.p.p.}$s on $\mathbb Z$ related to the genealogy of $T^f$:
\begin{enumerate}
\item $\Phi^s$: the set of successful individuals, consisting of individuals $n\in \mathbb Z$ having an infinite number of descendants in $T^f$.
\item $\Psi^o$: the set of original ancestors (defined in Section \ref{populationdynamics}), consisting of all individuals $n\in \mathbb Z$ such that for all $m<n$, $m$ is a descendant of $n$ in $T^f$. Clearly, $\Psi^o\subset \Phi^s$.
\item $\Phi^e$: the set of ephemeral individuals, consisting of individuals $n\in \mathbb Z$ which have a finite number of descendants in $T^f$. Clearly, $\Phi^e\cup \Phi^s=\mathbb Z$.
\end{enumerate}
We now look at the basic properties of these processes. In what follows we let $\e^s:=\e_{\Phi^s}$, i.e., $\e^s$ is the expectation operator of the Palm probability of $\Phi^s$. In the same vein, $\e^e:=\e_{\Phi^e}$, and $\e^o:=\e_{\Psi^o}$.
\begin{prop}
Let $\lambda^s$ be the intensity of $\Phi^s$. Then,
\begin{align}
\label{lambdas}
\lambda^s=\frac{1}{\e[a_0]}.
\end{align}
It follows that the intensity of $\Phi^e$, $\lambda^e$, equals $1-\frac{1}{\e[a_0]}$.
\end{prop}
\begin{proof}
Let $S_0=0$ and $S_n=a_1+a_2+\cdots+a_n$. The associated renewal sequence $\{u_k\}_{k\geq 0}$ is defined as:
\begin{align*}
u_k=\mathbb P[S_n=k~\hbox{for some $n\geq 0$}],
\end{align*}
so $u_k$ is the probability that $k$ is hit at some renewal epoch $S_n$.
Then, from asymptotic renewal theory (see, e.g., \cite{asmussen2003applied}), since the distribution of $a_0$ is aperiodic (as $\mathbb P[a_0=1]>0$), $u_k\to \frac{1}{\e[a_0]}$ as $k\to \infty$. As $\lim_{k\to \infty} u_k=\mathbb P[0\in \Phi^s]=\lambda^s$ and $\lambda^s+\lambda^e=1$, the result follows.
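The convergence of the renewal sequence used above can be observed directly: $u_k$ satisfies the recursion $u_0=1$, $u_k=\sum_{j\geq 1}\mathbb P[a_0=j]u_{k-j}$. The sketch below is an illustration only; $a_0$ uniform on $\{1,2,3\}$, hence aperiodic with $\e[a_0]=2$, is an arbitrary choice.

```python
# Exact computation of the renewal sequence u_k for a_0 uniform on {1, 2, 3}
# (an arbitrary aperiodic choice with E[a_0] = 2):
# u_0 = 1 and u_k = sum_j P[a_0 = j] * u_{k-j}.
p = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}
expected_a = sum(j * q for j, q in p.items())  # E[a_0] = 2

u = [1.0]
for k in range(1, 60):
    u.append(sum(q * u[k - j] for j, q in p.items() if j <= k))

# Aperiodicity (P[a_0 = 1] > 0) gives u_k -> 1/E[a_0] = lambda^s = 0.5.
print(u[-1])
```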
\end{proof}
\begin{rem}
As $\prod_{i=1}^{\infty}\mathbb P[a_0\leq i]\le \frac{1}{\e[a_0]}$, $\lambda^o\le \lambda^s$, as expected.
\end{rem}
We know from \cite{BOAunimodular} that $\e[d_n(0)]=1$, i.e., the expected number of descendants of all degrees of a typical integer is one. This result follows from the mass transport principle. The process of successful individuals is locally supercritical, while the process of ephemeral individuals is locally subcritical, as the next proposition shows.
\begin{prop}
Assume $\mathbb P[a_0=1]\in (0,1)$. Then, for all $n\geq 1$, $\e^s[d_n(0)]>1$, while $\e^e[d_n(0)]<1$.
\end{prop}
\begin{proof}
By the law of total probability and the definition of $\mathbb P^s$ and $\mathbb P^e$,
\begin{align*}
\e^s[d_n(0)]\lambda^s+\e^e[d_n(0)]\lambda^e=\e[d_n(0)]=1.
\end{align*}
Since a successful individual has at least one descendant of degree $n$ $\hbox{a.s.}$, $\mathbb P[a_0=1]<1$, and $\lambda^e=1-\lambda^s$,
$\e^s[d_n(0)]>1$ and $\e^e[d_n(0)]<1$.
\end{proof}
\subsection{Cousins and direct ephemeral descendants}
\label{directephemeralsubsubsection}
Given a successful node $n$, we say that an ephemeral individual $m$ is a {\em direct ephemeral descendant} of $n$ if $n$ is the first successful individual in the ancestry lineage of $m$. The set of direct ephemeral descendants of $n$ is hence:
\begin{align*}
D^e(n)=\{m\in D(n)\cap \Phi^e:\hbox{for the smallest $k>0$ s.t. $f^k(m)\in \Phi^s$, $f^k(m)=n$}\},
\end{align*}
where $D(n)$ is the set of descendants of all degrees of $n$ (see Equation (\ref{descendants})). By Theorem \ref{genealogythm}, the cardinality of $D^e(n)$, denoted by $d^e(n)$, is finite. Moreover, for $m\neq n\in \Phi^s$, $D^e(n)\cap D^e(m)=\emptyset$.
\begin{defi}[Direct ephemeral descendants partition]
Let $$\tilde{D}^e(n):=D^e(n)\cup \{n\}$$ be the direct ephemeral descendant tree rooted at $n\in \Phi^s$.
The {\em direct ephemeral descendant partition} is \begin{align}\mathcal{P}^D:=\{\tilde{D}^e(n):{n\in \Phi^s}\}.\end{align}
\end{defi}
We notice that, by convention, any individual $n$ is a cousin of itself (i.e., $n$ is a $0-$degree cousin of itself). Moreover, if $m\neq n$ both belong to $\Phi^s$, then $L(n)\cap L(m)=\emptyset$, as either $m$ is a descendant or an ancestor of $n$. In other words, for $n\in \Phi^s$, $L(n)\backslash \{n\}\subset \Phi^e$. Hence, we get the following partition.
\begin{defi}[Successful cousin partition]
The cousin partition of $\mathbb Z$ is \begin{align}\mathcal{P}^L:=\{L(n):{n\in \Phi^s}\}.\end{align}
\end{defi}
\begin{figure}
\center
\includegraphics[width=1\textwidth]{nefig5.pdf}
\caption{\textbf{The direct ephemeral descendants and the cousins of 0.} Above we have a realization of $T^f$ and below the corresponding integers on $\mathbb Z$. Here, -3, 0, 3, and 6 belong to the bi-infinite path (denoted by circles with pink boundaries). The yellow individuals are the direct ephemeral descendants of $0$, while the blue ones are cousins. Individual $0$ has two ephemeral children (nodes -1 and -2), one successful (node -3), and one ephemeral grandchild (node -5). It has a first degree cousin (node 2) and two second degree cousins (nodes -6 and 4). While any descendant of $0$ must be to the left of it on $\mathbb Z$, cousins can be either to the left or right. }
\end{figure}
Figure 3 illustrates $\tilde{D}^e(0)$ and $L(0)$. For all $n\in \Phi^s$ and $j>0$, let $d^e_j(n)=\#\{D^e(n)\cap D_j(n)\}$ be the number of direct ephemeral descendants of degree $j$ of $n$. By construction, the following equality holds for all $j>0$ $\mathbb P^s-\hbox{a.s.}$ (see Figure 4):
\begin{align}
\label{cousintreerelationship1}
l_j(0)&=d^e_j(k^s_j),
\end{align}
so that
\begin{align}
\label{cousintreerelationship2}
l(0)&=\sum_{j=1}^{\infty}d^e_j(k^s_j)+1.
\end{align}
\begin{figure}
\center
\includegraphics[width=0.7\textwidth]{nefig6.pdf}
\caption{\textbf{Cousins and the direct ephemeral descendants trees:} The integers $\{k^s_i\}_{i=1}^{4}$ represent the successful individuals. The color of the boundary of the circles indicates the direct ephemeral descendant tree an individual belongs to. For example, all individuals represented by blue boundary circles belong to the direct ephemeral descendant tree of $k^s_4$. The notation $I_{(n,m)}$ reads ``individual $I$ of the $n^{th}$ layer of the direct ephemeral descendant tree of $k^s_m$''. For example, $2_{(2,4)}$ is the second individual in the second layer of the direct ephemeral descendant tree of $k^s_4$. Each colored box contains the cousins of $k^s_0$ of different degrees. The blue box contains the first-degree, the yellow box contains the second-degree, and the green box contains the fourth-degree cousins (there are no third-degree cousins). Equivalently, the blue box contains all elements of the first layer of the direct ephemeral descendant tree of $k^s_1$, the yellow box contains all elements of the second layer of the direct ephemeral descendant tree of $k^s_2$, and the green box contains all elements of the fourth layer of the direct ephemeral descendant tree of $k^s_4$. As the direct ephemeral descendants tree of $k^s_3$ has no third layer, $k^s_0$ has no third-degree cousins. }
\end{figure}
\begin{prop}
\label{immortalprop1}
For all $j\geq 1$ and $q\ge 0$, $\mathbb P^s[d^e_j(0)=q]=\mathbb P^s[l_j(0)=q]$.
\end{prop}
\begin{proof}
From (\ref{cousintreerelationship1}), for all $j\geq 1$ and $q\ge 0$,
\begin{align}
\label{immortalprop1eq1}
\mathbb P^s[l_j(0)=q]=\mathbb P^s[d^e_j(k^s_j)=q].
\end{align}
As $\theta_{k^s_j}$ preserves $\mathbb P^s$ (Lemma \ref{palmpreservation}),
\begin{align}
\label{immortalprop1eq2}
\mathbb P^s[d^e_j(k^s_j)=q]&=\mathbb P^s[d^e_j(0)=q],\quad\forall~q\geq 0.
\end{align}
We get the result by combining Equations (\ref{immortalprop1eq1}) and (\ref{immortalprop1eq2}).
\end{proof}
\begin{prop}
\label{masstpdirectdescendantsandfoil}
Given $0\in \Phi^e$, let $n^{(d)}$ be the unique random successful individual such that $0\in \tilde{D}^e(n^{(d)})$. In the same way, let $n^{(l)}$ be the unique successful individual such that $0\in L(n^{(l)})$. Then, for any diagonally invariant function $g$ (Definition \ref{translationinvariant}),
\begin{align}
\label{mtpdirectdescendants}
\lambda^s\e^s\left[\sum_{n\in \tilde{D}^e(0)}g(0,n)\right]&=\lambda^s\e^s[g(0,0)]+\lambda^e\e^e[g(n^{(d)},0)]
\end{align}
and
\begin{align}
\label{mtpfoildirectdescendants}
\lambda^s\e^s\left[\sum_{n\in L(0)}g(0,n)\right]&=\lambda^s\e^s[g(0,0)]+\lambda^e\e^e[g(n^{(l)},0)].
\end{align}
\end{prop}
\begin{proof}
Equation (\ref{mtpdirectdescendants}) follows from applying the mass transport principle to the function
\begin{align*}
h_1(\omega,0,n):=\textbf{1}\{0\in \Phi^s(\omega)\}\textbf{1}\{n\in \tilde{D}^e(0)(\omega)\}g(\omega,0,n),
\end{align*}
while (\ref{mtpfoildirectdescendants}) follows from applying the mass transport principle to the function
\begin{align*}
h_2(\omega,0,n):=\textbf{1}\{0\in \Phi^s(\omega)\}\textbf{1}\{n\in L(0)(\omega)\}g(\omega,0,n).
\end{align*}
\end{proof}
\begin{cor}
\label{cormtp}
The following holds:
\begin{align}
\label{mtpsamemean}
\e^s[\tilde{d}^e(0)]=\e^s[l(0)]=\e[a_0],
\end{align} where $\tilde{d}^e(0)$ is the cardinality of $\tilde{D}^e(0)$. Moreover,
\begin{align}
\label{mtpmean2eq}
\frac{\e^s\left[\sum_{n\in \tilde{D}^e(0)}|n|\right]}{\e^e[|n^{(d)}|]}=\frac{\e^s\left[\sum_{n\in L(0)}|n|\right]}{\e^e[|n^{(l)}|]}=\e[a_0]-1.
\end{align}
\end{cor}
\begin{proof}
The results follow from choosing particular $g(0,n)$ in Proposition \ref{masstpdirectdescendantsandfoil}.
First, let $g(0,n)\equiv 1$. Then Equation (\ref{mtpsamemean}) holds as $\lambda^s+\lambda^e=1$ and $\lambda^s=\frac{1}{\e[a_0]}.$
Next, set $g(0,n)=|n|$. Again, using Equation (\ref{mtpdirectdescendants}),
\begin{align*}
\lambda^s\e^s\left[\sum_{n\in \tilde{D}^e(0)}|n|\right]&=\lambda^s\e^s[0]+\lambda^e\e^e[|n^{(d)}|]=\lambda^e\e^e[|n^{(d)}|].
\end{align*}
Hence,
\begin{align*}
\e^e[|n^{(d)}|]&=\left(\frac{\e[a_0]}{\e[a_0]-1}\right)\frac{1}{\e[a_0]}\e^s\left[\sum_{n\in \tilde{D}^e(0)}|n|\right]\\
&=\frac{\e^s\left[\sum_{n\in \tilde{D}^e(0)}|n|\right]}{\e[a_0]-1}.
\end{align*}
Following the same steps using (\ref{mtpfoildirectdescendants}), we recover Equation (\ref{mtpmean2eq}).
\end{proof}
\section{Final remarks}
We close with a few comments on the population dynamics interpretation
of the model. Our model describes a critical
population dynamics. In branching processes, the critical case leads to extinction unless there is no variability at all. In contrast, our model exhibits no extinction, although the population comes infinitely often close to extinction, as the original ancestor point
process shows. In this connection, it is interesting to note that there
is genetic and archaeological evidence that the human population was close to extinction several times
in the distant past \cite{article2, article1}.
\section{Introduction}
In uniform random permutations, long cycles occupy almost all the available space. Indeed, it is a standard
textbook exercise to show that in a permutation of length $n$, the probability to find an index $i$ in a cycle
of length $k$ is equal to $1/n$, which in turn means that cycles of a length below volume order play no role
asymptotically as $n \to \infty$. Of course, much more is known about uniform (and Ewens) random permutations,
including the precise distribution of long and short cycles. We refer to
\cite{ABT02} and the references therein.
It is interesting to see how the behaviour of random permutations changes when the uniform measure is changed in
a way that favours short cycles. Various such models have been studied in recent years. Many of them are
motivated by the model of spatial random permutations \cite{BeUe09}, which by its close connections to Bose-Einstein condensation \cite{Ue06} has a significant physical relevance. In this model, a spatial
structure is superimposed on the permutations, and the importance of that spatial structure is measured by an
order parameter which physically is the temperature. It is conjectured that this order parameter mediates a
phase transition between a regime of only short cycles and a regime of coexistence of long and short cycles.
Despite some successes in the explicitly solvable annealed case without interaction between different cycles \cite{BeUe10},
and significant recent progress (using the method of reflection positivity) in a closely
related model with such interaction \cite{LeTa19, Ta19}, many of the most relevant questions
in spatial random permutations remain to be answered.
A somewhat more direct and in general easier to analyse way to suppress long cycles is to introduce
cycle weights or hard constraints on cycle numbers. Cycle weights appear in an (uncontrolled)
approximation of the interacting Bose gas by a variant of the free one \cite{BeUe10b}, but have also been
studied intensively in their own right, both in cases
where the cycle weights do not depend on the system size $n$
\cite{BeUeVe11, ErUe11}, and in cases where they do
\cite{BoZe14, ElPe19}. In the latter case, it has been shown in the
cited papers that one recovers the model treated in \cite{BeUe10} by
a suitable choice of cycle weights, and the methods of analytic
combinatorics used in \cite{BoZe14, ElPe19} yield very precise
information about the asymptotic cycle distribution in various regimes.
The present paper deals with the other option of constraining
permutations, namely to completely disallow certain cycle lengths.
Again, a distinction has to be made between cases where the set of
disallowed cycle lengths is independent of the permutation length
$n$, and those where it depends on $n$. In the first case, a
significant amount of information has been obtained in the works of
Yakymiv (see e.g. \cite{Ya09a,Ya10a}); our interest lies
in the second case. Using precise asymptotic results by
Manstavi{\v{c}}ius and Petuchovas \cite{MaPe16}, in \cite{BeSc17, BeScZe17} we
investigated the case where a permutation of
length $n$ is prevented from having any cycles above a threshold
$\alpha(n)$ that grows strictly slower than volume order.
While the results in these papers were
reasonably detailed, some interesting questions and fine details
have been left out.
It is the purpose of the present paper to settle a significant
portion of them. We will describe our results in detail in the
next section. Here, we only briefly sketch what is new.
One difference to \cite{BeSc17} is that
we generalise the base model that we constrain from uniform
random permutations to the model of Ewens permutations. The latter originally
appeared in population genetics, see \cite{Ew72}, but has now become a
rather standard model of random permutations. It shares many
features and techniques with uniform permutations, and classical
results about uniform and Ewens random permutations include
convergence of joint
cycle counts towards independent Poisson random variables in total
variation distance \cite{ArTa92c}, the convergence of the renormalized cycle structure towards a
Poisson-Dirichlet distribution {\cite{Ki77, ShVe77}}, and a central limit theorem for
cumulative cycle counts \cite{DePi85}.
In the context of the methods we use, the difference between the
Ewens measure and uniform random permutations is not large, see \cite{Sc18} for details. What should be considered the main
contribution of the present paper compared to \cite{BeSc17,BeScZe17} are the following three items:
firstly,
we obtain much more precise asymptotics for the distribution of the longest cycles in various regimes
(Propositions \ref{prop:LongestDiv} and \ref{prop:LongestConv}, and Theorem \ref{thm:Longest0Poissonprocess}); secondly, we extend the validity of the
joint Poisson approximation (in variation distance) to the whole
regime of cycles of length $o(\alpha(n))$ (Theorem \ref{thm:main_dtv}).
Finally, we remove a spurious additional assumption for the
central limit theorem for cycle numbers that was present in
\cite{BeScZe17}, see Theorem \ref{thm:Haupt}.
The paper is organised as follows: in Section \ref{sec:results},
we introduce the model, give our results and compare them to
previously existing ones. In Section \ref{sec:proofs}, we prove
those results.
\newpage
\section{Model and Results} \label{sec:results}
\subsection{The symmetric group and the Ewens measure}
\label{sec:sym_group}
For $n\in\mathbb{N}$, let $S_{n}$ be the group of all permutations of the set $\{1,\ldots,n\}$.
For $\sigma\in S_n$ and $m\in\mathbb{N}$, we denote by $C_m(\sigma)$ the number of cycles of length $m$
in the cycle decomposition of $\sigma$ into disjoint cycles.
Note that we typically write $C_m$ instead of $C_m(\sigma)$.
Let $n \mapsto \alpha(n)$ satisfy the condition
\begin{equation}
\label{eq:condition on alpha}
n^{a_1} \leq \alpha(n) \leq n^{a_2}
\end{equation}
with $a_{1},a_{2}\in\left(0,1\right)$.
We denote by $S_{n,\alpha}$ the subset of $S_{n}$ of all permutations $\sigma$ for which all cycles
in the cycle decomposition of $\sigma$ have length at most $\alpha(n)$.
In other words, $\sigma\in S_{n,\alpha}$ if and only if $C_m(\sigma) =0$ for $m > \alpha(n)$.
For $\vartheta >0$, the Ewens measure on $S_n$ with parameter $\vartheta$ is defined as
%
\begin{align}
\PT{\sigma}
:=
\frac{\prod_{m=1}^n\vartheta^{C_m(\sigma)}}{\vartheta(\vartheta+1)\cdots(\vartheta +n-1)}.
\label{eq:def_Ewens_measure}
\end{align}
Note that the case $\vartheta =1$ corresponds to the uniform measure.
Further, let $\mathbb{P}_{n,\alpha}$ denote the measure on $S_{n,\alpha}$ obtained by conditioning $\mathbb{P}_{n}$ on $S_{n,\alpha}$, i.e.
\begin{align}
\PTa{A}:= \PT{A|S_{n,\alpha}}
\qquad
\text{ for all }
A\subset S_{n,\alpha}.
\end{align}
Inserting the definition of $\mathbb{P}_{n}$, we obtain for $\sigma\in S_{n,\alpha}$ that
%
\begin{align}
\PTa{\sigma}
=
\frac{\prod_{m=1}^n\vartheta^{C_m(\sigma)}}{Z_{n,\alpha}\, n!}
\quad
\text{ with }
\quad
Z_{n,\alpha} = \frac{1}{n!}\sum_{\sigma\in S_{n,\alpha}} \prod_{m=1}^n\vartheta^{C_m(\sigma)}.
\label{eq:def_Ewens_measure_alpha}
\end{align}
Also, we write $\mathbb{E}_{n}$ for the expectation with respect to $\mathbb{P}_{n}$
and $\mathbb{E}_{n,\alpha}$ for the expectation with respect to $\mathbb{P}_{n,\alpha}$.
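For small $n$, the conditioned measure $\mathbb{P}_{n,\alpha}$ can be simulated directly. The following Python sketch (our illustration, not part of the paper; all function names are ours) samples the cycle type of an Ewens($\vartheta$) permutation via the Chinese restaurant process and imposes the cycle-length constraint by rejection. The rejection step is only practical as long as the acceptance probability has not degenerated.

```python
import random

def ewens_cycle_type(n, theta, rng=None):
    """Sample the cycle type (multiset of cycle lengths) of an Ewens(theta)
    permutation of length n via the Chinese restaurant process."""
    rng = rng or random.Random()
    sizes = []  # current cycle lengths
    for k in range(n):
        # the (k+1)-st element opens a new cycle with probability
        # theta/(theta+k); otherwise it joins a cycle proportionally to size
        if rng.random() < theta / (theta + k):
            sizes.append(1)
        else:
            r = rng.randrange(k)  # uniform over the k elements already placed
            acc = 0
            for i, s in enumerate(sizes):
                acc += s
                if r < acc:
                    sizes[i] += 1
                    break
    return sizes

def constrained_cycle_type(n, theta, alpha, rng=None, max_tries=100_000):
    """Naive rejection sampler for the conditioned measure P_{n,alpha}:
    resample until every cycle length is at most alpha."""
    rng = rng or random.Random()
    for _ in range(max_tries):
        sizes = ewens_cycle_type(n, theta, rng)
        if max(sizes) <= alpha:
            return sizes
    raise RuntimeError("acceptance probability too small for rejection")
```
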
\subsection{Notation}
If two sequences $(a_n)$ and $(b_n)$ are asymptotically
equivalent, i.e.\ if $\lim_{n\to\infty} a_n/b_n = 1$, we write
$a_n \sim b_n$.
Further, we write $a_n\approx b_n$ when there exist constants $c_1,c_2>0$ such that
\begin{align}
c_1 b_n \leq a_n \leq c_2 b_n
\end{align}
for large $n$. We also use the usual ${\mathcal O}$ and $o$ notation,
i.e. $f(n) = {\mathcal O}(g(n))$ means that there exists some constant
$c > 0$ so that $|f(n)| \leq c |{g(n)}|$ for large $n$,
while $f(n) = o(g(n))$ means that for all $c>0$ there exists
$n_c \in \mathbb{N}$ so that the inequality $|f(n)| \leq c |g(n)|$
holds for all $n > n_c$.
We further say that
$$
f_n(t)
=
\mathcal{O}\left(g_n(t)\right)\text{ uniformly in }t\in T_n \text{ as }n\to\infty
$$
if there are constants $c,N>0$ such that
$
\sup_{t\in T_n} |f_n(t)|\leq c |g_n(t)|
$
for all $n\geq N$.
\subsection{Expected cycle counts}
Here we recall some of the results from \cite{BeScZe17} and
\cite{Sc18} that are crucial for the following.
Let $x_{n,\alpha}$ be the unique positive solution of the equation
\begin{equation}
n=\vartheta \sum_{j=1}^{\alpha(n)}x_{n,\alpha}^{j},
\label{eq:StaSad}
\end{equation}
and
\begin{align}
\mu_m\left(n\right)
:=
\vartheta\frac{x_{n,\alpha}^{m}}{m}.
\label{eq:def_mu_n}
\end{align}
For the case where $m$ is replaced by an integer-valued
sequence $(m(n))_{n \in
\mathbb{N}}$, we simplify notation and write $\mu_{m(n)}$ instead of
$\mu_{m(n)}(n)$. For any such sequence that satisfies
$m\left(n\right)\leq\alpha\left(n\right)$, we have
\begin{equation} \label{eqn:exp asympt}
\ETa{C_{m\left(n\right)}}
\sim
\mu_{m\left(n\right)}
\qquad \text{ as } n\to\infty.
\end{equation}
This was proven for $\vartheta =1$ in \cite[Proposition 2.1]
{BeScZe17}, and for $\vartheta \neq 1$ in \cite{Sc18} along the
same lines. In view of \eqref{eqn:exp asympt} it is clear that we
are interested in information about the asymptotics of solutions
to equations like \eqref{eq:StaSad}.
The following result provides it:
\begin{lem}
\label{lem:saddle_point_with_c}
Let $0<c_1<c_2<\infty$ be fixed but arbitrary real numbers.
For $c \in[c_1,c_2]$, let $x_{n,\alpha}(c)$ be the solution of
\begin{align}
cn = \vartheta\sum_{j=1}^{\alpha(n)} \big( x_{n,\alpha}(c) \big)^j.
\label{eq:def_xn(c)}
\end{align}
We then have uniformly in $c \in[c_1,c_2]$ as $n\to\infty$
\begin{equation}
\alpha\left(n\right)\log\left(x_{n,\alpha}(c)\right)
=
\log\left(\frac{cn}{\vartheta\alpha\left(n\right)}\log\left(\frac{cn}{\vartheta\alpha\left(n\right)}\right)\right)
+
{\mathcal O}\left(\frac{\log\left(\log\left(n\right)\right)}{\log\left(n\right)}\right).
\label{eq:StaSadAs}
\end{equation}
In particular,
$$x_{n,\alpha}(c)\geq1, \
\lim_{n\rightarrow\infty}x_{n,\alpha}(c)=1
\ \text{ and } \
\big(x_{n,\alpha}(c)\big)^{\alpha\left(n\right)}
\sim
\frac{cn}{\vartheta\alpha\left(n\right)}\log\left(\frac{cn}{\vartheta\alpha\left(n\right)}\right)$$
for large $n$. Furthermore,
\begin{equation}\label{eq:lambda2alt}
\sum_{j=1}^{\alpha(n)} j\big(x_{n,\alpha}(c)\big)^j \sim \frac{cn}{\vartheta}\alpha(n).
\end{equation}
\end{lem}
Lemma~\ref{lem:saddle_point_with_c} is a special case of \cite[Lemma~9]{MaPe16} and follows immediately by inserting our assumptions in \cite[Lemma~9]{MaPe16}.
We thus omit the proof.
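To make the lemma concrete, the saddle-point equation \eqref{eq:def_xn(c)} can be solved numerically. The following Python sketch (ours; the test size $n=10^6$, $\alpha=10^3$, $\vartheta=c=1$ is arbitrary) uses plain bisection and compares $\alpha(n)\log x_{n,\alpha}(c)$ with the leading term of \eqref{eq:StaSadAs}.

```python
import math

def saddle_point(n, alpha, theta=1.0, c=1.0, tol=1e-12):
    """Solve the saddle-point equation c*n = theta * sum_{j=1}^{alpha} x^j
    by bisection.  Assumes c*n > theta*alpha, so the root satisfies x > 1."""
    def residual(x):
        s, p = 0.0, 1.0
        for _ in range(alpha):
            p *= x
            s += p
        return theta * s - c * n
    lo, hi = 1.0, 2.0
    while residual(hi) < 0.0:  # enlarge the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# compare alpha * log(x) with the leading term of the lemma's asymptotics
n, alpha = 10**6, 1_000
x = saddle_point(n, alpha)
lhs = alpha * math.log(x)
rhs = math.log((n / alpha) * math.log(n / alpha))
```

The quantities `lhs` and `rhs` then agree up to the stated $\mathcal{O}(\log\log n/\log n)$ correction.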
\subsection{Asymptotics of longest cycles}
The first set of results that we present deals with the asymptotic
(joint) distribution of the longest cycles under the measure
$\mathbb{P}_{n,\alpha}$.
Let $\ell_{k}=\ell_{k}\left(\sigma\right)$ denote
the length of the $k$-th
longest cycle of the permutation $\sigma$.
We already know that for fixed $K \in \mathbb{N}$,
under the probability measures $\mathbb{P}_{n,\alpha}$,
we have as $n\to\infty$
\begin{align}
\frac{1}{\alpha\left(n\right)}\left(\ell_{1},\ell_{2},\dots,\ell_{K}\right)\stackrel{d}{\longrightarrow}\left(1,1,\dots,1\right),
\end{align}
where $\stackrel{d}{\longrightarrow}$ denotes convergence in
distribution (see equation (2.14) in \cite{BeScZe17} or
\cite{Sc18}). We will significantly improve on this information.
It turns out that the behaviour of the longest cycles depends on
the expected length given in \eqref{eqn:exp asympt}.
In other words, we have to look at the behaviour of $\mu_{\alpha\left(n\right)}$
in the three regimes
$$\mu_{\alpha\left(n\right)}\to\infty,\quad
\mu_{\alpha\left(n\right)} \to \mu \text{ with } \mu>0,
\quad\text{and}\quad
\mu_{\alpha\left(n\right)}\to 0.$$
A discussion of which regime occurs for which exponent $\beta$ in
the case $\alpha(n) = n^\beta$ can be found in Section 2.2 of
\cite{BeScZe17}.
We start with the simplest case $\mu_{\alpha\left(n\right)}\to\infty$.
This case only occurs if $\alpha\left(n\right)=o((n\log n)^{\frac{1}{2}})$, see Proposition~\ref{prop:asymptotic_mu}
below. In this case, the distribution of the random vector
$(\ell_1, \ldots, \ell_K)$ becomes degenerate:
\begin{prop}
\label{prop:LongestDiv}
Suppose that $\mu_{\alpha(n)}\to\infty$.
Then, for each $K\in\mathbb{N}$, we have
\[
\lim_{n\to\infty}\PTa{\left(\ell_{1},\ell_{2},\dots,\ell_{K}\right) \neq \big(\alpha\left(n\right),\alpha(n),\dots,\alpha(n)\big)}=0.
\]
\end{prop}
A similar proposition was proven in \cite[Theorem 2.8]{BeScZe17} and \cite{Sc18}
under the additional assumption that $\alpha(n)\geq n^{\frac{1}{7}+\delta}$ for $\delta>0$.
The reason why we can omit this assumption here is our improved central limit theorem,
Theorem \ref{thm:Haupt}. We give the proof of Proposition
\ref{prop:LongestDiv} in Section
\ref{sect:proof_longest_1}.
Next, we look at the case $\mu_{\alpha(n)} \to \mu$ with $\mu>0$.
We find:
\begin{prop}
\label{prop:LongestConv}
Suppose that $\mu_{\alpha\left(n\right)} \to \mu$ with $\mu>0$ as $n\to\infty$.
We then have for all $d\in\mathbb{N}_{0}$ and all $k\in\mathbb{N}$ that
\begin{align}
\PTa{\ell_{k}=\alpha\left(n\right)-d}
\xrightarrow{n\to\infty}
\frac{1}{\Gamma\left(k\right)}\int_{d\mu}^{\left(d+1\right)\mu}v^{k-1}\mathrm{e}^{-v}\mathrm{d}v.
\end{align}
In other words, $\alpha\left(n\right)-\ell_{k}$
converges in distribution to $\left\lfloor \mu^{-1}X\right\rfloor $,
where $X$ is a gamma-distributed random variable with parameters
$k$ and $1$ and $\lfloor x\rfloor = \max\{n\in\mathbb{Z};\, n\leq x \}$.
\end{prop}
The proof of this proposition is given in Section \ref{sect:proof_longest_2}. Moreover, the proof allows for deriving the joint distribution of the longest cycles, but the notation for the results in this case is cumbersome.
Finally, we have the case where the expected number of cycles of maximal length vanishes. Here we obtain the most interesting results, namely a functional convergence of the cumulative
numbers of long cycles to a Poisson process, on the correct scale.
By considering the jump times of this Poisson process, we establish limit
theorems for $\ell_{k}$.
Let us start with a small observation.
\begin{prop}
\label{prop:asymptotic_mu}
We have, as $n\to\infty$,
\begin{align}
\mu_{\alpha(n)}
\approx
\frac{n\log n}{(\alpha\left(n\right))^2}.
\label{eq:LongestConv0MuN}
\end{align}
\end{prop}
\begin{proof}
Inserting the definition of $\mu_{\alpha(n)}$, see \eqref{eq:def_mu_n}, and using Lemma~\ref{lem:saddle_point_with_c}, we obtain
%
\begin{align}
\mu_{\alpha(n)}
=
\vartheta \frac{x_{n,\alpha}^{\alpha(n)}}{\alpha(n)}
\sim
\frac{n}{(\alpha\left(n\right))^2}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)
\approx
\frac{n\log n}{(\alpha\left(n\right))^2}.
\end{align}
This completes the proof of the proposition.
\end{proof}
This proposition immediately implies that $\mu_{\alpha(n)} \to 0$ if and only if $\frac{n\log n}{(\alpha\left(n\right))^2}\to 0$ as $n\to\infty$.
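For the prototypical choice $\alpha(n)=n^{\beta}$, this criterion separates the regimes at $\beta=\tfrac12$: the proxy $n\log n/(\alpha(n))^2$ diverges for $\beta<\tfrac12$ and vanishes for $\beta>\tfrac12$. A minimal numerical illustration (ours; the test size $n=10^{12}$ is arbitrary):

```python
import math

def mu_alpha_order(n, beta):
    """Order-of-magnitude proxy n*log(n)/alpha(n)^2 for mu_{alpha(n)}
    given by the proposition above, with alpha(n) = n**beta."""
    return n * math.log(n) / (n ** beta) ** 2
```
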
We now define
\begin{align}
d_{t}\left(n\right)
:=
\max\left\{ \alpha\left(n\right)-\left\lfloor \frac{t}{\mu_{\alpha\left(n\right)}}\right\rfloor ,0\right\}.
\label{eq:def_dt}
\end{align}
Note that, for fixed $t$, if $\mu_{\alpha(n)}\to 0$, then $d_{t}(n) = \alpha(n)(1 + o(1))$ and $\left\lfloor \frac{t}{\mu_{\alpha\left(n\right)}}\right\rfloor \to\infty$.
We now have
\begin{thm}
\label{thm:Longest0Poissonprocess}
Suppose that $\mu_{\alpha(n)}\to 0$ and define for $t\geq 0$
\[
P_{t}
:=
\sum_{j=d_{t}\left(n\right)+1}^{\alpha\left(n\right)}C_{j}.
\]
Then the stochastic process $\left\{P_{t}, t\geq0 \right\}$ converges under $\mathbb{P}_{n,\alpha}$ as $n\to\infty$
weakly in $\mathcal{D}\left[0,\infty\right)$ to a Poisson process
with parameter $1$, where $\mathcal{D}\left[0,\infty\right)$ denotes the space of c\`{a}dl\`{a}g-functions.
\end{thm}
This theorem is proved in Section \ref{sect:proof_longest_3}.
It immediately implies the following corollary.
\begin{cor}
\label{cor:Longest0}
Let $K\in\mathbb{N}$ be given, $\alpha(n)$ be as in \eqref{eq:condition on alpha} and suppose that $\mu_{\alpha\left(n\right)}\to 0$.
We have convergence in distribution of
\[
\mu_{\alpha\left(n\right)}\cdot\left(\alpha\left(n\right)-\ell_{1},\ell_{2}-\ell_{1},\dots,\ell_{K}-\ell_{K-1}\right)
\]
under $\mathbb{P}_{n,\alpha}$ to independent
exponentially distributed random variables with parameter $1$. In
particular, $\mu_{\alpha\left(n\right)}\left(\alpha\left(n\right)-\ell_{k}\right)$
converges in distribution to a gamma-distributed random variable with
parameters $k$ and $1$.
\end{cor}
\begin{proof}
The claim is a consequence of the convergence established in the proof
of Theorem \ref{thm:Longest0Poissonprocess} since the limit distribution
is the distribution of the jump times of the Poisson process (see, e.g. \cite[p.5]{Li10}).
\end{proof}
\subsection{Total variation distance}
Here we study the joint behaviour of the cycle counts $C_m$
in the region $m=o(\alpha(n))$.
Recall that the total variation distance of two probability measures $\mathbb{P}$ and
$\widetilde \mathbb{P} $ on a discrete probability space $\Omega$ is given by
$\| \mathbb{P} - \widetilde \mathbb{P} \|_{\rm TV} = \sum_{\omega \in \Omega} (\mathbb{P}(\omega) - \widetilde \mathbb{P}(\omega))_+$.
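As a small sanity check (our illustration, not a result of the paper), the classical unconstrained Poisson approximation can be verified by brute force for tiny $n$: the number of fixed points of a uniform permutation in $S_6$ is already close to Poisson$(1)$ in total variation.

```python
import math
from collections import Counter
from itertools import permutations

def tv_distance(p, q):
    """Total variation distance of two distributions given as dicts:
    the sum of the positive parts of p - q over the union of supports."""
    keys = set(p) | set(q)
    return sum(max(p.get(k, 0.0) - q.get(k, 0.0), 0.0) for k in keys)

# distribution of the number of fixed points C_1 under the uniform
# measure on S_6, against its Poisson(1) approximation
n = 6
counts = Counter(sum(1 for i, s in enumerate(sigma) if s == i)
                 for sigma in permutations(range(n)))
fixed_pts = {k: v / math.factorial(n) for k, v in counts.items()}
poisson_1 = {k: math.exp(-1.0) / math.factorial(k) for k in range(40)}
```
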
\begin{thm}[\protect{\cite[Theorem 2.2]{BeScZe17}}]
\label{thm:main_thm2_old}
Let $b = (b(n))_n$ be a sequence of integers
with $b(n) = o \big( \alpha(n) (\log n)^{-1}\big)$. Let
$\mathbb{P}_{n,b(n),\alpha}$ be the distribution of $(C_1, \ldots, C_{b(n)})$
under the uniform measure on $S_{n,\alpha}$, and let $\widetilde \mathbb{P}_{b(n)}$
be the distribution
of independent Poisson-distributed random variables
$(Z_{1}, \ldots, Z_{b(n)})$ with
$\widetilde{\mathbb{E}}_{b(n)}(Z_{j}) = \frac{1}{j}$
for all $j \leq b(n)$. Then there exists $c<\infty$ so that for all
$n \in \mathbb{N}$, we have
\[
\| \mathbb{P}_{n,b(n),\alpha} - \tilde \mathbb{P}_{b(n)} \|_{\rm TV} \leq c
\left( \frac{\alpha(n)}{n} + b(n) \frac{\log n}{\alpha(n)}
\right).
\]
\end{thm}
In the special case $ \alpha(n) \geq \sqrt{n\log(n)}$, Judkovich \cite{Ju19} has computed the above total variation distance using Stein's method and obtained a slightly better upper bound.
On the full symmetric group $S_n$, a similar result as Theorem~\ref{thm:main_thm2_old} holds with $b(n)=o(n)$, see \cite{ArTa92c}.
A natural question at this point is thus whether one can replace $b(n)$ in Theorem~\ref{thm:main_thm2_old} by $b(n) = o \big( \alpha(n)\big)$.
Recall from equation \eqref{eqn:exp asympt} that
\[
\ETa{C_{m\left(n\right)}}
\sim
\vartheta \frac{x_{n,\alpha}^m}{m}
\qquad \text{ as } n\to\infty.
\]
Using Lemma~\ref{lem:saddle_point_with_c}, we immediately see that $ \ETa{C_{m\left(n\right)}} \sim \E{Z_m}$
if and only if $m= o \big( \alpha(n) (\log n)^{-1}\big)$.
Thus $b(n) = o \big( \alpha(n) (\log n)^{-1} \big)$ is the most one can expect in Theorem~\ref{thm:main_thm2_old}.
To overcome the problem with the expectations, we replace the random variables $Z_j$ with fixed expectation by
random variables $Y_j^{(n)}$ with an expectation depending on $n$ so that
\begin{align}
\ETa{C_{m\left(n\right)}} \sim \E{Y_m^{(n)}} \ \text{ for all } m= o \big( \alpha(n) \big).
\end{align}
However, to simplify the notation, we write $Y_{j}$ instead of $Y_{j}^{(n)}$.
We now have
\begin{thm}
\label{thm:main_dtv}
Let $b = (b(n))_n$ be a sequence of integers with $b(n) = o \big( \alpha(n) \big)$.
Let $\mathbb{P}_{n,\vartheta, b(n),\alpha}$ be the distribution of $(C_1, \ldots, C_{b(n)})$
under $\mathbb{P}_{n,\alpha}$ on $S_{n,\alpha}$.
Further, let $\widehat\mathbb{P}_{b(n)}$ be the distribution
of independent Poisson-distributed random variables
$(Y_{1}, \ldots, Y_{b(n)})$ with
$\E{Y_{j}} = \mu_{j}(n)$ for all $j \leq b(n)$
and $\mu_{j}(n)$ as in \eqref{eq:def_mu_n}.
Then
\begin{align}
\| \mathbb{P}_{n,\vartheta,b(n),\alpha} - \widehat \mathbb{P}_{b(n)} \|_{\rm TV}
= {\mathcal O}\left(n^{\epsilon} \left(\frac{\alpha(n)}{n}\right)^{\frac{5}{12}}\right),
\label{eq:thm:main_dtv1}
\end{align}
where $\epsilon>0$ is arbitrary.
Further, if $b(n) = o \big( \alpha(n) (\log n)^{-1}\big)$ then
\begin{align}
\| \mathbb{P}_{n,\vartheta,b(n),\alpha} - \widehat \mathbb{P}_{b(n)} \|_{\rm TV}
=
{\mathcal O}\left( \frac{\alpha(n)}{n} + \frac{b(n)\log n}{n^{\frac{5}{12}}\alpha^{\frac{7}{12}}}\right).
\label{eq:thm:main_dtv2}
\end{align}
\end{thm}
The proof of this theorem is given in Section \ref{sect:proof_dtv}.
\subsection{Central Limit Theorem for Cycle Numbers}
For the proof of Proposition~\ref{prop:LongestDiv}, we require a central limit theorem for the cycle counts in the case $\ETa{C_m}\to\infty$.
The main result of this section establishes this theorem.
Explicitly, we prove the following.
\begin{thm}
\label{thm:Haupt}
Let $m_{k}:\mathbb{N}\rightarrow\mathbb{N}$ for
$1\leq k\leq K$ be such that $m_{k}\left(n\right)\leq\alpha\left(n\right)$
and $m_{k_{1}}\left(n\right)\neq m_{k_{2}}\left(n\right)$ if $k_{1}\neq k_{2}$
for large $n$. Suppose that
\[
\mu_{m_{k}\left(n\right)}\left(n\right)\rightarrow\infty
\]
for all $k$. We then have as $n\to\infty$
\[
\left(\frac{C_{m_{1}\left(n\right)}-\mu_{m_{1}\left(n\right)}\left(n\right)}{\sqrt{\mu_{m_{1}\left(n\right)}\left(n\right)}},\dots,\frac{C_{m_{K}\left(n\right)}-\mu_{m_{K}\left(n\right)}\left(n\right)}{\sqrt{\mu_{m_{K}\left(n\right)}\left(n\right)}}\right)\xrightarrow{d}\left(N_{1},\dots,N_{K}\right),
\]
with $N_{1},\ldots, N_K$ independent standard normally distributed random variables.
\end{thm}
This theorem was proven in \cite{BeScZe17} under the additional assumption
\begin{equation}
n^{-\frac{5}{12}}\alpha\left(n\right)^{-\frac{7}{12}}\frac{x_{n,\alpha}^{m_{k}\left(n\right)}}{\sqrt{\mu_{m_{k}\left(n\right)}\left(n\right)}}\rightarrow0.
\label{eq:CCcltAss}
\end{equation}
In Section \ref{sect:proof_clt} we present a proof that does not
require this additional assumption.
\section{Proofs}
\label{sec:proofs}
\subsection{Generating functions and the saddle point method}
\label{sec:Generating}
Generating functions and their connection with analytic combinatorics form the backbone of the proofs in this paper.
More precisely, we will determine formal generating functions for all relevant moment-generating functions and then
use the saddle-point method to determine the asymptotic behaviour of these moment-generating functions as $n\to\infty$.
Let $\left(a_{n}\right)_{n\in\mathbb{N}}$ be a sequence of complex numbers. Then its ordinary generating function is defined as the formal power series
\[
f\left(z\right):=\sum_{n=0}^{\infty}a_{n}z^{n}.
\]
The sequence may be recovered by formally extracting the coefficients
\[
\left[z^n\right]f\left(z\right):=a_{n}
\]
for any $n$. The first step is now to consider a special case of P{\'o}lya's Enumeration Theorem, see \cite[\S 16, p.\:17]{Po37},
which connects permutations with a specific generating function.
\begin{lem}
\label{lem:polya}
Let $(q_j)_{j\in\mathbb{N}}$ be a sequence of complex numbers.
We then have the following identity between formal power series in $z$,
\begin{equation}
\label{eq:symm_fkt}
\exp\left(\sum_{j=1}^{\infty}\frac{q_j z^j}{j}\right)
=\sum_{k=0}^\infty\frac{z^k}{k!}\sum_{\sigma\in S_k}\prod_{j=1}^{k}
q_{j}^{C_j},
\end{equation}
where $C_j=C_j(\sigma)$ are the cycle counts. If either of the
series in \eqref{eq:symm_fkt} is absolutely convergent, then so is
the other one.
\end{lem}
Extracting the $n$th coefficient yields
\begin{equation}
\label{eq:relation to perms}
\left[z^n\right]\exp\left(\sum_{j=1}^{\infty}\frac{q_j z^j}{j}\right)
=
\frac{1}{n!}\sum_{\sigma\in S_n}\prod_{j=1}^{n}q_{j}^{C_j}.
\end{equation}
With this formulation, the parameters $(q_j)$ can depend on the system size $n$.
For instance, setting $q_j =\vartheta\,\mathbbm{1}_{\left\{j\leq \alpha(n)\right\}}$,
we obtain
\begin{align}
\label{eq:cNorm}
Z_{n,\alpha}
=
\left[z^n\right]\exp\left(\vartheta\sum_{j=1}^{\alpha(n)}\frac{z^j}{j}
\right)
\end{align}
with $Z_{n,\alpha}$ as in \eqref{eq:def_Ewens_measure_alpha}.
Similarly, we can get an expression for the moment generating function of $C_{m(n)}$, where $(m(n))_{n\in\mathbb{N}}$ is an integer sequence with $m(n)\leq \alpha(n)$.
Indeed, setting $q_{m(n)} =\vartheta e^{s}$ and $q_j =\vartheta\,\mathbbm{1}_{\left\{j\leq \alpha(n)\right\}}$ for $j\neq m(n)$, we get
\begin{align}
\label{eq:moment_Cm}
\ETa{e^{s C_{m(n)}}}
=
\frac{1}{Z_{n,\alpha} }
\left[z^n\right]\exp\left(\vartheta (e^s-1)\frac{z^{m(n)}}{m(n)}\right) \exp\left(\vartheta\sum_{j=1}^{\alpha(n)}\frac{z^j}{j}\right).
\end{align}
In view of \eqref{eq:cNorm} and \eqref{eq:moment_Cm}, we can compute the asymptotic behaviour of $Z_{n,\alpha}$ (and similar expressions)
by extracting the coefficients of power series as in \eqref{eq:cNorm} and \eqref{eq:moment_Cm}.
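For small $n$, the coefficient in \eqref{eq:cNorm} can be extracted exactly rather than asymptotically, which gives a useful cross-check. The following Python sketch (ours; restricted to integer $\vartheta$ so that exact rational arithmetic applies) computes $n!\,Z_{n,\alpha}$ via the standard recurrence for exponentiating a power series.

```python
from fractions import Fraction
from math import factorial

def constrained_count(n, alpha, theta=1):
    """For integer theta, compute n! * Z_{n,alpha}, i.e. the theta-weighted
    number of permutations of length n with all cycles of length <= alpha,
    by extracting [z^n] exp(theta * sum_{j<=alpha} z^j / j) exactly.
    Uses the recurrence g_m = (1/m) * sum_k k*f_k*g_{m-k} for G = exp(F),
    which follows from G' = F' G."""
    f = [Fraction(0)] * (n + 1)
    for j in range(1, min(alpha, n) + 1):
        f[j] = Fraction(theta, j)
    g = [Fraction(0)] * (n + 1)
    g[0] = Fraction(1)
    for m in range(1, n + 1):
        g[m] = sum(k * f[k] * g[m - k] for k in range(1, m + 1)) / m
    return g[n] * factorial(n)
```

For $\vartheta=1$ and $\alpha=2$ this counts involutions; for $\alpha=n$ it recovers $\vartheta(\vartheta+1)\cdots(\vartheta+n-1)$, the normalisation in \eqref{eq:def_Ewens_measure}.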
One way to extract these coefficients is the saddle point method, a standard tool in asymptotic analysis.
The basic idea is to rewrite the
expression \eqref{eq:relation to perms} as a complex contour integral and
choose the path of integration in a convenient way.
The details of this procedure depend on the situation at hand
and need to be carried out on a case-by-case basis.
A general overview of the saddle-point method can be found
in \cite[page~551]{FlSe09}.
An important part of these computations is typically to find a solution of the so-called saddle-point equation.
We now treat the most general case of the
saddle point method that is relevant for the present situation.
Let ${\boldsymbol q} = (q_{j,n})_{1 \leq j \leq \alpha(n), n \in
\mathbb{N}}$ be a triangular array. We assume that all
$q_{j,n}$
are nonnegative
real numbers and that for each $n\in\mathbb{N}$ there exists a $j$ such that $q_{j,n}>0$.
We then
define $x_{n,{\boldsymbol q}}$ as the unique positive solution of
\begin{align}
n = \sum_{j=1}^{\alpha(n)} q_{j,n} x_{n,{\boldsymbol q}}^j.
\label{eq:GenSaddle}
\end{align}
Let further
\begin{align}
\lambda_{p,n}
:=
\lambda_{p,n,\alpha,\boldsymbol{q}} :=\sum_{j=1}^{\alpha(n)} q_{j,n}j^{p-1}x_{n,{\boldsymbol q}}^j,
\label{eq:def_lambda_p}
\end{align}
where $p\geq 0$ is an integer. Due
to Equation \eqref{eq:GenSaddle},
\begin{equation}
\lambda_{p,n}\leq n\left(\alpha\left(n\right)\right)^{p-1}\label{eq:LambdaP}
\end{equation}
holds for all $p \geq 1$.
We now define
\begin{defn}
\label{def:admQ}A triangular array $\boldsymbol{q}$ is called admissible
if the following three conditions are satisfied:
\begin{enumerate}
\item \label{enu:sadAppr}It satisfies
\[
\alpha\left(n\right)\log\left(x_{n,\boldsymbol{q}}\right)\approx\log\left(\frac{n}{\alpha\left(n\right)}\right).
\]
\item \label{enu:lam2Appr}We have
\[
\lambda_{2,n}\approx n\alpha\left(n\right).
\]
\item \label{enu:qNichtzuklein}
There exists a non-negative sequence $\left(b(n)\right)_{n\in\mathbb{N}}$
and constants $\delta,c>0$ such that $b\left(n\right)/\alpha\left(n\right)<1-\delta$
and $q_{j,n}\geq c>0$ for all $j\geq b\left(n\right)$ hold for $n$
large enough.
\end{enumerate}
\end{defn}
Note that condition~\eqref{enu:sadAppr} implies in particular that
$x_{n,\boldsymbol{q}}>1$ and that $ x_{n,{\boldsymbol q}} \to 1$ as $n\to\infty$.
Let $B_r(0)$ denote the ball with center $0$ and
radius $r$ in the complex plane.
\begin{defn}
\label{def:admF}
Let $\boldsymbol{q}$ be an admissible triangular array.
Then a sequence $\left(f_{n}\right)_{n\in\mathbb{N}}$ of functions is
called admissible (w.r.t. $\boldsymbol{q}$) if it satisfies the following
three conditions:
\begin{enumerate}
\item \label{enu:hol}
There is $\delta>0$ such that $f_{n}$ is holomorphic
on the disc $B_{x_{n,\boldsymbol{q}}+\delta}\left(0\right)$ if $n\in\mathbb{N}$
is large enough.
\item \label{enu:FdSad}
There exist constants $K,N>0$ such that
\begin{equation}
\sup_{z\in\partial B_{x_{n,\boldsymbol{q}}}\left(0\right)}\left|f_{n}\left(z\right)\right|\leq n^{K}\left|f_{n}\left(x_{n,\boldsymbol{q}}\right)\right|\label{eq:adm2}
\end{equation}
for all $n\geq N$.
\item \label{enu:Flieb}
With the definition
\begin{equation}
|\!|\!|f_{n}|\!|\!|_{n}:=n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}}\sup_{\left|\varphi\right|\leq n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}}}\frac{\left|f_{n}^{\prime}\left(x_{n,\boldsymbol{q}}\mathrm{e}^{\mathrm{i}\varphi}\right)\right|}{\left|f_{n}\left(x_{n,\boldsymbol{q}}\right)\right|},
\label{eq:adm3}
\end{equation}
we have $|\!|\!|f_{n}|\!|\!|_{n}\to0$ as $n\to\infty$.
\end{enumerate}
\end{defn}
We are now in the position to formulate our general
saddle point result.
\begin{prop}[{\cite[Proposition 3.2]{BeScZe17}}]
\label{prop:SPmethod}
Let $\boldsymbol{q}$ be an admissible
triangular array and $\left(f_{n}\right)_{n\in\mathbb{N}}$ an admissible
sequence of functions. Then we have as $n\to\infty$
\begin{align}
\left[z^{n}\right]f_{n}\left(z\right)\exp\left(\sum_{j=1}^{\alpha\left(n\right)}\frac{q_{j,n}}{j}z^{j}\right)
=
\frac{f_{n}\left(x_{n,\boldsymbol{q}}\right)e^{\lambda_{0,n}}}{x_{n,\boldsymbol{q}}^{n}\sqrt{2\pi\lambda_{2,n}}}\left(1+\mathcal{O}\left(\frac{\alpha\left(n\right)}{n}+|\!|\!|f_{n}|\!|\!|_{n}\right)\right).
\label{eq:prop:GenSaddle}
\end{align}
\end{prop}
Note that the implicit constants in the ${\mathcal O}(.)$ terms in \eqref{eq:prop:GenSaddle} can depend on $K$, $N$ and $\delta$ from the above definition of admissibility.
However, we only require the leading term in \eqref{eq:prop:GenSaddle}, and we will not vary the values of $K$, $N$ and $\delta$.
Thus only the existence of $K$, $N$ and $\delta$ matters, not their values, and we can safely omit the dependence on them.
In view of Proposition~\ref{prop:SPmethod},
we see that it is important to understand the asymptotic behaviour of $x_{n,\boldsymbol{q}}$ and $\lambda_{j,n}$ as $n\to\infty$.
Lemma \ref{lem:saddle_point_with_c} will be very useful for this purpose.
\subsection{Proof of Proposition \ref{prop:LongestDiv}}
\label{sect:proof_longest_1}
We have by assumption $\mu_{\alpha(n)}\to \infty$. Thus we can apply Theorem~\ref{thm:Haupt}.
We conclude that
\begin{align}
\frac{C_{\alpha(n)}-\mu_{\alpha(n)}}{\sqrt{\mu_{\alpha\left(n\right)}}}
\stackrel{d}{\longrightarrow} N,
\label{eq:CLT_for_long_cycles1}
\end{align}
where $N$ is a standard normally distributed random variable.
Since $\alpha(n)$ is the maximal cycle length, we have
\begin{align*}
\PTa{\left(\ell_{1},\ell_{2},\dots,\ell_{K}\right) \neq \big(\alpha\left(n\right),\alpha\left(n\right),\dots,\alpha\left(n\right)\big)}
=
\PTa{C_{\alpha\left(n\right)}<K}.
\end{align*}
Using \eqref{eq:CLT_for_long_cycles1}, we get
\begin{align*}
\PTa{C_{\alpha\left(n\right)}<K}
=
\PTa{\frac{C_{\alpha(n)}-\mu_{\alpha(n)}}{\sqrt{\mu_{\alpha(n)}}}<\frac{K-\mu_{\alpha\left(n\right)}}{\sqrt{\mu_{\alpha\left(n\right)}}}}\xrightarrow{n\to\infty}0,
\end{align*}
and the claim follows.
\subsection{Proof of Proposition \ref{prop:LongestConv}}
\label{sect:proof_longest_2}
As a first step, we state
\begin{prop}
\label{prop:CCconWeak}
Let $(m_k(n))_{n\in\mathbb{N}}$, $k=1,\ldots,K$, be integer sequences satisfying $1 \leq m_k(n)\leq \alpha(n)$ and
$m_{k}(n) \neq m_l(n) $ for $k\neq l$. Suppose that
%
\begin{align*}
\mu_{m_k(n)} \to \mu_k \in[0,\infty[
\end{align*}
for all $k$. Then
%
\begin{align}
\left(C_{m_1(n)},\, \ldots,\, C_{m_K(n)} \right)
\stackrel{d}{\longrightarrow}
\left(Y_1,\, \ldots,\, Y_K \right)
\end{align}
where $(Y_k)_{k=1}^K$ is a sequence of independent Poisson distributed random variables
with parameters $\E{Y_k} = \mu_k$ for all $k=1,\ldots,K$.
\end{prop}
This proposition was proven in \cite{BeScZe17}; for the convenience of the reader, we give the proof in the case $K=1$.
\begin{proof}
Let $K=1$. We argue here with the moment generating function.
We saw in \eqref{eq:moment_Cm} that we have for $s\geq 0$
\begin{align}
\ETa{e^{s C_{m_1(n)}}}
=
\frac{1}{Z_{n,\alpha} }
\left[z^n\right]\exp\left(\vartheta (e^s-1)\frac{z^{m_1(n)}}{m_1(n)}\right) \exp\left(\vartheta\sum_{j=1}^{\alpha(n)}\frac{z^j}{j}\right).
\end{align}
We now apply Proposition~\ref{prop:SPmethod} to compute the asymptotic behaviour of this expression in the case $s\geq0$.
According to \cite{Ya11}, this is sufficient to prove the proposition.
We use ${\boldsymbol q} =(q_{j,n})$ with $q_{j,n} =\vartheta\,\mathbbm{1}_{\left\{j\leq \alpha(n)\right\}}$
and
$f_n(z) =\exp\left(\vartheta (e^s-1)\frac{z^{m_1(n)}}{m_1(n)}\right)$.
We thus have to show that ${\boldsymbol q}$ and the sequence $(f_n)_{n\in\mathbb{N}}$ are admissible, see Definitions~\ref{def:admQ} and~\ref{def:admF}.
Inserting the definition of ${\boldsymbol q}$, we immediately get that the corresponding saddle point equation is given by \eqref{eq:StaSad},
hence the solution is $x_{n,\alpha}$. The admissibility of ${\boldsymbol q}$ then follows immediately from Lemma~\ref{lem:saddle_point_with_c}.
It remains to show that $(f_n)_{n\in\mathbb{N}}$ is admissible.
All $f_n$ are entire functions and hence we can choose any $\delta>0$.
Since $s\geq 0$, we have for all $r>0$ and $\varphi\in[-\pi,\pi]$
\begin{align*}
\left|f_n(r\mathrm{e}^{\mathrm{i}\varphi}) \right| \leq |f_n(r)|.
\end{align*}
Thus the second condition is fulfilled with $K=0$.
For the third condition, we use
\begin{align*}
f'_n(z) = \vartheta (e^s-1) z^{m_1(n)-1} f_n(z)
\end{align*}
and that $\mu_{m_1(n)} = \vartheta \frac{x_{n,\alpha}^{m_1(n)}}{m_1(n)}$.
Inserting this and that $\mu_{m_1(n)} \to \mu_1$ immediately shows that the third condition is fulfilled.
So we can apply Proposition~\ref{prop:SPmethod}.
Using that $\ETa{e^{s C_{m_1(n)}}} =1$ for $s=0$, we obtain
\begin{align}
\ETa{e^{s C_{m_1(n)}}}
\longrightarrow
\exp\big((\mathrm{e}^s-1)\mu_1\big).
\end{align}
This completes the proof.
\end{proof}
Now we turn to the proof of Proposition \ref{prop:LongestConv}.
In this proof, we write $\mu_{\alpha(n)}(n)$ instead of $\mu_{\alpha(n)}$.
Let $j\in\mathbb{N}_{0}$ be arbitrary.
Using the definition of $\mu_{m}(n)$ in \eqref{eq:def_mu_n} with $m=\alpha(n)-j$, we get
\[
\frac{\mu_{\alpha\left(n\right)}\left(n\right)}{\mu_{\alpha\left(n\right)-j}\left(n\right)}
=
\frac{\alpha\left(n\right)-j}{\alpha\left(n\right)}\,x_{n,\vartheta}^{j}\xrightarrow{n\to\infty}1.
\]
Since $\mu_{\alpha(n)}(n)\to \mu$ by assumption, we get that
\[
\mu_{\alpha\left(n\right)-j}\left(n\right)\xrightarrow{n\to\infty}\mu
\]
for all $j\in\mathbb{N}_{0}$.
Proposition~\ref{prop:CCconWeak} therefore implies that the cycle counts $\left(C_{\alpha\left(n\right)-j}\right)_{0\leq j\leq d}$ converge in
distribution to a sequence $\left(Z_{j}\right)_{j=0}^{d}$ of i.i.d. Poisson distributed random variables with parameter $\mu$.
We now have, as $n\to\infty$,
\begin{align*}
\PTa{\ell_{k}\leq\alpha\left(n\right)-d}
=
\PTa{\sum_{i=0}^{d-1}C_{\alpha\left(n\right)-i}\leq k-1}
\to
\mathbb{P}\left[\sum_{i=0}^{d-1}Z_{i}\leq k-1\right].
\end{align*}
By the independence of $\left(Z_{i}\right)_{0\leq i\leq d-1}$,
the random variable $\sum_{i=0}^{d-1}Z_{i}$ is Poisson-distributed
with parameter $d\mu$. Thus,
\[
\Pb{\sum_{j=0}^{d-1}Z_{j}\leq k-1}
=
\sum_{j=0}^{k-1}\mathrm{e}^{-d\mu}\frac{\left(d\mu\right)^{j}}{j!}
=
\frac{1}{\Gamma\left(k\right)}\int_{d\mu}^{\infty}v^{k-1}\mathrm{e}^{-v}\mathrm{d}v,
\]
where $\Gamma(s)$ denotes the gamma function.
The last equality follows by integration by parts and induction.
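Although not needed for the argument, the identity between the Poisson tail sum and the incomplete gamma integral can be checked numerically; the following Python sketch compares the sum with a midpoint-rule approximation of the integral (the truncation point and the step count are arbitrary numerical choices).

```python
import math

def poisson_cdf(k, lam):
    """sum_{j=0}^{k-1} e^{-lam} lam^j / j!, i.e. P[Z <= k-1] for Z ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k))

def incomplete_gamma_ratio(k, lam, upper=60.0, steps=50000):
    """(1/Gamma(k)) * int_lam^upper v^(k-1) e^(-v) dv via the midpoint rule.
    `upper` and `steps` are arbitrary numerical choices; the tail beyond
    `upper` is negligible for the sample values below."""
    h = (upper - lam) / steps
    total = 0.0
    for i in range(steps):
        v = lam + (i + 0.5) * h
        total += v ** (k - 1) * math.exp(-v)
    return h * total / math.gamma(k)

# The two expressions agree up to the numerical integration error.
for k in (1, 2, 5):
    for lam in (0.5, 3.0):
        assert abs(poisson_cdf(k, lam) - incomplete_gamma_ratio(k, lam)) < 1e-5
```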
We now have
\[
\PTa{\ell_{k}=\alpha\left(n\right)-d}
=
\PTa{\ell_{k}\leq \alpha\left(n\right)-d} - \PTa{\ell_{k}\leq \alpha\left(n\right)-(d+1)}.
\]
This implies
\[
\PTa{\ell_{k}=\alpha\left(n\right)-d}
\xrightarrow{n\to\infty}
\frac{1}{\Gamma\left(k\right)}\int_{d\mu}^{\left(d+1\right)\mu}v^{k-1}\mathrm{e}^{-v}\mathrm{d}v.
\]
The claim is proved.
\begin{rem}
The proof of Proposition \ref{prop:LongestConv} can also be used to compute the limit of
\[
\PTa{\left(\ell_{k}\right)_{k=1}^{K}=\left(\alpha\left(n\right)-d_{k}\right)_{k=1}^{K}}
\]
as $n$ tends to infinity since the event in question only depends
on a finite number of cycle counts $C_{\alpha\left(n\right)-j}$.
It is, however, cumbersome to provide a
closed form for such probabilities: The reason for this is that the
stochastic process $\left(\ell_{k}\right)_{k=1}^K$ is not Markovian, i.e. the
distribution of $\ell_{K}$ depends non-trivially on the distribution
of the random vector $\left(\ell_{k}\right)_{k=1}^{K-1}$. This is why
we only provide the readily interpretable results for one individual
$\ell_{k}$ at a time in the proposition.
\end{rem}
\subsection{Proof of Theorem \ref{thm:Longest0Poissonprocess}}
\label{sect:proof_longest_3}
We will first prove certain auxiliary results, assuming that $\mu_{\alpha(n)}\to 0$.
Inserting the definition of $\mu_{\alpha(n)}$, see \eqref{eq:def_mu_n}, we get
\[
\mu_{d_{t}\left(n\right)}\left(n\right)
=
\vartheta\frac{x_{n,\alpha}^{d_{t}\left(n\right)}}{d_{t}\left(n\right)}
=
\vartheta\frac{(x_{n,\alpha})^{\alpha\left(n\right)-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }}{\alpha\left(n\right)-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }
=
\mu_{\alpha(n)}\,\frac{\alpha\left(n\right)}{\alpha\left(n\right)-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }x_{n,\alpha}^{-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }.
\]
We now have
\[
\frac{\alpha\left(n\right)}{\alpha\left(n\right)-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }\xrightarrow{n\to\infty}1,
\]
locally uniformly in $t$ since $1/\mu_{\alpha(n)}=o\left(\alpha\left(n\right)\right)$
by Equation \eqref{eq:LongestConv0MuN}.
By Lemma \ref{lem:saddle_point_with_c} and Equation \eqref{eq:LongestConv0MuN}, we have as $n\to\infty$
\begin{align}
x_{n,\alpha}^{-\left\lfloor t/\mu_{\alpha(n)}\right\rfloor }
=&
\exp\left(-\left\lfloor \frac{t}{\mu_{\alpha(n)}}\right\rfloor \frac{1}{\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\log\left(\frac{n}{\vartheta\alpha(n)}\right)\right) \big(1+o(1)\big)\right)\nonumber\\
= &
\exp\left(\mathcal{O}\left(t\frac{\alpha\left(n\right)}{n}\right)\right)
\longrightarrow 1,
\label{eq:useful_for_thight}
\end{align}
locally uniformly in $t\geq0$. Altogether, we have locally uniformly in $t$ that
\[
\mu_{d_{t}\left(n\right)}\left(n\right)
\sim
\mu_{\alpha(n)}.
\]
Furthermore, the function $m\mapsto \mu_{m}\left(n\right)$ is increasing for
$m\geq \frac{\alpha(n)}{\log n}$.
This follows by computing the derivative with respect to $m$ of $\mu_{m}\left(n\right)$ in \eqref{eq:def_mu_n} and using Lemma~\ref{lem:saddle_point_with_c}.
We thus have locally uniformly in $t$
\begin{equation}
\sum_{m=d_t(n)+1}^{\alpha\left(n\right)}\mu_{m}\left(n\right)
=
\left\lfloor \frac{t}{\mu_{\alpha(n)}}\right\rfloor\mu_{\alpha(n)}\big(1+o(1)\big)
\xrightarrow{n\to\infty}
t.
\label{eq:LongestConv0zwischen}
\end{equation}
In order to establish convergence as a stochastic process, we begin
by proving convergence of the finite-dimensional distributions.
More precisely, for $0=t_{0}\leq t_{1}<...<t_{K}$ and $K\in\mathbb{N}$,
consider the increments $\left(P_{t_{k}}-P_{t_{k-1}}\right)_{k=1}^{K}$.
We now have
\begin{align}
P_{t_{k}}-P_{t_{k-1}} = \sum_{j=d_{t_k}\left(n\right)+1}^{d_{t_{k-1}}\left(n\right)}C_{j}.
\end{align}
We begin by determining the moment generating function.
We have
\begin{align}
&\ETa{\prod_{k=1}^{K}\exp\Big(s_{k}\left(P_{t_{k}}-P_{t_{k-1}}\right)\Big)}
\nonumber \\
=&
\frac{1}{Z_{n,\alpha,\vartheta}}
\left[z^{n}\right]
\exp\left(\sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=d_{t_k} +1}^{d_{t_{k-1}}}
\frac{\vartheta}{j}z^{j}\right)\exp\left(\sum_{j=1}^{\alpha\left(n\right)}\frac{\vartheta}{j}z^{j}\right),
\label{eq:LongestConv0MGF}
\end{align}
where $s_{k}\geq 0$ for all $1\leq k\leq K$.
Equation~\eqref{eq:LongestConv0MGF} follows from Lemma~\ref{lem:polya} and a short computation.
We will apply Proposition~\ref{prop:SPmethod} with ${\boldsymbol q} =(q_{j,n})$ with $q_{j,n} =\vartheta\,\mathbbm{1}_{\left\{j\leq \alpha(n)\right\}}$
and the perturbations
\[
f_{n}\left(z\right)
=
\exp\left(\sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=d_{t_k}+1}^{d_{t_{k-1}} }\frac{\vartheta}{j}z^{j}\right).
\]
To do this, we have to check that the array ${\boldsymbol q}$ and the sequence $(f_n)_{n\in\mathbb{N}}$ are admissible,
see Definitions~\ref{def:admQ} and~\ref{def:admF}.
The array ${\boldsymbol q}$ is admissible by Lemma~\ref{lem:saddle_point_with_c}.
Let us now look at $(f_n)_{n\in\mathbb{N}}$.
The functions $f_{n}$ are entire.
Thus we can use any $\delta>0$.
Further, all coefficients of the Taylor expansion of $f_{n}(z)$ at $z=0$ are non-negative since all $s_{k}\geq0$.
This implies
\begin{align*}
\left|f_{n}\left(z\right)\right|\leq f_{n}\left(x_{n,\vartheta}\right)
\ \text{ for all }z\in\mathbb{C} \text{ with }\left|z\right|=x_{n,\vartheta}.
\end{align*}
It remains to check condition \eqref{eq:adm3}.
We have
\[
f_{n}^{\prime}\left(z\right)=\sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=d_{t_{k}} +1}^{d_{t_{k-1}} }\vartheta z^{j-1}f_{n}\left(z\right).
\]
We thus have for all $z\in\mathbb{C}$ with $\left|z\right|=x_{n,\vartheta}$
that
\begin{align}
\left|\frac{f_{n}^{\prime}\left(z\right)}{f_{n}\left(x_{n,\vartheta}\right)}\right|
&\leq
\sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=d_{t_{k}} +1}^{d_{t_{k-1}} }\vartheta x_{n,\vartheta}^{j-1}\nonumber\\
&\leq
\vartheta x_{n,\vartheta}^{\alpha(n)} \sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=\alpha\left(n\right)-\left\lfloor t_{k}/\mu_{\alpha(n)}\right\rfloor +1}^{\alpha\left(n\right)-\left\lfloor t_{k-1}/\mu_{\alpha(n)}\right\rfloor } 1.
\label{eq:upper_bounds_find_good_name}
\end{align}
Using the definition of $\mu_{\alpha(n)}$ in \eqref{eq:def_mu_n}, we see that we have locally uniformly in $s_k$
\begin{align*}
\left|\frac{f_{n}^{\prime}\left(z\right)}{f_{n}\left(x_{n,\vartheta}\right)}\right|
=
\mathcal{O}\left( \vartheta x_{n,\vartheta}^{\alpha(n)} \frac{t_{K}}{\mu_{\alpha(n)}} \right)
=
\mathcal{O}\left(\alpha(n) \right).
\end{align*}
Inserting this into \eqref{eq:adm3}, we obtain
\begin{align*}
|\!|\!|f_{n}|\!|\!|_{n}
&=
n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}}
\sup_{\left|\varphi\right|\leq n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}}}\frac{\left|f_{n}^{\prime}\left(x_{n,\boldsymbol{q}}\mathrm{e}^{\mathrm{i}\varphi}\right)\right|}{\left|f_{n}\left(x_{n,\boldsymbol{q}}\right)\right|}\\
&\leq
n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}} \mathcal{O}\left(\alpha(n) \right)
=
\mathcal{O}\left(\left(\frac{\alpha(n)}{n}\right)^{5/12} \right) \to 0.
\end{align*}
This implies that the sequence $(f_n)_{n\in\mathbb{N}}$ is admissible, so we
can apply Proposition~\ref{prop:SPmethod} to \eqref{eq:LongestConv0MGF}.
Observe that Equation~\eqref{eq:LongestConv0zwischen} entails
\[
\sum_{j=d_{t_{k}} +1}^{d_{t_{k-1}} }\mu_{j}(n)
=
\sum_{j=\alpha\left(n\right)-\left\lfloor t_{k}/\mu_{\alpha(n)}\right\rfloor +1}^{\alpha\left(n\right)-\left\lfloor t_{k-1}/\mu_{\alpha(n)}\right\rfloor }\mu_{j}\left(n\right)\xrightarrow{n\to\infty}
t_{k}-t_{k-1}
\]
for all $k$.
Since we use for all $s_k$ the same array ${\boldsymbol q}$, including the case $s_1=\ldots=s_K=0$,
we get with Proposition~\ref{prop:SPmethod} that
\begin{align*}
\ETa{\prod_{k=1}^{K}\exp\Big(s_{k}\left(P_{t_{k}}-P_{t_{k-1}}\right)\Big)}
&\sim
f_{n}\left(x_{n,\vartheta}\right)
=
\exp\left(\sum_{k=1}^{K}\left(\mathrm{e}^{s_{k}}-1\right)\sum_{j=d_{t_k}+1}^{d_{t_{k-1}} }\mu_{j}(n)\right)\\
&\longrightarrow
\prod_{k=1}^{K}\exp\left[\left(\mathrm{e}^{s_{k}}-1\right)\left(t_{k}-t_{k-1}\right)\right].
\end{align*}
This implies that the increments $ (P_{t_{k}}-P_{t_{k-1}})_{k=1}^K$
converge in distribution to independent random variables $\left(Z_{1},Z_{2},\dots,Z_{K}\right)$,
where $Z_{k}$ is Poisson-distributed with parameter $t_{k}-t_{k-1}$.
Thus the finite-dimensional distributions of $P_{t}$
converge weakly to the finite-dimensional distributions of the Poisson
process with parameter $1$.
To prove that the process $\{P_{t}, t\geq 0\}$ converges to the Poisson process with parameter $1$,
it remains to establish the tightness of the process $\{P_{t}, t\geq 0\}$.
By \cite[Theorem~13.5 and~(13.14)]{Bi99}, it is sufficient to show for each $T>0$ that
\begin{align}
\mathbb{E}_{n,\alpha}\left[\left(P_{t}-P_{t_{1}}\right)^{2}\left(P_{t_{2}}-P_{t}\right)^{2}\right]
=
\mathcal{O}\left(\left(t_{2}-t_{1}\right)^{2}\right)
\label{eq:tightness_criterium}
\end{align}
uniformly in $t,t_1,t_2$ with $0\leq t_{1}\leq t\leq t_{2}\leq T$.
Note that we can assume that $\frac{t_{2}}{\mu_{\alpha(n)}}-\frac{t_{1}}{\mu_{\alpha(n)}} \geq 1$.
Otherwise $\left(P_{t}-P_{t_{1}}\right)^{2}\left(P_{t_{2}}-P_{t}\right)^{2} =0$
and the above equation is trivially fulfilled.
Let $n$ be large enough such that $d_{T}\left(n\right)>0$.
By Equation \eqref{eq:LongestConv0MGF}, we have
\begin{align*}
& \mathbb{E}_{n,\alpha}\left[\left(P_{t}-P_{t_{1}}\right)^{2}\left(P_{t_{2}}-P_{t}\right)^{2}\right]
=
\left.\frac{\partial^{2}}{\partial s_{2}^{2}}\frac{\partial^{2}}{\partial s_{1}^{2}}
\mathbb{E}_{n,\alpha}\left[\mathrm{e}^{s_{1}\left(P_{t}-P_{t_{1}}\right)+s_{2}\left(P_{t_{2}}-P_{t}\right)}\right]\right|_{s_1=s_2=0}\\
= & \frac{1}{Z_{n,\alpha,\vartheta}}\left.\frac{\partial^{2}}{\partial s_{2}^{2}}\frac{\partial^{2}}{\partial s_{1}^{2}}\left[z^{n}\right]\exp\left(\left(\mathrm{e}^{s_{1}}-1\right)G_{n,t_{1},t}\left(z\right)+\left(\mathrm{e}^{s_{2}}-1\right)G_{n,t,t_{2}}\left(z\right)\right)\exp\left(\sum_{j=1}^{\alpha\left(n\right)}\frac{\vartheta}{j}z^{j}\right)\right|_{s_1=s_2=0}
\end{align*}
with $ G_{n,u,w}\left(z\right):=\sum_{j= d_w(n) +1}^{d_u(n) }\frac{\vartheta}{j}z^{j}$ for $0\leq u\leq w \leq T$.
Calculating the derivatives and entering $s_1=s_2=0$ gives
\begin{align*}
\mathbb{E}_{n,\alpha}\left[\left(P_{t}-P_{t_{1}}\right)^{2}\left(P_{t_{2}}-P_{t}\right)^{2}\right]
=
\frac{1}{Z_{n,\alpha,\vartheta}}\left[z^{n}\right]g_{n}\left(z\right)\exp\left(\sum_{j=1}^{\alpha\left(n\right)}\frac{\vartheta}{j}z^{j}\right)
\end{align*}
with
\begin{align*}
g_{n}\left(z\right):=G_{n,t_{1},t}\left(z\right)\left(1+G_{n,t_{1},t}\left(z\right)\right)G_{n,t,t_{2}}\left(z\right)\left(1+G_{n,t,t_{2}}\left(z\right)\right).
\end{align*}
We now apply again Proposition~\ref{prop:SPmethod}.
We use here the perturbations $(g_n)_{n\in\mathbb{N}}$ and as before ${\boldsymbol q} =(q_{j,n})$ with $q_{j,n} =\vartheta\,\mathbbm{1}_{\left\{j\leq \alpha(n)\right\}}$.
Thus we only have to show that $(g_n)_{n\in\mathbb{N}}$ is admissible.
All $g_n$ are entire and we thus can use any $\delta>0$.
Further the coefficients of the Taylor expansion of $g_n(z)$ at $z=0$ are all non-negative.
Thus $\left|g_{n}\left(z\right)\right|\leq g_{n}\left(\left|z\right|\right)$
for all $z$.
It remains to check condition \eqref{eq:adm3}.
We use here an estimate which is similar to the one in \eqref{eq:upper_bounds_find_good_name}.
We have for $z\in\mathbb{C}$ with $|z| = x_{n,\vartheta}$ that
\begin{align*}
|G_{n,u,w}'\left(z\right)|
&=
\left|\sum_{j= d_w(n) +1}^{d_u(n) }\vartheta z^{j-1}\right|
\leq
\vartheta \sum_{j= d_w(n) +1}^{d_u(n) } x_{n,\vartheta}^{j-1}
\leq
\vartheta x_{n,\vartheta}^{\alpha(n)}\sum_{j=\alpha\left(n\right)-\left\lfloor w/\mu_{\alpha(n)}\right\rfloor +1}^{\alpha\left(n\right)-\left\lfloor u/\mu_{\alpha(n)}\right\rfloor } 1\\
&=
\vartheta x_{n,\vartheta}^{\alpha(n)} \left( \left\lfloor w/\mu_{\alpha(n)}\right\rfloor -\left\lfloor u/\mu_{\alpha(n)}\right\rfloor \right).
\end{align*}
Similarly, we have
\begin{align}
|G_{n,u,w}(x_{n,\vartheta})|
&=
\vartheta\sum_{j= d_w(n) +1}^{d_u(n) } \frac{x_{n,\vartheta}^j}{j}
\geq
\frac{\vartheta}{d_u(n)} x_{n,\vartheta}^{d_w(n)+1} \sum_{j= d_w(n) +1}^{d_u(n) } 1\nonumber\\
&\geq
\frac{\vartheta}{d_u(n)} x_{n,\vartheta}^{d_w(n)+1}
\left( \left\lfloor w/\mu_{\alpha(n)}\right\rfloor -\left\lfloor u/\mu_{\alpha(n)}\right\rfloor \right).
\label{eq:useful_for_thight2}
\end{align}
Using \eqref{eq:useful_for_thight} and the definition of $d_w(n)$ in \eqref{eq:def_dt}, we get
\begin{align*}
\left| \frac{G_{n,u,w}'\left(z\right)}{G_{n,u,w}(x_{n,\vartheta})} \right|
\leq
d_u(n) x_{n,\vartheta}^{\left\lfloor w/\mu_{\alpha(n)}\right\rfloor +1}
\leq \alpha(n) \exp\left(\mathcal{O}\left(T\frac{\alpha\left(n\right)}{n}\right)\right)
=
\mathcal{O}\left( \alpha(n)\right).
\end{align*}
This estimate is uniform in $u,w$ with $0\leq u\leq w\leq T$.
Inserting this inequality into \eqref{eq:adm3} then gives
\begin{align*}
|\!|\!|G_{n,u,w}|\!|\!|_{n}
&\leq
n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}} O\left(\alpha(n) \right)
=
O\left(\left(\frac{\alpha(n)}{n}\right)^{5/12} \right) \to 0.
\end{align*}
We thus have
\begin{align}
|\!|\!|g_{n}|\!|\!|_{n} \leq 2 |\!|\!|G_{n,t_1,t}|\!|\!|_{n} + 2 |\!|\!|G_{n,t,t_2}|\!|\!|_{n}
=
O\left(\left(\frac{\alpha(n)}{n}\right)^{5/12} \right).
\label{eq_gn_triple_norm}
\end{align}
This estimate is uniform in $t,t_1,t_2$ with $0\leq t_1\leq t \leq t_2\leq T$.
This implies that the sequence $(g_n)_{n\in\mathbb{N}}$ is admissible.
Proposition~\ref{prop:SPmethod} then implies that
\begin{align*}
\mathbb{E}_{n,\alpha}\left[\left(P_{t}-P_{t_{1}}\right)^{2}\left(P_{t_{2}}-P_{t}\right)^{2}\right]
=
g_{n}\left(x_{n,\vartheta}\right) \left(1+\mathcal{O}\left(\frac{\alpha\left(n\right)}{n}+|\!|\!|g_{n}|\!|\!|_{n}\right)\right)
\leq
2 g_{n}\left(x_{n,\vartheta}\right).
\end{align*}
Using the definition of $g_n$ and an estimate similar to \eqref{eq:useful_for_thight2}, we get
\begin{align*}
g_{n}\left(x_{n,\vartheta}\right)
\leq &
\left(\sum_{j=d_{t_{2}}\left(n\right)+1}^{d_{t_{1}}\left(n\right)}\frac{\vartheta}{j}x_{n,\vartheta}^{j}\right)^{2}\left(1+\sum_{j=d_{t_{2}}\left(n\right)+1}^{d_{t_{1}}\left(n\right)}\frac{\vartheta}{j}x_{n,\vartheta}^{j}\right)^{2}\\
\leq &
2\left(d_{t_{1}}\left(n\right)-d_{t_{2}}\left(n\right)\right)^{2}\mu_{\alpha(n)}^{2}\left(1+2\left(d_{t_{1}}\left(n\right)-d_{t_{2}}\left(n\right)\right)\mu_{\alpha(n)}\right)^{2}.
\end{align*}
Using the definition of $d_t(n)$ in \eqref{eq:def_dt} and that $0\leq t_1\leq t_2\leq T$, we obtain
\begin{alignat*}{1}
g_{n}\left(x_{n,\vartheta}\right)
\leq\, &
2(1+2T)^2\big(d_{t_{1}}\left(n\right)-d_{t_{2}}\left(n\right)\big)^{2}\mu_{\alpha(n)}^{2}\\
=\,&
2(1+2T)^2\left(\left\lfloor \frac{t_{2}}{\mu_{\alpha(n)}}\right\rfloor -\left\lfloor \frac{t_{1}}{\mu_{\alpha(n)}}\right\rfloor \right)^{2}\mu_{\alpha(n)}^{2}\\
\leq\, &
2(1+2T)^2\left(\frac{t_{2}}{\mu_{\alpha(n)}}-\frac{t_{1}}{\mu_{\alpha(n)}}+1\right)^{2}\mu_{\alpha(n)}^{2}\\
\leq\, &
8(1+2T)^2\left(t_{2}-t_{1}\right)^{2}.
\end{alignat*}
Note that we used for the last inequality the assumption $\frac{t_{2}}{\mu_{\alpha(n)}}-\frac{t_{1}}{\mu_{\alpha(n)}}\geq 1$.
This shows that \eqref{eq:tightness_criterium} holds.
This completes the proof.
\subsection{Proof of Theorem \ref{thm:main_dtv}}
\label{sect:proof_dtv}
The proof follows mainly the ideas in \cite{ArTa92c}, where the case of uniform permutations is treated, and is also similar to the proof of Theorem~\ref{thm:main_thm2_old} in \cite{BeScZe17}.
In order to establish Theorem~\ref{thm:main_dtv}, we have to introduce some notation.
We set
\begin{align}
d_{b\left(n\right)}:=
\|\mathbb{P}_{n,\vartheta, b(n),\alpha} - \widehat \mathbb{P}_{b(n)} \|_{\rm TV}.
\end{align}
Let $\left(Y_{j}\right)$ be as in Theorem~\ref{thm:main_dtv} and set for $b_1$, $b_2\in\mathbb{N}$
\begin{equation}
T_{b_{1}b_{2}}^{(n)}
:=
\sum_{j=b_{1}+1}^{b_{2}} j Y_{j}.
\label{eq:Tdef}
\end{equation}
Further, let $\boldsymbol{C}_b=\left(C_1,C_2,\dots, C_{b(n)}\right)$ be the vector of the cycle counts up to length $b(n)$,
let $\boldsymbol{Y}_b =\left(Y_1, Y_2, \dots, Y_{b(n)} \right)$, and let $\boldsymbol{c}=\left(c_1,c_2,\dots,c_{b(n)}\right)\in\mathbb{N}^{b(n)}$ be a vector.
We then have for all $\boldsymbol{c}$
\begin{align}
\PTa{\boldsymbol{C}_{b}=\boldsymbol{c}}
=
\Pb{\left.\boldsymbol{Y}_{b}=\boldsymbol{c}\right|T_{0\alpha\left(n\right)}^{\left(n\right)}=n}.
\label{eq:cond_relation_dtv}
\end{align}
The proof of this equality is the same as for the uniform measure on $S_n$ in \cite{ArTa92c} and we thus omit it.
As in \cite[Section~4.2]{BeScZe17}, one can use \eqref{eq:cond_relation_dtv} to show that
\begin{align}
d_{b\left(n\right)}
=
\sum_{r=0}^{\infty}\Pb{T_{0b(n)}^{\left(n\right)}=r} \left(1-\frac{\mathbb{P}\left[T_{b(n)\alpha(n)}^{\left(n\right)}=n-r\right]}{\mathbb{P}\left[T_{0\alpha\left(n\right)}^{\left(n\right)}=n\right]}\right)_{+},
\label{eq:dtv_with_Tn}
\end{align}
where $(y)_+ = \max(y,0)$.
We will split this sum into pieces.
We have
\[
d_{b\left(n\right)}
\leq
\mathbb{P}\left[T_{0b}^{\left(n\right)}\geq \rho \E{T_{0b(n)}^{\left(n\right)}}\right]
+
\max_{1\leq r \leq \rho \E{T_{0b(n)}^{\left(n\right)}}}
\left(1-\frac{\mathbb{P}\left[T_{b\alpha(n)}^{\left(n\right)}=n-r\right]}{\mathbb{P}\left[T_{0\alpha\left(n\right)}^{\left(n\right)}=n\right]}\right)_{+},
\]
where $\rho= \rho(n) > 1$ is arbitrary. We now have
\begin{lem}
\label{lem:rho}
Let $\rho>1$. Then,
\[
\Pb{T_{0b}^{\left(n\right)}\geq\rho \E{T_{0b(n)}^{\left(n\right)}}}
\leq
\exp\left(\E{T_{0b(n)}^{\left(n\right)}} \frac{\rho-\rho\log(\rho)}{b(n)}\right).
\]
\end{lem}
\begin{proof}
We set $m:=\E{T_{0b(n)}^{\left(n\right)}}$.
We then have for all $s\geq0$
\begin{align}
\Pb{T_{0b(n)}^{\left(n\right)}\geq\rho m }
=
\Pb{\mathrm{e}^{sT_{0b(n)}^{\left(n\right)}}\geq \mathrm{e}^{s\rho m}}
\leq
\frac{\E{ \mathrm{e}^{sT_{0b}^{\left(n\right)}}}}{\mathrm{e}^{s\rho m}}.
\label{eq:markov_T0b}
\end{align}
The independence of the $Y_j$ and $m=\sum_{j=1}^{b(n)} j \mu_j(n) = \vartheta\sum_{j=1}^b x_{n,\vartheta}^j$ imply that
\begin{align}
\log\left( \E{ \mathrm{e}^{sT_{0b}^{\left(n\right)}}} \right)
&=
\sum_{j=1}^b{\mu_j(n)}(\mathrm{e}^{js}-1)
=
\vartheta\sum_{j=1}^b x_{n,\vartheta}^j\int_0^{s} e^{jx}dx
\leq
\vartheta\sum_{j=1}^b x_{n,\vartheta}^j\int_0^{s} e^{bx}dx\nonumber\\
&\leq
m\int_0^{s} e^{bx}dx
=
m\frac{e^{bs}-1}{b}
\leq
\frac{me^{bs}}{b}.
\label{eq:markov_T0b:in}
\end{align}
We thus have $\Pb{T_{0b}^{\left(n\right)}\geq\rho m}
\leq \exp\left( \frac{me^{bs}}{b} - s\rho m \right)$.
We now use $s=\frac{1}{b}\log\left(\rho\right)$, which is by assumption non-negative.
Inserting this into the above inequality completes the proof.
\end{proof}
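The bound of Lemma~\ref{lem:rho} can be illustrated numerically. The following Python sketch computes the exact distribution of $T=\sum_{j=1}^{b}jY_j$ for independent Poisson variables $Y_j$ with weights $\mu_j=\vartheta x^j/j$ by convolution and compares the tail probability with the stated bound; all parameter values, and the truncation of each $Y_j$, are arbitrary illustrative choices.

```python
import math

def tail_and_bound(theta=1.5, x=1.02, b=5, rho=4.0, y_cap=60):
    """Exact tail P[T >= rho*m] for T = sum_{j=1}^{b} j*Y_j with independent
    Y_j ~ Poisson(mu_j), mu_j = theta*x**j/j, versus the bound
    exp(m*(rho - rho*log(rho))/b) of the lemma.  All parameters are arbitrary
    illustrative choices; y_cap truncates each Y_j with negligible error."""
    mus = [theta * x**j / j for j in range(1, b + 1)]
    m = sum(j * mu for j, mu in zip(range(1, b + 1), mus))  # E[T] = theta*sum x^j
    dist = {0: 1.0}  # distribution of T, built up by convolution over j
    for j, mu in zip(range(1, b + 1), mus):
        pj = [math.exp(-mu) * mu**y / math.factorial(y) for y in range(y_cap)]
        new = {}
        for t, p in dist.items():
            for y, q in enumerate(pj):
                new[t + j * y] = new.get(t + j * y, 0.0) + p * q
        dist = new
    tail = sum(p for t, p in dist.items() if t >= rho * m)
    bound = math.exp(m * (rho - rho * math.log(rho)) / b)
    return tail, bound

tail, bound = tail_and_bound()
assert 0.0 < tail <= bound < 1.0
```

Note that the bound is only nontrivial for $\rho>\mathrm{e}$, which is why $\rho=4$ is used above.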
In order to choose a suitable $\rho$, we have to determine the asymptotic behavior of $\E{T_{0b(n)}^{(n)}}$.
Using the definition of $\mu_j(n)$ in \eqref{eq:def_mu_n}, we get
\begin{align}
\E{T_{0b(n)}^{(n)}}
=
\sum_{j=1}^{b(n)} j \mu_j(n)
=
\vartheta \sum_{j=1}^{b(n)} (x_{n,\alpha})^j
=
\vartheta x_{n,\alpha} \frac{(x_{n,\alpha})^{b(n)}-1 }{x_{n,\alpha} -1}.
\end{align}
We know from Lemma~\ref{lem:saddle_point_with_c} that $x_{n,\alpha} \to 1$ and
\begin{align}
(x_{n,\alpha})^{b(n)}
\sim
\left(\frac{n}{\vartheta \alpha(n)} \log\left(\frac{n}{\vartheta \alpha(n)}\right) \right)^{b(n)/\alpha(n)}.
\end{align}
If $b(n)= o\left(\alpha(n)/\log(n)\right)$ then $(x_{n,\alpha})^{b(n)} \to 1$ and thus $\E{T_{0b(n)}^{(n)}}\sim b(n)$.
However, we can also have $b(n)\geq c\frac{\alpha(n)}{\log(n)}$ for some $c>0$.
Using that $x_{n,\alpha} -1 \sim \log(x_{n,\alpha})$, we immediately obtain
\begin{align}
\E{T_{0b(n)}^{(n)}}
&\approx
\frac{\alpha(n)}{\log(n)} \left(\frac{n}{\vartheta \alpha(n)} \log\left(\frac{n}{\vartheta \alpha(n)}\right) \right)^{b(n)/\alpha(n)}.
\label{eq:E(Tb)}
\end{align}
This implies that we have for $n$ large
\begin{align}
\frac{\alpha(n)}{\log(n)}
\leq
\E{T_{0b(n)}^{(n)}}
\leq
\frac{\alpha(n)n^\epsilon}{\log(n)},
\label{eq:E(Tb)01}
\end{align}
where $\epsilon>0$ can be chosen arbitrarily.
In view of \eqref{eq:E(Tb)01} and $b(n)=o(\alpha(n))$, we use $\rho =\log^2(n)$ in Lemma~\ref{lem:rho}.
With this choice of $\rho$, we immediately get that $\Pb{T_{0b}^{\left(n\right)}\geq\rho \E{T_{0b(n)}^{\left(n\right)}}} = {\mathcal O}(n^{-A})$ where $A>0$ is arbitrary.
Inserting this into \eqref{eq:dtv_with_Tn}, with $A=2$, we get
\begin{align}
d_{b\left(n\right)}
\leq
\max_{r\leq \rho \E{T_{0b(n)}^{\left(n\right)}}} \left(1-\frac{\mathbb{P}\left[T_{b(n)\alpha(n)}^{\left(n\right)}=n-r\right]}{\mathbb{P}\left[T_{0\alpha\left(n\right)}^{\left(n\right)}=n\right]}\right)_{+}
+
{\mathcal O}(n^{-2}).
\label{eq:dtv_with_Tn2}
\end{align}
We look next at $T_{b(n)\alpha(n)}^{\left(n\right)}$.
Using that all $Y_j$ are independent, we get that the probability generating function of $T_{b\alpha(n)}^{\left(n\right)}$ is
\begin{align}
\E{z^{T_{b\alpha(n)}^{\left(n\right)}}}
=
\exp\left(\sum_{j=b+1}^{\alpha(n)} \mu_j(n) (z^j-1)\right).
\end{align}
Using that $\mu_j(n) = \vartheta \frac{\left(x_{n,\alpha}\right)^{j}}{j}$, we get
\begin{align*}
\Pb{T_{b(n)\alpha(n)}^{\left(n\right)}=n-r}
&=
\exp\left(-\sum_{j=b+1}^{\alpha(n)}\mu_j(n) \right) x_{n,\alpha}^{n-r}
\left[z^{n}\right]z^r\exp\left(\vartheta\sum_{j=b+1}^{\alpha(n)}\frac{1}{j} z^{j}\right).
\end{align*}
Similarly, we obtain
\begin{align*}
\Pb{T_{0\alpha\left(n\right)}^{\left(n\right)}=n}
=
\exp\left(-\sum_{j=1}^{\alpha(n)}\mu_j(n) \right)
x_{n,\alpha}^{n}
\left[z^{n}\right]\exp\left(\vartheta\sum_{j=1}^{b}\frac{1}{j}z^{j}\right)\exp\left(\vartheta \sum_{j=b+1}^{\alpha(n)}\frac{1}{j}z^{j}\right).
\end{align*}
Thus we have to determine for $r\leq \rho \E{T_{0b(n)}^{\left(n\right)}}$ the asymptotic behaviour of
\begin{align*}
\left[z^{n}\right]z^r\exp\left(\vartheta\sum_{j=b+1}^{\alpha(n)}\frac{1}{j} z^{j}\right)
\ \text{ and } \
\left[z^{n}\right]\exp\left(\vartheta\sum_{j=1}^{b}\frac{1}{j}z^{j}\right)\exp\left(\vartheta\sum_{j=b+1}^{\alpha(n)}\frac{1}{j}z^{j}\right).
\end{align*}
We do this with Proposition~\ref{prop:SPmethod}.
We use for both the triangular array
\begin{align}
{\boldsymbol q} = (q_{j,n})_{1 \leq j \leq \alpha(n), n \in \mathbb{N}} \ \text{ with } \
q_{j,n} =\vartheta\,\mathbbm{1}_{\left\{b(n)+1\leq j\leq \alpha(n)\right\}}.
\label{eq:q_for_dtv}
\end{align}
Furthermore, we use the perturbations $f_{1,n}(z)=z^{r}$ for the first and $f_{2,n}(z)=\exp\left(\vartheta\sum_{j=1}^{b}\frac{1}{j}z^{j}\right)$ for the second expression.
We thus have to show that ${\boldsymbol q}$ and $f_{1,n}(z)$ and $f_{2,n}(z)$ are admissible, see Definitions~\ref{def:admQ} and~\ref{def:admF}.
We now have
\begin{lem}
\label{lem:NeuerSattel}
Let $b=o(\alpha(n))$ and define $x_n$ to be the solution of the equation
\begin{align}
n=\vartheta\sum_{j=b+1}^{\alpha(n)}x_{n}^{j}.
\label{eq:def_xn}
\end{align}
We then have $x_{n,\alpha} \leq x_{n} \leq x_{n,\alpha-b}$ and $|x_{n}-x_{n,\alpha}|=\mathcal{O}\left(\frac{1}{\alpha(n)}\right)$.
Furthermore the triangular array ${\boldsymbol q}$ in \eqref{eq:q_for_dtv} is admissible.
\end{lem}
\begin{proof}
We have by definition that $x_{n,\alpha} \leq x_{n}$.
Further, $x_{n,\alpha-b}$ is the solution of
\[
n=\vartheta\sum_{j=1}^{\alpha(n)-b} (x_{n,\alpha-b})^{j}.
\]
Since $\alpha(n)<n$, we have $x_{n}\geq 1$ and $x_{n,\alpha-b}\geq 1$.
Since $x^{j+b}\geq x^{j}$ for $x\geq1$, comparing the two defining equations termwise shows that $x_{n}\leq x_{n,\alpha-b}$.
Lemma~\ref{lem:saddle_point_with_c} now implies
\[
|x_{n}-x_{n,\alpha}|
=
x_{n}-x_{n,\alpha}
\leq
x_{n,\alpha-b}-x_{n,\alpha}
=
\mathcal{O}\left(\frac{1}{\alpha(n)}\right).
\]
Further, $x_{n,\alpha}$ and $x_{n,\alpha-b}$ are admissible by Lemma~\ref{lem:saddle_point_with_c}.
Thus $x_{n,\alpha} \leq x_{n} \leq x_{n,\alpha-b}$ together with Equation~\eqref{eq:StaSadAs}
immediately shows that $ x_{n}$ fulfills Condition~(1) in Definition~\ref{def:admQ}.
Furthermore, we also get
\begin{align}
\log(x_n) \approx \frac{\log(n)}{\alpha(n)}
\ \text{ and } \
x_n -1 \approx \frac{\log(n)}{\alpha(n)}.
\label{eq:approx_xn}
\end{align}
To see that $ x_{n}$ fulfills Condition~(2), one uses \eqref{eq:approx_xn} and the identity
\begin{align}
\sum_{j=0}^d jq^j = \frac{d q^{d+1}}{q-1} -\frac{q(q^d-1)}{(q-1)^2}
\text{ for all }d\in\mathbb{N}, q\neq 1.
\end{align}
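The identity is elementary (differentiate the geometric sum); a quick numerical check in Python, with arbitrary sample values of $d$ and $q$ (avoiding $q=1$, where the right-hand side is undefined), reads:

```python
def lhs(d, q):
    """Direct evaluation of sum_{j=0}^{d} j*q^j."""
    return sum(j * q**j for j in range(d + 1))

def rhs(d, q):
    """Closed form d*q^(d+1)/(q-1) - q*(q^d - 1)/(q-1)^2, valid for q != 1."""
    return d * q**(d + 1) / (q - 1) - q * (q**d - 1) / (q - 1) ** 2

for d in (1, 2, 5, 10):
    for q in (-1.5, 0.5, 2.0, 3.0):
        assert abs(lhs(d, q) - rhs(d, q)) < 1e-9
```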
Condition~(3) is obvious.
Thus ${\boldsymbol q}$ is admissible.
\end{proof}
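The ordering $x_{n,\alpha}\leq x_{n}\leq x_{n,\alpha-b}$ of Lemma~\ref{lem:NeuerSattel} can also be observed numerically. The following Python sketch solves the three defining equations by bisection, with the convention that all three carry the factor $\vartheta$; the sample values for $n$, $\vartheta$, $\alpha(n)$ and $b(n)$ are arbitrary illustrative choices.

```python
def solve_power_sum(n, theta, lo_j, hi_j, tol=1e-12):
    """Bisection for the solution x >= 1 of n = theta * sum_{j=lo_j}^{hi_j} x^j.
    The left-hand side is increasing in x on [1, oo), so bisection applies."""
    def f(x):
        return theta * sum(x**j for j in range(lo_j, hi_j + 1)) - n
    lo, hi = 1.0, 2.0
    while f(hi) < 0:          # enlarge the bracket until the sign changes
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Arbitrary illustrative values for n, theta, alpha(n) and b(n).
n, theta, alpha, b_cut = 10**6, 1.0, 200, 20
x_alpha   = solve_power_sum(n, theta, 1, alpha)          # x_{n,alpha}
x_mid     = solve_power_sum(n, theta, b_cut + 1, alpha)  # x_n of the lemma
x_alpha_b = solve_power_sum(n, theta, 1, alpha - b_cut)  # x_{n,alpha-b}
assert x_alpha <= x_mid <= x_alpha_b
```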
We now can show
\begin{lem}
\label{lem:NeuerSattel2}
The sequence $(f_{1,n})_{n\in\mathbb{N}}$ with $f_{1,n}(z)=z^r$ is admissible for all $ r=o\left(n^{\frac{5}{12}}(\alpha(n))^{\frac{7}{12}}\right)$.
Further, $(f_{2,n})_{n\in\mathbb{N}}$ with $f_{2,n}(z)=\exp\left(\vartheta\sum_{j=1}^{b}\frac{1}{j}z^{j}\right)$ is admissible.
\end{lem}
\begin{proof}
We start with $(f_{1,n})_{n\in\mathbb{N}}$.
Since $f_{1,n}(z) =z^r$, the first two conditions of Definition~\ref{def:admF} are fulfilled with $\delta=N=1$ and $K=0$ for all $r$.
We now have
\begin{align*}
|\!|\!|f_{1,n}|\!|\!|_{n}
\leq
n^{-\frac{5}{12}}\left(\alpha\left(n\right)\right)^{-\frac{7}{12}} rx_n^{-1}.
\end{align*}
Since $x_n\to 1$, we have $|\!|\!|f_{1,n}|\!|\!|_{n}\to 0$ if and only if $r= o(n^{\frac{5}{12}}\left(\alpha\left(n\right)\right)^{\frac{7}{12}})$.
This completes the proof of the first half of the statement.
For $(f_{2,n})_{n\in\mathbb{N}}$, we also have only to check the third condition.
Lemma~\ref{lem:saddle_point_with_c} implies that $x_n-1 \geq c\log(n)/\alpha(n)$ for some $c>0$.
Since $x_{n,\alpha} \leq x_n\leq x_{n,\alpha-b}$ and $b=o(\alpha(n))$, we get with Lemma~\ref{lem:saddle_point_with_c}
\begin{align*}
\frac{|f_{2,n}^{\prime}(z)|}{|f_{2,n}(x_n)|}
\leq
\sum_{j=0}^{b-1} x_{n}^{j}
\leq
\frac{ x_{n,\alpha-b}^{b}-1}{x_{n,\alpha}-1}
=
{\mathcal O}(n^\epsilon \alpha(n))
\ \text{ for all $z$ with }|z|=x_n,
\end{align*}
where $\epsilon>0$ can be chosen arbitrarily small.
We thus have $ |\!|\!|f_{2,n}|\!|\!|_{n} \leq n^{-\frac{5}{12}+\epsilon}(\alpha(n))^{\frac{5}{12}}$.
Since $\alpha(n)\leq n^{a_2}$ with $a_2<1$, we see that $|\!|\!|f_{2,n}|\!|\!|_{n}\to 0$ for $\epsilon>0$ small enough.
\end{proof}
We know from \eqref{eq:E(Tb)01} that $\E{T_{0b(n)}^{(n)}} \leq \frac{\alpha(n)n^\epsilon}{\log(n)}$ for each $\epsilon>0$ and $n$ large enough.
This shows that we can use Proposition~\ref{prop:SPmethod} to compute $\Pb{T_{b(n)\alpha(n)}^{\left(n\right)}=n-r}$
and $\Pb{T_{0\alpha(n)}^{\left(n\right)}=n}$ for $r\leq \rho \E{T_{0b(n)}^{\left(n\right)}}$.
We thus have
\begin{align}
\frac{\Pb{T_{b(n)\alpha(n)}^{\left(n\right)}=n-r}}{\Pb{T_{0\alpha(n)}^{\left(n\right)}=n}}
=
x_{n,\alpha}^{-r}x_{n}^{r} \exp\left(-\vartheta\sum_{j=1}^{b}\frac{1}{j}\left(x_{n}^{j}-x_{n,\alpha}^{j}\right)\right)
\left(1 + R_n\right),
\label{eq:dtv_estimate_allmost_finished0}
\end{align}
where
\begin{align*}
R_n
&=
{\mathcal O}\left(\frac{\alpha\left(n\right)}{n} +|\!|\!|f_{1,n}|\!|\!|_{n}+ |\!|\!|f_{2,n}|\!|\!|_{n} \right).
\end{align*}
Note that the implicit constant in the error term in Proposition~\ref{prop:SPmethod} only depends on the used
$K$, $N$ and $\delta$.
Since we use for each $r\leq \rho \E{T_{0b(n)}^{\left(n\right)}}$ the same $K$, $N$ and $\delta$, we get that $R_n$ is uniform in $r$.
We now have to distinguish the two cases $b(n) =o(\alpha(n))$ and $b(n) =o\left(\frac{\alpha(n)}{\log(n)}\right)$
for the error terms in \eqref{eq:thm:main_dtv1} and \eqref{eq:thm:main_dtv2}.
In the case $b(n) =o(\alpha(n))$, we get with \eqref{eq:E(Tb)01} and the proof of Lemma~\ref{lem:NeuerSattel2} that
$$
|\!|\!|f_{1,n}|\!|\!|_{n}
\leq
\frac{r }{n^{\frac{5}{12}}(\alpha(n))^{\frac{7}{12}}}
\leq
\frac{\rho \E{T_{0b(n)}^{\left(n\right)}} }{n^{\frac{5}{12}}(\alpha(n))^{\frac{7}{12}}}
=
{\mathcal O}\left(n^{\epsilon} \left(\frac{\alpha(n)}{n}\right)^{\frac{5}{12}}\right)
$$
for each $ \epsilon>0$. We thus have that $R_n$ is as in \eqref{eq:thm:main_dtv1}.
In the case $b(n) =o\left(\frac{\alpha(n)}{\log(n)}\right)$, we have $\E{T_{0b(n)}^{\left(n\right)}} \sim b(n)$.
Using this, we immediately get that $R_n$ is as in \eqref{eq:thm:main_dtv2}.
It thus remains to compute the asymptotic behaviour of the main term in \eqref{eq:dtv_estimate_allmost_finished0}.
We thus need an estimate for $x_n^b - x_{n,\alpha}^b$.
Unfortunately, the bounds obtained from the Lemmas~\ref{lem:NeuerSattel} and~\ref{lem:saddle_point_with_c} are not strong enough.
To overcome this issue, let us consider for $y\in\mathbb{R}$ the equation
\begin{align}
\vartheta e^{\alpha(n) y} =ny.
\label{eq:def_y_n_alpha}
\end{align}
It is straightforward to see that this equation has two solutions for $\frac{n}{\vartheta\alpha(n)}>e$.
We denote these by $y_{n,\alpha,0}$ and $y_{n,\alpha}$ with $0<y_{n,\alpha,0}<y_{n,\alpha}$.
It is straightforward to see that $y_{n,\alpha,0}\sim\frac{\vartheta}{n}$ and $y_{n,\alpha}\sim\frac{\log (n/\alpha(n))}{\alpha(n)}$ as $n\to\infty$.
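For intuition, the two solutions and their asymptotic orders can be checked numerically; in the sketch below the values of $\vartheta$, $n$ and $\alpha(n)$ are arbitrary illustrative choices, and both roots of $\vartheta e^{\alpha(n)y}=ny$ are bracketed by bisection:

```python
import math

theta, n = 1.0, 10 ** 6
alpha = math.log(n) ** 2  # illustrative choice with alpha(n) -> infinity, alpha(n) = o(n)

def g(y):
    # Zeros of g are exactly the solutions of theta * exp(alpha * y) = n * y.
    return theta * math.exp(alpha * y) - n * y

def bisect(lo, hi, steps=200):
    # Plain bisection; assumes g changes sign on [lo, hi].
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

y_small = bisect(0.0, 1e-3)  # root of order theta / n
y_large = bisect(1e-3, 1.0)  # root of order log(n / alpha) / alpha
```

Here `y_small * n / theta` is close to $1$ and `alpha * y_large` is close to $\log\big(\frac{n}{\vartheta\alpha(n)}\log\frac{n}{\vartheta\alpha(n)}\big)$, in line with the stated asymptotics.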
We have
\begin{lem}
\label{lem:NeuerSattel3}
We have
\begin{align}
\alpha(n)\,y_{n,\alpha}
=
\log\left(\frac{n}{\vartheta\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right)
+
O\left( \frac{\log\log(n)}{\log(n)}\right).
\label{eq_y_n_asympt}
\end{align}
Furthermore, we have for $b=o(\alpha(n))$ that
\begin{align}
\log(x_{n,\alpha}) = y_{n,\alpha} + {\mathcal O}\left(\frac{1}{n\log(n)}\right)
\ \text{ and } \
\log(x_{n}) = y_{n,\alpha} + {\mathcal O}\left(\frac{e^{by_{n,\alpha}}}{n\log(n)}\right).
\label{eq:x_n_with_y_n_alpha}
\end{align}
\end{lem}
We first complete our computations of the main term in \eqref{eq:dtv_estimate_allmost_finished0} with Lemma~\ref{lem:NeuerSattel3}
and then give the proof of Lemma~\ref{lem:NeuerSattel3}.
We have
\begin{align}
\vartheta\sum_{j=1}^{b}\frac{1}{j}\left(x_{n}^{j}-x_{n,\alpha}^{j}\right)
&=
\vartheta \sum_{j=0}^{b-1}\int_{x_{n,\alpha}}^{x_{n}}v^{j}\mathrm{d}v
=
\vartheta \int_{x_{n,\alpha}}^{x_{n}}\frac{v^{b}-1}{v-1}\mathrm{d}v
\leq
\frac{\vartheta}{x_{n,\alpha} -1} \int_{x_{n,\alpha}}^{x_{n}}v^{b}\mathrm{d}v\nonumber\\
&=
\frac{\vartheta}{x_{n,\alpha} -1} \left( \frac{(x_n)^{b+1}}{b+1} - \frac{(x_{n,\alpha})^{b+1}}{b+1}\right).
\label{eq:dtv_estimate_allmost_finished}
\end{align}
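The first step above rests on the elementary identity $\sum_{j=1}^{b}\frac{1}{j}\big(x^{j}-y^{j}\big)=\int_{y}^{x}\frac{v^{b}-1}{v-1}\,\mathrm{d}v$, which can be sanity-checked numerically (the sample values below are arbitrary):

```python
def lhs(x, y, b):
    # sum_{j=1}^{b} (x^j - y^j) / j
    return sum((x ** j - y ** j) / j for j in range(1, b + 1))

def rhs(x, y, b):
    # Integrate (v^b - 1)/(v - 1) = 1 + v + ... + v^(b-1) via its antiderivative.
    def F(v):
        return sum(v ** (j + 1) / (j + 1) for j in range(b))
    return F(x) - F(y)

assert abs(lhs(1.2, 1.1, 4) - rhs(1.2, 1.1, 4)) < 1e-12
assert abs(lhs(1.05, 1.0, 10) - rhs(1.05, 1.0, 10)) < 1e-12
```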
We use \eqref{eq:x_n_with_y_n_alpha} and get for some $\epsilon>0$
\begin{align*}
(x_n)^{b+1} - (x_{n,\alpha})^{b+1}
&=
(x_{n,\alpha})^{b+1}
\left(\exp\Big( (b+1) (\log x_{n} - \log x_{n,\alpha}) \Big) - 1 \right)\\
&=
(x_{n,\alpha})^{b+1}
\left(\exp\left( (b+1) {\mathcal O}\left(\frac{e^{by_{n,\alpha}}}{n\log(n)}\right) \right) - 1 \right)\\
& =
(x_{n,\alpha})^{b+1} (b+1) {\mathcal O}\left(\frac{e^{by_{n,\alpha}}}{n\log n}\right).
\end{align*}
Equation~\eqref{eq_y_n_asympt} and Lemma~\ref{lem:saddle_point_with_c} imply that
$e^{by_{n,\alpha}}= {\mathcal O}\left( n^{\epsilon} \right)$ and $(x_{n,\alpha})^{b+1} = {\mathcal O}\left( n^{\epsilon} \right)$,
where $\epsilon>0$ can be chosen arbitrarily small.
Using this and \eqref{eq:approx_xn}, we get
\begin{align*}
\vartheta\sum_{j=1}^{b}\frac{1}{j}\left(x_{n}^{j}-x_{n,\alpha}^{j}\right)
&=
{\mathcal O}\left(\frac{(b+1)e^{by_{n,\alpha}} (x_{n,\alpha})^{b+1} }{n\log^2(n)}\right)
=
{\mathcal O}\left(\frac{b+1}{n^{1-2\epsilon}\log^2(n)}\right).
\end{align*}
Inserting this into \eqref{eq:dtv_estimate_allmost_finished} gives
\begin{align*}
\frac{\Pb{T_{b(n)\alpha(n)}^{\left(n\right)}=n-r}}{\Pb{T_{0\alpha(n)}^{\left(n\right)}=n}}
&\geq
\exp\left( {\mathcal O}\left(\frac{b+1}{n^{1-2\epsilon}\log^2(n)}\right)\right) \left(1+ R_n\right)\\
&=
1+{\mathcal O}\left(n^{\epsilon} \left(\frac{\alpha(n)}{n}\right)^{\frac{5}{12}}\right).
\end{align*}
This equation together with \eqref{eq:dtv_with_Tn2} completes the proof of Theorem~\ref{thm:main_dtv}.
\begin{proof}[Proof of Lemma~\ref{lem:NeuerSattel3}]
We start with \eqref{eq_y_n_asympt}.
We insert the approach
$$y = \frac{1}{\alpha(n)}\log\left(\frac{n}{\vartheta\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right) +v$$
with $v\in\mathbb{R}$ into \eqref{eq:def_y_n_alpha}.
This leads to the equation
\begin{align}
\log\left(\frac{n}{\vartheta\alpha(n)}\right) e^{\alpha(n) v}
=
\log\left(\frac{n}{\vartheta\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right) + \alpha(n)v.
\label{eq:reform_saddle_equation0}
\end{align}
Note that we have
\begin{align}
\log(y) \leq \log(y\log(y)) \leq (1+\epsilon) \log(y)
\label{eq:bound_for_logs}
\end{align}
for all $\epsilon>0$ and $y$ large enough.
Using this, it is straightforward to see that equation \eqref{eq:reform_saddle_equation0} has exactly one solution in the region $v\geq 0$ and that this solution has to be $o\left(\frac{1}{\alpha(n)}\right)$ as $n\to\infty$. To obtain a lower bound for $v$, we use the inequality $e^{x}\leq 1+2x$ for $0\leq x\leq \log 2$.
Thus $v$ is larger than the solution $v'$ of the equation
\begin{align}
\log\left(\frac{n}{\vartheta\alpha(n)}\right) (1+2 \alpha(n) v')
=
\log\left(\frac{n}{\vartheta\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right) + \alpha(n)v'.
\end{align}
A simple computation gives
\begin{align}
v'= \frac{\log\log\left(\frac{n}{\vartheta\alpha(n)}\right) }{2\alpha(n)\log\left(\frac{n}{\vartheta\alpha(n)}\right) -\alpha(n)}.
\end{align}
This establishes a lower bound for $v$. For an upper bound, we argue similarly with $1+x \leq e^{x}$ for $x\geq 0$.
This completes the proof of \eqref{eq_y_n_asympt}.
We prove \eqref{eq:x_n_with_y_n_alpha} only for $x_n$.
The asymptotics for $x_{n,\alpha}$ then follows immediately by inserting $b=0$ into the asymptotics for $x_n$.
The defining equation \eqref{eq:def_xn} of $x_n$ can be rewritten as
\begin{align}
\vartheta(x_n)^{\alpha(n)} -\vartheta(x_n)^{b}
=
n\left(1-(x_n)^{-1} \right).
\end{align}
We now insert $x_n = e^y$. This gives
\begin{align}
\vartheta e^{\alpha(n)y} -\vartheta e^{by}
=
n\left(1-e^{-y} \right).
\label{eq:reform_saddle_equation}
\end{align}
The equation \eqref{eq:reform_saddle_equation} has exactly one solution in the region $y>0$.
Further, both sides of \eqref{eq:reform_saddle_equation} are monotone increasing functions of $y$.
Inserting $y=y_{n,\alpha} \pm \frac{c}{\alpha(n)}$ with $c>0$ into \eqref{eq:reform_saddle_equation} and using \eqref{eq_y_n_asympt} shows that
the RHS of \eqref{eq:reform_saddle_equation} behaves like
\begin{align}
n\left(1-e^{-y_{n,\alpha} \pm \frac{c}{\alpha(n)}} \right)
\sim
\frac{n}{\alpha(n)}\log\left(\frac{n}{\vartheta\alpha(n)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right).
\end{align}
On the other hand, the LHS of \eqref{eq:reform_saddle_equation} behaves like
\begin{align}
\vartheta e^{\alpha(n)\left(y_{n,\alpha} \pm \frac{c}{\alpha(n)}\right)} -\vartheta e^{b\left(y_{n,\alpha} \pm \frac{c}{\alpha(n)}\right)}
\sim
e^{\pm c}\frac{n}{\alpha(n)}\log\left(\frac{n}{\vartheta\alpha(n)}\right).
\end{align}
Using \eqref{eq:bound_for_logs}, we immediately see that the solution of \eqref{eq:reform_saddle_equation}
has to be in the interval $[y_{n,\alpha} - \frac{c}{\alpha(n)}, y_{n,\alpha} + \frac{c}{\alpha(n)}]$.
We now use the approach $y = y_{n,\alpha} +v$.
Clearly, we must have $v=o\left(\frac{1}{\alpha(n)}\right)$.
We now argue as for \eqref{eq_y_n_asympt}.
To get a lower bound for $v$, we use $1+x\leq e^x$ and $1-e^{-x}\leq x$.
This leads to the equation
\begin{align*}
\vartheta e^{\alpha(n) y_{n,\alpha}} (1 +\alpha(n) v') - \frac{3}{2}\vartheta e^{by_{n,\alpha}}
=
n(y_{n,\alpha} +v').
\end{align*}
Using the definition of $y_{n,\alpha}$ in \eqref{eq:def_y_n_alpha}, we immediately get
\begin{align*}
v'
=
\frac{3\vartheta e^{by_{n,\alpha}}}{2\vartheta e^{\alpha(n) y_{n,\alpha}}\alpha(n)- 2n}
=
\frac{3\vartheta e^{by_{n,\alpha}}}{2 n y_{n,\alpha} \alpha(n)- 2n}
\sim
\frac{3\vartheta e^{by_{n,\alpha}}}{2 n \log(n)}.
\end{align*}
The upper bound is obtained similarly. This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:Haupt}}
\label{sect:proof_clt}
We give here the proof for the case $K=1$ only.
We thus write $m(n)$ and $\mu_{m(n)}$ instead of $m_1(n)$ and $\mu_{m_1(n)}(n)$.
This mainly simplifies the notation, but does not change the argument used.
As in \cite{BeScZe17}, the proof will be based upon point-wise convergence of moment-generating
functions.
Replacing $s$ by $\frac{s}{\sqrt{\mu_{m(n)}}}$ in \eqref{eq:moment_Cm}, we get
\begin{align*}
M_{n}(s)
&:=
\ETa{\exp\left(\frac{s}{\sqrt{\mu_{m(n)}}} C_{m(n)}\right)}\\
&=
\frac{1}{Z_{n,\alpha}}\left[z^{n}\right]
\exp\left(\vartheta\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}\frac{z^{m(n)}}{m(n)}+\vartheta\sum_{\substack{1\leq j\leq \alpha(n),\\ j\neq m(n)}}\frac{z^{j}}{j}\right).
\end{align*}
In order to determine the asymptotic behaviour of $M_{n}(s)$, we apply Proposition~\ref{prop:SPmethod} with the triangular array ${\boldsymbol q} = (q_{j,n})_{1 \leq j \leq \alpha(n), n \in \mathbb{N}}$ with
\begin{align}
q_{j,n}
=
\begin{cases}
0 & \text{if }j>\alpha\left(n\right)\\
\vartheta \exp\left(s/\sqrt{\mu_{m(n)}}\right) & \text{if }j=m\left(n\right)\\
\vartheta & \text{otherwise},
\end{cases}
\label{eq:triangular_array_CLT_Cm}
\end{align}
together with $f_n(z)=1$ for all $n$.
We thus have to show that ${\boldsymbol q}$ and the sequence $(f_n)_{n\in\mathbb{N}}$ are both admissible, see Definition~\ref{def:admQ} and~\ref{def:admF}.
The sequence $(f_n)_{n\in\mathbb{N}}$ is admissible for all triangular arrays.
Thus we have only to show that ${\boldsymbol q}$ is admissible.
Hence, we have to study the solution $x_{n,{\boldsymbol q}}$ of the equation \eqref{eq:GenSaddle}.
Since this solution depends on the parameter $s$, we write $x_{n,{\boldsymbol q}}(s)$ instead of $x_{n,{\boldsymbol q}}$.
Also, we will write $\lambda_{2,n}(s)$ for $\lambda_{2,n,\alpha,{\boldsymbol q}}$ with $\lambda_{2,n,\alpha,{\boldsymbol q}}$ as in \eqref{eq:def_lambda_p}.
We now show
\begin{lem}
\label{lem:HilfeFuerAdm}
Let ${\boldsymbol q}$ be as in \eqref{eq:triangular_array_CLT_Cm} and $x_{n,{\boldsymbol q}}(s)$ be defined as in \eqref{eq:GenSaddle}.
Suppose that $\mu_{m(n)}\to\infty$ with $\mu_{m(n)}$ as in \eqref{eq:def_mu_n}.
Then we have, locally uniformly in $s\in\mathbb{R}$,
that
\begin{equation}
\alpha\left(n\right)\log\left(x_{n,{\boldsymbol q}}\left(s\right)\right)
\sim
\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)\right).\label{eq:XnCasymp}
\end{equation}
In particular, if $n$ is large enough,
\[
x_{n,{\boldsymbol q}}(s)\geq1
\text{ and }
\lim_{n\to\infty}x_{n,{\boldsymbol q}}(s)=1.
\]
Furthermore, we have
\begin{equation}
\lambda_{2,n}(s)
\sim
n \alpha\left(n\right)
\label{eq:Lambda2asymp}
\end{equation}
locally uniformly in $s$ with $\lambda_{2,n}(s)= \lambda_{2,n,{\boldsymbol q},\alpha}$ as in \eqref{eq:def_lambda_p}.
\end{lem}
\begin{proof}
We use Lemma~\ref{lem:saddle_point_with_c} to prove Lemma~\ref{lem:HilfeFuerAdm}.
Recall that $x_{n,\alpha}(c)$ is defined in \eqref{eq:def_xn(c)} for $c>0$ as the solution of
\begin{align*}
cn = \vartheta\sum_{j=1}^{\alpha(n)} \big( x_{n,\alpha}(c) \big)^j.
\end{align*}
Furthermore $x_{n,{\boldsymbol q}}(s)$ is the solution of the equation
\begin{align}
n
=
\vartheta\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right)\big(x_{n,{\boldsymbol q}}(s)\big)^{m(n)}
+
\vartheta\sum_{j=1}^{\alpha(n)}\big(x_{n,{\boldsymbol q}}(s)\big)^{j} .
\label{eq:lem:HilfeFuerAdm1}
\end{align}
We now assume that $0\leq s \leq U$ with $U>0$ an arbitrary, but fixed real number.
Since $\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}} \geq 1$, we get
\begin{align}
x_{n,{\boldsymbol q}}(s) \leq x_{n,\alpha}(1) = x_{n,\alpha},
\end{align}
where $x_{n,\alpha}$ is as in \eqref{eq:StaSad}.
Using the definition of $\mu_{m(n)}$ together with $s\leq U$ and $\mu_{m(n)}\to\infty$, we obtain for $n$ large
\begin{align*}
\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right)\big(x_{n,{\boldsymbol q}}(s)\big)^{m(n)}
&\leq
\frac{2U}{\sqrt{\mu_{m(n)}}} \big(x_{n,\alpha}\big)^{m(n)}
=
\frac{2U}{\sqrt{\vartheta}} \sqrt{m(n)} \big(x_{n,\alpha}\big)^{\frac{m(n)}{2}}\\
&\leq
\frac{2U}{\sqrt{\vartheta}} \sqrt{\alpha(n) \big(x_{n,\alpha}\big)^{\alpha(n)}}.
\end{align*}
Applying Lemma~\ref{lem:saddle_point_with_c} for $x_{n,\alpha} = x_{n,\alpha}(1) $, we get for $n$ large
\begin{alignat*}{1}
\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right)\big(x_{n,{\boldsymbol q}}(s)\big)^{m(n)}
\leq &
\frac{4U}{\vartheta}\sqrt{n \log\left(\frac{n}{\vartheta\alpha\left(n\right)}\right)}
\leq n^{1/2+\epsilon},
\end{alignat*}
for $\epsilon>0$ small.
Inserting this into \eqref{eq:lem:HilfeFuerAdm1}, we get
\begin{align}
\vartheta\sum_{j=1}^{\alpha(n)}\big(x_{n,{\boldsymbol q}}(s)\big)^{j}
=
n- \vartheta\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right)\big(x_{n,{\boldsymbol q}}(s)\big)^{m(n)}
\geq
n(1 - n^{-1/2+\epsilon}).
\end{align}
Using the definition of $x_{n,\alpha}(c)$, we see that
\begin{align}
x_{n,\alpha}\left(1 - n^{-1/2+\epsilon} \right) \leq x_{n,{\boldsymbol q}}(s) \leq x_{n,\alpha}(1).
\end{align}
Applying Lemma~\ref{lem:saddle_point_with_c} to $x_{n,\alpha}\left(1 - n^{-1/2+\epsilon} \right)$ and $x_{n,\alpha}(1)$
immediately completes the proof for $0\leq s\leq U$.
The argument for $-U\leq s \leq 0$ is similar and we thus omit it.
\end{proof}
Lemma~\ref{lem:HilfeFuerAdm} implies that $x_{n,{\boldsymbol q}}(s)$ with ${\boldsymbol q}$ in \eqref{eq:triangular_array_CLT_Cm} is admissible.
Thus we can apply Proposition~\ref{prop:SPmethod}.
We obtain for each $s\geq 0$ that
\begin{align}
M_{n}\left(s\right)
=
\frac{1}{Z_{n,\alpha}}\frac{\exp\left(h_{n}\left(s\right)\right)}{\sqrt{2\pi\lambda_{2,n}(s)}}\left(1+o\left(1\right)\right),
\end{align}
where
\begin{align}
h_{n}(s)
=
\vartheta\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right)\frac{\left(x_{n,{\boldsymbol q}}(s)\right)^{m\left(n\right)}}{m(n)}
+
\vartheta\sum_{j=1}^{\alpha\left(n\right)}\frac{\left(x_{n,{\boldsymbol q}}(s)\right)^{j}}{j}-n\log\left(x_{n,{\boldsymbol q}}\left(s\right)\right).
\label{eq:def_h(s)_CLT}
\end{align}
Since $M_{n}\left(0\right)=1$, we have
\[
\frac{1}{Z_{n,\alpha}}\frac{\exp\left(h_{n}\left(0\right)\right)}{\sqrt{2\pi\lambda_{2,n}(0)}}\xrightarrow{n\to\infty}1.
\]
Our aim is to use this result to complete the proof of Theorem~\ref{thm:Haupt}.
We observe from \eqref{eq:Lambda2asymp} that the leading coefficient of $\lambda_{2,n}(s)$ is independent of $s$.
Therefore, we have proven Theorem~\ref{thm:Haupt} if we can show that for each $s\geq 0$
\begin{align}
h_{n}(s) = h_n(0) + s \sqrt{\mu_{m(n)}} + \frac{s^2}{2} + o\left(1 \right)
\qquad
\text{ as }n\to\infty.
\label{eq:def_h(s)_CLT_asympt}
\end{align}
We begin with the derivatives of $x_{n,{\boldsymbol q}}(s)$
\begin{lem}
\label{lem:SPabl}
The function $s\mapsto x_{n,{\boldsymbol q}}(s)$ is infinitely differentiable for each $n$. Further, we have
\begin{align}
\frac{x_{n,{\boldsymbol q}}^\prime(s)}{x_{n,{\boldsymbol q}}(s)}
=
-\frac{\exp\left(\frac{s}{\sqrt{\mu_{m(n)}}}\right)\left(x_{n,{\boldsymbol q}}\left(s\right)\right)^{m\left(n\right)}}{\sqrt{\mu_{m(n)}}\lambda_{2,n}(s)}.
\end{align}
\end{lem}
\begin{proof}
Let $n$ be fixed. Since all coefficients of ${\boldsymbol q}$ in \eqref{eq:triangular_array_CLT_Cm} are non-negative and not all $0$,
it follows that the equation \eqref{eq:GenSaddle} has for each $s\geq 0$ exactly one solution.
Thus the function $s \mapsto x_{n,{\boldsymbol q}}(s)$ is well defined on $[0,\infty)$.
Applying the implicit function theorem to the function
\begin{align*}
g(s,x)
=
\vartheta\left(\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}-1\right) x^{m(n)}
+
\vartheta\sum_{j=1}^{\alpha(n)}x^{j}
\end{align*}
and using that $\frac{\partial}{\partial x} g(s,x) >0$ for $x>0$ completes the proof.
\end{proof}
Applying Lemma~\ref{lem:SPabl} to $h_n(s)$, we obtain
\begin{lem}
We have
\begin{align}
h'_{n}(s)
&=
\vartheta\frac{\mathrm{e}^{\frac{s}{\sqrt{\mu_{m(n)}}}}}{\sqrt{\mu_{m(n)}}}\frac{\left(x_{n,{\boldsymbol q}}(s)\right)^{m\left(n\right)}}{m\left(n\right)},
\label{eq:def_h(s)_CLT'}\\
h''_{n}(s)
&=
\frac{1}{\sqrt{\mu_{m(n)}}} h'_{n}(s)- \frac{(m(n))^2}{\lambda_{2,n}(s)} \left(h'_{n}(s)\right)^2,
\label{eq:def_h(s)_CLT''}\\
h'''_{n}(s)
&=
\frac{1}{\sqrt{\mu_{m(n)}}} h''_{n}(s)
-
\frac{2(m(n))^2}{\lambda_{2,n}(s)} h'_{n}(s)h''_{n}(s)
+ \frac{\lambda_{3,n}(s) x_{n,{\boldsymbol q}}^\prime(s)}{\big(\lambda_{2,n}(s)\big)^2} \left(h'_{n}(s)\right)^2.
\label{eq:def_h(s)_CLT'''}
\end{align}
\end{lem}
\begin{proof}
Equation~\eqref{eq:def_h(s)_CLT'} follows immediately from \eqref{eq:def_h(s)_CLT} and the definition of $x_{n,{\boldsymbol q}}(s)$.
Equation~\eqref{eq:def_h(s)_CLT''} and~\eqref{eq:def_h(s)_CLT'''} follow from Lemma~\ref{lem:SPabl}, equation~\eqref{eq:def_h(s)_CLT'} and
the definition of $\lambda_{2,n}(s)$, see Definition~\ref{def:admQ}.
\end{proof}
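For orientation, we sketch the evaluation at $s=0$ (it uses $x_{n,{\boldsymbol q}}(0)=x_{n,\alpha}$ and $\mu_{m(n)}=\vartheta x_{n,\alpha}^{m(n)}/m(n)$):

```latex
h'_{n}(0)
= \frac{\vartheta}{\sqrt{\mu_{m(n)}}}\,\frac{\big(x_{n,\alpha}\big)^{m(n)}}{m(n)}
= \sqrt{\mu_{m(n)}},
\qquad
h''_{n}(0)
= \frac{h'_{n}(0)}{\sqrt{\mu_{m(n)}}}
  - \frac{(m(n))^{2}\big(h'_{n}(0)\big)^{2}}{\lambda_{2,n}(0)}
= 1 - \frac{(m(n))^{2}\,\mu_{m(n)}}{\lambda_{2,n}(0)},
```

and the last fraction is $o(1)$ in the regime considered here, since $\lambda_{2,n}(0)\sim n\alpha(n)$ by \eqref{eq:Lambda2asymp}; a second-order Taylor expansion of $h_n$ around $s=0$, with the third derivative controlled via \eqref{eq:def_h(s)_CLT'''}, then leads to \eqref{eq:def_h(s)_CLT_asympt}.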
Equation \eqref{eq:def_h(s)_CLT_asympt} now follows by using
$x_{n,{\boldsymbol q}}(0) = x_{n,\alpha}$, $\mu_{m} = \vartheta \frac{x^{m}_{n,\alpha}}{m}$
and Lemma~\ref{lem:HilfeFuerAdm}. This completes the proof of Theorem~\ref{thm:Haupt}.
\bibliographystyle{plain}
\section{Introduction}
\label{s:introduction}
IGRJ17361-4441 is a hard X-ray transient source first observed by \citet{gibaud2011}
with the IBIS/ISGRI telescope \citep{ubertini2003} onboard the {\it INTEGRAL} satellite \citep{winkler2003}
and was quickly recognized to be hosted in the galactic globular cluster NGC 6388 \citep{ferrigno2011}.
The location of the transient (close to the globular cluster gravitational center, but see later)
is of great importance since NGC 6388, among all the globular clusters in our Galaxy,
is one of the best candidates \citep{baumgardt2005} to host an intermediate mass black hole (hereafter IMBH). In particular,
by using high resolution optical observations, \citet{lanzoni2007} estimated the mass of the IMBH to be $\simeq 5700$
M$_{\odot}$.
It would be natural for such an IMBH to emit significant radiation
in the X-ray band due to the likely accretion of matter from its surroundings.
{In the context of the earliest observations of globular clusters,
\citet{bahcall1975} and \citet{silk1976} were the first to suggest that the X-ray
emission detected towards these clusters was due to IMBHs (in the mass range $20$ M$_{\odot}$ -- $10^6$M$_{\odot}$)
accreting from the intracluster medium. This issue was considered more recently by \citet{grindlay2001} who provided the census of the compact
object and binary population in the globular cluster 47 Tuc and obtained an upper limit to the central IMBH of a few hundred solar masses.}
Initial {\it XMM}-Newton and Chandra observations in this direction (\citealt{nucita2008} and \citealt{cseh2010}) showed that
the core of NGC 6388 hosts several {\it X}-ray sources.
{Based on the correlation between the X-ray and radio flux from black holes (\citealt{merloni2003}), \citet{maccarone2004}
was the first to point out that the search for radio emission from faint black holes is useful to test the IMBH hypothesis in globular clusters and dwarf
spheroidal galaxies (\citealt{maccarone2005}). \citet{cseh2010}
observed the central region of NGC 6388 in the radio band using the Australia Telescope Compact Array (ATCA) to search for radio signatures of the IMBH.
The radio observation resulted in an upper limit of the IMBH mass of $\simeq 1500$ M$_{\odot}$.}
The discovery of a transient source close to the NGC 6388 gravitational center could be related to
the turning on of the putative globular cluster IMBH.
However, as will become clear in the subsequent sections,
the nature and spectral properties of the transient IGRJ17361-4441 are difficult to reconcile with the IMBH picture
and rather favour an interpretation as a high mass X-ray binary (HMXB) or
a low mass X-ray binary (LMXB).
Several observational campaigns (in the {\it X}-rays as well as in the
radio band) were organized in order to pinpoint IGRJ17361-4441 and draw firm conclusions
on the NGC 6388 IMBH paradigm.
In this paper we briefly discuss (see section \ref{s:previousObs}) the past $X$-ray observations of the NGC 6388 globular cluster (see \citealt{nucita2008}, and
\citealt{cseh2010}) and the discovery of
the hard transient IGRJ17361-4441 by {\it INTEGRAL} (\citealt{gibaud2011}) as well as the
follow-up observations conducted by Chandra (\citealt{pooley2011}),
Swift/XRT and RXTE observatories (\citealt{ferrigno2011} and \citealt{bozzo2011}).
Then we concentrate (see section \ref{s:xmmObs})
on the analysis of two {\it XMM}-Newton slew observations of NGC 6388 conducted 15 days after the {\it INTEGRAL}
discovery of the source. The two slew observations had $\simeq 7.6$ seconds and $\simeq 7.7$ seconds of on-source exposure time. Finally, our conclusions are presented in
section \ref{s:conclusion}.
\section{Previous observations of NGC 6388}
\label{s:previousObs}
\subsection{{\it XMM}-Newton and Chandra observations of NGC 6388}
By studying a combination of high resolution (HST ACS-HRC, ACS-WFC, and WFPC2) and wide field (ESO-WFI) observations
of the globular cluster NGC 6388, \citet{lanzoni2007} claimed the existence of a central IMBH. Such a compact object of mass
$\simeq 5700$ M$_{\odot}$ should reside in the globular cluster center of gravity\footnote{Note however that, as first pointed out by \citet{bahcall1976},
a black hole in a stellar cluster will experience a Brownian motion due to gravitational interactions with the surrounding objects.
Thus, the black hole is not necessarily at the dynamical center of the host cluster, but may move with mean square velocity given by
(\citealt{merritt2007})
\begin{equation}
\frac{1}{2}M<v_{rms}^2>\simeq \frac{3}{2}m\sigma^2
\label{vrms}
\end{equation}
where $M$, $m$ and $\sigma$ represent the black hole mass, the perturber average mass and the stellar velocity
dispersion within $\sim 0.6 r_i$, respectively. Here $r_i$ is the influence radius of the black hole (for details see \citealt{merritt2007}
and references therein).} localized at the coordinates (J2000)
$RA=17^{\rm h}~36^{\rm m}~17.23^{\rm s}$, $Dec=-44^0~44^{\prime}~7.1^{\prime\prime}$. An uncertainty of $0.3^{\prime\prime}$ is associated with both coordinates.
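For an order-of-magnitude feel for Equation~(\ref{vrms}), the expected Brownian wander velocity of the putative IMBH is tiny; in the sketch below the mean perturber mass and the velocity dispersion are assumed, illustrative values, not measurements quoted in this work:

```python
import math

M_bh = 5700.0    # putative IMBH mass [M_sun] (Lanzoni et al. 2007)
m_star = 1.0     # assumed mean perturber mass [M_sun] (illustrative)
sigma = 13.0     # assumed stellar velocity dispersion [km/s] (illustrative)

# Equipartition: (1/2) M <v_rms^2> ~ (3/2) m sigma^2  =>  v_rms = sqrt(3 m / M) sigma
v_rms = math.sqrt(3.0 * m_star / M_bh) * sigma  # [km/s]
```

With these numbers $v_{rms}\simeq 0.3$ km s$^{-1}$, i.e. the hole is expected to stay close to, but not exactly at, the cluster center.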
\citet{nucita2008} suggested that this IMBH should emit radiation in the $X$-ray band due to accretion from
the surrounding matter. A $48$ ks {\it XMM}-Newton observation was made on 21 March 2003. It resulted in a spectrum which was
well fit by an absorbed power-law model. The resulting best fit
parameters were $N_H=(2.7\pm0.3)\times 10 ^{21}$ cm$^{-2}$ for the hydrogen column density and $\Gamma=2.4\pm0.1$ for the power law index.
The unabsorbed flux in the $0.5-7$ keV band was $F_{0.5-7}=(4.0\pm0.2)\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ which,
for a distance of $13.2$ kpc corresponds to a luminosity
of $L_{0.5-7}\simeq (7.2\pm 0.4)\times 10^{33}$ erg s$^{-1}$. Note that the hydrogen column density is consistent with the average one
found in the direction of the target (\citealt{dickey1990}).
The Chandra satellite, with a much better angular resolution than that of $XMM$-Newton, observed towards NGC 6388 for $\simeq 45$ ks on
21 April 2005 (id 5505). \citet{nucita2008} identified 16 discrete
sources within the half mass radius ($\simeq 40^{\prime\prime}$, see \citealt{lanzoni2007}) of the cluster. The $3$ sources close to the gravitational center
were not spatially resolved by the authors, so that they were considered virtually as a single source (labeled as $\#14^*$).
The unabsorbed flux in the 0.5-7 keV band
of the $\#14^*$ is $F_{0.5-7}\simeq 1.7\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$, corresponding to a luminosity of
$L_{0.5-7}\simeq 3\times 10^{33}$ erg s$^{-1}$.
A more detailed analysis on the same Chandra data set was conducted by \citet{cseh2010}. After removing the pixel randomization, these authors were able
to spatially resolve the source $\#14^*$ into three separate sources labeled as $\#12$, $\#7$ and $\#3$. In particular, the source $\#12$, which is consistent
with the position of the center of gravity of NGC 6388, is characterized by an unabsorbed flux of $F_{0.3-8}\simeq 4.0\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$
corresponding to an intrinsic luminosity of $L_{0.5-7}\simeq 8.3\times 10^{32}$ erg s$^{-1}$.
{\citet{cseh2010} searched for a radio counterpart of the putative IMBH in NGC 6388
using the ATCA facility}. Unfortunately, this search only resulted in an upper limit to the
radio flux at 5 GHz of $\simeq 81$ $\mu$Jy/beam. Therefore, it was only possible to determine an upper limit to the IMBH radio
luminosity of $L_R < 8.4\times 10^{28}$ erg s$^{-1}$.
Based on the fundamental plane of black hole accretion (\citealt{merloni2003} and \citealt{kording2006}) and using the observed $X$-ray and radio luminosities,
it was then possible to put a 3$\sigma$ upper limit of $\simeq 1500$ M$_{\odot}$ on the mass of the IMBH in NGC 6388 (\citealt{cseh2010}).
The estimated mass value has to be treated with caution for two reasons: {\it i}) the identification of the $X$-ray counterpart of such a black hole
is not trivial since several sources are close to the NGC 6388 center of gravity. If none of them are associated with the IMBH,
then one can not use the fundamental plane relation to get an estimate of the mass; {\it ii}) the fundamental plane relation
(as derived by \citealt{merloni2003} and \citealt{kording2006}) is not tested for black hole masses in the range of interest
for IMBHs, i.e. $10^3$ M$_{\odot}$ -- $10^4$ M$_{\odot}$. {Note however that \citet{maccarone2005} and \citet{maccarone2008} showed that the non-detection of a
radio source, in combination with the estimate of the globular cluster ISM density\footnote{The amount of gas contained in globular clusters is an issue of debate.
The intracluster medium density can be estimated by using the dispersion measures of the pulsars observed within the cluster
(see e.g. \citealt{freire2001, freire2003}) or inferred by the empirical knowledge about the stellar mass loss (\citealt{pfahl}).}
and the expected value of the accretion rate, can be used to get information
(at least as an order of magnitude estimate) of the IMBH mass.}
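As a hedged illustration of how such a limit is derived, one can invert the \citet{merloni2003} fundamental-plane relation $\log L_R = 0.60\log L_X + 0.78\log M + 7.33$; the coefficients below are the nominal ones, whereas the published $3\sigma$ limits also fold in the relation's intrinsic scatter, which this sketch ignores:

```python
import math

L_radio = 8.4e28  # 5 GHz radio upper limit [erg/s] (Cseh et al. 2010)
L_xray = 8.3e32   # X-ray luminosity of source #12 [erg/s]

# Invert log L_R = 0.60 log L_X + 0.78 log M + 7.33 for the black hole mass M.
log_M = (math.log10(L_radio) - 0.60 * math.log10(L_xray) - 7.33) / 0.78
M_limit = 10.0 ** log_M  # nominal mass value [M_sun]; relation scatter not included
```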
\subsection{{\it INTEGRAL} discovery of IGRJ17361-4441 and subsequent $X$-ray follow-up observations}
On 11 August 2011, \citet{gibaud2011} reported the discovery of a new hard $X$-ray transient (IGRJ17361-4441) by the IBIS/ISGRI
telescope \citep{ubertini2003} onboard the {\it INTEGRAL} satellite \citep{winkler2003}. The spectrum of the source,
associated with the globular cluster NGC 6388, was described by a power law with photon index $\Gamma=2.6^{+1.0}_{-0.7}$ and characterized by a flux
in the 20-100 keV of $F_{20-100}\simeq 9.7\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$.
Since this newly discovered transient is possibly associated with the IMBH in NGC 6388, IGRJ17361-4441
became the target of several $X$-ray follow-up observations aimed at obtaining accurate position and flux measurements
to test if this association is correct.
\citet{weinands2011} (but see also \citealt{ferrigno2011}) reported a Swift/XRT observation (1.9 Ks on 16 August 2011)
in which the astrometrically corrected position of
IGRJ17361-4441 was $RA=17^{\rm h}~36^{\rm m}~17.5^{\rm s}$, $Dec=-44^0~44^{\prime}~7.1^{\prime\prime}$.
A more detailed analysis on the XRT data conducted by \citet{bozzo2011} determined the new transient position
to be $RA=17^{\rm h}~36^{\rm m}~17.27^{\rm s}$, $Dec=-44^0~44^{\prime}~7.0^{\prime\prime}$ with an associated error (on both coordinates) of $1.9^{\prime\prime}$.
Thus, the distance\footnote{We note that the distances between the sources and the center of gravity of the globular cluster NGC 6388 are calculated by using the well-known
haversine formula. Distance uncertainties are calculated by correctly propagating the errors on both $\alpha$ and $\delta$ coordinates.}
of the transient from the center of gravity of NGC 6388 is $0.4^{\prime\prime} \pm 1.4^{\prime\prime}$. Hence, the source position is consistent
(according to the Swift/XRT data) with the center of gravity of the cluster and (possibly) associated with the IMBH. Note also that
a 2.5 Ks Chandra observation was made on 29 August 2011 (\citealt{weinands2011}) in order
to improve the accuracy of the location of the transient.
IGRJ17361-4441 is located at the coordinates $RA=17^{\rm h}~36^{\rm m}~17.418^{\rm s}$, $Dec=-44^0~44^{\prime}~5.98^{\prime\prime}$
({the nominal Chandra positional accuracy is $0.6^{\prime\prime}$ on both coordinates}). In this case,
the estimated distance of the transient to the cluster center of gravity is $2.3^{\prime\prime}\pm0.5^{\prime\prime}$.
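The angular offsets quoted here follow from the haversine formula; a minimal sketch reproducing the $\simeq 2.3^{\prime\prime}$ Chandra offset from the coordinates listed above:

```python
import math

def ang_sep_arcsec(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    # Great-circle separation via the haversine formula, returned in arcseconds.
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h))) * 3600.0

# NGC 6388 center of gravity (Lanzoni et al. 2007) and Chandra transient position.
ra_c = (17 + 36 / 60 + 17.23 / 3600) * 15.0
dec_c = -(44 + 44 / 60 + 7.1 / 3600)
ra_t = (17 + 36 / 60 + 17.418 / 3600) * 15.0
dec_t = -(44 + 44 / 60 + 5.98 / 3600)

sep = ang_sep_arcsec(ra_c, dec_c, ra_t, dec_t)  # ~2.3 arcsec
```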
Based on the Swift/XRT astrometry only, one could conclude that the position of IGRJ17361-4441 is formally consistent with the center of gravity of NGC 6388 and
possibly related to the putative IMBH in the globular cluster. An updated radio observation conducted at ATCA by \citet{bozzo2011} put a more stringent upper
limit to the radio luminosity of $L_R< 5\times 10 ^{28}$ erg s$^{-1}$ so that, following the same procedure as in \citealt{cseh2010} and the 2005 Chandra $X$-ray
flux estimate, the new IMBH upper limit turns out to be $\simeq 600$ M$_{\odot}$ (\citealt{bozzo2011}).
However, a caveat on this conclusion is necessary. The new Chandra refined source coordinates (even if formally consistent with the source position
determined by Swift/XRT) indicate that the transient could be a new $X$-ray source (see later) not associated with the IMBH. In this case, and for the reasons explained above,
one should not use the black hole fundamental plane relation in order to estimate the IMBH mass.
If one believes that the transient is associated with the NGC 6388 center of gravity, then it should also be noted that
at least three sources (those labeled as $\#12$, $\#7$ and $\#3$ in \citealt{cseh2010}) are
within the error box of Swift/XRT. In particular, sources $\#12$ and $\#7$ have fluxes
$\simeq 4.0\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ and $\simeq 6.9\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$, respectively.
If source $7$ is associated with the IMBH, the $X$-ray and radio observations together with the fundamental plane relation give
an upper limit of $\simeq 1200$ M$_{\odot}$.
The Swift/XRT spectrum of the transient source was fitted (\citealt{bozzo2011}) with an absorbed power law with photon index $\Gamma \sim 0.5-0.9$ and hydrogen column density
$N_H\simeq (0.5-0.9)\times 10 ^{22}$ cm$^{-2}$, i.e. consistent with that derived from the previous $XMM$-Newton data.
The flux in the $1.0-10$ keV band is
$F_{1-10}= (4.5-4.8)\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, a factor of $100$ brighter
than source $\#12$, possibly associated with the NGC 6388 IMBH (\citealt{cseh2010}). When the broad band spectrum (obtained by using Swift/XRT and {\it INTEGRAL/ISGRI}) was analysed
(and fit with a broken power law),
\citet{bozzo2011} obtained a flux of $F_{1-10}= (4.6^{+0.1}_{-0.5})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ in the $1.0-10$ keV band and
a flux of $F_{20-100}= (7.8^{+0.8}_{-3.8})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ in the $20-100$ keV band.
These results are consistent with those obtained by using the RXTE/PCA follow-up observation made on 17 August 2011. In particular, it was found that
$F_{3-15}= (6.7^{+0.1}_{-3.4})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ in the $3-15$ keV band (\citealt{bozzo2011}).
\section{The {\it XMM}-Newton slew observations}
\label{s:xmmObs}
The large collecting area of the nested mirrors together with the high quantum efficiency of the EPIC/PN camera
make the $XMM$-Newton satellite the most sensitive $X$-ray observatory available at present (\citealt{jansen2001}).
The $XMM$-Newton satellite was recognized to be a good instrument to collect data during slewing manoeuvres and is performing an
$X$-ray survey of the sky (\citealt{saxton2008}). Note that the $XMM$-Newton slew observations are moderately deep with a detection limit
of $1.2\times 10 ^{-12}$ erg cm$^{-2}$ s$^{-1}$ in the $0.2-12.0$ keV band.
Due to the scheduled observation program, the $XMM$-Newton satellite twice observed the region of the sky around NGC 6388.
The observations were taken on September 1$^{st}$ 2011 at 13:10:34 (UT) (hereafter S1 observation) and 19:00:17 (UT) (hereafter S2 observation), i.e.
15 days after the first Swift/XRT follow-up observation of IGRJ17361-4441. The transient
source was then observed serendipitously\footnote{The exposure times were estimated by calculating the distance travelled
by the source on the detector with the typical {\it XMM}-Newton slew speed of $90$ deg h$^{-1}$. The resulting exposure times were also corrected for chip gaps
and bad pixels (for details see \citealt{read2008}).} for
$\simeq 7.6$ s and $\simeq 7.7$ s in the two slew observations with the EPIC/PN instrument (see Fig. \ref{f0}).
\begin{figure}[htbp]
\vspace{6.5cm} \special{psfile=NGC6388_combined_new.ps
vscale=45 hscale=45
voffset=-10 hoffset=55 angle=0}
\caption{Contours (increasing by factors of 2) of lightly-smoothed 0.2-12 keV XMM-Newton slew
emission (EPIC-pn camera, the two 01/09/11 observations combined),
superimposed on a SAO-DSS image of NGC6388.}
\label{f0}
\end{figure}
\begin{figure}[htbp]
\vspace{8.2cm} \special{psfile=file_2.eps
vscale=44 hscale=44
voffset=250 hoffset=10 angle=-90}
\caption{The $XMM$-Newton spectrum of IGRJ17361-4441 (data points) collected during the first slew observation on September 1$^{st}$ 2011. The solid line
represents the best fit model (see text for details).}
\label{f1}
\end{figure}
\begin{figure}[htbp]
\vspace{8.2cm} \special{psfile=file_3.eps
vscale=44 hscale=44
voffset=253 hoffset=10.0 angle=-90}
\caption{The same as in Fig.~\ref{f1}, but for the second $XMM$-Newton slew observation on September 1$^{st}$ 2011.}
\label{f2}
\end{figure}
We decided to analyze the data sets separately in order to study a possible spectral variation on time-scales of hours.
The source spectrum has been extracted from a circle of radius $60^{\prime\prime}$ about the source with the
background being extracted from an annulus of inner radius $90^{\prime\prime}$ and outer radius $120^{\prime\prime}$ about the source.
The detector matrices are calculated taking into account
the transit of the source across the detector and using the method described in \citet{read2008}.
Hence, the source and background spectra (as well as the response matrices) were imported in the XSPEC package
(version 12.4.0) for the spectral analysis and fitting procedure.
The adopted model is an absorbed power law ({\it wabs*power}) with the hydrogen column density fixed to the average value
found by ROSAT in the direction of the target, i.e. $2.5\times 10^{21}$ cm$^{-2}$.
Note that this value is consistent with that derived by \citet{nucita2008} when analyzing the 2005 $XMM$-Newton observation of NGC 6388 and also similar
to the column density found by \citet{bozzo2011} (see also \citealt{ferrigno2011}) while studying the Swift/XRT follow-up observation of IGRJ17361-4441.
The adopted model has two free parameters; the photon index $\Gamma$ and the power law normalization $N$.
The fitting procedure to the S1 spectrum resulted in the best fit parameters ($\chi^2/\nu=1.4$ for 11 d.o.f.) $\Gamma = 1.16\pm0.20$ and
$N=(1.7\pm0.7)\times 10^{-3}$.
The absorbed fluxes in the $0.5-2.0$ keV, $2.0-10$ keV, and $0.5-10.0$ keV bands are
$F_{0.5-2}= (5.4^{+2.8}_{-3.2})\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, $F_{2-10}= (3.6^{+1.4}_{-1.7})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, and
$F_{0.5-10}= (4.1^{+1.6}_{-1.9})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, respectively.
Fitting the S2 spectrum with the same model gives the best
fit parameters ($\chi^2/\nu=0.4$ for 10 d.o.f.) $\Gamma = 1.28\pm0.35$ and $N=(1.2\pm0.6)\times 10^{-3}$.
The absorbed fluxes in the same bands as above are
$F_{0.5-2} = (3.4^{+2.3}_{-2.3}) \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, $F_{2-10}= (1.9^{+1.1}_{-1.4})\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, and
$F_{0.5-10}= (2.1^{+1.2}_{-1.8})\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, respectively.
The maximum unabsorbed flux of the source in the 0.5-10 keV band is $\simeq 4.5\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ corresponding to an intrinsic luminosity
of $\simeq 9.3\times 10^{35}$ erg s$^{-1}$.
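The quoted luminosity follows from the usual isotropic conversion $L=4\pi d^2 F$. A minimal sketch, assuming a distance to NGC 6388 of $\simeq 13.2$ kpc (the value implied by the quoted flux and luminosity; the distance itself is not stated in the text):

```python
import math

KPC_CM = 3.086e21            # centimetres per kiloparsec
d_cm = 13.2 * KPC_CM         # assumed cluster distance (~13.2 kpc)
flux = 4.5e-11               # max unabsorbed 0.5-10 keV flux, erg cm^-2 s^-1

# isotropic luminosity L = 4 pi d^2 F, in erg s^-1
lum = 4.0 * math.pi * d_cm ** 2 * flux
print(f"L(0.5-10 keV) = {lum:.2g} erg/s")   # ~9.4e+35, close to the quoted 9.3e35
```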
In Figure \ref{f3}, the unabsorbed X-ray fluxes of NGC 6388 in the $0.5-10$ keV band (from 2003 to 2011) are shown. In the inset, we give
the data points corresponding to the IGRJ17361-4441 flare observed and monitored by several instruments (Swift/XRT, RXTE/PCA and $XMM$-Newton) in 2011 only.
Note that for the $XMM$-Newton observation in 2003, the Chandra observation in 2005, and the Swift and RXTE in 2011 the $0.5-10$ keV band
fluxes were obtained by extrapolating (to this energy band) the best fit models available
in the literature (see e.g. \citealt{bozzo2011}).
The results reported here show that the $X$-ray flux in the 0.5-10 keV band
detected in the $XMM$-Newton slew observations, i.e. $\simeq 4.5\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, is consistent with that
observed by Swift/XRT 15 days earlier. {Note also that the power-law photon index appears to be marginally consistent ($\Gamma\simeq 0.93-1.63$) with that
derived from the previous high energy observations ($\Gamma\simeq 0.5-0.9$, see \citealt{bozzo2011}), a point that could help in classifying the
nature of the transient (see the subsequent discussion).}
\begin{figure}[htbp]
\vspace{7.cm} \special{psfile=file_4.eps
vscale=60 hscale=60
voffset=0 hoffset=55.0 angle=0}
\caption{The data points from the left to the right correspond to the NGC 6388 flux in the $0.5-10$ keV band from 2003 to 2011. In the inset,
we give the data points corresponding to the flare observed and monitored in 2011 only (see text for details).}
\label{f3}
\end{figure}
\section{Results and discussion}
\label{s:conclusion}
IGRJ17361-4441 is a hard transient recently observed by the $INTEGRAL$ satellite. $X$-ray follow-up observations have shown that
the source is within the globular cluster NGC 6388.
Based only on the astrometry of the Swift/XRT satellite, the transient position is consistent with the center of gravity of the globular cluster and
this opens the possibility that IGRJ17361-4441 is associated with an IMBH which is turning on.
However, if one believes that the transient is associated with the IMBH in NGC 6388, then it should be noted that
at least three $X$-ray sources (those labeled as $\#12$, $\#7$ and $\#3$ in \citealt{cseh2010}) are within the error box of Swift/XRT.
In particular, the sources $\#12$ and $\#7$ have fluxes which differ by a factor of $\simeq 2$.
If source $\#7$ is associated with the IMBH, then the observed Chandra $X$-ray flux (see \citealt{nucita2008} and \citealt{cseh2010}) and
the updated radio observation of \citet{bozzo2011} together with the fundamental plane relation give an upper limit of $\simeq 1200$ M$_{\odot}$.
In the IMBH hypothesis, the intrinsic luminosity of the source as determined by using the $XMM$-Newton slew data
(i.e. $\simeq 9.3\times 10^{35}$ erg s$^{-1}$) should be
compared with that derived by using the 2005 Chandra data (and in particular for the source $\#12$ in \citealt{cseh2010})
when the putative IMBH was in quiescent state. In this case, one finds that the source luminosity increased by at least a factor $\simeq 1000$.
Moreover, the spectrum seems to follow a power law with photon index $\Gamma\simeq 0.93-1.63$.
Nevertheless, the refined source position given by the Chandra satellite
(even if still in agreement with the Swift/XRT result)
argues against the IMBH hypothesis in favor of a newly discovered source.
In this case, the $XMM$-Newton intrinsic source luminosity should be compared with the upper limit for the
quiescent state of the source. \citet{pooley2011}, based on the non-detection of the source in the 2005 Chandra observation, estimated
this limit to be $\simeq 10^{31} $erg s$^{-1}$. Thus, in this case the transient source has increased its luminosity by a factor
close to $10^5$.
Two more possibilities for the nature of the transient source are that it is either a HMXB or a LMXB.
{The first possibility is actually unlikely since these systems involve companion stars with mass
larger than $\simeq 10$ M$_{\odot}$ (\citealt{lewin2006}), i.e. O/B stars which are not expected to exist in globular clusters.
Note also that NGC 6388 was extensively observed by the HST instruments (see e.g. \citealt{hst}) and the collected data
did not show the presence of any O or B star in the globular cluster.
Hence, among the alternatives to the IMBH scenario, the LMXB option is the most plausible.
This is supported by the $X$-ray luminosity ($\simeq 9.3\times 10^{35}$ erg s$^{-1}$) and by the soft spectrum
($\Gamma\simeq 0.93-1.63$) observed in the
$XMM$-Newton slew observation which seems to be consistent with the typical characteristics of the LMXB class of objects.
A long $X$-ray observation (sufficient to allow a detailed timing and spectral analysis) may
help in understanding the physics underlying this transient source.}
\section{Introduction} \label{s1}
This note deals with the correlation functions and distribution functions of binary sequences. The purpose is to improve the current theoretical results for the Mobius sequence
$\{\mu(n): n\geq 1 \}$ and the Liouville sequence $\{\lambda(n): n\geq 1 \}$. Given a large number $x\geq 1$, these results imply that the short sequences
$\{\mu(n),\mu(n+1), \ldots, \mu(n+r-1)\}$ and $\{\lambda(n),\lambda(n+1), \ldots, \lambda(n+r-1)\}$ of length $r=[x] \geq 1$ are well distributed and have ``two-value" correlation
functions. The theory and recent results for the correlation functions of some binary sequences are investigated in \cite{CS00}, et alii.\\
The main results are Theorem \ref{thm1.1} for the Mobius sequence and Theorem \ref{thm1.2} for the Liouville sequence.\\
\begin{thm} \label{thm1.1} Let $C>2$ be a constant, and let $k\geq 1$ be a small fixed integer. Then, for every large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n+\tau_{1}) \mu(n+\tau_{2})\cdots\mu(n+\tau_{k})=O\left (\frac{x}{(\log x)^{C}} \right )
\end{equation}
for any fixed sequence of integers $\tau_{1}<\tau_{2}<\cdots<\tau_{k}$.
\end{thm}
This result improves the upper bound for the average order of the correlations of Mobius functions
\begin{equation}
\sum_{1 \leq \tau_{i} \leq T} \left |\sum_{x\leq n \leq 2x} \mu(n+\tau_{1}) \mu(n+\tau_{2})\cdots\mu(n+\tau_{k}) \right | \ll k \left (\frac{\log\log T}{\log T}+\frac{1}{\log^{1/3000} x} \right ) T^k x,
\end{equation}
where $10 \leq T\leq x$, see \cite[p. 4]{EP94}, and \cite{MR15}. Other related works are the special case
\begin{equation}
\sum_{n \leq r}\mu(n) \mu(r-n)=O(x/(\log x)^B)
\end{equation}
for almost every $r>1$, and $B>0$ constant, which is proved in \cite{DK15}, and the functions fields versions in \cite{CD15}, \cite{CR14}, and \cite{MW16}. Specifically, there is an explicit upper bound
\begin{equation}
\left |\sum_{F \in M_{n}} \mu(F+f_{1}) \mu(F+f_{2})\cdots\mu(F+f_{k}) \right | \leq 2knq^{n-1/2}+3rn^2q^{n-1},
\end{equation}
where $f_i \in \mathbb{F}_{q}[x]$ are distinct fixed polynomials of degree $\deg(f_{i})<n$, and $M_n=\{F \in \mathbb{F}_{q}[x]:\deg(F)=n \}$ is the subset of polynomials of degree $\deg(F)=n$; this appears in \cite{CR14}.\\
\begin{thm} \label{thm1.2} Let $C>2$ be a constant, and let $k\geq 1$ be a small fixed integer. Then, for every large number $x>1$,
\begin{equation}
\sum_{n \leq x} \lambda(n+\tau_{1}) \lambda(n+\tau_{2})\cdots\lambda(n+\tau_{k})=O\left (\frac{x}{(\log x)^{C}} \right )
\end{equation}
for any fixed sequence of integers $\tau_{1}<\tau_{2}<\cdots<\tau_{k}$.
\end{thm}
The first few sections cover basic concepts, and the average orders of some arithmetic functions. The upper bound for simpler correlation functions $\sum_{n\leq x}\mu(n) \mu(n+1) \ll x/(\log x)^C$, in Theorem \ref{thm5.1} in Section \ref{s5}, and $\sum_{n\leq x}\mu(n) \mu(n+1) \mu(n+2) \ll x/(\log x)^C$, in Theorem \ref{thm6.1} in Section \ref{s6}, are considered first. The more general proofs for Theorem \ref{thm1.1} and Theorem \ref{thm1.2} appear in Section \ref{s9}. \\
\newpage
\section{Representations of Liouville and Mobius Functions} \label{s2}
The symbols $\mathbb{N} = \{ 0, 1, 2, 3, \ldots \}$ and $\mathbb{Z} = \{ \ldots, -3, -2, -1, 0, 1, 2, 3, \ldots \}$ denote the subsets of integers. For $n \in \mathbb{N}$, the prime divisors counting functions are defined by
\begin{equation}
\omega(n)=\sum_{p \mid n} 1 \qquad \text{ and } \qquad \Omega(n)=\sum_{p^v \mid n} 1
\end{equation}
respectively. The former is not sensitive to the multiplicity of each prime $p \mid n$, but the latter does count the multiplicity of each prime $p \mid n$. \\
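Both counting functions are straightforward to evaluate by trial division; a small illustrative sketch:

```python
def omega_little(n):
    """omega(n): the number of distinct prime divisors of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def omega_big(n):
    """Omega(n): the number of prime divisors of n counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            count += 1
            n //= p
        p += 1
    return count + (1 if n > 1 else 0)

# e.g. 12 = 2^2 * 3, so omega(12) = 2 while Omega(12) = 3
print(omega_little(12), omega_big(12))   # -> 2 3
```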
\subsection{Analytic Expressions}
For $n \geq 1$, the quasi Mobius function $\mu_{*}:\mathbb{N} \longrightarrow \{-1,1\}$ and the Liouville function $\lambda:\mathbb{N} \longrightarrow \{-1,1\}$, in terms of the prime divisors counting functions, are defined by
\begin{equation}
\mu_{*}(n)=(-1)^{\omega(n)} \qquad \text{ and } \qquad \lambda(n)=(-1)^{\Omega(n)}
\end{equation}
respectively. In addition, the Mobius function $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ is defined by
\begin{equation}
\mu(n) =
\left \{
\begin{array}{ll}
(-1)^{\omega(n)} &n=p_1 p_2 \cdots p_v\\
0 &n \ne p_1 p_2 \cdots p_v,\\
\end{array}
\right .
\end{equation}
where the $p_i\geq 2$ are distinct primes. \\
The quasi Mobius function and the Mobius function coincide on the subset of squarefree integers. From this observation arises a fundamental identity.\\
\begin{lem} \label{lem2.1} For any integer $n \geq 1$, the Mobius function has the expansion
\begin{equation}
\mu(n)= (-1)^{\omega(n)} \mu^2(n).
\end{equation}
\end{lem}
\begin{lem} \label{lem2.2} For any integer $n \geq 1$, the quasi Mobius function has the expansion
\begin{equation}
(-1)^{\omega(n)}= \sum_{q \mid n} \mu(q)d(q),
\end{equation}
where $d(n)=\sum_{d \mid n}1$ is the number of divisors function, see { \normalfont \cite[p.\ 473]{RD96}}.
\end{lem}
\begin{lem} \label{lem2.3} For any integer $n \geq 1$, the Liouville function has the expansion
\begin{equation}
\lambda(n)= \sum_{d^2 \mid n} \mu(n/d^2).
\end{equation}
\end{lem}
\begin{proof} Observe that $\lambda(n)$ is a completely multiplicative function, so it is sufficient to verify the claim for prime powers $p^v, v \geq 1$, refer to
\cite[p.\ 50]{AP76}, and \cite[p.\ 471]{RD96}, for similar information. \end{proof}
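Lemma \ref{lem2.3} is also easy to check numerically for small $n$; the following brute-force sketch is an illustration only, not part of the argument:

```python
import math

def exponents(n):
    """Exponents in the prime factorization of n, by trial division."""
    exps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)
    return exps

def mu(n):
    exps = exponents(n)
    return 0 if any(e > 1 for e in exps) else (-1) ** len(exps)

def liouville(n):
    return (-1) ** sum(exponents(n))

# Lemma 2.3: lambda(n) = sum of mu(n/d^2) over all d >= 1 with d^2 | n
for n in range(1, 301):
    rhs = sum(mu(n // (d * d))
              for d in range(1, math.isqrt(n) + 1) if n % (d * d) == 0)
    assert rhs == liouville(n)
print("Lemma 2.3 verified for n <= 300")
```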
A direct approach can be used to derive the last convolution formula via the series
\begin{equation}
\sum_{n \geq 1} \frac{\lambda(n)}{n^s} = \sum_{n \geq 1} \frac{1}{n^{2s}} \sum_{n \geq 1} \frac{\mu(n)}{n^s}.
\end{equation}
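As a numerical sanity check of this identity (an illustration only): at $s=2$ the right hand side equals $\zeta(4)/\zeta(2)=\pi^2/15$, and a truncated sum of the left hand side agrees with it to roughly $1/N$:

```python
import math

def liouville(n):
    """lambda(n) = (-1)^Omega(n), by trial division."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return (-1) ** (count + (1 if n > 1 else 0))

N = 20000
partial = sum(liouville(n) / n ** 2 for n in range(1, N + 1))
exact = math.pi ** 2 / 15        # zeta(4) / zeta(2) at s = 2
print(partial, exact)            # the two values agree to roughly 1/N
```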
The characteristic function for squarefree integers is closely linked to the Mobius function.
\begin{lem} \label{lem2.4} For any integer $n \geq 1$, the Mobius function has the expansion
\begin{equation}
\mu(n)= \sum_{d^2 \mid n} \mu(d)\lambda(n/d^2).
\end{equation}
\end{lem}
\begin{proof} Use Lemma \ref{lem2.3}, and the inversion formula
\begin{equation}\label{20000}
f(n)=\sum_{d \mid n}g(d) \text{ and } g(n)=\sum_{d \mid n}\mu(d)f(n/d),
\end{equation}
refer to \cite{AP76}, and \cite{RD96}, for similar information.
\end{proof}
\begin{lem} \label{lem2.5} For any integer $n \geq 1$, the characteristic function for squarefree integers has the expansion
\begin{equation}
|\mu(n)|= \sum_{d^2 \mid n} \mu(d)
=
\left \{
\begin{array}{ll}
1 &n=p_1 p_2 \cdots p_v,\\
0 & n \ne p_1 p_2 \cdots p_v,\\
\end{array}
\right .
\end{equation}\\
where the $p_i \geq 2$ are distinct primes.
\end{lem}
The characteristic function for square integers is closely linked to the Liouville function and has similar properties. \\
\begin{lem} \label{lem2.6} For any integer $n \geq 1$, the characteristic function for square integers has the expansion
\begin{equation}
\sum_{d \mid n} \lambda(d)=
\left \{
\begin{array}{ll}
1 &n=m^2,\\
0 & n \ne m^2,\\
\end{array}
\right .
\end{equation}
where $m \in \mathbb{N}$.
\end{lem}
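Lemmas \ref{lem2.5} and \ref{lem2.6} can be verified by brute force for small $n$; an illustrative sketch:

```python
import math

def mu(n):
    """Mobius function by trial division."""
    sign, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # a prime square divides n
            sign = -sign
        p += 1
    return -sign if n > 1 else sign

def liouville(n):
    """Liouville function lambda(n) = (-1)^Omega(n)."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return (-1) ** (count + (1 if n > 1 else 0))

for n in range(1, 201):
    # Lemma 2.5: divisor sum equals the indicator of squarefree n
    sqfree = sum(mu(d) for d in range(1, math.isqrt(n) + 1) if n % (d * d) == 0)
    assert sqfree == (1 if mu(n) != 0 else 0)
    # Lemma 2.6: divisor sum equals the indicator of square n
    square = sum(liouville(d) for d in range(1, n + 1) if n % d == 0)
    assert square == (1 if math.isqrt(n) ** 2 == n else 0)
print("Lemmas 2.5 and 2.6 verified for n <= 200")
```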
\subsection{Problems}
\begin{exe} {\normalfont Use Lemma \ref{lem2.3} to show that
$$
\sum_{n \leq x} \frac{\mu(n)}{n}<1 \quad \text{ and } \quad \sum_{n \leq x} \frac{\lambda(n)}{n}<2.
$$
These are simple explicit estimates; other sharper explicit estimates of the forms $\sum_{n \leq x} \frac{\mu(n)}{n}<1/\log x$ are proved in \cite{BO15}.
}
\end{exe}
\newpage
\section{Some Average Orders Of Arithmetic Functions}\label{s3}
The average orders of several arithmetic functions are calculated here. These estimates, which are of independent interest, will be used later on.\\
\subsection{Unconditional Estimates}
\begin{thm} \label{thm3.1} If $C>0$ is a constant, and $\mu$ is the Mobius function, then, for any large number $x>1$,
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \displaystyle \sum_{n \leq x} \mu(n)=O \left (\frac{x}{\log^{C}x}\right ). $
\item $ \displaystyle \sum_{n \leq x} \frac{\mu(n)}{n}=O \left (\frac{1}{\log^{C}x}\right ). $
\end{enumerate}
\end{thm}
\begin{proof} See \cite[p.\ 6]{DL12}, \cite[p.\ 347]{HW08}.
\end{proof}
There are sharper bounds, say, $O(xe^{-c\sqrt{\log x}})$ with $c>0$ a constant, but the simpler notation will be used here. And the conditional estimate $O(x^{1/2} \log x)$ presupposes that the nontrivial
zeros of the zeta function $ \zeta(\rho)=0$ in the critical strip $\{0<\Re e(s)<1 \}$ are of the form $\rho=1/2+it, t \in \mathbb{R}$. Moreover, the explicit upper bounds are
developed in \cite{BO15}.\\
The Mobius function over an arithmetic progression is linked to the Siegel-Walfisz Theorem for primes in arithmetic progressions, it has the upper bound given below, see
\cite[p.\ 424]{IK04}, \cite[p.\ 385]{MV07}. \\
\begin{thm} \label{thm3.2} Let $a,q$ be integers such that $\gcd(a,q)=1$. If $C>0$ is a constant, and $\mu$ is the Mobius function, then, for any large number $x>1$,\\
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \displaystyle \sum_{n \leq x, n\equiv a \text{ mod } q} \mu(n)=O \left (\frac{x}{q\log^{C}x}\right ) .$
\item $ \displaystyle \sum_{n \leq x, n\equiv a \text{ mod } q} \frac{\mu(n)}{n}=O \left (\frac{1}{q\log^{C}x}\right ). $
\end{enumerate}
\end{thm}
There is another approach through the generalized Bombieri-Vinogradov Theorem, which provides the summation
\begin{equation}
\sum_{n \leq x, n\equiv a \text{ mod } q} \mu(n)=\frac{1}{\varphi(q)}\sum_{n \leq x, \gcd(n,q)=1} \mu(n)+ O \left (\frac{\sqrt{q}x}{\log^{C}x}\right ),
\end{equation}
confer \cite[p.\ 40]{EP94} for more details. \\
\begin{lem} \label{lem3.1} Let $C> 1$ be a constant, and let $d(n)=\sum_{d|n}1$ be the divisors function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n)d(n) =O \left (\frac{x}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} Rewrite the finite sum as
\begin{equation}
\sum_{n\leq x} \mu(n)d(n)=\sum_{n\leq x} \mu(n) \sum_{d|n} 1
=\sum_{d\leq x} \sum_{n\leq x/d} \mu(n).
\end{equation}
Next, applying Theorem \ref{thm3.1} to the inner finite sum yields:
\begin{eqnarray}
\sum_{d\leq x} \sum_{n\leq x/d} \mu(n)
&=&O \left (\frac{x}{\log^{C}x} \sum_{d\leq x} \frac{1}{d}\right ) \\
&=& O \left (\frac{x}{\log^{C-1}x}\right ), \nonumber
\end{eqnarray}
with $C>1$.
\end{proof}
Exactly the same result is obtained via the hyperbola method, see \cite[p. 322]{RM08}. For $C>1$, this estimate of the twisted summatory divisor function is nontrivial. The summatory divisor function has the asymptotic formula $\sum_{n \leq x}d(n) =x(\log x+2\gamma-1)+O (x^{1/2} )$.
\\
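The summatory divisor function itself is cheap to evaluate exactly, since $\sum_{n\leq x}d(n)=\sum_{d\leq x}\lfloor x/d\rfloor$; a small numerical check of the asymptotic formula (an illustration only):

```python
import math

def divisor_summatory(x):
    """D(x) = sum_{n <= x} d(n) = sum_{d <= x} floor(x/d)."""
    return sum(x // d for d in range(1, x + 1))

x = 10000
gamma = 0.5772156649015329        # Euler-Mascheroni constant
main = x * (math.log(x) + 2 * gamma - 1)
print(divisor_summatory(x) - main)   # small compared with sqrt(x)
```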
\begin{lem} \label{lem3.2} Let $C>1$ be a constant, and let $d(n)=\sum_{d|n}1$ be the divisor function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n)d(n)}{n} =O \left (\frac{1}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} Let $U(x)= \sum_{n \leq x} \mu(n)d(n)$. A summation by parts leads to the integral representation
\begin{equation}
\sum_{n\leq x} \frac{\mu(n)d(n)}{n}= \int_{1}^{x} \frac{1}{t} d U(t).
\end{equation}
For information on the Abel summation formula, see \cite[p.\ 4]{CR06}, \cite[p.\ 4]{MV07}, \cite[p.\ 4]{TG15}. Evaluate the integral:
\begin{equation}
\int_{1}^{x} \frac{1}{t} d U(t)=\frac{1}{x} \cdot O \left (\frac{x}{\log^{C}x} \right )+ \int_{1}^{x}\frac{1}{t^2} U(t)dt=O \left (\frac{1}{\log^{C}x} \right ) ,
\end{equation}
where the constant is $C-1>0$.
\end{proof}
\begin{thm} \label{thm3.3} If $C>0$ is a constant, and $\lambda$ is the Liouville function, then, for any large number $x>1$,\\
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \displaystyle \sum_{n \leq x} \lambda(n)=O \left (\frac{x}{\log^{C}x}\right )$.
\item $ \displaystyle \sum_{n \leq x} \frac{\lambda(n)}{n}=O \left (\frac{1}{\log^{C}x}\right ).
$
\end{enumerate}
\end{thm}
\begin{proof} These follow from Theorems \ref{thm3.1} and \ref{thm3.2} via Lemma \ref{lem2.3}.
\end{proof}
\begin{lem} \label{lem3.3} Let $C> 1$ be a constant, and let $d(n)=\sum_{d|n}1$ be the divisors function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \lambda(n)d(n) =O \left (\frac{x}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} Use Lemma \ref{lem2.3}, and Lemma \ref{lem3.1}. \end{proof}
\begin{lem} \label{lem3.4} Let $C>1$ be a constant, and let $d(n)=\sum_{d|n}1$ be the divisor function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\lambda(n)d(n)}{n} =O \left (\frac{1}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} Let $R(x)= \sum_{n \leq x} \lambda(n)d(n)$. Now use summation by parts as illustrated in the proof of Lemma \ref{lem3.2}.
\end{proof}
\subsection{Conditional Estimates}
The conditional estimates assume the optimal zerofree region $\{s \in \mathbb{C}: \Re e(s)>1/2 \}$ of the zeta function. \\
\begin{thm} \label{thm3.8} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. If $\mu(n)=-1,0,1$ is the Mobius function, then, for any large number $x>1$,
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \displaystyle \sum_{n \leq x} \mu(n)=O \left (x^{1/2}\log x \right ).$
\item $ \displaystyle \sum_{n \leq x} \frac{\mu(n)}{n}=O \left (\frac{\log^2 x}{x^{1/2}}\right ).
$
\end{enumerate}
\end{thm}
These results, and other sharper bounds, are widely available in the literature, see \cite{MV07}, \cite{TG15}, et cetera.\\
\begin{lem} \label{lem3.5} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $d(n)=\sum_{d|n}1$ be the divisor function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n)d(n) =O \left (x^{1/2} \log^2 x \right ).
\end{equation}
\end{lem}
\begin{proof} The generating series has the expression
\begin{eqnarray}
\sum_{n \geq 1} \frac{\mu(n)d(n)}{n^s} &=& \prod_{p \geq 2} \left (1-\frac{2}{p^s} \right ) \nonumber \\
&=& \frac{1}{\zeta(s)}\prod_{p \geq 2} \left (1-\frac{2}{p^s} \right)\left (1-\frac{1}{p^s} \right)^{-1} \\
&=&\frac{1}{\zeta(s)}\prod_{p \geq 2} \left (1-\frac{1}{p^s-1} \right)=\frac{g(s)}{\zeta(s)}\nonumber,
\end{eqnarray}
where $g(s)$ is an absolutely convergent holomorphic function on the complex half plane $\Re e(s)>1/2$. Let $x \in \mathbb{R}-\mathbb{Q}$ be a large real number. Applying the Perron formula returns:
\begin{equation}
\sum_{n\leq x} \mu(n)d(n)=\frac{1}{i2 \pi}\int_{c-i\infty}^{c+i \infty}\frac{g(s)}{\zeta(s)}\frac{x^s}{s}ds=\sum_{s\in \mathcal{P}} \text{Res}(s,f(s)),
\end{equation}
where $c$ is a constant, and $\mathcal{P}=\{0,\rho :\zeta(\rho)=0\}$ is the set of poles of the meromorphic function $f(s)=\frac{g(s)}{\zeta(s)} \frac{x^s}{s}$. Using standard analytic methods, \cite[p. 139]{MV07}, \cite[p.\ 219]{TG15}, et cetera, this reduces to the sum of residues
\begin{equation}
\sum_{s\in \mathcal{P}} \text{Res}(s,f(s))
= O \left (x^{1/2} \log^2 x \right ).
\end{equation}
Let $x \in \mathbb{N}$ be a large integer, and $\varepsilon>0$ be an arbitrarily small number. Since the average
\begin{equation}
\frac{1}{2} \left (\sum_{n\leq x-\varepsilon} \mu(n)d(n)+\sum_{n\leq x+\varepsilon} \mu(n)d(n)\right)=O \left (x^{1/2} \log^2 x \right ),
\end{equation}
holds for all integers $x \geq 1$, the upper bound holds for all large real numbers $x \geq 1$.
\end{proof}
\begin{lem} \label{lem3.6} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $d(n)=\sum_{d|n}1$ be the divisor function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n)d(n)}{n} =O \left ( \frac{\log^2 x}{x^{1/2}} \right ).
\end{equation}
\end{lem}
The proof is similar to that of Lemma \ref{lem3.2}, but uses the conditional result in Lemma \ref{lem3.5}.
\subsection{Densities For Squarefree Integers}
The subset of squarefree integers is usually denoted by
\begin{equation}
\mathcal{Q}=\{n\in \mathbb{Z}:n=p_1p_2 \cdots p_t, \text{ with } p_k \text{ distinct primes} \},
\end{equation}
and the subset of nonsquarefree integers is denoted by
\begin{equation}
\overline{\mathcal{Q}}=\{n\in \mathbb{Z}:n\ne p_1p_2 \cdots p_t, \text{ with } p_k \text{ distinct primes} \}.
\end{equation}
\begin{lem} \label{lem3.7} Let $\mu(n)$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n)^2 =\frac{6}{\pi^2}x+O \left (x^{1/2} \right ).
\end{equation}
\end{lem}
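A quick numerical illustration of Lemma \ref{lem3.7} (not part of the proof): sieve out the multiples of squares and compare the count with $6x/\pi^2$.

```python
import math

def squarefree_count(x):
    """Q(x): number of squarefree integers n <= x, obtained by
    sieving out the multiples of d^2 for every d >= 2."""
    flags = [True] * (x + 1)
    d = 2
    while d * d <= x:
        for m in range(d * d, x + 1, d * d):
            flags[m] = False
        d += 1
    return sum(flags[1:])

x = 100000
print(squarefree_count(x), 6 * x / math.pi ** 2)   # agree to O(sqrt(x))
```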
\begin{lem} \label{lem3.8} Let $\mu(n)$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n)^2 -\frac{6}{\pi^2}x =\Omega \left (x^{1/4} \right ).
\end{equation} \end{lem}
\begin{lem} \label{lem3.9} Let $d(n)=\sum_{d \mid n}1$ be the divisors function, and let $\mu(n)$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n)^2d(n) =\frac{6}{\pi^2}(\log x+1-\gamma)x+O \left (x^{1/2} \right ).
\end{equation} \end{lem}
\subsection{Subsets of Squarefree Integers of Zero Densities}
A technique for estimating finite sums over subsets of integers $\mathcal{A} \subset \mathbb{N}$ of zero densities in the set of integers $\mathbb{N}$ is sketched here. Write the counting function as $A(x)=\# \{n \leq x:n \in \mathcal{A}\}$. The case for squarefree integers occurs frequently in number theory. In this case, let $ \mathcal{A} \subset \mathcal{Q}=\{n\in \mathbb{N}:\mu(n)\ne0\} $, and the measure $A(x)=\# \{n \leq x:n \in \mathcal{A}\}=o(x)$. \\
\begin{lem} \label{lem3.10} Let $C> 1$ be a constant, let $d(n)=\sum_{d|n}1$ be the divisors function. If $A(x)=O(x/\log^C x)$, then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x, n\in \mathcal{A}} \mu(n)^2d(n) =O \left (\frac{x}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} For squarefree integers the divisor function reduces to $d(n)=\sum_{d|n}1=2^{\omega(n)}$, but this is not required here. Rewrite the finite sum as
\begin{equation}
\sum_{n \leq x, n\in \mathcal{A}} \mu(n)^2d(n)=\sum_{n\leq x, n\in \mathcal{A}} \mu(n)^2 \sum_{d \mid n} 1
=\sum_{d\leq x} \sum_{n\leq x/d, n\in \mathcal{A}} \mu(n)^2.
\end{equation}
Next, applying the measure $A(x)=\# \{n \leq x:n \in \mathcal{A}\}=O(x/\log^C x)$ to the inner finite sum yields:
\begin{eqnarray}
\sum_{d\leq x} \sum_{n\leq x/d, n\in \mathcal{A}} \mu(n)^2
&=&O \left (\frac{x}{\log^{C}x} \sum_{d\leq x} \frac{1}{d}\right ) \\
&=&O \left (\frac{x}{\log^{C-1}x}\right )\nonumber,
\end{eqnarray}
with $C>1$.
\end{proof}
\begin{lem} \label{lem3.11} Let $C>1$ be a constant, and let $d(n)=\sum_{d|n}1$ be the divisor function. If $A(x)=O(x/\log^C x)$, then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x, n\in \mathcal{A}} \frac{\mu(n)^2d(n)}{n} =O \left (\frac{1}{(\log x)^{C}} \right ).
\end{equation}
\end{lem}
\begin{proof} Let $R(x)= \sum_{n \leq x, n\in \mathcal{A}} \mu(n)^2d(n)$. A summation by parts leads to the integral representation
\begin{equation}
\sum_{n\leq x, n\in \mathcal{A}} \frac{\mu(n)^2d(n)}{n}= \int_{1}^{x} \frac{1}{t} d R(t).
\end{equation}
For information on the Abel summation formula, see \cite[p.\ 4]{CR06}, \cite[p.\ 4]{MV07}, \cite[p.\ 4]{TG15}. Evaluate the integral:
\begin{equation}
\int_{1}^{x} \frac{1}{t} d R(t)=\frac{1}{x} \cdot O \left (\frac{x}{\log^{C}x} \right )+ \int_{1}^{x}\frac{1}{t^2} R(t)dt=O \left (\frac{1}{\log^{C}x} \right ) ,
\end{equation}
where the constant is $C-1>0$. \end{proof}
\subsection{Problems}
\begin{exe} {\normalfont Use Lemma \ref{lem2.3} to prove Lemma \ref{lem3.5}.}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Show that the Prime Number Theorem implies that
$$
\sum_{n \leq x, \mu(n)=1} \mu(n)=\sum_{n \leq x} \frac{\mu(n)^2+\mu(n)}{2}=\frac{3}{\pi^2} x+O\left(\frac{x}{(\log x)^{C}}\right), \quad C>0.
$$}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Show that the Prime Number Theorem implies that
$$
-\sum_{n \leq x, \mu(n)=-1} \mu(n)=\sum_{n \leq x} \frac{\mu(n)^2-\mu(n)}{2}=\frac{3}{\pi^2} x+O\left(\frac{x}{(\log x)^{C}}\right), \quad C>0.
$$
}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number, and let $\{s_n:n \in \mathbb{N}\} \subset \mathbb{N}$ be a subsequence of integers, including random sequences. Compute an asymptotic formula or estimate for
$$
\sum_{n \leq x} \mu(s_n) .
$$}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Show that the main term in the finite sum
$$
\#\{n \leq x:n=m^2 \}=\sum_{n \leq x} \sum_{d|n}\lambda(d)=[x^{1/2}]+E(x),
$$
is $M(x)=x^{1/2}$. But the error term $E(x)=O(x^{1/2})$ has the same order of magnitude. Hence, it can change signs infinitely often.}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Show that the finite sum satisfies
$$
x^{1/2}-1 < \sum_{n \leq x} \sum_{d|n}\lambda(d) \leq x^{1/2}.
$$
}
\end{exe}
\begin{exe} {\normalfont Show that
$$
\sum_{n \geq 1, \, n \text{ even}} \frac{\mu(n)}{n^2}=\frac{-1}{3}\sum_{n \geq 1} \frac{\mu(n)}{n^2} \quad \text{and} \quad
\sum_{n \geq 1, \, n \text{ odd}} \frac{\mu(n)}{n^2}=\frac{4}{3}\sum_{n \geq 1} \frac{\mu(n)}{n^2}.$$}
\end{exe}
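The relation between the even and odd parts of $\sum \mu(n)/n$ can be tested numerically. Since every even squarefree integer is $n=2m$ with $m$ odd and $\mu(2m)=-\mu(m)$, the partial sums obey the exact identity $\sum_{n\le x,\, n \text{ even}}\mu(n)/n=-\tfrac{1}{2}\sum_{m\le x/2,\, m \text{ odd}}\mu(m)/m$. A Python sketch (an aside, not part of the text), using exact rational arithmetic:

```python
from fractions import Fraction

def mobius(n):
    """mu(n) by trial division; adequate for small n."""
    m, count, p = n, 0, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor detected
            count += 1
        p += 1
    if m > 1:
        count += 1
    return -1 if count % 2 else 1

x = 3_000
S_even = sum(Fraction(mobius(n), n) for n in range(2, x + 1, 2))
S_odd_half = sum(Fraction(mobius(m), m) for m in range(1, x // 2 + 1, 2))
print(S_even == -S_odd_half / 2)  # True: the identity is exact
```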
\newpage
\section{Signs And Oscillations}\label{s4}
The observed sign changes of the multiplicative function $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ are sufficiently random. This phenomenon is known as the \textit{Mobius randomness principle}, \cite[p.\ 338]{IK04}. In fact, the number of consecutive sign changes $\mu(n)=-\mu(n+1)$ over the interval $[1,x]$ satisfies the lower
bound $ \gg x/(\log x)^8$, confer \cite{CE03}, \cite{EP85}, \cite{HR86}, \cite{HP85}, and the recent stronger result in \cite[Corollary 4]{MM15}, et alii. \\
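As a numerical illustration (a Python sketch, outside the text), the consecutive sign changes can be counted directly; at moderate ranges the count sits far above the cited lower bound $x/(\log x)^8$:

```python
import math

def mobius_sieve(x):
    """Compute mu(0..x) with a linear sieve."""
    mu = [1] * (x + 1)
    mu[0] = 0
    primes, is_comp = [], [False] * (x + 1)
    for i in range(2, x + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > x:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

x = 100_000
mu = mobius_sieve(x + 1)
# count n <= x with mu(n) = -mu(n+1) != 0
changes = sum(1 for n in range(1, x + 1) if mu[n] != 0 and mu[n] == -mu[n + 1])
print(changes, x / math.log(x) ** 8)
```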
\subsection{Simple Patterns}
The elementary concepts are discussed in \cite[p.\ 412]{RD96}, and \cite{NZ80}. A pattern of length $k+1\geq 1$ is a vector of values $(e_0,e_1, \ldots,e_k)$ where
$e_i \in \{-1,0,1 \}$. Since every sequence of consecutive integers of length $k\geq 4$ has an integer divisible by four, the pattern $e_0,e_1,e_2,e_3=0,e_4, \ldots,e_k$,
or a linear shift, must occur on every sequence
\begin{equation}
\mu(n)=e_0,\mu(n+1)=e_1,\mu(n+2)=e_2,\mu(n+3)=e_3,\ldots, \mu(n+k)=e_k
\end{equation}\\
of length $k\geq 4$ where $e_i \in \{-1,0,1 \}$.\\
But for shorter patterns, all the possible combinations can occur. For example, if $x\geq 1 $ is large, then every short interval $[x,2x]$ has a consecutive sign change
$\mu(n)=-\mu(n+1)$ with $n \in [x,2x]$ as $x \longrightarrow \infty$.\\
A basic result on the patterns of the Mobius function is proved here.\\
\begin{lem} \label{lem4.1} Let $k\geq 1$ be a fixed integer. Then, there exist two infinite sequences of integers $M_m \longrightarrow \infty$, and $n \leq M_m$ such that
\begin{equation} \label{el4.1}
\mu(n)=e_0, \quad \mu(n+1)=e_1, \quad \mu(n+2)=e_2, \quad \ldots , \quad \mu(n+k)=e_k,
\end{equation}
where the values $e_i=0$ for $0 \leq i\leq k$ specify a fixed pattern.\end{lem}
\begin{proof} Let $p_{m+i}$ be primes for $i=0,1,2, \ldots,k$, and let $M_m=p_{m}^2p_{m+1}^2 p_{m+2}^2 \cdots p_{m+k}^2$. Consider the system of congruences
\begin{equation}
n \equiv e_0 \text{ mod }p_m^2, \quad n+1 \equiv e_1 \text{ mod }p_{m+1}^2, \quad \ldots ,\quad n+k \equiv e_k \text{ mod }p_{m+k}^2.
\end{equation}
Since the Nicomachus functional (the Chinese remainder theorem map) $(x_0=e_0,x_1=e_1-1,x_2=e_2-2, \ldots,x_k=e_k-k) \longrightarrow \mathbb{Z}/M\mathbb{Z}$ is one-to-one, see \cite[p.\ 48]{GE84}, there is a unique integer $n \geq 1$ for each distinct $M=M_m \geq 2$. In particular, as $m \longrightarrow \infty$, a sequence of infinitely many integers $n$ that satisfy (\ref{el4.1}) is generated.
\end{proof}
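The construction in the proof can be made concrete for the all-zero pattern of length $3$. Taking $(p_m,p_{m+1},p_{m+2})=(2,3,5)$ and $M=4\cdot 9\cdot 25=900$, the Chinese remainder theorem produces the unique $n$ modulo $M$ with $4\mid n$, $9\mid n+1$, $25\mid n+2$, so that $\mu(n)=\mu(n+1)=\mu(n+2)=0$. A Python sketch (an aside, not part of the text):

```python
def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    n, M = 0, 1
    for r, m in zip(residues, moduli):
        # choose t with n + M*t = r (mod m)
        t = ((r - n) * pow(M, -1, m)) % m
        n, M = n + M * t, M * m
    return n % M

moduli = [4, 9, 25]
residues = [0, -1 % 9, -2 % 25]  # n = 0 (mod 4), n = -1 (mod 9), n = -2 (mod 25)
n = crt(residues, moduli)
print(n)  # 548: 548 = 4*137, 549 = 9*61, 550 = 25*22
```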
The size $x \geq 1$ of the interval $[1,x]$ restricts the length $k+1 \geq 1$ of the fixed patterns $(e_0,e_1,e_2, \ldots,e_k)$. Exempli gratia, using consecutive primes, the maximal length of any pattern can be estimated as follows:
\begin{equation}
M_m=p_{m}^{2} p_{m+1}^{2} p_{m+2}^2 \cdots p_{m+k}^{2} \leq 2^{2^{k+1}}\leq x,
\end{equation}
where $p_1=2,p_2<2p_1,p_3<2p_2, \ldots $. Thus, $k+1\leq \log \log x / \log 2$. \\
Some theoretical and numerical data for the Liouville function $\lambda(f(n))$ with quadratic polynomial arguments $f(x)=ax^2+bx+c \in \mathbb{Z}[x]$ are compiled in \cite{BM09}; and some results for cubic polynomials are given in \cite{HH05}. \\
\subsection{Orthogonal Sequences}
\begin{dfn} \label{dfn4.2} A multiplicative function $f:\mathbb{N} \to [-1,1]$ is said to be orthogonal to the Mobius sequence $\{\mu(n):n \geq 1\}$ if the series
\begin{equation}
\sum_{p\geq 2}\frac{1+f(p)}{p}
\end{equation}
diverges. Otherwise, it is not orthogonal.
\end{dfn}
\begin{exa} \label{exa4.3} { \normalfont (i) Every multiplicative function $f(n) \geq 0$ is orthogonal to the Mobius function.\\
(ii) If a multiplicative function $f(n) \geq 0$ for all but a subset of integers $n\geq 1$ of zero density, then it is orthogonal to the Mobius function. }
\end{exa}
This concept of defining orthogonal functions was extended to a metric
\begin{equation}
D(f,g)=\left (\sum_{p\geq 2}\frac{1-\Re e(f(p)\overline{g(p)})}{p} \right )^{1/2}
\end{equation}
in the space of multiplicative functions $f,g \in \mathcal{M}$, see \cite{GS07}. Recent applications are given in \cite{KN16}, and other authors. An application to iterated sequences is discussed in \cite{SP09}.\\
\newpage
\section{Correlation Functions Of Degree Two} \label{s5}
The multiplicative function $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ has sufficiently many sign changes to force meaningful cancellation on the twisted summatory functions $\sum_{n \leq x} \mu(n)f(n)$ for some functions $f: \mathbb{N} \longrightarrow \mathbb{C}$ such that $f(n) \ne \lambda(n), \mu(n), \mu_{*}(n)$, see Section \ref{s2} for the definitions of these functions. This randomness phenomenon is discussed in Section \ref{s4}. \\
\subsection{Unconditional Estimates}
The normalized correlation function is considered first. Subsequently, the standard correlation function is derived from it.\\
\begin{lem} \label{lem5.1} Let $C>2$ be a constant, and let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1)}{n} =O \left (\frac{1}{\log^{C}x} \right ).
\end{equation}
\end{lem}
\begin{proof} By Lemma \ref{lem2.1}, the normalized correlation function has the equivalent form
\begin{equation} \label{el41}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1)}{n} =\sum_{n \leq x} \frac{(-1)^{\omega(n)} \mu(n)^2 \mu(n+1)}{n}.
\end{equation}
Applying Lemma \ref{lem2.2}, and reversing the order of summation yield:
\begin{eqnarray}
\sum_{n \leq x} \frac{(-1)^{\omega(n)} \mu^2(n)\mu(n+1)}{n}
&=&\sum_{n \leq x} \frac{\mu^2(n)\mu(n+1)}{n} \sum_{q|n} \mu(q)d(q) \\
&=&\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)\mu(n+1)\mu(q)d(q)}{n} \nonumber.
\end{eqnarray}
Information and other examples on inverting the order of summation are given in \cite[p.\ 35]{MV07}, \cite[p.\ 27]{RH94}, \cite[p.\ 216]{RM08}, \cite[p.\ 83]{SH83}, \cite[p.\ 36]{TG15}, et cetera.\\
Applying the change of variable $1 \leq n=qm\leq x$, and separating the variables yields a basic decomposition as a product of two simpler finite sums:
\begin{eqnarray} \label{el42}
\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)\mu(n+1)\mu(q)d(q)}{n}
&=&\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1)}{m}\\
&=& \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{\substack{m \leq x/q,\\ \gcd(m,q)=1}} \frac{\mu^2(m)\mu(qm+1)}{m}\nonumber.
\end{eqnarray}
The inner finite sum is a Mobius sum over the arithmetic progression $\{qm+1:m \in \mathbb{N} \}$ restricted to the squarefree integers $qm\ge 1$, see Theorem \ref{thm3.2} for background and references on this topic. This basic decomposition into a product of two simpler finite sums, confer (\ref{el41}) to (\ref{el42}), facilitates a simple technique for estimating its upper bound.\\
A routine calculation shows that $\mu(qm+1)\ne \mu(q) \mu(m)$, for all but a subset of integers $qm \geq 1$ of zero density in $\mathbb{N}$, see Lemma \ref{lem3.11}, and Problems \ref{exe5.1}, and \ref{exe5.2}. This constraint preempts the possibility of a correlated finite sum, exempli gratia,
\begin{eqnarray}
\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{\substack{m \leq x/q,\\ \gcd(m,q)=1}} \frac{\mu^2(m)\mu(qm+1)}{m}
&=&\sum_{q\leq x} \frac{\mu(q)^2d(q)}{q} \sum_{\substack{m \leq x/q,\\ \gcd(m,q)=1}} \frac{\mu(m)}{m}\nonumber\\
& \gg &\log x.
\end{eqnarray}
Take the absolute value of the penultimate equation to reach the inequality:
\begin{eqnarray}
\left | \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1)}{m} \right |
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \left | \sum_{m \leq x/q} \frac{\mu^2(m)\mu(qm+1)}{m} \right |\nonumber\\
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \sum_{m \leq x/q} \frac{1}{m} \\
&\leq& (\log x)\left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \nonumber.
\end{eqnarray}
Apply Lemma \ref{lem3.4} to complete the estimate:
\begin{equation}
\sum_{n \leq x} \frac{\mu(n)\mu(n+1)}{n}
=O \left (\frac{1}{\log^{C-1} x} \right ),
\end{equation}
with $C-1>1$ constant. \end{proof}
A different approach, using an exact formula, is sketched in Problem 5.7 in the Problems subsection. This is similar to the proof of Lemma 2.17 in \cite[p.\ 66]{MV07}.\\
\begin{thm} \label{thm5.1} Let $C>2$ be a constant, and let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n) \mu(n+1) =O \left (\frac{x}{(\log x)^{C}} \right ).
\end{equation}
\end{thm}
\begin{proof} Let $V(x)= \sum_{n \leq x}\frac{ \mu(n) \mu(n+1)}{n}$. A summation by parts leads to the integral representation
\begin{equation}
\sum_{n\leq x} \mu(n) \mu(n+1)=\sum_{n\leq x} n \cdot \frac{\mu(n) \mu(n+1)}{n}
=\int_{1}^{x} t \cdot d V(t).
\end{equation}
For information on the Abel summation formula, see \cite[p.\ 4]{CR06}, \cite[p.\ 4]{MV07}, \cite[p.\ 4]{TG15}.
Lastly, employ Lemma \ref{lem5.1} to evaluate the integral:
\begin{equation}
\int_{1}^{x} t \cdot d V(t)=x \cdot O \left (\frac{1}{\log^{C}x} \right )+O \left (\int_{1}^{x} \frac{1}{\log^{C}t} dt \right )=O \left (\frac{x}{\log^{C}x} \right ) ,
\end{equation}
where the constant is $C>2$. \end{proof}
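A quick numerical check (a Python sketch, outside the text) confirms that the correlation sum is tiny compared with $x$, consistent with the theorem; no particular value is claimed here:

```python
def mobius_sieve(x):
    """Compute mu(0..x) with a linear sieve."""
    mu = [1] * (x + 1)
    mu[0] = 0
    primes, is_comp = [], [False] * (x + 1)
    for i in range(2, x + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > x:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

x = 100_000
mu = mobius_sieve(x + 1)
S = sum(mu[n] * mu[n + 1] for n in range(1, x + 1))
print(S, x)  # |S| is a vanishingly small fraction of x at this range
```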
\subsection{Conditional Upper Bounds}
The conditional upper bounds are derived from the optimal zero-free region $\{s \in \mathbb{C}: \Re e(s)>1/2 \}$ of the zeta function. \\
\begin{thm} \label{thm5.3} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1)}{n} =O \left (\frac{\log^2 x}{x^{1/2}} \right ).
\end{equation} \end{thm}
\begin{thm} \label{thm5.4} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n) \mu(n+1) =O \left (x^{1/2} \log^2 x \right ).
\end{equation} \end{thm}
The proofs are similar to those in Lemma \ref{lem5.1} and Theorem \ref{thm5.1}, but use the conditional result in Lemma \ref{lem3.9}.\\
\subsection{Comparison}
A character $ \chi : \mathbb{Z} \longrightarrow \mathbb{C}$ is a periodic and completely multiplicative function modulo some integer $q \geq 1$, while the Mobius function $ \mu : \mathbb{N} \longrightarrow \{-1,0,1\}$ is not periodic nor completely multiplicative.\\
The corresponding correlation function for the quadratic symbol has an exact evaluation
\begin{equation}
\sum_{n \leq p} \chi(n) \chi(n+k)=-1,
\end{equation}
where $p \nmid k$ and the quadratic symbol is defined by $\chi(n) \equiv n^{(p-1)/2} \mod p$. The well-known proof appears in \cite[p.\ ?]{LN09}, and similar sources. Other related topics are considered in \cite[p.\ 196]{BE09}. \\
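This evaluation is easy to verify numerically. The sketch below (a Python aside, not part of the text) computes the complete sum for the quadratic character modulo small primes via Euler's criterion:

```python
def chi(n, p):
    """Quadratic (Legendre) symbol via Euler's criterion."""
    r = pow(n % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r  # r is 0 or 1 otherwise

sums = []
for p in (23, 101):
    table = [chi(n, p) for n in range(p)]
    for k in range(1, p):
        # complete correlation sum over a full residue system mod p
        sums.append(sum(table[n] * table[(n + k) % p] for n in range(p)))
print(set(sums))  # {-1}
```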
\subsection{Problems}
\begin{exe} \label{exe5.1} {\normalfont Show that $\mu(q)\mu(qm+1)\ne \mu(q)^2 \mu(m)$ for all but a subset of squarefree integers $qm \geq 1$ of zero density in $\mathbb{N}$, see Lemma \ref{lem3.10}. This implies that the finite sum is not correlated. For example, $\sum_{m,q<x}\mu(q)\mu(qm+1)\ne c\sum_{n\leq x} \mu(n+k)^2$ , $c>0$ constant, for any $k \in \mathbb{Z}$. Hint: a) try $q\equiv 3 \mod 4$ and $m\equiv 1 \mod 4$ with $qm$ squarefree. b) try prime $q\equiv \pm1 \mod 4$ and $m\equiv 1 \mod 4$ with $qm$ nonsquarefree.}
\end{exe}
\begin{exe} \label{exe5.2} {\normalfont Use Lemma \ref{lem2.3} to decompose the finite sum in the form
$$
\sum_{n\leq x} \frac{\mu(n)^2}{n} =\sum_{q\leq x} \frac{\mu(q)^2 d(q)}{q} \sum_{m \leq x/q, \gcd(q,m)=1} \frac{\mu(m)}{m}.
$$ }
\end{exe}
\begin{exe} {\normalfont Show that $\mu(q)\mu(qm+1)= \mu(q)^2 \mu(m)$ for all $m,q\geq 1$ would imply correlation. For example, it leads to the much larger upper bound
$$
\left | \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu(qm+1)}{m} \right |
\leq (\log x)\left |\sum_{q\leq x} \frac{\mu(q)^2 d(q)}{q} \right | \ll x^{\varepsilon},
$$
for some $\varepsilon>0$, which implies correlation. }
\end{exe}
\begin{exe} {\normalfont The evaluation of the right side of the series
$$
\sum_{q\leq x} \frac{\mu(q)^2d(q)}{q} \sum_{m \leq x/q, \gcd(m,q)=1} \frac{\mu(m)}{m} =\sum_{n\leq x} \frac{\mu(n)^2}{n}
$$
is well known: $\sum_{n\leq x} \mu(n)^2 /n=6 \pi^{-2} \log x +c_0+O(x^{-1/2})$, where $c_0$ is a constant. Use a different technique to find the equivalent evaluation of the left side. Hint: try the inverse Dirichlet series
$$
\frac{1}{L(s,\chi_{0})}= \frac{1}{\zeta(s)} \prod_{p |q} \left (1-\frac{1}{p^s} \right )^{-1},
$$
where $\chi_{0}=1$ is the principal character mod $q$, see \cite[p.\ 334]{MV07}.}
\end{exe}
\begin{exe} {\normalfont Verify that $\mu(q) \mu(qm+1)=-1$ for $\gg x/(q\log x)$ integers $m,q \in [1,x]$. This proves that $\mu(q)\mu(qm+1) \ne \mu(q)^2 \mu(m)$ for all $m,q\geq 1$.}
\end{exe}
\begin{exe} {\normalfont Let $f(x) \in \mathbb{Z}[x]$ be an irreducible polynomial of degree $\deg(f)=2$. Estimate $\sum_{n\leq x} \mu(f(n))$, consult \cite{BM09} for related works.}
\end{exe}
\begin{exe} {\normalfont Let $\mu(q)\mu(qm+1)\ne \mu(q)^2 \mu(m)$ for all $m,q\geq 1$. Use an exact formula for the inner sum such as
$$
\sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1)}{m}=\frac{\varphi(q)}{q} \left (\log(x/q)+\gamma(q)+O(q/x) \right ),
$$
where $\gamma(q)$ is a constant depending on $q\geq1$, to prove Lemma 5.1:
$$
\sum_{n\leq x} \frac{\mu(n) \mu(n+1)}{n}=\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1)}{m}
=O \left ( \frac{1}{\log^{C-1} x} \right).
$$
Hint: Compare this to the proof for Lemma 2.17 in \cite[p.\ 66]{MV07}.}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Find an asymptotic formula for the finite sum
$$
\sum_{n \leq x} \mu^2(n)\mu(n+1)^2\stackrel{?}{=}\frac{6^2}{\pi^4} x+c_1+O(x^{1/2}),
$$
where $c_1$ is a constant.}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number, and let $k \ne 0$. Find an asymptotic formula for the finite sum
$$
\sum_{n \leq x} \mu^2(n)\mu(n+k)^2\stackrel{?}{=}\frac{6^2}{\pi^4} x+c_k+O(x^{1/2}),
$$
where $c_k$ is a constant. Hint: find a way or an argument to prove that $\sum_{n \leq x} \mu^2(n)\mu(n+k)^2\stackrel{?}{\ne}o(x)$. }
\end{exe}
\newpage
\section{Correlation Functions Of Degree Three} \label{s6}
The multiplicative function $\mu$ has sufficiently many sign changes to force meaningful cancellation on the twisted summatory functions $\sum_{n \leq x} \mu(n)f(n)$ for some functions $f: \mathbb{N} \longrightarrow \mathbb{C}$ such that $f(n) \ne \lambda(n), \mu(n)$. The sign changes are discussed in Section \ref{s4}.
\subsection{Unconditional Estimates}
This section illustrates the calculation of correlation functions of degree three. \\
\begin{lem} \label{lem6.1} Let $C>2$ be a constant, and let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1) \mu(n+2)}{n} =O \left (\frac{1}{\log^{C}x} \right ).
\end{equation}
\end{lem}
\begin{proof} By Lemma \ref{lem2.1}, the normalized correlation function has the equivalent form
\begin{equation} \label{el51}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1) \mu(n+2)}{n} =\sum_{n \leq x} \frac{(-1)^{\omega(n)}\mu^2(n) \mu(n+1)\mu(n+2)}{n}.
\end{equation}
Applying Lemma \ref{lem2.2}, and reversing the order of summation yield:
\begin{eqnarray}
&&\sum_{n \leq x} \frac{(-1)^{\omega(n)} \mu^2(n)\mu(n+1) \mu(n+2)}{n}\nonumber \\
&=&\sum_{n \leq x} \frac{\mu^2(n)\mu(n+1) \mu(n+2)}{n} \sum_{q|n} \mu(q)d(q) \\
&=&\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)\mu(n+1) \mu(n+2)\mu(q)d(q)}{n}\nonumber .
\end{eqnarray}
More details and examples on inverting the order of summation are given in \cite[p.\ 35]{MV07}, \cite[p.\ 27]{RH94}, \cite[p.\ 216]{RM08}, \cite[p.\ 83]{SH83}, \cite[p.\ 36]{TG15}, et cetera.\\
Applying the change of variable $1 \leq n=qm\leq x$, and separating the variables yields a basic decomposition as a product of two simpler finite sums:
\begin{eqnarray} \label{el52}
&&\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)\mu(n+1) \mu(n+2) \mu(q)d(q)}{n} \\
&=&\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1) \mu(qm+2)}{m} \nonumber .
\end{eqnarray}
The inner finite sum is a Mobius sum over the oblique arithmetic progression $\{(qm+1, qm+2):m \in \mathbb{N}\}$ restricted to the squarefree integers $qm \geq 1$, see Theorem \ref{thm3.2} for background and references on this topic. This basic decomposition into a product of two finite sums, confer (\ref{el51}) to (\ref{el52}), facilitates a simple technique for estimating its upper bound.\\
A routine calculation shows that $\mu(qm+1)\mu(qm+2)\ne \mu(q) \mu(m)$ for all but a subset of integers $qm \geq 1$ of zero density in $\mathbb{N}$, see Lemma \ref{lem3.10}, and Problems 6.1 and 6.2. This constraint preempts the possibility of a correlated finite sum, exempli gratia,
\begin{eqnarray} && \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1) \mu(qm+2)}{m} \nonumber \\
&=&\sum_{q\leq x} \frac{\mu(q)^2d(q)}{q} \sum_{\substack{m \leq x/q,\\ \gcd(m,q)=1}} \frac{\mu(m)}{m} \\
&\gg& \log x. \nonumber
\end{eqnarray}
Take the absolute value of the penultimate equation to reach the inequality:
\begin{eqnarray}
&& \left | \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1)\mu(qm+2)}{m} \right | \nonumber \\
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \left | \sum_{m \leq x/q} \frac{\mu^2(qm)\mu(qm+1) \mu(qm+2)}{m} \right |\\
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \sum_{m \leq x/q} \frac{1}{m} \nonumber \\
&\leq& (\log x)\left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \nonumber .
\end{eqnarray}
Apply Lemma \ref{lem3.4} to complete the estimate:
\begin{equation}
\sum_{n \leq x} \frac{\mu(n)\mu(n+1) \mu(n+2)}{n}
=O \left (\frac{1}{\log^{C-1} x} \right ),
\end{equation}
with $C-1>1$ constant. Quod erat demonstrandum.
\end{proof}
\begin{thm} \label{thm6.1} Let $C>2$ be a constant, and let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n) \mu(n+1) \mu(n+2)=O \left (\frac{x}{(\log x)^{C}} \right ).
\end{equation}
\end{thm}
\begin{proof} Let $V(x)= \sum_{n \leq x}\frac{ \mu(n) \mu(n+1) \mu(n+2)}{n}$. A summation by parts leads to the integral representation
\begin{equation}
\sum_{n\leq x} \mu(n) \mu(n+1) \mu(n+2)=\sum_{n\leq x} n \cdot \frac{\mu(n) \mu(n+1) \mu(n+2)}{n}
=\int_{1}^{x} t \cdot d V(t).
\end{equation}
For information on the Abel summation formula, see \cite[p.\ 4]{CR06}, \cite[p.\ 4]{MV07}, \cite[p.\ 4]{TG15}.\\
Lastly, employ Lemma \ref{lem6.1} to evaluate the integral:
\begin{equation}
\int_{1}^{x} t \cdot d V(t)=x \cdot O \left (\frac{1}{\log^{C}x} \right )+O \left (\int_{1}^{x} \frac{1}{\log^{C}t} dt \right )=O \left (\frac{x}{\log^{C}x} \right ) ,
\end{equation}
where the constant is $C>2$. Quod erat demonstrandum.
\end{proof}
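As with the degree-two case, a numerical check (a Python sketch, outside the text) shows the triple correlation sum is tiny compared with $x$, consistent with the theorem:

```python
def mobius_sieve(x):
    """Compute mu(0..x) with a linear sieve."""
    mu = [1] * (x + 1)
    mu[0] = 0
    primes, is_comp = [], [False] * (x + 1)
    for i in range(2, x + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > x:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

x = 100_000
mu = mobius_sieve(x + 2)
S = sum(mu[n] * mu[n + 1] * mu[n + 2] for n in range(1, x + 1))
print(S, x)  # |S| is a vanishingly small fraction of x at this range
```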
This idea generalizes to the calculations for correlation functions of higher degrees $>3$. Moreover, a recursive calculation of the correlation function of degree three, using the result for the correlation function of degree two, is also feasible. \\
\subsection{Conditional Upper Bounds}
The conditional upper bounds are derived from the optimal zero-free region $\{s \in \mathbb{C}: \Re e(s)>1/2 \}$ of the zeta function $\zeta(s), s \in \mathbb{C}$. \\
\begin{thm} \label{thm6.2} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \frac{\mu(n) \mu(n+1) \mu(n+2)}{n}=O \left (\frac{\log^{2} x}{x^{1/2}} \right ).
\end{equation} \end{thm}
\begin{thm} \label{thm6.3} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\mu:\mathbb{N} \longrightarrow \{-1,0,1\}$ be the Mobius function. Then, for any sufficiently large number $x>1$,
\begin{equation}
\sum_{n \leq x} \mu(n) \mu(n+1) \mu(n+2)=O \left (x^{1/2}\log^{2} x \right ).
\end{equation}
\end{thm}
The proofs are similar to those in Lemma \ref{lem6.1} and Theorem \ref{thm6.1}, but use the conditional result in Lemmas \ref{lem3.8}, and \ref{lem3.9}.\\
\subsection{Comparison}
The corresponding correlation functions of degree three for quadratic symbols have the well-known Weil upper bound:\\
\begin{equation}
\sum_{n \leq p} \chi(n) \chi(n+a)\chi(n+b) \leq 2\sqrt{p},
\end{equation}
where $ab \not \equiv 0 \mod p$ and $a \not \equiv b \mod p$, see \cite[p.\ 183]{BE09}. The quadratic symbol is defined by $\chi(n) \equiv n^{(p-1)/2} \mod p$. In some cases there are exact evaluations, see \cite[p.\ 206]{BE09} and similar sources. In addition, there are specialized algorithms to compute them, confer the literature on CM curves. \\
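The Weil bound is also easy to test numerically; the following Python aside (not part of the text) checks it exhaustively over all admissible shifts for one small prime, using $\chi(n)\chi(n+a)\chi(n+b)=\chi(n(n+a)(n+b))$:

```python
import math

def chi(n, p):
    """Quadratic (Legendre) symbol via Euler's criterion."""
    r = pow(n % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 101
table = [chi(n, p) for n in range(p)]
worst = 0
for a in range(1, p):
    for b in range(a + 1, p):  # distinct nonzero shifts a != b (mod p)
        S = sum(table[n] * table[(n + a) % p] * table[(n + b) % p]
                for n in range(p))
        worst = max(worst, abs(S))
print(worst, 2 * math.sqrt(p))  # worst case stays below 2*sqrt(p)
```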
\subsection{Problems}
\begin{exe} {\normalfont Show that $\mu(q) \mu(qm+1)\mu(qm+2)\ne \mu(q)^2 \mu(m)$ for all but a subset of integers $qm \geq 1$ of zero density in $\mathbb{N}$, see Lemma \ref{lem3.10}. This implies that the finite sum $ \sum_{m \leq x/q} \frac{\mu(qm+1) \mu(qm+2)}{m} $ is not correlated.}
\end{exe}
\begin{exe} {\normalfont Verify that $\mu(qm+1)\mu(qm+2)= \mu(q) \mu(m)$ for all $m,q\geq 1$ with $\gcd(m,q)=1$ would imply that
$$
\sum_{n\leq x} \frac{\mu(n)\mu(n+1)\mu(n+2)}{n} =\sum_{q\leq x} \frac{\mu(q)^2 d(q)}{q} \
\sum_{m \leq x/q, \gcd(m,q)=1} \frac{\mu(m)}{m} \gg (\log x).
$$
Thus, there is correlation.}
\end{exe}
\begin{exe} {\normalfont Verify that the arithmetic function equation $\mu(qm+1) \mu(qm+2)=-1$ holds for $\gg x/(q\log^8 x)$ integers $m,q \in [1,x]$, confer \cite{CE03}, \cite{HA86}, et alii. This proves that $\mu(qm+1)\mu(qm+2) \ne \mu(q) \mu(m)$ for all $m,q\geq 1$.}
\end{exe}
\begin{exe} {\normalfont Let $f(x) \in \mathbb{Z}[x]$ be an irreducible polynomial of degree $\deg(f)=3$. Estimate $\sum_{n\leq x} \mu(f(n))$.}
\end{exe}
\begin{exe} {\normalfont Use Lemmas \ref{lem2.2}, and \ref{lem2.3}, to compute a decomposition of
$$
(i) \quad \sum_{n\leq x} \frac{\mu(n)^2\mu(n+1)\mu(n+2)}{n} \hskip 1 in (ii) \quad \sum_{n\leq x} \frac{\mu(n)^2\mu(n+1)^2\mu(n+2)}{n}
$$
as a product of two simpler finite sums.}
\end{exe}
\newpage
\section{Correlation Functions Of Higher Degrees} \label{s7}
The sign changes of multiplicative functions are studied in \cite{CE03}, \cite{EP85}, \cite{HR86}, \cite{HP85}, \cite[Corollary 4]{MM15}. In addition, some discussion is given in Section \ref{s4}. \\
Recent contributions on the correlations of Liouville and Mobius functions are described in Section \ref{s1}, and appear in \cite{CR14}, \cite{CD15}, \cite{MR15}, \cite{DK15}, \cite{MM15}, et alii.\\
\subsection{Unconditional Estimates}
The next result provides some analytic tools for the estimation of certain correlation functions such as those in Theorem \ref{thm1.1}, Theorem \ref{thm1.2} and the twisted finite sums $\sum_{n \leq x} \mu(n) f(n)$ of high degrees $\geq 3$. It is a straightforward generalization of Sections \ref{s5} and \ref{s6} for the correlation functions of degrees two and three. The multiplicative structure of the function provides additional flexibility in the evaluation or estimation of the twisted finite sum $\sum_{n \leq x} \mu(n)f(n)$.\\
\begin{lem} \label{lem7.1} Let $C>B+1 \ge 1$ be a constant, and let $f,g:\mathbb{N} \longrightarrow \mathbb{C}$ be multiplicative functions. Assume that
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $f(n)\ll \log ^{B}x$ for all $n\leq x$.
\item $f(n)\ne \mu(n)g(n),\mu_*(n)g(n), \lambda(n)g(n)$ \\
for all but a subset of integers $n \geq 1$ of zero density in $\mathbb{N}$.
\end{enumerate}
Then, for any sufficiently large number $x>1$
\begin{equation}
\sum_{n \leq x} \frac{\mu(n) f(n)}{n} =O \left ( \frac{1}{(\log x)^{C-B-1}} \right ).
\end{equation}
\end{lem}
\begin{proof} By Lemma \ref{lem2.1}, the normalized twisted sum has the equivalent form
\begin{equation} \label{el61}
\sum_{n \leq x} \frac{\mu(n) f(n)}{n} =\sum_{n \leq x} \frac{(-1)^{\omega(n)} \mu^2(n)f(n)}{n}.
\end{equation}
Apply Lemma \ref{lem2.2}, and reverse the order of summation to obtain this:
\begin{eqnarray}
\sum_{n \leq x} \frac{(-1)^{\omega(n)} \mu^2(n)f(n)}{n}
&=&\sum_{n \leq x} \frac{\mu^2(n)f(n)}{n} \sum_{q|n} \mu(q)d(q) \\
&=&\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)f(n)\mu(q)d(q)}{n} \nonumber.
\end{eqnarray}
Information on inverting the order of summation is given in \cite[p.\ 35]{MV07}, \cite[p.\ 27]{RH94}, \cite[p.\ 216]{RM08}, \cite[p.\ 83]{SH83}, \cite[p.\ 36]{TG15}, et cetera.\\
Applying the change of variable $1 \leq n=qm\leq x$, and separating the variables
yields a basic decomposition as a product of two simpler finite sums:
\begin{eqnarray} \label{el62}
\sum_{q\leq x} \sum_{n \leq x, q|n} \frac{\mu^2(n)f(n)\mu(q)d(q)}{n}
&=& \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)f(qm)}{m}\\
&=& \sum_{q\leq x} \frac{\mu(q)d(q)f(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(m)f(m)}{m} \nonumber.
\end{eqnarray}
The inner finite sum is a twisted Mobius sum restricted to the squarefree integers $qm\ge 1$. This basic decomposition into a product of two simpler finite sums, confer (\ref{el61}) to (\ref{el62}), facilitates a simple technique for estimating its upper bound.\\
By hypothesis, $f(n)\ne \mu(n)g(n),\mu_*(n)g(n), \lambda(n)g(n)$. Thus,
\begin{equation}
f(qm)\ne \mu(q) g(m), \qquad \mu_*(q)g(m), \qquad \lambda(q)g(m),
\end{equation}
for all but a subset of squarefree integers $qm \geq 1$ of zero density in $\mathbb{N}$, refer to Lemma \ref{lem3.10} for some information. This constraint preempts the possibility of a correlated finite sum
\begin{equation}
\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)f(qm)}{m}
=\sum_{q\leq x} \frac{\mu(q)^2d(q)}{q} \sum_{m \leq x/q, \gcd(m,q)=1} \frac{\mu(m)}{m} \gg \log x.
\end{equation}
Take the absolute value of the penultimate equation to reach the inequality:
\begin{eqnarray}
\left | \sum_{q\leq x} \frac{\mu(q)d(q)}{q} \sum_{m \leq x/q} \frac{\mu^2(qm)f(qm)}{m} \right |
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \left | \sum_{m \leq x/q} \frac{\mu^2(qm)f(qm)}{m} \right | \nonumber \\
&\leq& \left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | \sum_{m \leq x/q} \frac{|\mu^2(qm)f(qm)|}{m} \nonumber \\
&\leq& (\log x)^{B+1}\left |\sum_{q\leq x} \frac{\mu(q)d(q)}{q} \right | .
\end{eqnarray}
Apply Lemma \ref{lem3.4} to complete the estimate:
\begin{equation}
\sum_{n \leq x} \frac{f(n)\mu(n)}{n}
=O \left (\frac{1}{\log^{C-B-1} x} \right ),
\end{equation}
with $C>B+1\ge 1$ constant.
\end{proof}
\begin{thm} \label{thm7.1} Let $C>B+1 \ge 1$ be a constant, and let $f,g:\mathbb{N} \longrightarrow \mathbb{C}$ be multiplicative functions. Assume that
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $f(n)\ll \log ^{B}x$ for all $n\leq x$.
\item $f(n)\ne \mu(n)g(n),\mu_*(n)g(n), \lambda(n)g(n)$ \\
for all but a subset of integers $n \geq 1$ of zero density in $\mathbb{N}$.
\end{enumerate}
Then, for any sufficiently large number $x>1$
\begin{equation}
\sum_{n \leq x} \mu(n)f(n) =O\left(\frac{x}{(\log x)^{C-B-1}} \right).
\end{equation}
\end{thm}
\begin{proof} Let $V(x)= \sum_{n \leq x}\frac{ \mu(n) f(n)}{n}$. Thus, Lemma \ref{lem7.1} is applicable. \\
A summation by parts leads to the integral representation
\begin{equation}
\sum_{n\leq x} \mu(n) f(n)=\sum_{n\leq x} n \cdot \frac{\mu(n) f(n)}{n}
=\int_{1}^{x} t \cdot d V(t).
\end{equation}
For information on the Abel summation formula, see \cite[p. 4]{CR06}, \cite[p. 4]{MV07}, \cite[p. 4]{TG15}. Use Lemma \ref{lem3.1} to compute an upper bound for the integral:
\begin{eqnarray}
\left | \int_{1}^{x} t \cdot d V(t) \right |
& \leq & x \cdot \left | \int_{1}^{x} d V(t) \right | \nonumber\\
& \leq & x \cdot \left |V(x)-V(1) \right | \\
&=&O \left ( \frac{x}{(\log x)^{C-B-1}} \right )\nonumber ,
\end{eqnarray}
where $V(x)-V(1) \ll 1/(\log x)^{C-B-1}$, with $C>B+1 \ge 1$ constant.
\end{proof}
The correlated case $\mu(n)f(n)=\alpha \mu(n)^2$, where $\alpha >0$ is a constant, collapses to the norm $\sum_{n\leq x} \mu(n) f(n)= \alpha \sum_{n\leq x} \mu(n)^2 \gg x$. \\
\subsection{Conditional Upper Bounds}
The conditional upper bounds are derived from the optimal zero-free region $\{s \in \mathbb{C}: \Re e(s)>1/2 \}$ of the zeta function $\zeta(s), s \in \mathbb{C}$. \\
\begin{thm} \label{thm7.2} Let $f,g:\mathbb{N} \longrightarrow \mathbb{C}$ be multiplicative functions. Assume that
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \zeta(\rho )=0 \Longleftrightarrow \rho =1/2+it, t \in \mathbb{R}$.
\item $f(n)\ll \log ^{B}x$ for all $n\leq x$.
\item $f(n)\ne \mu(n)g(n),\mu_*(n)g(n), \lambda(n)g(n)$ \\
for all but a subset of integers $n \geq 1$ of zero density in $\mathbb{N}$.
\end{enumerate}
Then, for any sufficiently large number $x>1$
\begin{equation}
\sum_{n \leq x} \frac{\mu(n)f(n)}{n} =O\left(\frac{(\log x)^{B+1}}{x^{1/2}} \right).
\end{equation}
\end{thm}
\begin{thm} \label{thm7.4} Let $f, g:\mathbb{N} \longrightarrow \mathbb{C}$ be multiplicative functions. Assume that
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \zeta(\rho )=0 \Longleftrightarrow \rho =1/2+it, t \in \mathbb{R}$.
\item $f(n)\ll \log ^{B}x$ for all $n\leq x$.
\item $f(n)\ne \mu(n)g(n),\mu_*(n)g(n), \lambda(n)g(n)$ \\
for all but finitely many integers $n \geq 1$.
\end{enumerate}
Then, for any sufficiently large number $x>1$
\begin{equation}
\sum_{n \leq x} \mu(n)f(n) =O\left(x^{1/2}(\log x)^{B+1} \right).
\end{equation}
\end{thm}
The proofs are similar to the proofs of Lemma \ref{lem7.1} and Theorem \ref{thm7.1}, but use the conditional results in Lemmas \ref{lem3.8} and \ref{lem3.9}.\\
\subsection{Problems}
\begin{exe} {\normalfont Let $f$ be a multiplicative function. Estimate or determine an asymptotic formula for $\sum_{n \leq x}\mu(n)^2f(n)$. What if $f$ is not multiplicative?}
\end{exe}
\begin{exe} {\normalfont Let $1\leq a<q$, $\gcd(a,q)=1$ be fixed integers, and let $f$ be a multiplicative function. Estimate or determine an asymptotic formula for $\sum_{n \leq x, n\equiv a \text{ mod } q}\mu(n)^2f(n)$. What if $f$ is not multiplicative?}
\end{exe}
\newpage
\section{Some Arithmetic Correlation Functions} \label{s8}
Let $f:\mathbb{N} \longrightarrow \mathbb{C}$ be an arithmetic function, let $k \geq 1$ be a fixed integer, and let $\tau_{1}, \tau_{2},\ldots, \tau_{k} \in \mathbb{N}$. The $k$-degree correlation function of $f$ is defined by
\begin{equation}
R(\tau)=\sum_{n \leq x} f(n+\tau_{1})f(n+\tau_{2}) \cdots f(n+\tau_{k}).
\end{equation}
The case $\tau_{1}=\tau_{2}= \cdots =\tau_{k}$, with $k \in 2\mathbb{Z}$, is usually not difficult to estimate or calculate. But the case $\tau_{i}\neq \tau_{j}$ for some $i \neq j$ is usually a challenging problem. \\
Trivially, the correlation function has the upper bound $|R(\tau)|\ll |f|^k x$. And a priori, a random sequence $ f(n),f(n+1),f(n+2), \ldots $ is expected to have the upper bound $|R(\tau)|\ll |f|^k x^{1/2}(\log x)^B$, where $B>0$ is a constant, see \cite{CS99}, \cite{CS00}, \cite[Theorem 2]{CS02}, and \cite{AR07}. Some extreme cases such that $|R(\tau)|\gg |f|^k x$ are demonstrated in \cite{MS98}.\\
\subsection{Divisors Correlation Functions}
The shifted divisor problem $\sum_{n \leq x} d(n) d(n+1)$ was estimated in 1928, see \cite{IA28}, and \cite{AR07}. The next level of complexity, $\sum_{n \leq x} d_k(n) d_m(n+1)$ for various integer parameters $k,m\geq2$, has a vast literature. In contrast, the analysis for the triple correlation $\sum_{n \leq x} d(n) d(n+1)d(n+2)$ is relatively new. Some rudimentary analysis was established in 2015, see \cite{BV15}, and the function fields version was proved in \cite{AR14}.\\
\subsection{Divisor and von Mangoldt Correlation Functions}
The shifted prime divisor problem, better known as the Titchmarsh divisor problem, that is
\begin{equation}
\sum_{p \leq x} d(p-a)=a_0x+a_1\li(x)+O\left( \frac{x}{\log^Cx} \right )
\end{equation}
where $a \ne0$ is a fixed integer, and $a_0,a_1>0$ are constants, was conditionally estimated in 1931, see \cite{TE31}, and unconditionally in \cite{LJ63}. Later the analysis was simplified in \cite{RG65}. Utilizing summation by parts, this result is equivalent to the correlation of the von Mangoldt function and the divisor function. Specifically, the analysis for
\begin{equation}
\sum_{n \leq x} \Lambda(n)d(n-a)
\end{equation}
and $\sum_{p \leq x}d(p+a)$ are equivalent.
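The first few terms of these sums are easy to check by brute force; a Python sketch (trial division only, so suitable just for tiny $x$):

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            return False
        p += 1
    return True

def d(n):
    """Divisor-counting function d(n)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def titchmarsh_sum(x, a):
    """sum_{p <= x} d(p - a) over primes p with p - a >= 1."""
    return sum(d(p - a) for p in range(2, x + 1)
               if is_prime(p) and p - a >= 1)
```

For example, with $a=1$ the terms up to $x=10$ are $d(1)+d(2)+d(4)+d(6)=1+2+3+4=10$.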
\subsection{Characters Correlation Functions}
The characters $ \chi : \mathbb{Z} \longrightarrow \mathbb{C}$ are periodic and completely multiplicative functions modulo some integer $q \geq 1$. Often, these properties allow simpler analysis of the character correlation functions. Several classes of these correlation functions have been settled. One of these is the binary sequence of the (Legendre) quadratic symbol $f(n)= \chi(n)$, where $\chi(n) \equiv n^{(p-1)/2} \mod p$. Specifically,
\begin{equation}
\sum_{n \leq x} \chi(n+\tau_{1}) \chi(n+\tau_{2}) \cdots \chi(n+\tau_{k})=O(k x^{1/2}\log x).
\end{equation}
This estimate, which is derived from the Weil bound, depends on the degree $k \geq 1$ and on the sequence of integers $\tau_{1},\tau_{2}, \ldots,\tau_{k}$, see \cite[p.\ 183]{BE09}, \cite[p.\ 112]{CS02}, and the literature.\\
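The quadratic symbol and its shifted correlation over a full period can be checked numerically with Euler's criterion; a small Python sketch (the primes chosen below are arbitrary):

```python
def legendre(n, p):
    """Quadratic symbol chi(n) = n^((p-1)/2) mod p, mapped to {-1, 0, 1}."""
    r = pow(n % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def shifted_character_sum(p, shifts):
    """Sum over a full period n = 0..p-1 of prod_i chi(n + tau_i)."""
    total = 0
    for n in range(p):
        term = 1
        for t in shifts:
            term *= legendre(n + t, p)
        total += term
    return total
```

For the degree-2 case with shifts $0,1$, the complete sum evaluates to the classical value $-1$ for every odd prime, well within the Weil-type bound $2k\sqrt{p}$.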
\newpage
\section{Improved Correlation Result} \label{s9}
This section completes the proofs for the correlations of the Mobius function and the Liouville function over the interval $[1,x]$.\\
\subsection{Main Results}
\begin{proof} (Proof of Theorem \ref{thm1.1}:) Without loss of generality, let $\tau_{0}=0, \tau_{1}=1, \ldots, \tau_{k}=k-1$, and let $f(n)= \mu(n+1) \cdots \mu(n+k-1)$, where $k \geq 1$ is a
fixed constant. By routine calculations, it can be demonstrated that $f(qm)\ne \mu(q) g(m)$ for all but a subset of integers $n=qm \geq 1$ of zero density
in $\mathbb{N}$, see Lemma \ref{lem3.10}. Ergo, the expression
\begin{equation}
\mu(q)f(qm)=\mu(q) \mu(qm+1) \cdots \mu(qm+k-1)\ne \mu(q)^2\mu(m)
\end{equation}
for all but a subset of integers $qm \geq 1$ of zero
density in $\mathbb{N}$, see Lemma \ref{lem3.10}, and Problems 5.1, and 5.2. These imply that the finite sum is not correlated. Thus,
\begin{eqnarray}
\sum_{n \leq x} \mu(n)\mu(n+1) \cdots \mu(n+k-1) &=&\sum_{n \leq x} \mu(n)f(n)\\
&\ne&\sum_{n \leq x} \mu(n+r)^2\nonumber,
\end{eqnarray}
for any $r \in \mathbb{Z}$. By Theorem \ref{thm7.2}, this becomes\\
\begin{eqnarray} \label{el81}
\sum_{n \leq x} \mu(n)\mu(n+1) \cdots \mu(n+k-1) &=&\sum_{n \leq x} \mu(n) f(n)\\
&=&O \left ( \frac{x}{(\log x)^{C-B-1}} \right) \nonumber,
\end{eqnarray}
where $C-B-1>1$ is a constant, and $B=0$ since $|f(n)| \leq 1$.
\end{proof}
The proof of Theorem \ref{thm1.2} is the same as the proof of Theorem \ref{thm1.1} above, but uses Lemma \ref{lem3.5}. \\
In synopsis, setting $\tau=k-1$, the correlation function $R(\tau)=\sum_{n \leq x} \mu(n)\mu(n+1) \cdots \mu(n+\tau) $ is a ``two-value'' function, videlicet,
\begin{equation}
R(\tau) =
\left \{
\begin{array}{lr}
x &\tau=2m-1,\\
O(x/(\log x)^{C}) & \tau \ne 2m-1,\\
\end{array}
\right .
\end{equation}
with $R(0)=\sum_{n \leq x} \mu(n)^2 \gg x $, which is the energy of the function $f(n)=\mu(n)$ over the interval $[1,x]$. \\
\subsection{Comparison}
The corresponding correlation functions of degree $k\ge 2$ for quadratic symbols have the well-known Weil upper bound:
\begin{equation} \label{el82}
\sum_{n \leq p} \chi(f(n)) \leq 2k\sqrt{p},
\end{equation}
where $f(x) \in \mathbb{Z}[x]$ is a polynomial such that $f(x) \ne g(x)^2$, see \cite[p. 183]{BE09}, and similar references. The quadratic symbol is defined by $\chi(n) \equiv n^{(p-1)/2} \mod p$. \\
Another advantage offered by the Mobius sequence $\{\mu(n):n\geq 1\}$ and the Liouville sequence $\{\lambda(n):n\geq 1\}$ is independence of the degree $k \geq 1$. In the case of the sequence of quadratic symbols $ \chi(n),\chi(n+1), \ldots, \chi(n+k-1)$, the correlation depends on the degree $k \geq 1$, compare (\ref{el81}) and (\ref{el82}).\\
\newpage
\section{Correlation Functions over Subsets of Integers}
Some work has been done to estimate the correlation of the Liouville function over specific subsets of integers.\\
\begin{dfn} \label{dfn10.1} {\normalfont A subset of integers $A \subset \mathbb{N}$ is called a \textit{normal} subset if any binary sequence $s(N)=(s_{0}(n_0),s_{1}(n_1),...,s_{N-1}(n_{N-1}))$, $s_{i} \in \{-1,1\}$,
where $n_i \in A$, of length $N \geq 1$ occurs with probability $2^{-N}$.}
\end{dfn}
Let $Q \subset \mathbb{P}=\{2,3,5, \ldots\}$ be a subset of primes. The restricted Liouville function is defined by
\begin{equation}
\lambda_Q(p) =
\left \{
\begin{array}{ll}
-1 &p \in Q,\\
0 &p \notin Q.\\
\end{array}
\end{array}
\right .
\end{equation}
\begin{thm} \label{thm10.2} {\normalfont (\cite{FA05})} Let $k\geq 1$ be a small fixed integer. Then, for every large number $x>1$,
\begin{equation}
\sum_{n \leq x,\; n+n_i \in Q} \lambda_Q(n+n_{1}) \lambda_Q(n+n_{2})\cdots\lambda_Q(n+n_{k})=O\left (\frac{x}{(\log x)^{C}} \right )
\end{equation}
for any fixed sequence of integers $n_{1}<n_{2}<\cdots<n_{k}$, for which $n+n_i \in Q$.
\end{thm}
This result has applications in Diophantine equations on restricted subsets of integers. Several problems are described in the same reference.
\newpage
\section{Exponential Sums Estimates} \label{s11}
Exponential sums $\sum_{n \leq x} f(n) e^{i2 \pi \alpha n},$ where $0<\alpha<1$ is a real number, with the multiplicative coefficients $f(n)=\mu(n)$ and $f(n)=\lambda(n)$ are treated here. The earliest result on these exponential sums appears to be $\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n} =O(x(\log x)^{-c}), c>0,$ in \cite{DH37}. Recent contributions appear in \cite{DD74}, \cite{MV77}, \cite{HS87}, \cite{BR99}, \cite{BG03}, \cite{MS11}, \cite{MM15}, et alii.\\
\subsection{Sharper Estimate}
The proof for a better unconditional estimate relies on the zerofree region $\Re e(s)>1-c_0/\log t$, where $c_0>0$ is a constant, and $t \in \mathbb{R}$.\\
\begin{thm} \label{thm11.1} {\normalfont (\cite{HS87}) } Let $\alpha \in \mathbb{R}$ be a real number such that $0<\alpha<1$, and let $x \geq 1$ be a large number. Then,
\begin{equation}
\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n} =O \left (xe^{-c\sqrt{\log x}} \right ),
\end{equation}
where $c>0$ is an absolute constant.
\end{thm}
This result is a corollary of a more general result for arbitrary functions of certain forms given in Theorem 7.2. This proof is considerably simpler than the proofs given in
\cite{BG03}, and \cite{MS11}, which are based on the circle method and the Vaughan identity, respectively.
\\
\subsection{Conditional Upper Bound}
An upper bound conditional on the generalized Riemann hypothesis proved in \cite{BH91} claims that $
\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n} \ll x^{3/4+\varepsilon},$
where $\varepsilon>0$ is an arbitrarily small number. The analysis is based on the zerofree region of $L$-functions.\\
An improved upper bound derived from the optimal zerofree region $\{s \in \mathbb{C}: \Re e(s)>1/2 \}$ of the zeta function is presented here. \\
\begin{thm} \label{thm11.2} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\alpha \in \mathbb{R}$ be a real number such that $0<\alpha<1$,
and let $x \geq 1$ be a large number. Then,
\begin{equation}
\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n} =O \left (x^{1/2}\log^{2}x \right).
\end{equation}
\end{thm}
\begin{proof} Let $f(n)=e^{i2 \pi \alpha n}$ with $0<\alpha<1$ a real number. Two cases will be examined. \\
\textit{Case} 1. Assume $\alpha \in \mathbb{R}$ is an irrational number. A routine calculation shows that $\mu(n) \ne e^{i \pi \alpha n}$ for all integers but $n=0$. Ergo,
\begin{equation}
\sum_{n\leq x} \mu(n) f(n) \ne \sum_{n\leq x} \mu(n+m)^2
\end{equation}
for any fixed integer $m \in \mathbb{Z}$. This confirms that there is no correlation, and Lemmas \ref{lem7.1} and \ref{lem7.2} are applicable. Let $V(x)= \sum_{n \leq x}\frac{ \mu(n) f(n)}{n}$. A summation by
parts leads to the integral representation
\begin{equation}
\sum_{n\leq x} \mu(n) f(n)=\sum_{n\leq x} n \cdot \frac{\mu(n)f(n)}{n}
=\int_{1}^{x} t \cdot d V(t).
\end{equation}
Lastly, employ Lemma \ref{lem6.3} to evaluate the integral:
\begin{equation}
\int_{1}^{x} t \cdot d V(t)=x \cdot O \left (\frac{\log^2 x}{x^{1/2}} \right )+O \left (\int_{1}^{x} \frac{\log^{2}t}{t^{1/2}} dt \right )=O \left (x^{1/2}\log^2 x \right ).
\end{equation}
\textit{Case} 2. Assume $\alpha \in \mathbb{R}$ is a rational number. Then, $f(n)=e^{i2 \pi \alpha n}$ is a periodic function. Consequently,
\begin{eqnarray}
\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n} &=&\sum_{k \bmod q} \sum_{n \leq x,\, n \equiv k \bmod q} \mu(n) e^{i2 \pi \alpha n} \nonumber \\
&=&\sum_{k \bmod q} e^{i2 \pi \alpha k}\sum_{n \leq x,\, n \equiv k \bmod q} \mu(n) \\
&=&O \left (x^{1/2}\log^{2}x \right) \nonumber
\end{eqnarray}
where $\alpha=m/q$.
\end{proof}
The same estimate can be computed by other means, such as the Perron formula.
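These twisted sums are also easy to experiment with numerically; the following Python sketch computes $\sum_{n \leq x} \mu(n) e^{i2\pi \alpha n}$ directly (the choices of $x$ and $\alpha$ below are arbitrary).

```python
import cmath

def mobius(n):
    """Mobius function via trial-division factorization."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def twisted_sum(x, alpha):
    """sum_{n <= x} mu(n) * exp(2 pi i alpha n)."""
    return sum(mobius(n) * cmath.exp(2j * cmath.pi * alpha * n)
               for n in range(1, x + 1))
```

For the rational point $\alpha=1/2$ the twist is just $(-1)^n$, and $\sum_{n \leq 4}\mu(n)(-1)^n=-1$; for an irrational $\alpha$ such as $\sqrt{2}-1$, the magnitude empirically stays near the square-root scale predicted by Theorem \ref{thm11.2}.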
\subsection{Problems}
\begin{exe} {\normalfont Let $\alpha \in \mathbb{Q}$ be a rational number. Show that the subset of integers $A_\alpha=\{ n \in \mathbb{N}:\mu(n) e^{i2 \pi \alpha n}= \mu^2(n)\}$ is infinite but has zero density in the set of integers $\mathbb{N}$. Hint: Consider squarefree integers $ \alpha n \in \mathbb{Z}.$ }
\end{exe}
\begin{exe} {\normalfont Let $\alpha \notin \mathbb{Q}$ be an irrational number. Show that $\mu(n) e^{i2 \pi \alpha n} \ne \mu^2(n)$ for all integers $n \geq 1$.}
\end{exe}
\begin{exe} {\normalfont Let $\alpha \notin \mathbb{Q}$ be an irrational number. Compute an estimate for $$\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n^2}$$ for large numbers $x \geq 1$.}
\end{exe}
\begin{exe} {\normalfont Let $\alpha \notin \mathbb{Q}$ be an irrational number. Compute an estimate for $$\sum_{n \leq x} \mu(n) e^{i2 \pi \alpha n^3}$$ for large numbers $x \geq 1$.}
\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number, and let $s(n)=\sum_{0 \leq i \leq k} n_i$, where $n=\sum_{0
\leq i \leq k} n_i\cdot 2^i.$ Compute an estimate for the finite sums
$$
\sum_{n \leq x} \frac{\mu(n)(-1)^{s(n)}}{n} \text{ and } \sum_{n \leq x} \mu(n)(-1)^{s(n)}.
$$}\end{exe}
\begin{exe} {\normalfont Let $x\geq 1$ be a large number. Compute an estimate for the finite sum
$$
\sum_{n \leq x} \mu^2(n)e^{i2 \pi \alpha n}.
$$ }\end{exe}
\newpage
\section{Twisted Arithmetic Sums} \label{s12}
A sample of finite sums of arithmetic functions twisted by the Mobius function are given here.\\
\begin{thm} \label{thm12.1} Suppose that $ \zeta(\rho)=0 \Longleftrightarrow \rho=1/2+it, t \in \mathbb{R}$. Let $\varphi$ be the totient function, and let $x \geq 1$ be a large number. Then,
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item $ \displaystyle \sum_{n \leq x} \mu(n) \varphi(n) =O \left (x^{3/2}\log x \right) . $
\item $ \displaystyle\sum_{n \leq x} \lambda(n) \varphi(n) =O \left (x^{3/2}\log x \right). $
\end{enumerate}
\end{thm}
\begin{proof} Let $\varphi(n)=\sum_{d|n} \mu(d)\frac{n}{d}$. Rewrite the sum as
\begin{equation}
\sum_{n\leq x} \mu(n) \varphi(n) = \sum_{n\leq x} \mu(n)\sum_{d|n} \mu(d)\frac{n}{d}.
\end{equation}
Now let $n=dm \leq x $. Since $\mu(dm)=\mu(d)\mu(m)$ if $\gcd(d,m)=1$, and $\mu(dm)=0$ otherwise, substituting this returns
\begin{equation}
\sum_{n\leq x} \mu(n) \varphi(n)=\sum_{d \leq x} \mu(d) \sum_{m\leq x/d,\, \gcd(m,d)=1} \mu(dm)m=\sum_{d \leq x} \mu(d)^2 \sum_{m\leq x/d,\, \gcd(m,d)=1} \mu(m)m.
\end{equation}
Let $M(y)=\sum_{m \leq y} \mu(m) \ll y^{1/2}\log y$, see Lemma 3.1; the same bound holds for the sum restricted to $\gcd(m,d)=1$. A summation by parts leads to the integral representation
\begin{equation}
\sum_{m\leq y,\, \gcd(m,d)=1} \mu(m)m
=\int_{1}^{y} t \cdot d M(t) \ll y^{3/2}\log y.
\end{equation}
Applying this with $y=x/d$ yields
\begin{equation}
\sum_{n\leq x} \mu(n) \varphi(n) \ll \sum_{d \leq x} \mu(d)^2 \left ( \frac{x}{d} \right )^{3/2}\log \frac{x}{d} \ll x^{3/2}\log x \sum_{d \geq 1} \frac{1}{d^{3/2}} \ll x^{3/2}\log x .
\end{equation}
This completes the proof of (i). The proof of (ii) is similar, using the complete multiplicativity $\lambda(dm)=\lambda(d)\lambda(m)$ and the conditional bound $\sum_{m \leq y}\lambda(m) \ll y^{1/2}\log y$.
\end{proof}
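Small values of the twisted sum in Theorem \ref{thm12.1} can be checked directly; a Python sketch (brute-force totient, so only suitable for tiny $x$):

```python
from math import gcd

def mobius(n):
    """Mobius function via trial-division factorization."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def phi(n):
    """Euler totient by direct coprimality count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def twisted_totient_sum(x):
    """sum_{n <= x} mu(n) * phi(n)."""
    return sum(mobius(n) * phi(n) for n in range(1, x + 1))
```

The first terms are $1 - 1 - 2 + 0 = -2$ at $x=4$.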
\newpage
\section{Fractional Functions} \label{s13}
The alternating fractional function has the Fourier series
\begin{equation}
D(x)=-\frac{1}{\pi}\sum_{n\geq1} \frac{\sin(2 \pi nx)}{n}
=\left \{\begin{array}{ll}
\{x\}-1/2 &x \not \in \mathbb{Z},\\
0 &x \in \mathbb{Z}.\\
\end{array}
\right.
\end{equation}
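The truncated series converges to $\{x\}-1/2$ away from the integers; a quick Python sketch (the truncation level $N$ below is an arbitrary choice):

```python
import math

def sawtooth(x):
    """D(x) = {x} - 1/2 for non-integer x, and 0 at the integers."""
    if x == math.floor(x):
        return 0.0
    return (x - math.floor(x)) - 0.5

def sawtooth_series(x, N):
    """Partial sum -(1/pi) * sum_{n <= N} sin(2 pi n x)/n."""
    return -sum(math.sin(2 * math.pi * n * x) / n
                for n in range(1, N + 1)) / math.pi
```

At $x=1/4$ only the odd terms survive and the partial sums reduce to the Leibniz series for $\pi/4$, so the truncation error decays like $1/N$.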
\begin{thm} \label{thm1300} {\normalfont (\cite{DH37})} Let $a_n,n\geq 1,$ be a sequence of numbers, and let $A_n=\sum_{d|n}a_d$. If the series $\sum_{n\geq1} \frac{a_n}{n^s}$ is absolutely convergent for $\Re e(s)>1$, then, for any $x \in \mathbb{R}$,
\begin{equation}
\sum_{n\geq1} \frac{a_n}{n}D(nx) =-\frac{1}{\pi}\sum_{n\geq1} \frac{A_n}{n}\sin(2\pi nx).
\end{equation}
\end{thm}
This result was proved in \cite{DH37}, and improved in \cite{SL76}.\\
\begin{lem} \label{lem1300} Let $D(x)$ be the Fourier series of the function $\{x\}-1/2$. Then,
\begin{enumerate} [font=\normalfont, label=(\roman*)]
\item The alternating fractional function $D(nx)$ is not orthogonal (it is correlated) to the Liouville function $\lambda(n)$ for all $x\in \mathbb{R}\setminus\mathbb{Z}$.
\item The alternating fractional function $D(nx)$ is not orthogonal (it is correlated) to the Mobius function $\mu(n)$ for all $2x\ne m$ with $m \in \mathbb{N}$.
\end{enumerate}
\end{lem}
\begin{proof} (i) Let $a_n=\lambda(n)$, and let
\begin{equation}
A_n=\sum_{d|n}\lambda(d)=
\left \{
\begin{array}{ll}
1 &n=m^2,\\
0 &n\ne m^2.\\
\end{array}
\right.
\end{equation}
Then, the series $\sum_{n\geq1} \frac{a_n}{n^s}=\sum_{n\geq1} \frac{\lambda(n)}{n^s}$ is absolutely convergent for $\Re e(s)>1$. And the right side of the series
\begin{equation}
\sum_{n\geq1} \frac{\lambda(n)}{n}D(nx) =-\frac{1}{\pi}\sum_{n\geq1} \frac{A_n}{n}\sin(2\pi nx)=-\frac{1}{\pi}\sum_{n\geq1} \frac{1}{n^2}\sin(2\pi n^2x)
\end{equation}
converges to a nonzero value if and only if $x\ne 0$. Therefore, by Theorem \ref{thm1300}, the left side converges to the same nonzero number. Specifically,
\begin{equation}
\sum_{n\geq1} \frac{\lambda(n)}{n}D(nx) =c_0+O\left ( \frac{1}{\log x} \right ).
\end{equation}
This implies that the functions $\lambda(n)$
and $D(nx)$ are not orthogonal (but are correlated).
(ii) Let $a_n=\mu(n)$, and let
\begin{equation}
A_n=\sum_{d|n}\mu(d)=
\left \{
\begin{array}{ll}
1 &n=1,\\
0 &n\ne 1.\\
\end{array}
\right.
\end{equation}
Then, the series $\sum_{n\geq1} \frac{a_n}{n^s}=\sum_{n\geq1} \frac{\mu(n)}{n^s}$ is absolutely convergent for $\Re e(s)>1$. And the right side of the series
\begin{equation}
\sum_{n\geq1} \frac{\mu(n)}{n}D(nx)=-\frac{1}{\pi}\sum_{n\geq1} \frac{A_n}{n}\sin(2\pi nx) =-\frac{1}{\pi} \sin(2\pi x)
\end{equation}
is a nonzero value if and only if $2x\ne m$, where $m \in \mathbb{N}$. Therefore, by Theorem \ref{thm1300}, the left side converges to the same nonzero number. Specifically,
\begin{equation}
\sum_{n\geq1} \frac{\mu(n)}{n}D(nx) =c_1+O\left ( \frac{1}{\log x} \right ).
\end{equation}
This implies that the functions $\mu(n)$
and $D(nx)$ are not orthogonal (they are correlated).
\end{proof}
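The two divisor-sum identities used in the proof, $\sum_{d|n}\lambda(d)=1$ exactly when $n$ is a perfect square and $\sum_{d|n}\mu(d)=1$ exactly when $n=1$, can be verified numerically; a Python sketch:

```python
import math

def liouville(n):
    """lambda(n) = (-1)^Omega(n), counting prime factors with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    if n > 1:
        count += 1
    return (-1) ** count

def mobius(n):
    """Mobius function via trial-division factorization."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisor_sum(n, f):
    """sum_{d | n} f(d)."""
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)
```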
\begin{exa} \label{exa1300} {\normalfont The sequences $\{(n-2)/4 \text{ mod }4:n \geq 1\}$ and $\{\mu(n):n \geq 1\}$ are not orthogonal (are correlated):
\begin{equation}
\sum_{n\geq1} \frac{\mu(n)}{n}D(nx) =c_1+O\left ( \frac{1}{\log x} \right ).
\end{equation}
}
\end{exa}
The next series considered has an intrinsic link to the sequence of primes $p=n^2+1$. Some extra work is required to prove that the partial sum $\sum_{n\leq x} \Lambda(n^2+1)$ is unbounded as $x \to \infty$.
\begin{lem} \label{lem1305} Let $\Lambda(n)$ be the von Mangoldt function, and let $D(x)$ be the sawtooth function. Then, for any real number $x\in \R-\Z$,
\begin{equation}
\sum_{n\geq1} \frac{\Lambda(n^2+1)}{n^2}D(n^2x)=-\frac{1}{\pi}\sum_{n\geq1} \frac{\sum_{d^2 \mid n} \Lambda(d^2+1)}{n}\sin(2\pi nx)
\end{equation}
\end{lem}
\begin{proof} Let
\begin{equation}
a_n=\Lambda(n+1)\sum_{d|n}\lambda(d),
\end{equation} and let
\begin{equation}
A_n=\sum_{d \mid n}\Lambda(d+1)\sum_{e|d}\lambda(e)=
\sum_{d^2 \mid n} \Lambda(d^2+1).
\end{equation}
Then, the series
\begin{equation}
\sum_{n\geq1} \frac{a_n}{n^s}D(nx)=\sum_{n\geq1} \frac{\Lambda(n+1)\sum_{d|n}\lambda(d)}{n^s}D(nx)=\sum_{n\geq1} \frac{\Lambda(n^2+1)}{n^{2s}}D(n^2x)
\end{equation}
is absolutely convergent for $\Re e(s)>1/2$. And the right side of the series
\begin{equation}
-\frac{1}{\pi}\sum_{n\geq1} \frac{A_n}{n^s}\sin(2\pi nx)=-\frac{1}{\pi}\sum_{n\geq1} \frac{\sum_{d^2 \mid n} \Lambda(d^2+1)}{n^s}\sin(2\pi nx)
\end{equation}
converges to a nonzero value if and only if $x \in \R-\Z$. Therefore, by Theorem \ref{thm1300}, the left side converges to the same nonzero number.
\end{proof}
\newpage
\section{Introduction}
The world continues to struggle to contain and control the COVID-19 pandemic, applying measures such as lockdowns or remote work to limit the virus spread at the cost of straining economies. Although the situation has improved significantly since the declaration of the global pandemic by the World Health Organization~\citep{pak2020economic}, the number of infected people and the death toll continue to rise whenever a new COVID variant emerges~\citep{thakur2021omicron}. The number of infected patients has a direct impact on national healthcare systems and causes a swift drain of hospitals' human and material resources.
As an example, the shortage of hospital staff can lead to misdiagnosis and improper treatment of COVID-19 patients.
As stated in recent studies~\citep{dadson2020underlying, sullivan2022acute}, COVID-19 infection can cause serious health complications such as acute kidney injury (AKI), which can be fatal in some patients. Understanding these complications and acting preemptively during treatment can significantly increase the survival chance of a patient suffering from COVID-19~\citep{see2021risk}. Nevertheless, lack of resources makes taking swift actions even more difficult.
In this paper, a new machine learning framework is proposed to predict a patient's survival chance and the chance of developing kidney injury during hospitalization from clinical and biochemistry data in a transparent and systematic manner. The proposed COVID-Net Biochem method is an explainability-driven framework for building machine learning models for the aforementioned tasks that can be extended to other healthcare domains. The explainability insight derived from the model decision-making process provides a framework for auditing model decisions. This capability can be used in tandem to gain new powerful insights into potential clinical and biochemical markers that are relevant to the prediction outcome. As such, the proposed method can assist physicians in making the diagnosis process more effective and efficient by providing supplementary outcome predictions based on a large collection of clinical and biochemical markers as well as highlighting key markers relevant to the task.
The resulting output from the framework includes a diverse collection of machine learning models, including different gradient-boosted tree architectures and deep transformer architectures, designed specifically to predict survival chance and kidney injury, as well as the dominant clinical and biochemical markers leveraged throughout the decision-making process.
In this work, we propose COVID-Net Biochem, an explainability-driven framework for building machine learning models for patient survival prediction and prediction of acute kidney injury (AKI) during hospitalization in a transparent and systematic manner. The proposed two-phase framework leverages both clinician assessment and deep insights extracted via a quantitative explainability strategy not only to gain a deeper understanding of the decision-making process of the machine learning models and the impact of different clinical and biochemical markers on that process, but also to enable the creation of high-performance, trustworthy, clinically-sound machine learning models by guiding architecture design and training policies based on these extracted clinical and explainability-driven insights in an iterative manner.
\subsection*{Generalizable Insights about Machine Learning in the Context of Healthcare}
A key generalizable insight we wish to surface in this work is that, at the current state of machine learning in the context of healthcare, model design is largely a 'black box' process, and that strategies for transparent design are not only critical but very beneficial for building reliable, clinically relevant models in a trustworthy manner for widespread adoption in healthcare. More specifically, while significant advances have been made in machine learning, particularly with the introduction of deep learning, much of the design methodology leveraged in the field relies solely on a small set of performance metrics (e.g., accuracy, sensitivity, specificity, etc.) to evaluate and guide the design process of machine learning models. Such 'black box' design methodologies provide little insight into the decision-making process of the resulting machine learning models, and as such even the designers themselves have few means to guide their design decisions in a clear and transparent manner. This is particularly problematic given the mission-critical nature of clinical decision support in healthcare, and can lead to a significant lack of trust and understanding by clinicians in machine learning-driven clinical decision support solutions. Furthermore, the lack of interpretability or understanding of the decision-making process during design and during clinical use creates significant accountability and governance issues, particularly if decisions and recommendations made by machine learning models result in negative patient impact in some cases.
Motivated to tackle the challenges associated with 'black box' model design for clinical decision support, in this work we propose an explainability-driven development framework for machine learning models that can be extended to multiple healthcare domains, such as COVID-19 survival and acute kidney injury prediction. The framework provides a two-phase approach in which a diverse set of machine learning models are designed and trained on a curated dataset and then validated using both an automatic explainability technique to identify key features and manual clinician validation of the proposed highlighted features. The second phase consists of leveraging the explainability-driven insights to revise the data and the design of the models to ensure high detection performance from relevant clinical features. The resulting outputs of the development process are high-performing, transparent detection models that not only provide supplementary outcome predictions for clinicians but also quantitatively highlight important factors that could provide new insights beyond standard clinical practices.
\subsection{Related Works}
From the onset of this crisis, the push for improving effective screening methods has gained tremendous attention worldwide. Access to accurate and effective patient screening is important to provide immediate treatment and isolation precautions to contain the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus causing the COVID-19 pandemic.
Several research efforts have been introduced for utilizing deep learning models for screening of COVID-19 patients. Studies have shown that by exploiting deep learning, COVID-19 cases can be diagnosed based on the CXR image with an acceptable accuracy~\citep{Wong,Warren,Toussie,Huang,Guan,zhang2021diagnosis}, followed by works that utilized CT images to diagnose COVID-19 cases~\citep{silva2020covid, zhao2021deep, saood2021covid}. In addition, several techniques have been proposed to grade the severity of COVID-19 patients based on the medical imaging modality~\citep{shoaib2021covid, tang2021severity, qiblawey2021detection}.
While using computer-aided diagnostics for screening and medical imaging of COVID-19 patients has been very popular, little work has been done on using machine learning models to assess the survival chance and to predict the development of acute kidney injury (AKI) among COVID-19 patients. Furthermore, most of the algorithms proposed so far lack interpretability, which makes their real-world application and integration with clinicians questionable, as their decision-making process is not transparent.
Interpretable assessment tools are particularly important as they can help physicians in the diagnosis process and alert the medical team whether a certain complication has a high risk or not, with a quantitative understanding of the underlying decision-making process, increasing the preemptive measures that can be taken to reduce the associated risks. As a result, not only can the cost of treatment be reduced substantially, but the patients also have a higher chance of survival~\citep{hirsch2020acute}.
One of the studies most relevant to the proposed method is the approach introduced by Gladding {\it et al.}~\citep{gladding2021machine}, which utilizes a machine learning model to detect COVID-19 and other diseases from hematology data. Furthermore, Erdi {\it et al.}~\citep{ccalli2021deep} proposed a novel deep learning architecture for detection of COVID-19 based on laboratory results. While these works focus on detecting COVID-19 positive cases, our work focuses on determining the survival chance and the chance of developing AKI during hospitalization based on biochemical data, with the proposal of an end-to-end transparent model development framework that can be extended to other healthcare domains.
\section{Explainability-driven framework for building machine learning models for clinical decision support}
In this section, we describe in detail the proposed systematic framework for building high-performance, clinically-sound machine learning models from relevant clinical and biochemical markers in a transparent and trustworthy manner. The proposed COVID-Net Biochem framework comprises two main phases:
\textbf{Clinician-guided initial design phase:} The first phase starts with the preparation of a benchmark dataset from carefully selected clinical and biochemical markers based on clinical assessment curated for a patient cohort. While a plethora of clinical and biochemical markers may be collected for the patient cohort, only a selected number of markers are relevant for a given predictive task, while others may be not only irrelevant but misleading for the machine learning model when leveraged. Furthermore, certain clinical markers such as age and gender can lead to considerable bias in the resulting model. Therefore, in this phase, we remove clinically irrelevant markers through consultations with clinicians who have the domain knowledge of the task. Next, a collection of different machine learning models with a diversity of gradient-boosted tree architectures and deep transformer architectures is designed and trained on the constructed benchmark dataset.
\textbf{Explainability-driven design refinement phase:} The second phase starts with the quantitative explainability validation of model performance and behaviour to gain a deeper understanding of the decision-making process, as well as gaining quantitative insights into the impact of clinical and biochemical markers on the decision-making process and the identification of the key markers influencing the decision-making process. In this paper, we leverage a quantitative explainability technique called GSInquire to conduct this evaluation. Next, we analyze and interpret the decision-making process of the model through the identified relevant predictive markers and leverage the insights in an iterative design refinement manner to build progressively better and more clinically-relevant machine learning models. More specifically, if all of the clinical and biochemical markers identified by quantitative explainability are driving the decision-making process of a given model and these markers are verified to be clinically sound based on clinical assessment of the explainability results, the model is accepted as the final model; otherwise, we return to the first phase, where the irrelevant markers are discarded for that given model, and a new model architecture is trained and produced via hyperparameter optimization and again tested in phase 2. This iterative approach not only removes the influence of quantitatively and clinically irrelevant clinical and biochemical markers, but also removes markers that may dominate the decision-making process when they are insufficient for clinically sound decisions (e.g., the heart rate clinical marker may be clinically relevant but should not be solely leveraged for survival prediction from COVID-19 as a result of its general severity implication). This iterative process is continued until the model heavily leverages clinically sound clinical and biochemical markers to great effect and impact in its decision-making process.
Figure \ref{fig:meth} provides an overview of the complete iterative design process in the proposed framework. In the following sections, we show how we have applied this framework in our study to develop reliable models which leverage only relevant markers for decision making.
In this particular study, the clinician-guided initial design phase consists of constructing a new benchmark dataset of clinical and biochemistry data based on clinical feedback curated from a patient cohort of 1366 patients at Stony Brook University~\citep{saltz2021stony}. The collection of models we designed and trained on the constructed benchmark dataset is based on the following architecture design patterns: i) TabNet~\citep{arik2019tabnet}, ii) TabTransformer~\citep{huang2020tabtransformer}, iii) FTTransformer~\citep{gorishniy2021revisiting}, iv) XGBoost~\citep{chen2016xgboost}, v) LightGBM~\citep{ke2017lightgbm}, and vi) CatBoost~\citep{prokhorenkova2018catboost}. TabNet focuses on employing sequential attention to score features for decision making and make the model more interpretable compared to previously proposed deep learning models for tabular datasets \citep{arik2019tabnet}. TabTransformer and FTTransformer utilize a more recent transformer architecture designed to process tabular datasets. In practice, transformer models have shown higher performance on most well-known datasets~\citep{huang2020tabtransformer, gorishniy2021revisiting, vaswani2017attention}. The gradient boosting algorithms rely on creating and learning an ensemble of weak prediction models (decision trees) by minimizing an arbitrary differentiable loss function.
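The gradient boosting principle underlying XGBoost, LightGBM, and CatBoost can be illustrated with a minimal sketch of squared-loss boosting over depth-one trees (decision stumps); this is an illustration only, not the implementation used by these libraries, which add regularization, histogram-based split finding, categorical feature handling, and more. The sketch assumes a single numeric feature with at least two distinct values.

```python
# Minimal gradient boosting for squared loss with depth-1 trees (stumps).
class Stump:
    def fit(self, xs, residuals):
        best = None
        for t in sorted(set(xs))[:-1]:            # candidate split thresholds
            left = [r for x, r in zip(xs, residuals) if x <= t]
            right = [r for x, r in zip(xs, residuals) if x > t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, t, lm, rm)
        _, self.t, self.lm, self.rm = best

    def predict(self, x):
        return self.lm if x <= self.t else self.rm

def fit_boosted(xs, ys, rounds=60, lr=0.1):
    base = sum(ys) / len(ys)                      # initial constant model
    preds = [base] * len(ys)
    stumps = []
    for _ in range(rounds):
        # For squared loss, the negative gradient is just the residual.
        residuals = [y - p for y, p in zip(ys, preds)]
        s = Stump()
        s.fit(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s.predict(x) for x, p in zip(xs, preds)]
    return base, lr, stumps

def predict_boosted(model, x):
    base, lr, stumps = model
    return base + lr * sum(s.predict(x) for s in stumps)
```

Each round fits a weak learner to the current residuals and adds a damped copy of it to the ensemble, so the training error shrinks geometrically on simple data.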
In the explainability-driven design refinement phase for this particular study, we conduct quantitative explainability validation of model performance and behaviour by leveraging GSInquire~\citep{lin2019explanations}, a state-of-the-art explainability technique that has been shown to produce explanations that are significantly more reflective of the decision-making process when compared to other well-known explainability techniques in the literature. GSInquire enables the assignment of quantitative importance values to each clinical and biochemical marker representing its impact on the model prediction. Finally, clinical assessment of explainability-driven insights was conducted by a clinician with over eight years of experience.
\section{Data Preparation and Refinement}
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth]{figures/master_correlation.png}&
\includegraphics[width=0.6\textwidth]{figures/kidney_correlation.png}\\
(a) Survival Prediction & (b) AKI Prediction
\end{tabular}
\caption{Pearson correlation coefficients between key identified clinical and biochemical markers for survival and AKI prediction.}
\label{fig:corr_plot}
\end{figure*}
In this section we provide a comprehensive overview of the data preparation process in constructing a benchmark dataset for COVID-19 patient survival and AKI prediction in the clinician-guided initial design phase of the proposed framework, as well as the clinical and biochemical marker selection process based on explainability-driven insights in the explainability-driven design refinement phase. The proposed dataset is built by carefully selecting clinical and biochemical markers, based on clinical assessment, from a patient cohort curated by Stony Brook University~\citep{saltz2021stony}. More specifically, the clinical and biochemical markers were collected from a cohort of 1336 COVID-19-positive patients and consist of both categorical and numerical markers. They include patient diagnosis information, laboratory test results, intubation status, oral temperature, symptoms at admission, as well as a set of biochemical markers derived from blood work. Table~\ref{table:numeric} lists the numeric clinical and biochemical markers from the patient cohort and their associated dynamic ranges.
The categorical clinical markers consist of \textit{"gender"}, \textit{"last status"} (discharged or deceased), \textit{"age"}, \textit{"is icu"} (received ICU care or not), \textit{"was ventilated"} (received a ventilator or not), \textit{"AKI during hospitalization"} (True or False), \textit{"Type of Therapeutic received"}, \textit{"diarrhea"}, \textit{"vomiting symptom"}, \textit{"nausea symptom"}, \textit{"cough symptom"}, \textit{"antibiotic received"} (True or False), \textit{"other lung disease"}, \textit{"Urine protein symptom"}, \textit{"smoking status"}, and \textit{"abdominal pain symptom"}.
\textbf{Target value}: In this study, \textit{"last status"} is used as the target value for the task of predicting survival chance given the patient's symptoms and status. In addition, \textit{"AKI during hospitalization"} is the target for the task of predicting kidney injury development during hospitalization. Figures \ref{fig:dist}.a and \ref{fig:dist}.b demonstrate that the distributions of these two target values in the patient cohort are highly unbalanced.
\textbf{Missing Value and Input Transformation}: We found that using different types of input transformation does not substantially change the final results of our models. In this regard, we examined the MinMax scaler, the uniform quantile transformer, and the normal-distribution quantile transformer (all available in the scikit-learn preprocessing module~\citep{pedregosa2011scikit}); none of them provided better results.
On the other hand, the dataset had many missing values; to resolve this issue, any marker with more than $75\%$ missing values was removed from our study. For replacing the remaining missing values, we found that both the transformer models and the gradient boosting tree models are resilient against missing values, and replacing a missing value with a constant gives competitive results. In particular, we followed the same strategy introduced in TabTransformer~\citep{huang2020tabtransformer}, where a missing value is treated as an additional category.
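As an illustration of the two preprocessing choices above, the following Python sketch (hypothetical helper functions, not our actual pipeline code) shows min-max scaling of a numeric marker and the TabTransformer-style treatment of a missing categorical value as its own category:

```python
def min_max_scale(values):
    """Scale numeric values to [0, 1]; None entries stay missing."""
    present = [v for v in values if v is not None]
    lo, hi = min(present), max(present)
    span = (hi - lo) or 1.0  # guard against a constant column
    return [None if v is None else (v - lo) / span for v in values]

def encode_with_missing(values, missing_id=0):
    """Map categories to integer ids, reserving id 0 for missing values."""
    categories = sorted({v for v in values if v is not None})
    lookup = {c: i + 1 for i, c in enumerate(categories)}
    return [missing_id if v is None else lookup[v] for v in values]

# Oral temperature with one missing reading, and a categorical marker.
print(min_max_scale([36.6, None, 39.8, 34.0]))       # [~0.448, None, 1.0, 0.0]
print(encode_with_missing(["M", None, "F", "M"]))    # [2, 0, 1, 2]
```

The guard for a constant column and the reserved id 0 are design choices of this sketch, not details taken from the paper.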
\subsection{Clinically guided marker selection}
To create our benchmark dataset in the clinically guided initial design phase, we consulted a clinician with over eight years of experience and identified clinical markers that are clinically irrelevant and may result in biases being learnt by the machine learning models. More specifically, we excluded demographic markers that would induce bias in our training process. Given the highly imbalanced patient cohort, demographic markers such as \textit{"age"} or \textit{"gender"} can cause significant bias in the decision-making process of the trained machine learning models. As seen in Figure \ref{fig:dist}.c, the gender distribution is highly skewed and can lead to spurious correlations in the resulting machine learning model if used for training purposes. Finally, other confounding factors such as \textit{"heart rate"} and \textit{"invasive ventilation days"} were also removed after consulting with the clinician, as their impact on survival and AKI prediction was not directly clinically relevant.
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{ccc}
\includegraphics[width=0.4\textwidth]{figures/last_status.png}&
\includegraphics[width=0.4\textwidth]{figures/aki.png}&
\includegraphics[width=0.4\textwidth]{figures/gender.png}\\
(a) \textit{"last status"} distribution& (b) \textit{"Acute Kidney Injury (AKI)"} distribution & (c) \textit{"gender"} distribution
\end{tabular}
\caption{Distribution of \textit{"last status"}, \textit{"AKI" }, and \textit{"gender"} clinical markers}
\label{fig:dist}
\end{figure*}
\subsection{Explainability-driven clinical and biochemical marker refinement}
In the explainability-driven design refinement phase, we leverage quantitative explainability to analyse the decision-making processes of the individual trained models within the collection of initial model designs, and identify the most quantitatively important clinical and biochemical markers for each of the models using the GSInquire quantitative explainability method~\cite{gsinquire}. After identifying the markers most important to the decision-making processes of individual models, we presented these explainability results to the clinician, not only to gain valuable clinical insight into the clinical soundness of the machine learning models but also to identify the non-relevant markers the models rely on, so that they could be excluded in the next round of model design refinement and training. As an example, after conducting explainability-driven assessment on the machine learning models with LightGBM and CatBoost architectures, we observed that the clinical marker \textit{"Length of Stay"} had the highest quantitative impact on the decision-making process of said models for AKI prediction (see Figure \ref{fig:kidney_rem_cols}). After clinical consultation on this explainability-driven insight, we found that this clinical marker has little clinical value in determining the likelihood of AKI. As a result, in the next phase of model design and training, the \textit{"Length of Stay"} marker was excluded. This process continued until only the relevant markers for our prediction tasks were utilized by the individual models. It is important to note that explainability-driven assessment was conducted on each model independently, so each model is uniquely tailored around the clinical and biochemical markers that benefit it most from the set of possible markers. The final result is shown in Figure \ref{fig:kidney_xai}, which shows that the models do not depend on irrelevant markers.
These figures are discussed in more detail in the Explainability section.
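The refinement loop described above can be sketched as a simple filtering step (illustrative Python; the GSInquire scoring and the clinician review themselves are not reproduced here, and the scores below are hypothetical):

```python
def refine_markers(impact_scores, clinically_irrelevant):
    """Drop markers the clinician flagged as irrelevant, then re-rank the
    remainder by quantitative impact (highest first) for the next round."""
    kept = {m: s for m, s in impact_scores.items()
            if m not in clinically_irrelevant}
    return sorted(kept, key=kept.get, reverse=True)

# Hypothetical impact scores from one AKI model; "Length of Stay" is the
# marker the clinician asked to exclude in the example above.
scores = {"Length of Stay": 0.41, "Creatinine": 0.22, "Therapeutic Heparin": 0.18}
print(refine_markers(scores, {"Length of Stay"}))
# ['Creatinine', 'Therapeutic Heparin']
```

In the actual framework this filter is followed by retraining and a fresh explainability audit, repeated until no flagged marker remains.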
\vspace{2cm}
\begin{figure}
\hspace{-2.5cm}
\includegraphics[width=1.3\textwidth]{figures/fig.png}
\caption{Overview of the proposed explainability-driven framework for building machine learning models for clinical decision support.}
\label{fig:meth}
\end{figure}
Finally, to better show the correlation between clinical and biochemical markers, Figure~\ref{fig:corr_plot} shows the Pearson correlation of the top ten markers with the AKI (acute kidney injury during hospitalization) and \textit{"last status"} target markers. As seen, for the target marker \textit{"last status"}, AKI has the highest correlation; for the target marker AKI, \textit{"Urine Protein"}, \textit{"Therapeutic Heparin"}, \textit{"Fibrin D Dimer"}, \textit{"Creatinine"}, and \textit{"Glomerular Filtration"} have the highest correlation values. It is worth noting that, as discussed in the Explainability section, our trained models actually utilize these markers for decision making.
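For reference, the Pearson coefficients plotted in Figure~\ref{fig:corr_plot} follow the standard definition; a minimal Python sketch (illustrative, with toy data rather than the cohort's markers) is:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related series correlate at exactly +1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

For categorical markers such as \textit{"last status"}, the binary labels are encoded as 0/1 before computing the coefficient.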
\begin{table}
\vspace{1cm}
\centering
\begin{tabular}{ |p{8cm}|p{3cm}|p{3cm}| }
\hline
\hline
Clinical/Biochemical Markers (Numeric)& Minimum Value & Maximum Value \\
\hline
Invasive Ventilation Days & 0 & 40 \\
Length of Stay & 1 & 96 \\
Oral Temperature &34 & 39.8 \\
Oxygen saturation in Arterial blood by Pulse &55 & 100 \\
Respiratory Rate & 11.0 & 95 \\
Heart Rate Beat by EKG & 6 & 245 \\
Systolic Blood Pressure & 55 & 222 \\
Mean Blood Pressure by Non Invasive & 40 & 168 \\
Neutrophils in Blood by Automated count & 0.36 & 100 \\
Lymphocytes in Blood by Automated count & 0.36 & 100 \\
Sodium [Moles/volume] in Serum or Plasma & 100 & 169 \\
Aspartate aminotransferase in Serum or Plasma & 8 & 2786 \\
Alanine aminotransferase in Serum or Plasma & 8 & 2909 \\
Creatine kinase in Serum or Plasma & 11 & 6139 \\
Lactate in Serum or Plasma & 5 & 23.8 \\
Troponin T.cardiac in Serum or Plasma & 0.01 & 1.81 \\
Natriuretic peptide.B prohormone N-Terminal in Serum or Plasma & 5 & 267600 \\
Procalcitonin in Serum or Plasma Immunoassay & 0.02 & 193.5 \\
Fibrin D-dimer DDU in Platelet poor plasma & 150 & 63670\\
Ferritin [Mass/volume] in Serum or Plasma & 5.3 & 16291\\
Hemoglobin A1c in Blood & 4.2 & 17\\
BMI Ratio & 11.95 & 92.8\\
Potassium [Moles/volume] in Serum or Plasma & 2 & 7.7\\
Chloride [Moles/volume] in Serum or Plasma& 60 & 134\\
Bicarbonate [Moles/volume] in Serum & 6 & 43\\
Glomerular filtration rate & 2 & 120\\
Erythrocyte sedimentation rate & 5 & 145\\
Cholesterol in LDL in Serum or Plasma & 12 & 399\\
Cholesterol in VLDL [Mass/volume] in Serum & 8 & 79\\
Triglyceride & 10 & 3524\\
HDL & 10 & 98\\
\hline
\end{tabular}
\caption{Example numerical clinical and biochemical markers collected from the patient cohort}
\label{table:numeric}
\end{table}
\section{Experiment}
In this section, we describe the experimental results and training procedure for the different machine learning models created using the proposed framework for the purpose of predicting COVID-19 patient survival and predicting the development of AKI (Acute Kidney Injury) in COVID-19 patients during hospitalization. As mentioned earlier, we designed six different machine learning models for the aforementioned prediction tasks using the following architecture design patterns: TabTransformer, TabNet, FTTransformer, XGBoost, LightGBM, and CatBoost. Our training procedure is guided not only by accuracy, precision, and recall but also by the identified explainability results. In the next section we provide explainability results on the models' decision processes.
\subsection{Survival prediction}
We set \textit{last status}, which has a binary value of deceased or discharged, as our target for this task.
For the training, as briefly discussed in the previous section, we constantly monitored the decision-making process of the model using GSInquire to make sure the model chooses a relevant set of features for prediction.
We used 20\% of the dataset as the test set and another 5\% as the validation set. For TabTransformer, TabNet, and FTTransformer we performed a grid search to find the best hyperparameters. In this regard, we set the batch size to 256 and the learning rate to 0.00015, and we ran the models for 150 epochs with early stopping on the validation set. We used the Adam optimizer for all tasks. The training procedure was done in parallel with obtaining explainability results for the models; in this regard, we discarded the features \textit{"heart rate"}, \textit{"length of stay"}, and \textit{"invasive ventilation days"}, as the models tended to rely heavily on these less relevant factors for decision making.
For the gradient boosting models XGBoost, CatBoost, and LightGBM, we used the default settings except for the learning rate. A learning rate of 0.35 gave the highest accuracy for CatBoost; for XGBoost and LightGBM, we set the learning rate to 0.3 and 0.1, respectively.
The results for the models are reported in Table \ref{table:survival}, and Table \ref{tab:survival2} shows the confusion matrices for XGBoost and TabTransformer. As can be seen, XGBoost had the best performance, achieving an accuracy of 98.1\% on the test set. Among the deep learning models, TabTransformer had the best performance with an accuracy of 95.9\%. Both TabTransformer and XGBoost achieved above 96\% recall and precision.
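As a sanity check, the headline metrics can be recomputed directly from the confusion-matrix counts in Table \ref{tab:survival2}. The short Python sketch below does so for the XGBoost survival model, treating discharged as the positive class (illustrative only, not part of our training code):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# XGBoost survival matrix: 237 discharged correctly predicted, 5 deceased
# predicted as discharged, 31 deceased correctly identified.
acc, prec, rec, f1 = metrics(tp=237, fp=5, fn=0, tn=31)
print(round(acc * 100, 1), round(prec * 100, 1), round(rec * 100, 1))
```

Up to rounding, these counts reproduce the reported figures: accuracy $268/273 \approx 98\%$, precision $237/242 \approx 97.9\%$, and recall $237/237 = 100\%$.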
\subsection{AKI prediction}
We set \textit{Acute kidney injury during hospitalization}, which has a binary value of True or False, as our target for this task.
The training procedure for this task was very similar to that of the survival task, with almost the same hyperparameters, except that here we also removed the \textit{last status} marker from the model inputs as it is not a relevant clinical marker for this prediction.
The results for the models are reported in Table \ref{table:kidney}, and Table \ref{tab:kidney2} shows the confusion matrices for LightGBM and TabTransformer. As can be seen, LightGBM had the best performance, achieving an accuracy of 96.7\% on the test set. Among the deep learning models, TabTransformer had the best performance with an accuracy of 91.9\%.
\begin{table}
\vspace{1cm}
\centering
\begin{tabular}{ |p{3cm}|p{3cm}|p{3cm}|p{3cm}|p{3cm}| }
\hline
\multicolumn{5}{|c|}{Survival Prediction} \\
\hline
Model & Accuracy & Precision & Recall & F1 Score \\
\hline
FTTransformer & 89.7\% & 92.6\% & 95.7\% &0.89\\
TabTransformer & 95.9\% & 96.6\% & 98.7\% &0.95\\
TabNet & 80.9\% & 87.1\% & 91.5\% &0.80\\
XGBoost & \textbf{98.1}\% & \textbf{97.9}\% & \textbf{100}\%& \textbf{0.98}\\
LightGBM & 97.0\% & 97.9\% & 98.7\% &0.97\\
CatBoost & 97.4\% & \textbf{97.9}\% & 99.1\% & 0.97\\
\hline
\end{tabular}
\caption{Accuracy, precision, recall, and F1 score of tested models for Survival Prediction}
\label{table:survival}
\end{table}
\begin{table}
\parbox{.45\linewidth}{
\centering\begin{tabular}{ |p{2cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Confusion Matrix XGBoost} \\
\hline
Class & Deceased & Discharged \\
\hline
Deceased & 31 & 5 \\
Discharged & 0 & 237 \\
\hline
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering\begin{tabular}{ |p{2cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Confusion Matrix TabTransformer} \\
\hline
Class & Deceased & Discharged \\
\hline
Deceased & 28 & 8 \\
Discharged & 3 & 234 \\
\hline
\end{tabular}}
\caption{Confusion matrices for XGBoost and TabTransformer for Survival Prediction}
\label{tab:survival2}
\end{table}
\begin{table}
\vspace{1cm}
\centering
\begin{tabular}{ |p{3cm}|p{3cm}|p{3cm}|p{3cm}|p{3cm}| }
\hline
\multicolumn{5}{|c|}{AKI Prediction} \\
\hline
Model & Accuracy & Precision & Recall & F1 Score \\
\hline
FTTransformer & 82.0\% & 51.5\% & 34.0\% &0.82\\
TabTransformer & 91.9\% & 88.8\% & 64.0\% &0.91\\
TabNet & 80.9\% & 0.4\% & 0.08\% &0.80\\
XGBoost & 95.6\% & 97.5\% & 78.0\% &0.95\\
LightGBM & \textbf{96.7}\% & \textbf{97.6}\% & \textbf{84.0}\% &\textbf{0.96}\\
CatBoost & 96.3\% & \textbf{97.6}\% & 82.0\% & \textbf{0.96}\\
\hline
\end{tabular}
\caption{Accuracy, precision, recall, and F1 score of tested models for AKI Prediction}
\label{table:kidney}
\end{table}
\begin{table}
\parbox{.45\linewidth}{
\centering\begin{tabular}{ |p{2cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Confusion Matrix LightGBM} \\
\hline
Class & False & True \\
\hline
False & 222 & 1 \\
True & 8 & 42 \\
\hline
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering\begin{tabular}{ |p{2cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Confusion Matrix TabTransformer} \\
\hline
Class & False & True \\
\hline
False & 219 & 4 \\
True & 18 & 32 \\
\hline
\end{tabular}}
\caption{Confusion matrices for LightGBM and TabTransformer for AKI Prediction}
\label{tab:kidney2}
\end{table}
\textbf{The benchmark dataset created in this study and the link to the code are available \href{https://github.com/h-aboutalebi/CovidBiochem}{here}.}
\section{Explainability}
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{ccc}
\includegraphics[width=0.41\textwidth, height=4cm]{figures_xai/master_tabtransformer.png}&
\includegraphics[width=0.41\textwidth, height=4cm]{figures_xai/master_lightgbm.png}&
\includegraphics[width=0.41\textwidth, height=4cm]{figures_xai/master_catboost.png}\\
(a) TabTransformer & (b) LightGBM & (c) CatBoost
\end{tabular}
\caption{Top 10 markers identified through explainability-performance validation for TabTransformer, LightGBM, and CatBoost survival prediction models.}
\label{fig:master_xai}
\end{figure*}
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{ccc}
\includegraphics[width=0.43\textwidth, height=4cm]{figures_xai/kidney_tabtransformer.png}&
\includegraphics[width=0.41\textwidth, height=4cm]{figures_xai/kidney_lightgbm.png}&
\includegraphics[width=0.41\textwidth, height=4cm]{figures_xai/kidney_catboost.png}\\
(a) TabTransformer & (b) LightGBM & (c) CatBoost
\end{tabular}
\caption{Top 10 markers identified through explainability-performance validation for TabTransformer, LightGBM, and CatBoost models leveraged for AKI prediction.}
\label{fig:kidney_xai}
\end{figure*}
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth]{figures_xai/top_10_master_avg.png}&
\includegraphics[width=0.6\textwidth]{figures_xai/top_10_kidney_avg.png}\\
(a) Survival Prediction & (b) AKI Prediction
\end{tabular}
\caption{Top 10 most predictive clinical and biochemical markers averaged across models for COVID-19 patient survival and AKI prediction.}
\label{fig:summary_xai}
\end{figure*}
\begin{figure*}
\centering
\hspace*{-2cm}
\setlength{\tabcolsep}{0.01cm}
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth]{figures_xai/kidney_lightgbm_rem_cols.png}&
\includegraphics[width=0.6\textwidth]{figures_xai/kidney_catboost_rem_cols.png}\\
(a) LightGBM & (b) CatBoost
\end{tabular}
\caption{Top 10 clinical and biochemical markers identified through explainability-performance validation for LightGBM and CatBoost models for AKI prediction with the inclusion of the length of stay parameter in available patient data.}
\label{fig:kidney_rem_cols}
\end{figure*}
As explained earlier, the trained models from phase one of the development framework were audited via explainability-driven performance validation to gain insights into their decision-making processes that would inform the design modifications in phase two of the process. We leveraged GSInquire~\citep{gsinquire} to provide quantitative explainability of input clinical and biochemical markers. More specifically, GSInquire provides impact scores for each marker based on its influence on the outcome prediction through an inquisitor $\mathcal{I}$ within a generator-inquisitor pair $\left\{\mathcal{G},\mathcal{I}\right\}$. These actionable insights were then further validated by a clinician to ensure their clinical relevance, and were later employed by the framework to make design revisions to the models accordingly.
Figures \ref{fig:master_xai} and \ref{fig:kidney_xai} show the 10 most impactful clinical and biochemical markers relevant to COVID-19 survival and AKI prediction, respectively, for the highest performing models of TabTransformer, LightGBM, and CatBoost. Figure \ref{fig:summary_xai} provides a summary of the high-impact markers across all models by averaging their impact scores and reporting the top 10 highest positive predictive parameters. For COVID-19 patient survival prediction, the marker indicating whether a patient experienced acute kidney injury during hospitalization has the highest impact on model predictions, which is aligned with our clinician's assessment. In this regard, we observed in Figure \ref{fig:corr_plot} that there is a direct correlation between survival and acute kidney injury. It is also interesting to see in Figure \ref{fig:master_xai} that while the two gradient boosting trees share the same highest-impact marker (AKI), TabTransformer instead relies on two other markers, B and Fibrin D Dimer, which are also relevant for survival prediction. In other words, as we change the type of learning model from gradient boosting trees to deep neural networks, the model considers a different set of relevant clinical and biochemical markers. The same happens in Figure \ref{fig:kidney_xai} for acute kidney injury prediction: while both gradient boosting trees consider Therapeutic Heparin and Creatinine to determine the chance of developing AKI, TabTransformer considers a different set of relevant markers, such as Ferritin, for decision making.
Finally, it is worth mentioning that our clinician found Figure \ref{fig:summary_xai}, which represents the main markers used by all models on average, particularly interesting. Most of the biochemical and clinical markers in this figure, including Creatinine and Therapeutic Heparin, are considered among the most relevant markers for determining a patient's survival chance and the likelihood of AKI.
\section{Discussion}
In this work we presented an explainability-driven framework for building transparent machine learning models that leverage only clinically relevant markers for prediction. As a proof of concept, we applied this framework to predicting survival and kidney injury during hospitalization of COVID-19 patients, such that only clinically relevant clinical and biochemical markers are leveraged in the decision-making process of the models and the decisions made are clinically sound. Experimental results show that the constructed machine learning models were able to achieve high predictive performance while considering clinically sound clinical and biochemical markers in their decision-making processes. In this regard, we provided a comprehensive examination of the constructed machine learning models' accuracy, recall, precision, F1 score, and confusion matrices on the benchmark dataset. Furthermore, we interpreted the decision-making process of the models using quantitative explainability via GSInquire. Finally, we showed that the models use acute kidney injury as the main factor in determining the survival chance of a COVID-19 patient, and leverage the Creatinine biochemical marker as the main factor in determining the chance of developing kidney injury, which is consistent with clinical interpretation.
While our findings show a path toward building better explainable models for healthcare problems, more experiments need to be carried out to confirm the results we have obtained in this work.
\subsubsection{Performance Anomaly}
\input{anomaly-RT}
\subsubsection{Reliability Anomaly}
\input{anomaly-EC}
\subsubsection{Traffic Anomaly}
\input{anomaly-QPS}
\subsection{Target System}
The e-commerce system of Alibaba has more than 846 million monthly active users.
It adopts the microservice architecture and contains more than 30,000 services.
These microservices are deployed with containers (i.e., Docker~\cite{docker}) and virtual machines and orchestrated by the customized orchestrator based on Kubernetes~\cite{kubernetes}, with Service Mesh~\cite{istio} applied for service communications.
The system is equipped with a large-scale monitoring infrastructure called EagleEye~\cite{EagleEye}.
EagleEye includes a tracing SDK, a real-time ETL (Extract-Transform-Load) data processing system, a batch processing computing cluster, and a web-based user interface.
As the carrier of the business, the system needs to ensure high availability.
Therefore, a business monitoring platform is deployed to raise timely alarms about availability issues.
These availability issues usually indicate problems with the running of the business, for example a drop in the number of successfully placed orders or in the success rate of transactions.
An availability issue can be caused by different types of anomalies, each of which is indicated by a set of metrics.
An anomaly can originate from a service and propagate along service calls, and finally cause an availability issue.
In this work, we focus on the following three types of anomalies that cause most of the availability issues in Alibaba.
\begin{itemize}
\item \textbf{Performance Anomaly}.
Performance anomaly is indicated by anomalous increase of response time (RT).
It is usually caused by problematic implementation or improper environmental configurations (e.g., CPU/memory configurations of containers and virtual machines).
\item \textbf{Reliability Anomaly}.
Reliability anomaly is indicated by anomalous increase of error counts (EC), i.e., the numbers of service call failures.
It is usually caused by exceptions due to code defects or environmental failures (e.g., server or network outage).
\item \textbf{Traffic Anomaly}.
Traffic anomaly is indicated by anomalous increase or decrease of queries per second (QPS).
Anomalous traffic increase may break the services, while anomalous decrease may indicate that many requests cannot reach the services.
Traffic anomaly is usually caused by improper traffic configurations (e.g., the traffic limits of Nginx~\cite{nginx}), DoS attack, or unanticipated stress test.
\end{itemize}
In anomaly detection, the 3-sigma rule is often used to identify outliers of metric values as anomaly candidates.
The 3-sigma rule states that in a normal distribution almost all the values remain within three standard deviations of the mean.
The range can be represented as ($\mu- 3\sigma$, $\mu + 3\sigma$), where $\mu$ is the mean and $\sigma$ is the standard deviation.
The values within the range account for 99.73\% of all the values and the others can be regarded as outliers.
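As a concrete illustration of the rule, the following Python sketch (hypothetical, not the production detection code) flags metric values outside the $(\mu-3\sigma, \mu+3\sigma)$ range as anomaly candidates:

```python
from math import sqrt

def three_sigma_outliers(values):
    """Return values outside (mu - 3*sigma, mu + 3*sigma), using the
    population standard deviation of the given series."""
    n = len(values)
    mu = sum(values) / n
    sigma = sqrt(sum((v - mu) ** 2 for v in values) / n)
    return [v for v in values if abs(v - mu) > 3 * sigma]

# A stable response-time series (ms) with a single spike: only the
# spike falls outside the 3-sigma band and is flagged.
rt = [20.0] * 20 + [500.0]
print(three_sigma_outliers(rt))  # [500.0]
```

Note that a large outlier inflates both the mean and the standard deviation, so in practice the band is usually estimated from a recent window of normal-period data rather than from the series being tested.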
\subsection{Experimental Setup}
\input{setup}
\subsection{Localization Accuracy (RQ1)}
\input{rq1}
\subsection{Localization Efficiency (RQ2)}
\input{rq2}
\subsection{Effect of Pruning (RQ3)}
\input{rq3}
\subsection{Threats to Validity}
\input{threat}
\subsection{Analysis Process}\label{sec:process}
\input{process}
\subsection{Service Anomaly Detection}\label{sec:anomaly}
\input{anomaly}
\subsection{Pruning Strategy}\label{sec:pruning}
\input{pruning}
\section{Introduction}
\input{introduction}
\section{Background}
\input{background}
\section{\app~Overview}\label{sec:overview}
\input{overview}
\section{Anomaly Propagation Chain Analysis}\label{sec:propagation}
\input{propagation}
\section{Experimental Study}
\input{experiment}
\section{Practical Application}
\input{application}
\section{Related Work}
\input{related}
\section{Conclusion}
\input{conclusion}
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $\ensuremath{\bold{k}}$ denote a base field, algebraically closed of characteristic $p>0$. In \cite{Hen}, all graded cocommutative connected Hopf algebras of dimension less than or equal to $p^3$ are classified by using W.M.~Singer's theory of extensions of connected Hopf algebras \cite{WMS}. In this paper, we classify all connected Hopf algebras of dimension $p^2$ over $\ensuremath{\bold{k}}$. We use the theories of restricted Lie algebras and Hochschild cohomology of coalgebras for restricted universal enveloping algebras.
Let $H$ denote a finite-dimensional connected Hopf algebra in the sense of \cite[Def. 5.1.5]{MO93} with primitive space $\ensuremath{\text{\upshape P}}(H)$, and $K$ be a Hopf subalgebra of $H$. In Section 2, basic definitions related to and properties of $H$ are briefly reviewed. In particular, we describe a few concepts concerning the inclusion $K\subseteq H$. We say that the \emph{$p$-index} of $K$ in $H$ is $n-m$ if $\dim K=p^m$ and $\dim H=p^n$. The notion of the \emph{first order} of the inclusion and a \emph{level-one} inclusion are also given in Definition \ref{BDCHA}.
In Section 3, the algebra structure of a finite-dimensional connected coradically graded Hopf algebra is obtained (Theorem \ref{generalgrc}) based on a result for algebras representing finite connected group schemes over $\ensuremath{\bold{k}}$. It implies that the associated graded Hopf algebra $\ensuremath{\text{\upshape gr}} H$ is isomorphic, as an algebra, to \[\ensuremath{\bold{k}}\left[x_1,x_2,\cdots,x_d\right]/\left(x_1^p,x_2^p,\cdots,x_d^p\right)\] for some $d\ge0$.
Section 4 concerns a simple case when $H$ is generated by $K$ and another element $x$. Suppose the $p$-index of $K$ in $H$ is $d$. Under an additional assumption, a basis of $H$ as a left $K$-module is given in terms of the powers of $x$ (Theorem \ref{FREENESS}). Moreover, if $K$ is normal in $H$ \cite[Def. 3.4.1]{MO93}, then $x$ satisfies a polynomial equation as follows:
\begin{align*}
x^{p^d}+\sum_{i=0}^{d-1}a_ix^{p^i}+b=0
\end{align*}
for some $a_i\in \ensuremath{\bold{k}}$ and $b\in K$.
Section 5 deals with the special case when $H$ is cocommutative. It is proved in Proposition \ref{cocommfiltration} that such a Hopf algebra $H$ is equipped with a series of normal Hopf subalgebras $\ensuremath{\bold{k}}=N_0\subset N_1\subset N_2\subset \cdots \subset N_n=H$ satisfying certain properties. Applying these properties to the case when $\ensuremath{\text{\upshape P}}(H)$ is one-dimensional, we obtain that $N_1$ is generated by $\ensuremath{\text{\upshape P}}(H)$ and each $N_i$ has $p$-index one in $N_{i+1}$ (Corollary \ref{FCLH}). In Theorem \ref{NPLA}, we give a locality criterion for $H$ in terms of its primitive elements. This result, after dualization, is equivalent to a criterion for unipotency of finite connected group schemes over $\ensuremath{\bold{k}}$, as shown in Remark \ref{GSL}.
In Section 6, we take the Hopf subalgebra $K=u\left(\mathfrak g\right)$, the restricted universal enveloping algebra of some finite-dimensional restricted Lie algebra $\mathfrak g$. We consider the Hochschild cohomology of the coalgebra $K$ with coefficients in the trivial bicomodule $\ensuremath{\bold{k}}$, namely $\ensuremath{\text{\upshape H}}^\bullet(\ensuremath{\bold{k}},K)$. This Hochschild cohomology can be computed as the homology of the cobar construction of $K$. In Proposition \ref{Liealgebrainclusion}, we give a specific basis for $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)$. We further show, in Lemma \ref{chainmap}, that $\bigoplus_{n\ge 0} \ensuremath{\text{\upshape H}}^n(\ensuremath{\bold{k}} ,K)$ is a graded restricted $\mathfrak g$-module via the adjoint map. When the inclusion $K\subseteq H$ has first order $n\ge 2$, the differential $d^1$ in the cobar construction of $H$ induces a restricted $\mathfrak g$-module map from $H_n$ into $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}}, K)$, whose kernel is $K_n$ (Theorem \ref{Cohomologylemma}). As concluded in Theorem \ref{HCT}, if $K\neq H$, we can find some $x\in H\setminus K$ with the following comultiplication
\[
\Delta(x)=x\otimes 1+1\otimes x+\omega\left(\sum_i\alpha_ix_i\right)+\sum_{j<k}\alpha_{jk}x_j\otimes x_k
\]
where $\{x_i\}$ is a basis for $\mathfrak g$.
Finally, the classification of connected Hopf algebras of dimension $p^2$ over $\ensuremath{\bold{k}}$ is accomplished in Section 7. Assume $\dim H=p^2$. We apply results on $H$ from the previous sections, i.e., Corollary \ref{FCLH} and Theorem \ref{HCT}. The main result is stated in Theorem \ref{D2} and divided into two cases. When $\dim \ensuremath{\text{\upshape P}}(H)=2$, based on the classification of two-dimensional Lie algebras with restricted maps (see Appendix A), there are five non-isomorphic classes
\begin{itemize}
\item[(1)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p,y^p\right)$,
\item[(2)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-x,y^p\right)$,
\item[(3)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-y,y^p\right)$,
\item[(4)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-x,y^p-y\right)$,
\item[(5)] $\ensuremath{\bold{k}}\langle x,y\rangle/\left([x,y]-y,x^p-x,y^p\right)$,
\end{itemize}
where $x,y$ are primitive. When $\dim\ensuremath{\text{\upshape P}}(H)=1$, $H$ must be commutative and there are three non-isomorphic classes
\begin{itemize}
\item[(6)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p)$,
\item[(7)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p-x)$,
\item[(8)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p-x,y^p-y)$,
\end{itemize}
where $\Delta\left(x\right)=x\otimes 1+1\otimes x$ and $\Delta\left(y\right)=y\otimes 1+1\otimes y+\omega(x)$.
Moreover, all local Hopf algebras of dimension $p^2$ over $\ensuremath{\bold{k}}$ are classified by duality; see Corollary \ref{localp2}.
\section{Preliminaries}
Throughout this paper, $\ensuremath{\bold{k}}$ denotes a base field, algebraically closed of characteristic $p>0$. All vector spaces, algebras, coalgebras, and tensor products are taken over $\ensuremath{\bold{k}}$ unless otherwise stated. Also, $V^*$ denotes the vector space dual of any vector space $V$.
For any coalgebra $C$, the \bf{coradical}\rm\ $C_0$ is defined to be the sum of all simple subcoalgebras of $C$. Following \cite[5.2.1]{MO93}, $\{C_n\}_{n=0}^\infty$ is used to denote the \bf{coradical filtration}\rm\ of $C$. If $C_0$ is one-dimensional, $C$ is called \textbf{connected}. If every simple subcoalgebra of $C$ is one-dimensional, $C$ is called \textbf{pointed}. Let $(C,\Delta,\ensuremath{\varepsilon})$ be a pointed coalgebra, and $\left(M,\rho_l,\rho_r\right)$ be a $C$-bicomodule via the structure maps $\rho_l: M\to C\otimes M$ and $\rho_r: M\to M\otimes C$. We denote the identity map of $C^{\otimes n}$ by $I_n$ and $C^{\otimes 0}=\ensuremath{\bold{k}}$. The \textbf{Hochschild cohomology} $\ensuremath{\text{\upshape H}}^\bullet\left(M,C\right)$ of $C$ with coefficients in $M$ is defined by the homology of the complex $\left(\mathbb C^n(M,C),d^n\right)$, where $\mathbb C^n(M,C)=\ensuremath{\text{\upshape Hom}}_\ensuremath{\bold{k}}\left(M,C^{\otimes n}\right)$ and
\begin{align*}
d^n(f)=(I\otimes f)\rho_l-(\Delta\otimes I_{n-1})f+\cdots+(-1)^n(I_{n-1}\otimes \Delta)f+(-1)^{n+1}(f\otimes I)\rho_r.
\end{align*}
For any Hopf algebra $H$, we use $\ensuremath{\text{\upshape P}}(H)$ to indicate the subspace of primitive elements. Following the terminology in \cite[Def. 1.13]{Andruskiewitsch02pointedhopf}, we recall the definition of graded Hopf algebras.
\begin{definition}
Let $H$ be a Hopf algebra with antipode $S$. If
\begin{itemize}
\item[(1)] $H=\bigoplus_{n=0}^{\infty}H(n)$ is a graded algebra,
\item[(2)] $H=\bigoplus_{n=0}^{\infty}H(n)$ is a graded coalgebra,
\item[(3)] $S(H(n))\subseteq H(n)$ for any $n\ge 0$,
\end{itemize}
then $H$ is called a \bf{graded Hopf algebra}\rm.\ If in addition,
\begin{itemize}
\item[(4)] $H=\bigoplus_{n=0}^{\infty} H(n)$ is a coradically graded coalgebra,
\end{itemize}
then $H$ is called a \bf{coradically graded Hopf algebra}\rm. Also, the \bf{associated graded Hopf algebra}\rm\ of $H$ is defined by $\ensuremath{\text{\upshape gr}} H=\bigoplus_{n\ge 0} H_n/H_{n-1}$ ($H_{-1}=0$) with respect to its coradical filtration.
\end{definition}
There are some basic properties of finite-dimensional Hopf algebras, which we use frequently.
\begin{proposition}\label{BPH}
Let $H$ be a finite-dimensional Hopf algebra.
\begin{itemize}
\item[(1)] $H$ is local if and only if $H^*$ is connected.
\item[(2)] If $H$ is local, then any quotient or Hopf subalgebra of $H$ is local.
\end{itemize}
Furthermore assume that $H$ is connected. Denote by $u\left(\ensuremath{\text{\upshape P}}(H)\right)$ the restricted universal enveloping algebra of $\ensuremath{\text{\upshape P}}(H)$.
\begin{itemize}
\item[(3)] Any quotient or Hopf subalgebra of $H$ is connected.
\item[(4)] $\dim \ensuremath{\text{\upshape P}}(H)=\dim J/J^2$, where $J$ is the Jacobson radical of $H^*$.
\item[(5)] $H$ is primitively generated if and only if $H\cong u\left(\ensuremath{\text{\upshape P}}(H)\right)$.
\item[(6)] $\dim u\left(\ensuremath{\text{\upshape P}}(H)\right)=p^{\dim \ensuremath{\text{\upshape P}}(H)}$.
\item[(7)] $\dim H=p^n$ for some integer $n$.
\end{itemize}
\end{proposition}
\begin{proof}
$(1)$ and $(4)$ are derived from \cite[Prop. 5.2.9]{MO93}.
For $(3)$, assume $H$ is connected. Then $H/I$ is connected by \cite[Cor. 5.3.5]{MO93}, where $I$ is any Hopf ideal of $H$. For any Hopf subalgebra $K$ of $H$, we have $K_0=K\bigcap H_0$ by \cite[Lemma 5.2.12]{MO93}. Since $H_0$ is one-dimensional, so is $K_0$. Thus $K$ is connected.
$(2)$ is the dual version of $(3)$ by $(1)$.
$(5)$ is a standard result from \cite[Prop. 13.2.3]{S} and $(6)$ comes from \cite[P. 23]{MO93}.
$(7)$ is true because the associated graded ring $\ensuremath{\text{\upshape gr}}_J(H^*)$ with respect to its $J$-adic filtration is connected and primitively generated. Hence $\dim H=\dim H^*=\dim \ensuremath{\text{\upshape gr}}_J(H^*)=p^n$, where $n=\dim \ensuremath{\text{\upshape P}}(\ensuremath{\text{\upshape gr}}_J(H^*))$ by $(6)$.
\end{proof}
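For instance, for the one-dimensional restricted Lie algebra $\mathfrak g=\ensuremath{\bold{k}} x$ with trivial restriction $x^{[p]}=0$, we have $u(\mathfrak g)=\ensuremath{\bold{k}}\left[x\right]/\left(x^p\right)$, of dimension $p=p^{\dim \mathfrak g}$ as predicted by $(6)$; if instead $x^{[p]}=x$, then $u(\mathfrak g)=\ensuremath{\bold{k}}\left[x\right]/\left(x^p-x\right)$.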
\begin{definition}\label{BDCHA}
Consider an inclusion of finite-dimensional connected Hopf algebras $K\subseteq H$.
\begin{itemize}
\item[(1)] If $\dim K=p^m$ and $\dim H=p^n$, then the \bf{$p$-index}\rm\ of $K$ in $H$ is defined to be $n-m$.
\item[(2)] The \bf{first order}\rm\ of the inclusion is defined to be the minimal integer $n$ such that $K_n\subsetneq H_n$; we say it is infinity if $K=H$.
\item[(3)] The inclusion is said to be \bf{level-one}\rm\ if $H$ is generated by $H_n$ as an algebra, where $n$ is the first order of the inclusion.
\item[(4)] The inclusion is said to be \bf{normal}\rm\ if $K$ is a normal Hopf subalgebra of $H$.
\end{itemize}
\end{definition}
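For illustration, take case (3) of the classification stated in the introduction: $H=\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-y,y^p\right)$ with $x,y$ primitive, and let $K$ be the Hopf subalgebra generated by $y$, so $K\cong \ensuremath{\bold{k}}\left[y\right]/\left(y^p\right)$. The $p$-index of $K$ in $H$ is $2-1=1$; since $x\in H_1\setminus K_1$, the inclusion has first order $1$; it is level-one because $x\in H_1$ generates $H$ as an algebra, and normal because $H$ is commutative.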
\begin{remark}\label{BHC}
By \cite[Lemma 5.2.12]{MO93}, if $D$ is a subcoalgebra of $C$, we have $D_n=D\bigcap C_n\subseteq C_n$. Also, the coradical filtration is exhaustive for any coalgebra by \cite[Thm. 5.2.2]{MO93}. As a result of \cite[Lemma 5.2.10]{MO93}, a connected bialgebra is automatically a connected Hopf algebra. Furthermore, it is well known that any sub-bialgebra of a connected Hopf algebra is a Hopf subalgebra. Let $H$ be a connected Hopf algebra. Then the algebra generated by any term $H_n$ of the coradical filtration is a connected Hopf subalgebra of $H$: each $H_n$ is a subcoalgebra, so the algebra it generates is a sub-bialgebra, hence a Hopf subalgebra.
\end{remark}
Throughout the whole paper we will use the following convention:
\begin{convention}
Define the expression $\omega(x)=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\ x^i\otimes x^{p-i}$, where $\frac{(p-1)!}{i!(p-i)!}\in \ensuremath{\bold{k}}$ for each $1\le i\le p-1$.
\end{convention}
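For instance, when $p=2$ the convention reads $\omega(x)=x\otimes x$; when $p=3$, both coefficients equal $\frac{2!}{1!\,2!}=\frac{2!}{2!\,1!}=1$, so
\[
\omega(x)=x\otimes x^2+x^2\otimes x.
\]
In general, the coefficient $\frac{(p-1)!}{i!(p-i)!}$ is the image in $\ensuremath{\bold{k}}$ of the integer $\frac{1}{p}\binom{p}{i}$.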
\section{Associated graded Hopf algebras for finite-dimensional connected Hopf algebras}
\begin{theorem}\label{generalgrc}
Let $H=\bigoplus_{n=0}^{\infty} H(n)$ be a finite-dimensional connected coradically graded Hopf algebra. Then $H$ is isomorphic to $\ensuremath{\bold{k}}\left[x_1,x_2,\cdots,x_d\right]/\left(x_1^p,x_2^p,\cdots,x_d^p\right)$ for some $d\ge0$ as algebras.
\end{theorem}
\begin{proof}
Denote by $K=\bigoplus_{n=0}^{\infty} H(n)^*$ the graded dual of $H$. It is a graded Hopf algebra, and it is connected since $K_0\subseteq K(0)=H(0)^*=\ensuremath{\bold{k}}$ by \cite[Lemma 5.3.4]{MO93}. Moreover, since $H$ is coradically graded, by \cite[Lemma 5.5]{andruskiewitsch2000finite}, $K$ is generated in degree one and hence cocommutative. Therefore by duality $H$ is commutative and local. Then according to \cite[Thm. 14.4]{GTM66}, $H$ is isomorphic to $\ensuremath{\bold{k}}[x_1,x_2,\cdots,x_d]/(x_1^{p^{n_1}},x_2^{p^{n_2}},\cdots,x_d^{p^{n_d}})$ for some $d\ge 0$ as an algebra. Thus it suffices to prove, by induction on $n\ge 1$, that $x^p=0$ for any homogeneous element $x\in H(n)$. Since $H$ is coradically graded, $\ensuremath{\text{\upshape P}}(H)=H(1)$. Then for any $x\in H(1)$, we have $x^p\in (H(1))^p\bigcap H(1)\subseteq H(p)\bigcap H(1)=0$. Assume the assertion holds for $n\le m-1$. Let $x\in H(m)$. By the definition of graded Hopf algebras we have:
\begin{align*}
\Delta(x)=x\otimes 1+1\otimes x+\sum_{i=1}^{m-1}y_i\otimes z_{m-i},
\end{align*}
where $y_i,z_i\in H(i)$ for all $1\le i\le m-1$. Therefore $\Delta(x^p)=x^p\otimes 1+1\otimes x^p+\sum_{i=1}^{m-1}y_i^p\otimes z_{m-i}^p=x^p\otimes 1+1\otimes x^p$ by induction. Thus $x^p\in (H(m))^p\bigcap H(1)\subseteq H(pm)\bigcap H(1)=0$.
\end{proof}
\begin{corollary}\label{connectedgr}
The associated graded Hopf algebra of a finite-dimensional connected Hopf algebra is isomorphic to $\ensuremath{\bold{k}}\left[x_1,x_2,\cdots,x_d\right]/\left(x_1^p,x_2^p,\cdots,x_d^p\right)$ for some $d\ge0$ as algebras.
\end{corollary}
\begin{proof}
The associated graded space $\ensuremath{\text{\upshape gr}} H=\bigoplus_{n\ge 0} H_n/H_{n-1}$ is a graded Hopf algebra by \cite[P. 62]{MO93}. Also mentioned in \cite[Def. 1.13]{Andruskiewitsch02pointedhopf}, $\ensuremath{\text{\upshape gr}} H$ is coradically graded. Therefore $\ensuremath{\text{\upshape gr}} H$ is a coradically graded Hopf algebra, which is clearly connected because $H$ is connected. Hence $\ensuremath{\text{\upshape gr}} H$ satisfies all the conditions in Theorem \ref{generalgrc} and the result follows.
\end{proof}
As a consequence of the commutativity of the associated graded Hopf algebra for any finite-dimensional connected Hopf algebra we conclude that:
\begin{corollary}\label{productcoradical}
Let $H$ be a finite-dimensional connected Hopf algebra. Then $[H_n,H_m]\subseteq H_{n+m-1}$ for all integers $n,m$.
\end{corollary}
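For example, for case (5) of the classification stated in the introduction, $H=\ensuremath{\bold{k}}\langle x,y\rangle/\left([x,y]-y,x^p-x,y^p\right)$ with $x,y$ primitive, we have $x,y\in H_1$ and $[x,y]=y\in H_1=H_{1+1-1}$, as the corollary predicts.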
\section{Finite-dimensional connected Hopf algebras with Hopf subalgebras}
In this section, we always assume $K\subseteq H$ is an inclusion of finite-dimensional connected Hopf algebras.
\begin{lemma}\label{Contraddim}
Suppose the inclusion $K\subseteq H$ has first order $n$. Then the $p$-index of $K$ in $H$ is greater than or equal to $\dim (H_n/K_n)$.
\end{lemma}
\begin{proof}
By Remark \ref{BHC}, the inclusion $K\hookrightarrow H$ induces an injection $K_i/K_{i-1}\hookrightarrow H_i/H_{i-1}$ for all $i\ge 1$. Thus $\ensuremath{\text{\upshape gr}} K=\bigoplus_{i\ge 0} K(i)\hookrightarrow \ensuremath{\text{\upshape gr}} H=\bigoplus_{i\ge 0} H(i)$ and $K(i)=H(i)$ for all $0\le i\le n-1$ since $n$ is the first order of the inclusion. Moreover by \cite[Def. 1.13]{Andruskiewitsch02pointedhopf}, $\left(\ensuremath{\text{\upshape gr}} H\right)_m=\bigoplus_{0\le i\le m} H(i)$ for all $m\ge 0$ and the same is true for $\ensuremath{\text{\upshape gr}} K$. Therefore it is enough to prove the result for the inclusion of associated graded Hopf algebras $\ensuremath{\text{\upshape gr}} K\subseteq \ensuremath{\text{\upshape gr}} H$.
For simplicity, we write $K$ for $\ensuremath{\text{\upshape gr}} K$, $H$ for $\ensuremath{\text{\upshape gr}} H$ and use $\ensuremath{\text{\upshape d}}(H/K)$ to denote the $p$-index of $K$ in $H$. We will prove the result by induction on $\dim (H_n/K_n)$. When $\dim (H_n/K_n)=1$, it is trivial. Now suppose that $\dim(H_n/K_n)>1$ and choose any $x\in H(n)\setminus K(n)$. Because $H$ is a graded coalgebra,
\begin{align*}
\Delta(x)=x\otimes 1+1\otimes x+\sum_{i=1}^{n-1}y_i\otimes z_{n-i},
\end{align*}
where $y_i,z_i\in H(i)=K(i)$ for all $1\le i\le n-1$. Hence $K$ and $x$ generate a Hopf subalgebra of $H$ by Remark \ref{BHC}, which we denote as $L$. Now according to Theorem \ref{generalgrc}, we have $x^p=0$. Thus $K\subseteq L$ has $p$-index one and first order $n$. Because $H$ is a graded algebra, it is clear that $L_n$ is spanned by $K_n$ and $x$. Hence $\dim (L_n/K_n)=1$ and $\dim(H_n/L_n)=\dim (H_n/K_n)-1$. Therefore by induction we have
\begin{align*}
\dim (H_n/K_n)&=\dim (H_n/L_n)+\dim (L_n/K_n)=\dim (H_n/L_n)+1\\
&\le \ensuremath{\text{\upshape d}}(H/L)+1=\ensuremath{\text{\upshape d}}(H/L)+\ensuremath{\text{\upshape d}}(L/K)=\ensuremath{\text{\upshape d}} (H/K).
\end{align*}
\end{proof}
\begin{lemma}\label{normality}
Let $K\subseteq H$ be a level-one inclusion with first order $n$. Then $K$ is normal in $H$ if and only if $[K,H_n]\subseteq K$.
\end{lemma}
\begin{proof}
First suppose that $K$ is normal in $H$. By \cite[Lemma 5.3.2]{MO93} for any $x\in H_n$, $\Delta(x)-x\otimes 1-1\otimes x\in H_{n-1}\otimes H_{n-1}=K_{n-1}\otimes K_{n-1}\subseteq K\otimes K$. Thus we can write $\Delta(x)=x\otimes 1+1\otimes x+\sum a_i\otimes b_i$ where $a_i,b_i\in K$. Apply the antipode $S$ to get
\begin{align*}
S(x)=\ensuremath{\varepsilon}(x)-x-\sum a_iS(b_i).
\end{align*}
By the definition of normal Hopf subalgebras \cite[Def. 3.4.1]{MO93}, for any $y\in K$
\begin{align*}
\sum x_1yS(x_2)=xy+yS(x)+\sum a_iyS(b_i)=u\in K.
\end{align*}
Therefore
\begin{align*}
[y,x]=yx-xy=y\left(\ensuremath{\varepsilon}(x)-\sum a_iS(b_i)\right)+\sum a_iyS(b_i)-u\in K,
\end{align*}
which shows that $[K,H_n]\subseteq K$. Conversely suppose that $[K,H_n]\subseteq K$. Then it is clear that $K^+H_n\subseteq H_nK^++K^+\subseteq HK^+$ since $[K^+,H_n]\subseteq K^+$. We claim that $K^+(H_n)^i\subseteq HK^+$ for all $i\ge 0$ by induction. Suppose the inclusion holds for $i$ and then for $i+1$:
\begin{align*}
K^+\left(H_n\right)^{i+1}=K^+\left(H_n\right)^iH_n\subseteq \left(HK^+\right)H_n\subseteq H\left(HK^+\right)\subseteq HK^+.
\end{align*}
Therefore $K^+H=\bigcup K^+(H_n)^i\subseteq HK^+$ and by symmetry $K^+H=HK^+$. According to \cite[Cor. 3.4.4]{MO93}, $K$ is normal.
\end{proof}
\begin{lemma}\label{normalcom}
If $x\in H$ satisfies $[K,x]\subseteq K$ and $\Delta(x)-x\otimes 1-1\otimes x \in K\otimes K$, then $\Delta\left(x^{p^n}\right)-x^{p^n}\otimes 1-1\otimes x^{p^n}\in K\otimes K$ for all $n\ge 0$.
\end{lemma}
\begin{proof}
First, we prove $\Delta \left(x^{p}\right)-x^{p}\otimes 1-1\otimes x^{p}\in K\otimes K$. Denote $\Delta(x)=x\otimes 1+1\otimes x+u$, where $u\in K\otimes K$. By Lemma \ref{palgebra}, we have:
\begin{align*}
\Delta\left(x^{p}\right)=\left(x\otimes 1+1\otimes x+u\right)^p=x^p\otimes 1+1\otimes x^{p}+u^p+\sum_{i=1}^{p-1}S_i
\end{align*}
where $iS_i$ is the coefficient of $\lambda^{i-1}$ in $u\left(\ensuremath{\text{\upshape ad}}\left(\lambda u+x\otimes 1+1\otimes x\right)\right)^{p-1}$. Hence it suffices to show inductively that
\begin{align*}
u\left(\ensuremath{\text{\upshape ad}}\left(\lambda u+x\otimes 1+1\otimes x\right)\right)^n\in \left(K\otimes K\right)[\lambda]
\end{align*}
for all $n\ge 0$. Notice that when $n=0$, it is just the assumption. Suppose it is true for $n-1$; then for $n$,
\begin{align*}
u\left(\ensuremath{\text{\upshape ad}}\left(\lambda u+x\otimes 1+1\otimes x\right)\right)^n&\in\left[\left(K\otimes K\right)[\lambda],\lambda u+x\otimes 1+1\otimes x \right]\\
&\subseteq\left\{\left[K\otimes K, u\right]+[K,x]\otimes K+K\otimes [K,x]\right\}[\lambda]\\
&\subseteq \left(K\otimes K\right)[\lambda].
\end{align*}
Now replace $x$ with $x^{p^{n-1}}$: we have $[K,x^{p^{n-1}}]=K\left(\ensuremath{\text{\upshape ad}}(x)\right)^{p^{n-1}}\subseteq K$ by Lemma \ref{palgebra}. Then the remaining cases can be proved in a similar way.
\end{proof}
\begin{lemma}\label{subHopfLn}
Suppose $x\in H$ satisfies $\Delta(x)-x\otimes 1-1\otimes x\in K\otimes K$ and $[K,x]\subseteq \sum_{0\le i\le 1}Kx^i$. For each $n\ge 0$, set $L_n=\sum_{i\le n} K x^i$. Then we have the following
\begin{itemize}
\item[(1)] $[K,x^n]\subseteq L_n$ and $L_n$ is a $K$-bimodule via the multiplication in $H$.
\item[(2)] $\Delta(x^n)-x^n\otimes 1-1\otimes x^n\in L_{n-1}\otimes L_{n-1}$.
\item[(3)] $L_n$ is a subcoalgebra of $H$.
\item[(4)] If $H$ is generated by $K$ and $x$ as an algebra, then $H=\bigcup_{n\ge 0} L_n$.
\end{itemize}
\end{lemma}
\begin{proof}
$(1)$ Since $xL_n\subseteq L_{n+1}$, we have $x^nL_1\subseteq L_{n+1}$ for all $n\ge 0$. By assumption, it holds that $[K,x]\subseteq L_1$. Suppose $[K,x^{n-1}]\subseteq L_{n-1}$. For any $a\in K$, it follows that
\begin{eqnarray*}
x^na\in x^{n-1}\left(ax+L_1\right)\subseteq \left(ax^{n-1}+L_{n-1}\right)x+x^{n-1}L_1\subseteq ax^n+L_n.
\end{eqnarray*}
Hence $[K,x^n]\subseteq L_n$ for each $n\ge 0$. Moreover, we have $L_nK\subseteq L_n$ for each $n\ge 0$, so the left $K$-module $L_n$ becomes a $K$-bimodule.
$(2)$ Denote $\Delta(x)=x\otimes 1+1\otimes x+u$, where $u\in K\otimes K$. We again proceed by induction. When $n=1$, it is just the assumption. Suppose the claim holds for $n-1$ and write $\Delta(x^{n-1})=x^{n-1}\otimes 1+1\otimes x^{n-1}+\sum a_i\otimes b_i$, where $a_i,b_i\in L_{n-2}$. Therefore
\begin{align*}
&\Delta(x^n)-x^n\otimes 1-1\otimes x^n\\
&= \left(x\otimes 1+1\otimes x+u\right)\left(x^{n-1}\otimes 1+1\otimes x^{n-1}+\sum a_i\otimes b_i\right)-x^n\otimes 1-1\otimes x^n\\
&\in x\otimes x^{n-1}+x^{n-1}\otimes x+xL_{n-2}\otimes L_{n-2}+L_{n-2}\otimes xL_{n-2}+L_{n-2}\otimes L_{n-2}\\
&\subseteq L_{n-1}\otimes L_{n-1}.
\end{align*}
$(3)$ By $(1)$ and $(2)$, an induction on $n$ shows that $\Delta(L_n)\subseteq L_n\otimes L_n$; hence $L_n$ is a subcoalgebra of $H$.
$(4)$ Furthermore if $H$ is generated by $K$ and $x$ as an algebra, it is easy to see $H=\bigcup_{n\ge 0} L_n$.
\end{proof}
\begin{theorem}\label{FREENESS}
Let $H$ be a finite-dimensional connected Hopf algebra with Hopf subalgebra $K$. Suppose the $p$-index of $K$ in $H$ is $d$ and $H$ is generated by $K$ and some $x\in H$ as an algebra. Also assume that $\Delta(x)=x\otimes 1+1\otimes x+u$, where $u\in K\otimes K$ and $[K,x]\subseteq \sum_{0\le i\le 1}Kx^i$. Then $H$ is a free left $K$-module such that $H=\bigoplus_{i=0}^{p^d-1} Kx^i$. Furthermore if $K$ is normal in $H$, then $x$ satisfies a polynomial equation as follows:
\begin{align*}
x^{p^d}+\sum_{i=0}^{d-1}a_ix^{p^i}+b=0
\end{align*}
for some $a_i\in \ensuremath{\bold{k}}$ and $b\in K$.
\end{theorem}
\begin{proof}
Denote $L_n=\sum_{0\le i\le n} Kx^i$ for all $n\ge 0$. By Lemma \ref{subHopfLn}(3), $L_n$ is a subcoalgebra. Also, $H$ is a left $K$-module with generators $\{x^i|i\ge 0\}$, since $H=\sum Kx^i$. Because $H$ is finite-dimensional, there exist nontrivial relations between the generators, such as
\begin{align*}
d_mx^m+d_{m-1}x^{m-1}+\cdots+d_1x+d_0=0,
\end{align*}
where $d_i\in K$ and $d_m\neq 0$; among all such relations we choose one of lowest degree in $x$, say degree $m$. Furthermore, denote $D=K$, $L=L_{m-1}$, $F=x^m$ and $V=\{a\in D|aF\in L\}$. As a result of Lemma \ref{subHopfLn}(2), we know $\Delta(F)-x^m\otimes 1-1\otimes x^m\in L\otimes L$. Then $D,L,F$ satisfy all the conditions listed in \cite[Lemma 1.1]{wang2011lower}. Hence $V=D$, since $0\neq d_m\in V$. Thus $x^{m}\in \bigoplus_{i<m} Kx^i$ and consequently $H$ is a free left $K$-module with free basis $\{x^i|0\le i\le m-1\}$. Since $\dim H=m\dim K$, it is easy to see that $m=p^d$ by definition.
Now assume that $K$ is normal. Following the proof of Lemma \ref{normality}, we can show that $[K,x]\subseteq K$. From the previous discussion, there exists a general equation for $x$:
\begin{align}\label{SRE}
x^{p^d}+\sum_{i=0}^{p^d-1}a_ix^{i}=0,
\end{align}
where all $a_i\in K$. According to Lemma \ref{normalcom}, we can write $\Delta\left(x^{p^n}\right)=x^{p^n}\otimes 1+1\otimes x^{p^n}+u_n$, where $u_n\in K\otimes K$ for all $n\ge 0$. Now apply the comultiplication $\Delta$ to the above identity \eqref{SRE} to get
\begin{equation*}
x^{p^d}\otimes 1+1\otimes x^{p^d}+u_d+\sum_{i=0}^{p^d-1}\Delta(a_i)(x\otimes 1+1\otimes x+u)^{i}=0.
\end{equation*}
Replacing $x^{p^d}$ with $\left(-\sum_{i=0}^{p^d-1} a_{i}x^{i}\right)$, the following equation is straightforward:
\begin{gather}\label{EQ1}
\left(-\sum_{i=0}^{p^d-1} a_{i}x^{i}\right)\otimes 1+1\otimes \left(-\sum_{i=0}^{p^d-1} a_{i}x^{i}\right)\\
+\sum_{i=0}^{d-1}\Delta\left(a_{p^i}\right)\left(x^{p^i}\otimes 1+1\otimes x^{p^i}+u_i\right)+\sum_{i\in S}\Delta\left(a_i\right)\left(x\otimes 1+1\otimes x+u\right)^{i}+\Delta(a_0)+u_d=0\nonumber
\end{gather}
where $S=\{1,2,\cdots,p^d-1\}\setminus \{1,p,p^2,\cdots,p^{d-1}\}$ is the set of the remaining exponents.
We first prove that $a_i=0$ for all $i\in S$ by contradiction. If not, suppose $n\in S$ is the largest integer such that $a_n\neq 0$. The free $K$-module structure for $H$ implies that the $K\otimes K$-module $H\otimes H$ has a free basis $\left\{x^{i}\otimes x^{j}|0\le i,j<p^d\right\}$. Thus the term $Kx^{n-i}\otimes Kx^i$ would only come from $\Delta\left(a_{n}\right)\left(x\otimes 1+1\otimes x+u\right)^{n}$ for all $1\le i\le n-1$. Moreover it exactly comes from $\Delta\left(a_{n}\right)\left(x\otimes 1+1\otimes x\right)^{n}$ by the choice of $n$. Therefore ${n\choose i}\Delta\left(a_{n}\right)\left(x^{n-i}\otimes x^i\right)=0$ for all $1\le i\le n-1$. Write $n=p^\alpha m$ where $m>1$ and $m\not\equiv 0 \pmod p$. Choose $i=p^\alpha$. Hence by \cite[Lemma 5.1]{isaacs1994algebra}, ${n \choose p^\alpha}\equiv m \pmod p$. Then $\Delta(a_{n})=0$, which implies that $a_n=0$, a contradiction. Therefore from equation \eqref{EQ1}, we deduce that $\Delta(a_{p^i})(x^{p^i}\otimes 1)=a_{p^{i}}x^{p^i}\otimes 1$ for all $0\le i\le d-1$. Thus $\Delta(a_{p^i})=a_{p^i}\otimes 1$, and counitality gives $a_{p^i}=\ensuremath{\varepsilon}(a_{p^i})1\in \ensuremath{\bold{k}}$ for all $0\le i\le d-1$.
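For a concrete instance of the binomial computation above: with $p=3$ and $n=6=3\cdot 2$, we have $\alpha=1$, $m=2$, and ${6\choose 3}=20\equiv 2\equiv m \pmod 3$.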
\end{proof}
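To illustrate Theorem \ref{FREENESS}, consider case (7) of the classification stated in the introduction: $K=\ensuremath{\bold{k}}\left[x\right]/\left(x^p\right)$ is a normal Hopf subalgebra of $H=\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p,y^p-x\right)$, where $\Delta(y)=y\otimes 1+1\otimes y+\omega(x)$ with $\omega(x)\in K\otimes K$ and $[K,y]=0$. Here $d=1$, and the polynomial equation for $y$ reads $y^p+a_0y+b=0$ with $a_0=0$ and $b=-x\in K$.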
\section{Finite-dimensional cocommutative connected Hopf algebras}
Notice that the following lemma holds over an arbitrary base field. In the remainder of this section, we still assume $\ensuremath{\bold{k}}$ to be algebraically closed of characteristic $p>0$.
\begin{lemma}\label{commNAI}
Let $H$ be a finite-dimensional Hopf algebra with normal Hopf subalgebras $K\subseteq L\subseteq H$. Then there exists a natural isomorphism:
\begin{equation*}
\left(H/K^+H\right)^*\Big/\left(H/L^+H\right)^{*+}\left(H/K^+H\right)^*\cong \left(L/K^+L\right)^*.
\end{equation*}
\end{lemma}
\begin{proof}
By \cite[Thm. 2.1.3]{MO93}, $L$ is Frobenius. Hence the injective left $L$-module map $L\hookrightarrow H$ splits since $L$ is self-injective. Therefore we can write $H=L\bigoplus M$ as a direct sum of two left $L$-modules. Because $K\subseteq L$, we have $L\bigcap K^+H=L\bigcap K^+\left(L\bigoplus M\right)=L\bigcap \left(K^+L\bigoplus K^+M\right)=K^+L$. Then the inclusion map $L\hookrightarrow H$ induces an injective Hopf algebra map $L/K^+L\hookrightarrow H/K^+H$, since $K^+L$ and $K^+H$ are Hopf ideals of $L$ and $H$ by \cite[Lemma 3.4.2]{MO93}.
It is clear that the composition map $L/K^+L\hookrightarrow H/K^+H\twoheadrightarrow H/L^+H$ factors through $\ensuremath{\bold{k}}$ via the counit. Thus the dualized map restricted to $(H/L^+H)^{*+}=(H/L^+H)^*\bigcap \ensuremath{\text{\upshape Ker}}\ u^*$ is the zero map into $(L/K^+L)^*$, where $u$ is the unit map of $H$.
Therefore the natural surjective map $(H/K^+H)^*\twoheadrightarrow (L/K^+L)^*$, which is induced by the inclusion $L/K^+L\hookrightarrow H/K^+H$, factors through $\left(H/K^+H\right)^*\Big/\left(H/L^+H\right)^{*+}\left(H/K^+H\right)^*$. In order to show that it is an isomorphism, it is enough to prove that both sides have the same dimension. By \cite[Theorem 3.3.1]{MO93}, we have
\begin{align*}
\dim \left(H/K^+H\right)^*\Big/\left(H/L^+H\right)^{*+}\left(H/K^+H\right)^*&=\dim \left(H/K^+H\right)^*\Big/\dim(H/L^+H)^*\\
&=(\dim H/\dim K)\Big/(\dim H/\dim L)\\
&=\dim L/\dim K\\
&=\dim(L/K^+L)^*.
\end{align*}
\end{proof}
Let $H$ be any Hopf algebra over $\ensuremath{\bold{k}}$, and $\ensuremath{\bold{k}}\subseteq E$ be a field extension. In the proof of \cite[Cor. 2.2.2]{MO93}, we know that $H\otimes E$ is also a Hopf $E$-algebra, via
\begin{align*}
\Delta(h\otimes \alpha)&:=\Delta (h)\otimes \alpha\in H\otimes H\otimes E\cong (H\otimes E)\otimes_E(H\otimes E)\\
\ensuremath{\varepsilon}(h\otimes \alpha)&:=\ensuremath{\varepsilon}(h)\alpha\in E\\
S(h\otimes \alpha)&:=S(h)\otimes \alpha
\end{align*}
for all $h\in H,\alpha\in E$. Now consider any automorphism $\sigma$ of $\ensuremath{\bold{k}}$. By taking $E=\ensuremath{\bold{k}}$ and $\sigma$ to be the embedding in the discussion above, $H\otimes_\sigma\ensuremath{\bold{k}}$ is also a Hopf $\ensuremath{\bold{k}}$-algebra, which we will denote by $H_\sigma$. Note that in $H_\sigma$, we have $h\alpha\otimes 1=h\otimes \sigma(\alpha)$ for all $h\in H,\alpha\in \ensuremath{\bold{k}}$. Let $id_\sigma$ be the map $id\otimes 1$ from $H$ to $H_{\sigma}$. The following hold for all $h, l\in H$ and $\alpha\in \ensuremath{\bold{k}}$
\begin{gather*}
id_\sigma(hl)=id_\sigma(h)id_\sigma(l),\ \Delta id_\sigma(h)=(id_\sigma\otimes id_\sigma)\Delta h,\ S(id_\sigma(h))=id_\sigma(S(h))\\
\ensuremath{\varepsilon} id_\sigma(h)=\sigma\left(\ensuremath{\varepsilon}(h)\right),\ id_\sigma(h\alpha)=id_\sigma(h)\sigma (\alpha).
\end{gather*}
More generally, let $A$ be another Hopf algebra over $\ensuremath{\bold{k}}$, and let $\phi$ be a map from $A$ to $H$. We say that $\phi: A\to H$ is a \textbf{$\sigma$-linear Hopf algebra map} if the composition $id_\sigma\circ\phi: A\to H_\sigma$ is a $\ensuremath{\bold{k}}$-linear Hopf algebra map. Suppose $H,A$ are both finite-dimensional. Note that $(H_\sigma)^*\cong (H^*)_\sigma$ since $\ensuremath{\text{\upshape Hom}}_E(H\otimes E,E)\cong \ensuremath{\text{\upshape Hom}}_\ensuremath{\bold{k}}(H,\ensuremath{\bold{k}})\otimes E$ for any field extension $\ensuremath{\bold{k}}\subseteq E$. Let $f$ be a $\sigma$-linear Hopf algebra map from $A$ to $H$. It is clear that the dual of $f$ is a $\sigma^{-1}$-linear Hopf algebra map from $H^*$ to $A^*$. Also, quotients of $\sigma$-linear Hopf algebra maps are still $\sigma$-linear.
\begin{proposition}\label{cocommfiltration}
Let $H$ be a finite-dimensional cocommutative connected Hopf algebra. Then $H$ has an increasing sequence of normal Hopf subalgebras: $\ensuremath{\bold{k}}=N_0\subset N_1\subset \cdots\subset N_n=H$ satisfying the following properties:
\begin{itemize}
\item[(1)] Denote by $J$ the Jacobson radical of $H^*$. Then the length $n$ is the minimal integer such that $x^{p^n}=0$ for all $x\in J$.
\item[(2)] $N_1$ is the Hopf subalgebra of $H$ generated by all primitive elements.
\item[(3)] There are $\sigma$-linear injective Hopf algebra maps:
\[\xymatrix{
N_{m}/N_{m-1}^+N_m\ar@{^(->}[r]& N_{m-1}/N_{m-2}^+N_{m-1}
}\]
for all $2\le m\le n$, where $\sigma$ is the Frobenius map of $\ensuremath{\bold{k}}$.
\item[(4)] $0=\dim \ensuremath{\text{\upshape P}}\left(H/N_n^{+}H\right)\le \dim \ensuremath{\text{\upshape P}}\left(H/N_{n-1}^{+}H\right)\le\cdots\le \dim \ensuremath{\text{\upshape P}}\left(H/N_0^{+}H\right)=\dim \ensuremath{\text{\upshape P}}(H)$.
\end{itemize}
\end{proposition}
\begin{proof}
$(1)$ By duality, $H^*$ is a finite-dimensional commutative local Hopf algebra. Therefore by \cite[Thm. 14.4]{GTM66} we can write:
\begin{align*}
H^*=\ensuremath{\bold{k}}\left[x_1,x_2,\cdots,x_d\right]\Big /\left(x_1^{p^{n_1}},x_2^{p^{n_2}},\cdots,x_d^{p^{n_d}}\right)
\end{align*}
for some $d\ge 0$, in which we can define a decreasing sequence of normal Hopf ideals \cite[Def. 3.4.5]{MO93}
\begin{align*}
\left(J_m=(x_1^{p^{m}},x_2^{p^{m}},\cdots,x_d^{p^{m}})\right)_{m\ge 0}.
\end{align*}
By \cite[P. 36]{MO93}, in the dual vector space $H$ we have an increasing sequence of normal Hopf subalgebras: $\ensuremath{\bold{k}}=N_0\subset N_1\subset \cdots\subset N_m\subseteq\cdots\subseteq H$, where $N_m=\left(H^*/J_m\right)^*$ for all $m\ge 0$. For the length of this sequence, notice that $N_m=H\Leftrightarrow J_m=0\Leftrightarrow x_i^{p^m}=0$ for all $1\le i\le d\Leftrightarrow x^{p^m}=0$ for all $x\in J_0=J$.
$(2)$ Denote by $L$ the Hopf subalgebra of $H$ generated by $\ensuremath{\text{\upshape P}}(H)$. By \cite[Prop. 5.2.9]{MO93}, $\ensuremath{\bold{k}}\bigoplus \ensuremath{\text{\upshape P}}(H)=\{h\in H|\langle J^2,h\rangle=0\}$. Hence under the natural identification, $\ensuremath{\text{\upshape P}}(H)\subset(H^*/J^2)^*\subseteq (H^*/J_1)^*=N_1$. Because $L$ is generated by $\ensuremath{\text{\upshape P}}(H)$ as an algebra, we have $L\subseteq N_1$. Moreover we know $\dim L=p^{\dim \ensuremath{\text{\upshape P}}(H)}=p^{\dim J/J^2}=p^d$ by Proposition \ref{BPH}(4). On the other hand, $\dim N_1=\dim H^*/J_1=p^d$, which implies that $L=N_1$.
$(3)$ Define a decreasing sequence of normal Hopf subalgebras of $H^*$ by
\begin{align*}
A_{m}=\{h^{p^m}|h\in H^*\}=\ensuremath{\bold{k}}\left[x_1^{p^{m}},x_2^{p^{m}},\cdots,x_d^{p^{m}}\right].
\end{align*}
Notice that $A_m^+H^*=J_m$ for all $m\ge 0$. Moreover, by Lemma \ref{commNAI}, we have
\begin{align}\label{NIF}
\left(A_m/A_{m+1}^+A_m\right)^*&\cong \left(H^*/A_{m+1}^+H^*\right)^*\Big/\left(H^*/A_m^+H^*\right)^{*+}\left(H^*/A_{m+1}^+H^*\right)^*\\
&=N_{m+1}\Big/N_m{^+}N_{m+1}.\notag
\end{align}
Let $\sigma$ be the Frobenius map of $\ensuremath{\bold{k}}$ (i.e., the $p$-th power map). For any $2\le m\le n$, we can take $(A_{m-2})_{\sigma^{-1}}=A_{m-2}\otimes_{\sigma^{-1}} \ensuremath{\bold{k}}$ such that $ak\otimes 1=a\otimes \sigma^{-1}(k)$ for any $a\in A_{m-2}$ and $k\in \ensuremath{\bold{k}}$. Hence it is easy to see that there exists a series of $\sigma^{-1}$-linear surjective $p$-th power Hopf algebra maps $\phi_{m-2}: A_{m-2} \twoheadrightarrow A_{m-1}$ such that $\phi_{m-2}(x)=x^p$ for all $x\in A_{m-2}$. Therefore $\phi_{m-2}$ induces a series of $\sigma^{-1}$-linear surjective maps on their quotients $A_{m-2}/A_{m-1}^+A_{m-2}\twoheadrightarrow A_{m-1}/A_{m}^+A_{m-1}$. By dualizing all the maps and using the natural isomorphism \eqref{NIF} above, we obtain a series of $\sigma$-linear injective Hopf algebra maps:
\[\xymatrix{
N_{m}/N_{m-1}^+N_m\ar@{^(->}[r]& N_{m-1}/N_{m-2}^+N_{m-1}
}\]
for all $2\le m\le n$.
$(4)$ Apply Lemma \ref{commNAI} to $H^*$ with $K=\ensuremath{\bold{k}}$ and $L=A_m$. Then we have the isomorphism:
\begin{align*}
A_m^*\cong H\Big/N_m^+H.
\end{align*}
Therefore, by Proposition \ref{BPH}(4),
\begin{align*}
\dim \ensuremath{\text{\upshape P}}(H/N_m^{+}H)=\dim J(A_m)/J(A_m)^2=\#\left\{\{x_1^{p^{m}},x_2^{p^{m}},\cdots,x_d^{p^{m}}\}\setminus \{0\}\right\},
\end{align*}
which is the number of generators among $\{x_1,x_2,\cdots,x_d\}$, whose $p^{m}$-th power does not vanish. Thus the inequalities follow.
\end{proof}
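For instance, take $H^*=\ensuremath{\bold{k}}\left[x\right]/\left(x^{p^2}\right)$, so that $d=1$ and $n_1=2$ in the proof above. Then $J_m=\left(x^{p^m}\right)$, the sequence is $\ensuremath{\bold{k}}=N_0\subset N_1\subset N_2=H$ with $\dim N_1=p$, and the length is $n=2$: indeed $x^p\neq 0$, while $y^{p^2}=0$ for every $y\in J=(x)$.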
\begin{corollary}\label{FCLH}
Let $H$ be a finite-dimensional connected Hopf algebra with $\dim\ensuremath{\text{\upshape P}}(H)=1$. Then $H$ has an increasing sequence of normal Hopf subalgebras:
\begin{align*}
\ensuremath{\bold{k}}=N_0\subset N_1\subset N_2\subset \cdots \subset N_n=H,
\end{align*}
where $N_1$ is generated by $\ensuremath{\text{\upshape P}}(H)$ and each $N_i$ has $p$-index one in $N_{i+1}$.
\end{corollary}
\begin{proof}
Denote by $H^*$ the dual Hopf algebra of $H$. By duality, $H^*$ is local. Set $J=J(H^*)$, the Jacobson radical of $H^*$. Since $\dim \ensuremath{\text{\upshape P}}(H)=1$, by Proposition \ref{BPH}(4), $\dim J/J^2=1$. Suppose that $\dim H=p^n$ by Proposition \ref{BPH}(7). It is clear that $H^*\cong \ensuremath{\bold{k}}\left[x\right]/(x^{p^n})$ as algebras and $J=(x)$. Hence $H$ is cocommutative and it has an increasing sequence of normal Hopf subalgebras $\ensuremath{\bold{k}}=N_0\subset N_1\subset \cdots\subset N_n=H$ such that $N_1$ is generated by $\ensuremath{\text{\upshape P}}(H)$ and $\dim N_m=p^m$ for all $0\le m\le n$ by Proposition \ref{cocommfiltration}.
\end{proof}
\begin{theorem}\label{NPLA}
Let $H$ be finite-dimensional cocommutative connected Hopf algebra. Denote by $K$ the Hopf subalgebra generated by $\ensuremath{\text{\upshape P}}(H)$. Then the following are equivalent:
\begin{itemize}
\item[(1)] $H$ is local.
\item[(2)] $K$ is local.
\item[(3)] All the primitive elements of $H$ are nilpotent.
\end{itemize}
\end{theorem}
\begin{proof}
$(1)\Rightarrow (2)$ is from Proposition \ref{BPH}(2) and $(2)\Rightarrow (3)$ is clear since $K$ contains $\ensuremath{\text{\upshape P}}(H)$ and its augmentation ideal is nilpotent.
In order to show that $(3)\Rightarrow (2)$, denote $\mathfrak g=\ensuremath{\text{\upshape P}}(H)$, which is a restricted Lie algebra. Then $(3)$ is equivalent to the statement that $x^{p^n}=0$ for every $x\in \mathfrak g$ and sufficiently large $n$. Therefore $(\ensuremath{\text{\upshape ad}} x)^{p^n}=\ensuremath{\text{\upshape ad}}(x^{p^n})=0$ for all $x\in \mathfrak g$. By Engel's Theorem \cite[I \S 3.2]{GTM9}, $\mathfrak g$ is nilpotent. Any representation of $K\cong u(\mathfrak g)$ is a restricted representation of $\mathfrak g$. Therefore any irreducible representation of $K$ is one-dimensional with trivial action of the augmentation ideal of $K$. Hence the augmentation ideal of $K$ is nilpotent and $K$ is local.
Finally, we need to show $(2)\Rightarrow (1)$. Suppose $\ensuremath{\bold{k}}=N_0\subset N_1\subset \cdots\subset N_n=H$ is the sequence of normal Hopf subalgebras stated in Proposition \ref{cocommfiltration} for $H$. By Proposition \ref{cocommfiltration}(2), we know $N_1=K$ is local. We will show inductively that each $N_m$ is local. Assume $N_m$ is local and denote by $\sigma$ the Frobenius map of $\ensuremath{\bold{k}}$. We have the following injective Hopf algebra map according to Proposition \ref{cocommfiltration}(3) and the definition of $\sigma$-linear Hopf algebra maps:
\[\xymatrix{
N_{m+1}/N_{m}^+N_{m+1}\ar@{^(->}[r]& \left(N_{m}/N_{m-1}^+N_{m}\right)_\sigma.
}\]
Note that any finite-dimensional Hopf algebra $A$ is local if and only if its augmentation ideal $A^+$ is nilpotent. Since $(A\otimes_\sigma \ensuremath{\bold{k}})^+=(A^+)\otimes_\sigma\ensuremath{\bold{k}}$, we see that $A$ is local if and only if $A_\sigma$ is local. Hence $\left(N_{m}/N_{m-1}^+N_{m}\right)_\sigma$ is local. Moreover, by Proposition \ref{BPH}(2), $N_{m+1}/N_{m}^+N_{m+1}$ is local. Therefore there exist integers $l,d$ such that $(N_{m+1}^+)^d\subseteq N_m^+N_{m+1}$ and $(N_m^+)^l=0$. Hence $(N_{m+1}^+)^{ld}\subseteq (N_m^+)^lN_{m+1}=0$. Here we have used $N_m^+N_{m+1} = N_{m+1}N_m^+$, which follows from \cite[Cor. 3.4.4]{MO93} and the fact that $N_m$ is normal. This completes the proof.
\end{proof}
\begin{remark}\label{GSL}
Let $G$ be a connected affine algebraic group scheme over $\ensuremath{\bold{k}}$, and $G_1$ be the first Frobenius kernel of $G$. By \cite[Prop. 4.3.1 Exp. XVII]{SGA3}, we know that $G$ is unipotent if and only if $\ensuremath{\text{\upshape Lie}}\left(G\right)$ is unipotent, i.e., for any $x\in \ensuremath{\text{\upshape Lie}}(G)$, there exists an integer $n>0$ such that $x^{p^n}=0$. Moreover, $\ensuremath{\text{\upshape Lie}}\left(G\right)=\ensuremath{\text{\upshape Lie}}\left(G_1\right)$. Hence $G$ is unipotent if and only if $G_1$ is unipotent. Denote the coordinate ring $A=\ensuremath{\bold{k}}[G]$. Then $\ensuremath{\bold{k}}[G_1] = A/A^{+(p)}A$, where $A^{(p)}=\{a^p\ |\ a\in A\}$. We can state the above assertion in another way: $A$ is connected if and only if $A/A^{+(p)}A$ is connected. If $A$ is finite-dimensional, as shown in Proposition \ref{cocommfiltration}(2), $\left(A/A^{+(p)}A\right)^*$ is the Hopf subalgebra of $A^*$ generated by its primitive elements. This provides an alternative proof for Theorem \ref{NPLA} and shows that the locality criterion in Theorem \ref{NPLA} for finite-dimensional cocommutative connected Hopf algebras parallels the criterion for unipotency of finite connected group schemes over $\ensuremath{\bold{k}}$.
\end{remark}
\section{Hochschild cohomology of restricted universal enveloping algebras}
Suppose $H$ is a Hopf algebra. Denote by $\ensuremath{\bold{k}}$ the trivial $H$-bicomodule. The Hochschild cohomology $\ensuremath{\text{\upshape H}}^\bullet(\ensuremath{\bold{k}},H)$ of $H$ with coefficients in $\ensuremath{\bold{k}}$ can be computed as the homology of the differential graded algebra $\Omega H$ defined as follows \cite[Lemma 1.1]{cstefan1998hochschild}:
\begin{itemize}
\item As a graded algebra, $\Omega H$ is the tensor algebra $T(H)$,
\item The differential in $\Omega H$ is given by $d^0=0$ and for $n\ge 1$
\begin{align*}
d^n=1\otimes I_n+\sum_{i=0}^{n-1} (-1)^{i+1} I_i\otimes\Delta\otimes I_{n-i-1}+(-1)^{n+1}I_n\otimes 1.
\end{align*}
\end{itemize}
This DG algebra is usually called the \textbf{cobar construction} of $H$. See \cite[\S 19]{GTM205} for the basic properties of cobar constructions. Throughout, we will use $\ensuremath{\text{\upshape H}}^\bullet(\ensuremath{\bold{k}},H)$ to denote the homology of the DG algebra $(\Omega H,d)$.
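As an illustration (not part of the paper), the low-degree cobar differentials can be realized concretely for the truncated polynomial Hopf algebra $H=\mathbb F_p[x]/(x^p)$ with $x$ primitive, and the cochain condition $d^2\circ d^1=0$ verified numerically; encoding tensors as integer arrays is an assumption of this sketch.

```python
import numpy as np
from math import comb

p = 5  # work in H = F_p[x]/(x^p) with basis {1, x, ..., x^{p-1}}

# D[m] is the p×p coefficient matrix of Δ(x^m) = Σ_i C(m,i) x^i ⊗ x^{m-i}
D = np.zeros((p, p, p), dtype=int)
for m in range(p):
    for i in range(m + 1):
        D[m, i, m - i] = comb(m, i)

def d1(v):
    # d^1(h) = 1⊗h - Δ(h) + h⊗1 for h with coefficient vector v
    out = np.zeros((p, p), dtype=int)
    out[0, :] += v                              # 1 ⊗ h
    out[:, 0] += v                              # h ⊗ 1
    out -= np.tensordot(v, D, axes=(0, 0))      # Δ(h)
    return out % p

def d2(u):
    # d^2(u) = 1⊗u - (Δ⊗I)u + (I⊗Δ)u - u⊗1 on H ⊗ H
    out = np.zeros((p, p, p), dtype=int)
    out[0, :, :] += u                           # 1 ⊗ u
    out[:, :, 0] -= u                           # u ⊗ 1
    out -= np.einsum('ab,aij->ijb', u, D)       # (Δ ⊗ I)u
    out += np.einsum('ab,bij->aij', u, D)       # (I ⊗ Δ)u
    return out % p

cochain_condition = all(
    not d2(d1(np.eye(p, dtype=int)[m])).any() for m in range(p)
)
print(cochain_condition)  # True
```

The same array encoding extends to the higher differentials $d^n$ by contracting one slot at a time with $D$.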
\begin{lemma}\label{DimExtCo}
Let $H$ be a finite-dimensional Hopf algebra. Then
\begin{eqnarray*}
\ensuremath{\text{\upshape H}}^n\left(\ensuremath{\bold{k}},H\right)\cong \ensuremath{\text{\upshape H}}^n\left(H^*,\ensuremath{\bold{k}}\right)\cong \ensuremath{\text{\upshape Ext}}^n_{H^*}\left(\ensuremath{\bold{k}},\ensuremath{\bold{k}}\right),
\end{eqnarray*}
for all $n\ge 0$.
\end{lemma}
\begin{proof}
We still denote by $\ensuremath{\bold{k}}$ the trivial $H$-bimodule. Then the first isomorphism comes from \cite[Prop. 1.4]{cstefan1998hochschild}. Let $M$ be an $H$-bimodule with the trivial right structure. We define the right structure of $M^{\ensuremath{\text{\upshape ad}}}$ by $m.h=S(h)m$ for any $m\in M,h\in H$, using the antipode $S$ of $H$. Then it is easy to see that $\ensuremath{\bold{k}}^{\ensuremath{\text{\upshape ad}}}\cong \ensuremath{\bold{k}}$ as trivial right $H$-modules. Hence the second isomorphism is derived from \cite[Thm. 1.5]{cstefan1998hochschild}.
\end{proof}
Let $\mathfrak g$ be a restricted Lie algebra. We denote by $u(\mathfrak g)$ the restricted universal enveloping algebra of $\mathfrak g$. As for ordinary Lie algebras, restricted $\mathfrak g$-modules are in one-to-one correspondence with $u(\mathfrak g)$-modules, i.e., a vector space $M$ is a restricted $\mathfrak g$-module if and only if there exists an algebra map $T: u(\mathfrak g)\to \ensuremath{\text{\upshape End}}_\ensuremath{\bold{k}}(M)$.
\begin{proposition}\label{Liealgebrainclusion}
Let $\mathfrak g$ be a restricted Lie algebra with basis $\{x_1,x_2,\cdots,x_n\}$. Then the image of
\begin{align*}
\left\{\omega(x_i),\ x_j\otimes x_k\ |\ 1\le i\le n,1\le j<k\le n\right\}
\end{align*}
forms a basis of $\ensuremath{\text{\upshape H}}^2\left(\ensuremath{\bold{k}},u(\mathfrak g)\right)$.
\end{proposition}
\begin{proof}
Denote $K=u\left(\mathfrak g\right)$ and let $C_p^n$ be the elementary abelian $p$-group of rank $n$. It is clear that $K^*$ is isomorphic to $\ensuremath{\bold{k}} [C_p^n]$ as algebras. Then it follows from, e.g., \cite[P. 558 (4.1)]{QAST} that $\dim \ensuremath{\text{\upshape H}}^2(K^*, \ensuremath{\bold{k}})=\dim \ensuremath{\text{\upshape H}}^2(C_p^n, \ensuremath{\bold{k}})=n(n+1)/2$. Thus by Lemma \ref{DimExtCo}, $\dim \ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)=n(n+1)/2$. First, it is direct to check that all $\omega(x_i)$ and $x_j\otimes x_k$ are cocycles in $\Omega K$. We only check for $x_j\otimes x_k$ here. Notice that $d^2=1\otimes I\otimes I-\Delta\otimes I+I\otimes \Delta-I\otimes I\otimes 1$. Thus
\begin{align*}
d^2\left(x_j\otimes x_k\right)&=1\otimes x_j\otimes x_k-\Delta(x_j)\otimes x_k+x_j\otimes \Delta(x_k)-x_j\otimes x_k\otimes 1\\
&=1\otimes x_j\otimes x_k-(x_j\otimes 1+1\otimes x_j)\otimes x_k+x_j\otimes(x_k\otimes 1+1\otimes x_k)-x_j\otimes x_k\otimes 1\\
&=0.
\end{align*}
Secondly, we need to show they are linearly independent in $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)=\ensuremath{\text{\upshape Ker}}\ d^2/\ensuremath{\text{\upshape Im}}\ d^1$. We only deal with the case when $p\ge 3$. The remaining case of $p=2$ is similar. By the \ensuremath{\text{\upshape PBW}}\ Theorem, $K$ has a basis formed by
\begin{align*}
\left\{x_1^{i_1}\ x_2^{i_2}\cdots x_n^{i_n}\ |\ 0\le i_1,i_2,\cdots,i_n\le p-1\right\}.
\end{align*}
Because the differential $d^1=1\otimes I-\Delta+I\otimes 1$ in $\Omega K$ only uses the comultiplication, without loss of generality, we can assume $\mathfrak g$ to be abelian. Suppose each variable $x_i$ of $K$ has degree one. Assign the usual total degree to any monomial in $K$. Also the total degree of a tensor product $A\otimes B$ in $K\otimes K$ is the sum of the degrees of $A$ and $B$ in $K$. Therefore $d^1$ preserves the degree from $K$ to $K\otimes K$ for any monomial. Notice that $\omega(x_i)$ has degree $p$ and $x_j\otimes x_k$ has degree two. We can treat them separately. Suppose that $\sum_i \alpha_i\omega(x_i)\in \ensuremath{\text{\upshape Im}} d^1$. First, we consider the ideal $I=(x_2,\cdots,x_n)$ in $K$. By passing to the quotient $K/I$, we have $\alpha_1\omega(\overline{x_1})\in \ensuremath{\text{\upshape Im}}\ \overline{d^1}$, where $\overline{d^1}: K/I\to K/I\otimes K/I$. But every monomial in $K/I$, which is generated by $x_1$, has degree less than $p$. This forces that $\alpha_1=0$. The same argument works for all the coefficients. Now suppose $\sum_{j< k} \alpha_{jk}x_j\otimes x_k\in \ensuremath{\text{\upshape Im}}\ d^1$. Therefore there exists $\sum_{j\le k} \lambda_{jk}x_jx_k\in K$ such that
\begin{align*}
\sum_{j< k} \alpha_{jk}\ x_j\otimes x_k&=d^1\left(\sum_{j\le k} \lambda_{jk}\ x_jx_k\right)\\
&=\sum_{j\le k}\lambda_{jk} \left(1\otimes x_jx_k-\Delta(x_jx_k)+x_jx_k\otimes 1\right)\\
&=-\sum_{j\le k}\lambda_{jk}\left(x_j\otimes x_k+x_k\otimes x_j\right).
\end{align*}
By applying the \ensuremath{\text{\upshape PBW}}\ Theorem to $K\otimes K$, we have all the coefficients equal zero. This completes the proof.
\end{proof}
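The cocycle check $d^2(x_j\otimes x_k)=0$ performed in the proof can be reproduced mechanically in the commutative case by giving each tensor slot its own set of commuting variables; this slot encoding is an illustrative device, not part of the argument.

```python
import sympy as sp

# Slot-variable model of the tensor cube: slot i carries variables (x_i, y_i),
# and x, y are primitive, so Δ(x) = x⊗1 + 1⊗x and likewise for y.
x1, y1, x2, y2, x3, y3 = sp.symbols('x1 y1 x2 y2 x3 y3')

u = x1 * y2  # represents x ⊗ y in slots 1 and 2

one_u   = u.subs({x1: x2, y1: y2, x2: x3, y2: y3}, simultaneous=True)          # 1 ⊗ u
delta_l = u.subs({x1: x1 + x2, y1: y1 + y2, x2: x3, y2: y3}, simultaneous=True)  # (Δ⊗I)u
delta_r = u.subs({x2: x2 + x3, y2: y2 + y3}, simultaneous=True)                # (I⊗Δ)u

d2_u = sp.expand(one_u - delta_l + delta_r - u)  # d^2 = 1⊗I₂ - Δ⊗I + I⊗Δ - I₂⊗1
print(d2_u)  # 0
```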
\begin{lemma}\label{combinecohomologyclass}
Let $\mathfrak g$ be a restricted Lie algebra. Then the cocycle
\begin{align*}
\sum_{i=1}^{n} \alpha_i^p\ \omega\left(x_i\right)-\omega\left(\sum_{i=1}^n \alpha_i\ x_i\right)
\end{align*}
is zero in $\ensuremath{\text{\upshape H}}^2\left(\ensuremath{\bold{k}},u(\mathfrak g)\right)$, where $x_i\in \mathfrak g$ and $\alpha_i\in \ensuremath{\bold{k}}$ for all $1\le i\le n$.
\end{lemma}
\begin{proof}
Denote by $K$ the restricted universal enveloping algebra of $\mathfrak g$. First, it is direct to check that $\omega(x)$ is a cocycle in $(\Omega K,d)$ for any $x\in \mathfrak g$. Hence the expression in the statement is also a cocycle in $(\Omega K,d)$. We only need to show that it lies in the coboundary $\ensuremath{\text{\upshape Im}}\ d^1$. Without loss of generality, we can assume $\mathfrak g$ to be finite-dimensional. Because $\ensuremath{\bold{k}}$ is algebraic over $\mathbb F_p$, we can replace $\ensuremath{\bold{k}}$ with some finite field $\mathbb F_q$. By basic algebraic number theory, there exists some number field $L\supset \mathbb Q$ in which $p$ remains prime in the ring of integers $\mathcal O_L$ and $\mathcal O_L/(p)=\mathbb F_q$. Now by choosing representatives for $\mathbb F_q$ in $\mathcal O_L$, we can view $\mathfrak g$ as a free module over $\mathcal O_L$ with a Lie bracket $[\ ,\ ]$ representing all the relations between a chosen basis for $\mathfrak g$. Denote by $A=\mathcal U(\mathfrak g)$ the universal enveloping algebra of $\mathfrak g$ over $\mathcal O_L$, which is a Hopf algebra as usual. There is a quotient map $\pi: A\to u(\mathfrak g)$, which factors through $A/(p)$. Therefore it suffices to prove that for any $x,y\in \mathfrak g$, there exists some $\Theta\in A$ such that
\begin{align}\label{addomega}
\omega(x)+\omega(y)-\omega(x+y)=1\otimes \Theta-\Delta(\Theta)+\Theta\otimes 1.
\end{align}
The general result then follows by applying the quotient map $\pi$ to \eqref{addomega} and by induction on the number of variables appearing in the expression. By Lemma \ref{palgebra}, in $A\otimes_{\mathcal O_L}\mathcal O_L/(p)=A\otimes_{\mathcal O_L}\mathbb F_q=A/(p)$, there exists some $z\in \mathfrak g$ such that
\begin{align*}
(x+y)^p=x^p+y^p+z.
\end{align*}
So back in $A$, we have some $\Theta \in A$ such that
\begin{align*}
(x+y)^p=x^p+y^p+z+p\ \Theta.
\end{align*}
Thus in $A$, we can calculate $\Delta\left((x+y)^p\right)$ in two different ways:
\begin{align*}
\Delta\left((x+y)^p\right)&=\left(\Delta(x+y)\right)^p\tag{I}\\
&=\left((x+y)\otimes 1+1\otimes (x+y)\right)^p\\
&=(x+y)^p\otimes 1+1\otimes (x+y)^p+p\ \omega(x+y)\\
&=(x^p+y^p+z)\otimes 1+1\otimes (x^p+y^p+z)+p\ \Theta \otimes 1+1\otimes p\ \Theta+p\ \omega(x+y).
\end{align*}
On the other hand,
\begin{align*}
\Delta\left((x+y)^p\right)&=\Delta\left(x^p+y^p+z+p\ \Theta \right)\tag{II}\\
&=x^p\otimes 1+1\otimes x^p+p\ \omega (x)+y^p\otimes 1+1\otimes y^p+p\ \omega(y)+z\otimes 1+1\otimes z+p\ \Delta(\Theta)\nonumber\\
&=(x^p+y^p+z)\otimes 1+1\otimes (x^p+y^p+z)+p\ \omega(x)+p\ \omega(y)+p\ \Delta(\Theta).\nonumber
\end{align*}
Therefore we have the following identity in $A\otimes A$:
\begin{align*}
p\ \{\omega(x)+\omega(y)-\omega(x+y)\}=p\ \{1\otimes \Theta-\Delta(\Theta)+\Theta\otimes 1\}.
\end{align*}
Since $A$ is a domain, we can cancel $p$ from both sides. This completes the proof.
\end{proof}
\begin{definition}
Let $H$ be a Hopf algebra. For any $x\in H$, define the adjoint map $T_x$ on $\Omega H$ by
\begin{align*}
T^n_x=\sum_{i=0}^{n-1} I_i\otimes \ensuremath{\text{\upshape ad}} (x)\otimes I_{n-i-1},
\end{align*}
where $\ensuremath{\text{\upshape ad}}(x)(h)=[x,h]$ for all $h\in H$.
\end{definition}
\begin{lemma}\label{chainmap}
If $H$ is any Hopf algebra, then $T_x$ is a degree zero cochain map from $\Omega H$ to itself for all $x\in \ensuremath{\text{\upshape P}}(H)$. Moreover, $\ensuremath{\text{\upshape P}}(H)=\ensuremath{\text{\upshape H}}^1(\ensuremath{\bold{k}},H)$ and $\bigoplus_{n\ge 0}\ensuremath{\text{\upshape H}}^n\left(\ensuremath{\bold{k}},H\right)$ is a graded restricted $\ensuremath{\text{\upshape P}}(H)$-module via the adjoint map.
\end{lemma}
\begin{proof}
First, for simplicity write $T=T_x$ for some $x\in \ensuremath{\text{\upshape P}}(H)$. We prove $d^nT^n=T^{n+1}d^n$ inductively for all $n\ge 0$. It is easy to check that it holds for $n=0,1$. Notice that
\begin{eqnarray*}
d^n=d^{n-1}\otimes I+(-1)^{n-1}I_{n-1}\otimes d^1,
\end{eqnarray*}
for all $n\ge 2$. Thus
\begin{align*}
&d^n T^n\\
&=\left(d^{n-1}\otimes I+(-1)^{n-1}I_{n-1}\otimes d^1\right)\left(T^{n-1}\otimes I+I_{n-1}\otimes T^1\right)\\
&=d^{n-1}T^{n-1}\otimes I+d^{n-1}\otimes T^1+(-1)^{n-1}T^{n-1}\otimes d^1+(-1)^{n-1}I_{n-1}\otimes d^1T^1\\
&=T^{n}d^{n-1}\otimes I+d^{n-1}\otimes T^1+(-1)^{n-1}T^{n-1}\otimes d^1+(-1)^{n-1}I_{n-1}\otimes T^2d^1\\
&=T^{n}d^{n-1}\otimes I+d^{n-1}\otimes T^1+(-1)^{n-1}\left(T^{n-1}\otimes I_2+I_{n-1}\otimes T^1\otimes I\right)\left(I_{n-1}\otimes d^1\right)+(-1)^{n-1}I_{n-1}\otimes (I\otimes T^1)d^1\\
&=T^{n}d^{n-1}\otimes I+d^{n-1}\otimes T^1+(-1)^{n-1}\left(T^{n}\otimes I\right)\left(I_{n-1}\otimes d^1\right)+(-1)^{n-1}I_{n-1}\otimes \left(I\otimes T^1\right)d^1\\
&=\left(T^{n}\otimes I+I_n\otimes T^1\right)\left(d^{n-1}\otimes I+(-1)^{n-1}I_{n-1}\otimes d^1\right)\\
&=T^{n+1}d^n.
\end{align*}
Therefore $T$ induces an action of $\ensuremath{\text{\upshape P}}(H)$ on $\ensuremath{\text{\upshape H}}^n(\ensuremath{\bold{k}},H)$ for each $n$. Moreover, we know $\ensuremath{\text{\upshape P}}(H)$ is a restricted Lie algebra via the $p$-th power map in $H$. It is clear that $[T_x,T_y]=T_{[x,y]}$ and $T_x^p=T_{x^p}$ for any $x,y\in \ensuremath{\text{\upshape P}}(H)$. Hence $\bigoplus_{n\ge 0}\ensuremath{\text{\upshape H}}^n\left(\ensuremath{\bold{k}},H\right)$ becomes a graded restricted $\ensuremath{\text{\upshape P}}(H)$-module via $T$. Finally, $\ensuremath{\text{\upshape P}}(H)\cong \ensuremath{\text{\upshape H}}^1(\ensuremath{\bold{k}},H)$ by definition.
\end{proof}
\begin{theorem}\label{Cohomologylemma}
Let $K\subseteq H$ be an inclusion of connected Hopf algebras with first order $n\ge 2$. Then the differential $d^1$ induces an injective restricted $\mathfrak g$-module map
\[
\xymatrix{
H_n/K_n\ar@{^(->}[r]&\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K),
}\]
where $\mathfrak g=\ensuremath{\text{\upshape P}}(H)$.
\end{theorem}
\begin{proof}
By Corollary \ref{productcoradical}, $H_n$ becomes a restricted $\mathfrak g$-module via the adjoint action since $[\ensuremath{\text{\upshape P}}(H),H_n]\subseteq [H_1,H_n]\subseteq H_n$. We know $\mathfrak g=\ensuremath{\text{\upshape P}}(H)=\ensuremath{\text{\upshape P}}(K)$ since the inclusion has first order $n\ge 2$. Hence the $\mathfrak g$-action factors through $H_n/K_n$. Choose any $x\in H_n$. We know $d^1(x)=1\otimes x-\Delta(x)+x\otimes 1\in H_{n-1}\otimes H_{n-1}=K_{n-1}\otimes K_{n-1}\subseteq K\otimes K$ by \cite[Lemma 5.3.2]{MO93}. Furthermore, we can view $(\Omega K,d_K)$ as a subcomplex of $(\Omega H,d_H)$. Then $d_K^2d_H^1(x)=d_H^2d_H^1(x)=0$. Hence $d^1(x)$ is a cocycle in $\Omega K$ and $d^1$ maps $H_n$ into $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)$. The map $d^1$ factors through $H_n/K_n$ since $d^1(K_n)\subseteq \ensuremath{\text{\upshape Im}}\ d_K^1$. To show the induced map is injective, suppose $d^1(x)\in\ensuremath{\text{\upshape Im}}\ d_K^1$. Then there exists some $y\in K$ such that $d^1(x)=d^1(y)$, which implies that $d^1(x-y)=0$. By definition, we have $x-y\in \ensuremath{\text{\upshape P}}(H)=\ensuremath{\text{\upshape P}}(K)$. Hence $x\in K\cap H_n=K_n$ by Remark \ref{BHC}. Finally, $d^1$ is compatible with the $\mathfrak g$-action on $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)$ by Lemma \ref{chainmap}.
\end{proof}
\begin{theorem}\label{HCT}
Let $\mathfrak g$ be a restricted Lie algebra with basis $\{x_1,x_2,\cdots,x_n\}$. Suppose $u(\mathfrak g)\subsetneq H$ is an inclusion of connected Hopf algebras. Then there exists some $x\in H\setminus u(\mathfrak g)$ such that
\begin{align*}
\Delta(x)=x\otimes 1+1\otimes x+\omega\left(\sum_i\alpha_ix_i\right)+\sum_{j<k}\alpha_{jk}x_j\otimes x_k
\end{align*}
with coefficients $\alpha_i,\alpha_{jk}\in \ensuremath{\bold{k}}$. Moreover, the first order for the inclusion can only be $1$, $2$ or $p$.
\end{theorem}
\begin{proof}
Denote by $d$ the first order for the inclusion. By definition, $d=1$ implies that $\mathfrak g\subsetneq \ensuremath{\text{\upshape P}}(H)$. Then we can find some primitive element $x\in \ensuremath{\text{\upshape P}}(H)\setminus \mathfrak g\subseteq H\setminus u(\mathfrak g)$ such that $\Delta(x)=x\otimes 1+1\otimes x$. In the following, we may assume $d\ge 2$. By Theorem \ref{Cohomologylemma} and Proposition \ref{Liealgebrainclusion}, there exists $x\in H_d\setminus u(\mathfrak g)$ such that
\begin{align*}
1\otimes x-\Delta(x)+x\otimes 1=d^1(x)=-\sum_i \alpha_i^p\ \omega(x_i)-\sum_{j<k}\alpha_{jk}\ x_j\otimes x_k.\tag{I}\label{E1}
\end{align*}
By the choice of $x$, we know the coefficients are not all zero. By Lemma \ref{combinecohomologyclass}, there exists some $y\in u(\mathfrak g)$ such that
\begin{align*}
1\otimes y-\Delta(y)+y\otimes 1=d^1(y)=\sum_{i}\alpha_i^p\ \omega(x_i)-\omega\left(\sum_{i}\alpha_i\ x_i\right).&\tag{II}\label{E2}
\end{align*}
If we add \eqref{E1} to \eqref{E2}, then we have
\begin{align*}
(x+y)\otimes 1-\Delta(x+y)+1\otimes (x+y)=-\omega\left(\sum_{i}\alpha_i\ x_i\right)-\sum_{j<k}\alpha_{jk}\ x_j\otimes x_k.
\end{align*}
This implies that
\[
\Delta(x+y)=(x+y)\otimes 1+1\otimes (x+y)+\omega\left(\sum_{i}\alpha_i\ x_i\right)+\sum_{j<k}\alpha_{jk}\ x_j\otimes x_k.
\]
It is clear that $x+y\in H\setminus u(\mathfrak g)$. Finally, because the associated graded Hopf algebra $\ensuremath{\text{\upshape gr}} H$ is coradically graded as mentioned in \cite[Def. 1.13]{Andruskiewitsch02pointedhopf}, it is easy to see that if all $\alpha_i=0$ then $d=2$. Otherwise $d=p$. Hence the first order $d$ can only be $1$, $2$ or $p$. This completes the proof.
\end{proof}
\section{Connected Hopf algebras of dimension $p^2$}
The starting point for classifying finite-dimensional connected Hopf algebras turns out to be when the dimension of the Hopf algebra is just $p$. Such Hopf algebras are primitively generated, i.e., generated by a single primitive element $x$. As a consequence of the characteristic of the base field, $x^p$ is still primitive. This implies that $x^p=\lambda x$ for some $\lambda\in\ensuremath{\bold{k}}$, since the dimension of the primitive space is one. By rescaling the variable, we can always assume the coefficient $\lambda$ to be zero or one. Thus we have the following result:
\begin{theorem}\label{D1}
All connected Hopf algebras of dimension $p$ are isomorphic to either $\ensuremath{\bold{k}}[x]/(x^p)$ or $\ensuremath{\bold{k}}[x]/(x^p-x)$, where $x$ is primitive.
\end{theorem}
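The step in the preceding argument that $x^p$ remains primitive rests on the identity $(x\otimes 1+1\otimes x)^p=x^p\otimes 1+1\otimes x^p$ in characteristic $p$, i.e., on $p$ dividing every middle binomial coefficient $\binom{p}{i}$ with $0<i<p$. A quick stdlib check (illustrative only, not part of the text):

```python
from math import comb

# All middle binomial coefficients C(p, i), 0 < i < p, vanish mod p for
# prime p, so the Frobenius map preserves primitivity in characteristic p.
frobenius_ok = all(comb(p, i) % p == 0
                   for p in (2, 3, 5, 7, 11)
                   for i in range(1, p))
print(frobenius_ok)  # True
```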
\begin{corollary}
All local Hopf algebras of dimension $p$ are isomorphic to $\ensuremath{\bold{k}}[x]/(x^p)$ with comultiplication either $\Delta(x)=x\otimes 1+1\otimes x$ or $\Delta(x)=x\otimes 1+1\otimes x+x\otimes x$.
\end{corollary}
\begin{proof}
By Proposition \ref{BPH}(1), $p$-dimensional local Hopf algebras are in one-to-one correspondence with $p$-dimensional connected Hopf algebras by vector space duality. Therefore by Theorem \ref{D1}, there are two non-isomorphic classes of local Hopf algebras of dimension $p$. It is clear that $\ensuremath{\bold{k}} [x]/(x^p)$ is a local algebra of dimension $p$. Regarding the coalgebra structure, when $\Delta(x)=x\otimes 1+1\otimes x$, it is connected. When $\Delta(x)=x\otimes 1+1\otimes x+x\otimes x$, we have $\Delta(x+1)=(x+1)\otimes (x+1)$, so $x+1$ is a group-like element and the coalgebra is cosemisimple. The two are therefore non-isomorphic as coalgebras.
\end{proof}
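The group-like computation in the proof can be checked symbolically by writing $x\otimes 1$ and $1\otimes x$ as commuting variables $x_1$, $x_2$; this slot encoding is an illustrative device, not part of the argument.

```python
import sympy as sp

# Slot-variable model of H ⊗ H for commutative H: x1 stands for x⊗1
# and x2 for 1⊗x.  Check that x+1 is group-like when
# Δ(x) = x⊗1 + 1⊗x + x⊗x.
x1, x2 = sp.symbols('x1 x2')

delta_x = x1 + x2 + x1 * x2                  # Δ(x)
delta_g = sp.expand(delta_x + 1)             # Δ(x+1) = Δ(x) + 1⊗1
group_like = sp.expand((x1 + 1) * (x2 + 1))  # (x+1) ⊗ (x+1)
print(delta_g == group_like)  # True
```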
In the rest of the section, we concentrate on the classification of connected Hopf algebras of dimension $p^2$. We first consider the case when $\dim \ensuremath{\text{\upshape P}}(H)=1$. By Corollary \ref{FCLH}, we have $\ensuremath{\bold{k}}\subset K\subset H$, where $K$ is generated by some $x\in \ensuremath{\text{\upshape P}}(H)$. By Proposition \ref{BPH}(5), we know $K$ is isomorphic to the restricted universal enveloping algebra of the one-dimensional restricted Lie algebra spanned by $x$. Therefore by Proposition \ref{Liealgebrainclusion}, $\ensuremath{\text{\upshape H}}^2(\ensuremath{\bold{k}},K)$ is one-dimensional, with basis represented by the element
\begin{align*}
\omega(x)=\sum_{i=1}^{p-1}\ \frac{(p-1)!}{i!(p-i)!}\ x^i\otimes x^{p-i}.
\end{align*}
Furthermore, by Theorem \ref{HCT}, there exists some $y\in H\setminus K$ such that $\Delta\left(y\right)=y\otimes 1+1\otimes y+\omega(x)$.
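The coefficients $(p-1)!/(i!(p-i)!)$ appearing in $\omega(x)$ are exactly the integers $\binom{p}{i}/p$, the fact exploited in Lemma \ref{combinecohomologyclass} when dividing by $p$; a quick stdlib check (illustrative only):

```python
from math import comb, factorial

# p * (p-1)!/(i!(p-i)!) = binom(p, i), so the coefficients of ω(x) are the
# integers binom(p, i)/p appearing in (a+b)^p = a^p + b^p + p·ω over Z.
coeffs_ok = all(
    p * (factorial(p - 1) // (factorial(i) * factorial(p - i))) == comb(p, i)
    for p in (2, 3, 5, 7, 11, 13)
    for i in range(1, p)
)
print(coeffs_ok)  # True
```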
\begin{lemma}\label{D2P1C}
Let $H$ be a connected Hopf algebra of dimension $p^2$ with $\dim\ensuremath{\text{\upshape P}}(H)=1$. Then $H$ is isomorphic to one of the following
\begin{itemize}
\item[(1)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p)$,
\item[(2)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p-x)$,
\item[(3)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p-x,y^p-y)$,
\end{itemize}
where the coalgebra structure is given by
\begin{align}\label{comultiplicationD1P1}
\Delta(x)&=x\otimes 1+1\otimes x,\\
\Delta(y)&=y\otimes 1+1\otimes y+\omega(x).\notag
\end{align}
\end{lemma}
\begin{proof}
By the previous argument, we can find elements $x,y\in H$ with the comultiplication given in \eqref{comultiplicationD1P1}. They generate a Hopf subalgebra of $H$ by Remark \ref{BHC}, which properly contains the $p$-dimensional Hopf subalgebra generated by $x$; since its dimension divides $\dim H=p^2$, it equals $H$, i.e., $H$ is generated by $x$ and $y$. It is clear that $[x,y]$ is primitive since
\begin{align*}
\Delta\left(\left[x,y\right]\right)&=\left[\Delta(x),\Delta(y)\right]\\
&=\left[x\otimes 1+1\otimes x,y\otimes 1+1\otimes y+\omega\left(x\right)\right]\\
&=\left[x,y\right]\otimes 1+1\otimes \left[x,y\right].
\end{align*}
Since the primitive space of $H$ is spanned by $x$, we can write $[x,y]=\lambda x$ for some $\lambda\in \ensuremath{\bold{k}}$, which implies that $[x^n,y]=n\lambda\ x^n$ for any $n\ge 1$. Therefore we can show that
\begin{align}\label{commP2xy}
\left[\omega(x),y\otimes 1+1\otimes y\right]&=\left[\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\ x^i\otimes x^{p-i}\ ,\ y\otimes 1+1\otimes y\right]\\
&=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\ \left([x^i,y]\otimes x^{p-i}+x^i\otimes [x^{p-i}, y]\right)\notag\\
&=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\ \left(i\lambda\ x^i\otimes x^{p-i}+x^i\otimes (p-i)\lambda\ x^{p-i}\right)\notag\\
&=\sum_{i=1}^{p-1} \frac{p!}{i!(p-i)!}\lambda\ x^i\otimes x^{p-i}\notag\\
&=0\notag.
\end{align}
Since $\omega(x)^p=\omega(x^p)$, we have
\begin{align}\label{D2PY}
\Delta\left(y^p\right)=\left(y\otimes 1+1\otimes y+\omega(x)\right)^p=y^p\otimes 1+1\otimes y^p+\omega(x^p).
\end{align}
By Theorem \ref{D1}, we can assume that $x^p=0$ or $x^p=x$. When $x^p=0$, according to the above equation \eqref{D2PY}, $y^p$ is primitive. Then we can write $y^p=\mu x$ for some $\mu\in \ensuremath{\bold{k}}$. Thus $\lambda^p x=x\ \ensuremath{\text{\upshape ad}}(y)^p=[x,y^p]=[x,\mu x]=0$, which implies that $\lambda=0$. By further rescaling of the variables, we can assume $\mu$ to be either one or zero, which yields the first two classes. On the other hand, when $x^p=x$, by \eqref{D2PY} again, $y^p-y$ is primitive. Then we can write $y^p=y+\mu x$ for some $\mu \in \ensuremath{\bold{k}}$. Moreover, $[x,y]=[x^p,y]=\ensuremath{\text{\upshape ad}}(x)^py=0$. After the linear translation $y=y'+\sigma x$, where $\sigma$ satisfies $\sigma^p=\sigma+\mu$, we have $y'^p=y'$ while $\Delta(y')=y'\otimes 1+1\otimes y'+\omega(x)$. This gives the third class. It remains to show that these three Hopf algebras are pairwise non-isomorphic. The first two are local with different numbers of minimal generators, and the third one is semisimple. Hence they are non-isomorphic as algebras. This completes the classification.
\end{proof}
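The key identity $\Delta(y^p)=y^p\otimes 1+1\otimes y^p+\omega(x^p)$ used in the proof can be spot-checked for a small prime in the commutative case, modelling the two tensor slots by commuting variables. This sketch is illustrative only; sympy and the slot encoding are assumptions of the sketch, not part of the text.

```python
import sympy as sp

# Check Δ(y)^p ≡ y^p⊗1 + 1⊗y^p + ω(x^p) (mod p) for p = 3, with
# x1 = x⊗1, x2 = 1⊗x and likewise y1, y2 (commutative slot model).
p = 3
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f = sp.factorial

def omega(a, b):
    # ω(a,b) = Σ_{i=1}^{p-1} (p-1)!/(i!(p-i)!) · a^i b^{p-i}
    return sum(f(p - 1) / (f(i) * f(p - i)) * a**i * b**(p - i)
               for i in range(1, p))

delta_y = y1 + y2 + omega(x1, x2)   # Δ(y) = y⊗1 + 1⊗y + ω(x)
lhs = sp.expand(delta_y**p)
rhs = sp.expand(y1**p + y2**p + omega(x1**p, x2**p))
identity_mod_p = all(int(c) % p == 0
                     for c in sp.Poly(lhs - rhs, x1, x2, y1, y2).coeffs())
print(identity_mod_p)  # True
```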
Finally, the classification for connected Hopf algebras of dimension $p^2$ follows:
\begin{theorem}\label{D2}
Let $H$ be a connected Hopf algebra of dimension $p^2$. When $\dim \ensuremath{\text{\upshape P}}(H)=2$, it is isomorphic to one of the following:
\begin{itemize}
\item[(1)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p,y^p\right)$,
\item[(2)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-x,y^p\right)$,
\item[(3)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-y,y^p\right)$,
\item[(4)] $\ensuremath{\bold{k}}\left[x,y\right]/\left(x^p-x,y^p-y\right)$,
\item[(5)] $\ensuremath{\bold{k}}\langle x,y\rangle/\left([x,y]-y,x^p-x,y^p\right)$,
\end{itemize}
where $x,y$ are primitive.
When $\dim \ensuremath{\text{\upshape P}}(H)=1$, it is isomorphic to one of the following:
\begin{itemize}
\item[(6)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p)$,
\item[(7)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p,y^p-x)$,
\item[(8)] $\ensuremath{\bold{k}}\left[x,y\right]/(x^p-x,y^p-y)$,
\end{itemize}
where $\Delta\left(x\right)=x\otimes 1+1\otimes x$ and $\Delta\left(y\right)=y\otimes 1+1\otimes y+\omega(x)$.
\end{theorem}
\begin{proof}
By Proposition \ref{BPH}(6), we know $\dim \ensuremath{\text{\upshape P}}(H)\le 2$. If $\dim\ensuremath{\text{\upshape P}}(H)=2$, then $H$ is primitively generated and $H\cong u(\mathfrak g)$ for some two-dimensional restricted Lie algebra $\mathfrak g$ by Proposition \ref{BPH}(5). Therefore Proposition \ref{D2Lie} provides the classification. When $\dim \ensuremath{\text{\upshape P}}(H)=1$, the classification follows directly from Lemma \ref{D2P1C}. Finally, it is clear that the Hopf algebras given in (1)-(5) are non-isomorphic to the ones given in (6)-(8), since their primitive spaces have different dimensions. The Hopf algebras in (1)-(5) are pairwise non-isomorphic as algebras, and so are those in (6)-(8). This completes the proof.
\end{proof}
\begin{corollary}\label{localp2}
Let $H$ be a local Hopf algebra of dimension $p^2$. Then $H$ is isomorphic to either $\ensuremath{\bold{k}}\left[\xi, \eta\right]\big/ (\xi^p, \eta^p)$ or $\ensuremath{\bold{k}}\left[\xi\right]\big/(\xi^{p^2})$ as algebras. When $H\cong \ensuremath{\bold{k}}\left[\xi, \eta\right]\big/ (\xi^p, \eta^p)$, the coalgebra structure is given by one of the following:
\begin{itemize}
\item[(1)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi,\\ \Delta\left(\eta\right)=\eta\otimes 1+1\otimes \eta$,
\item[(2)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi+\xi\otimes \xi,\\ \Delta\left(\eta\right)=\eta\otimes 1+1\otimes \eta$,
\item[(3)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi,\\ \Delta\left(\eta\right)=\eta\otimes 1+1\otimes \eta+\omega\left(\xi\right)$,
\item[(4)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi+\xi\otimes \xi,\\ \Delta\left(\eta\right)=\eta\otimes 1+1\otimes \eta+\eta\otimes \eta$,
\item[(5)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi+\xi\otimes \xi,\\ \Delta\left(\eta\right)=\eta\otimes 1+1\otimes \eta+\xi\otimes \eta$.
\end{itemize}
When $H\cong \ensuremath{\bold{k}}\left[\xi\right]\big/ (\xi^{p^2})$, the coalgebra structure is given by
\begin{itemize}
\item[(6)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi$,
\item[(7)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi+\omega\left(\xi^p\right)$,
\item[(8)] $\Delta\left(\xi\right)=\xi\otimes 1+1\otimes \xi+\xi\otimes \xi$.
\end{itemize}
\end{corollary}
\begin{proof}
Denote the dual Hopf algebra of $H$ by $H^*$. By Proposition \ref{BPH}(1), $H^*$ is a connected Hopf algebra of dimension $p^2$. When $\dim \ensuremath{\text{\upshape P}}(H^*)=2$, as shown in Theorem \ref{D2}, there are five non-isomorphic classes for $H^*$. By duality, there are also five non-isomorphic classes for $H$. Furthermore, from Proposition \ref{BPH}(4), $\dim J/J^2=\dim \ensuremath{\text{\upshape P}}(H^*)=2$, where $J$ is the Jacobson radical of $H$. Notice that $H^*$ is cocommutative. Then $H$ is commutative and we have $H\cong \ensuremath{\bold{k}}[\xi,\eta]/(\xi^p,\eta^p)$ by \cite[Thm. 14.4]{GTM66}. It is easy to check that the coalgebra structures given in $(1)$-$(5)$ are non-isomorphic. The same argument applies to the other case. Theorem \ref{D2} shows that when $\dim \ensuremath{\text{\upshape P}}(H^*)=1$, there are three non-isomorphic classes. Since $\dim J/J^2=\dim \ensuremath{\text{\upshape P}}(H^*)=1$, $H$ is isomorphic to $\ensuremath{\bold{k}}[\xi]/(\xi^{p^2})$ as algebras. Since those given in $(6)$-$(8)$ are pairwise non-isomorphic as coalgebras, they complete the list.
\end{proof}
\begin{remark}
In fact, the Hopf algebras in Corollary \ref{localp2} (1)-(8) are in one-to-one correspondence with those in Theorem \ref{D2} (1)-(8) via duality. Below, in each case, we describe the generator(s) $\xi,\ \eta$ as linear functional(s) on the basis $\{x^i\ y^j\, |\, 0\le i,j\le p-1\}$.
\begin{align}
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i=1,\ j=0\\
0 & \mbox{otherwise}\\
\end{cases},\quad
\eta\left(x^iy^j\right)=
\begin{cases}
1 & i=0,\ j=1\\
0 & \mbox{otherwise}\tag{1}\\
\end{cases}\\
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i\neq 0,\ j=0\\
0 & \mbox{otherwise}\\
\end{cases},\quad
\eta\left(x^iy^j\right)=
\begin{cases}
1 & i=0,\ j=1\\
0 & \mbox{otherwise}\tag{2}\\
\end{cases}\\
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i=1,\ j=0\\
0 & \mbox{otherwise}\\
\end{cases},\quad
\eta\left(x^iy^j\right)=
\begin{cases}
-1 & i=0,\ j=1\\
0 & \mbox{otherwise}\tag{3}\\
\end{cases}\\
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i\neq 0,\ j=0\\
0 & \mbox{otherwise}\\
\end{cases},\quad
\eta\left(x^iy^j\right)=
\begin{cases}
1 & i=0,\ j\neq 0\\
0 & \mbox{otherwise}\tag{4}\\
\end{cases}\\
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i\neq 0,\ j=0\\
0 & \mbox{otherwise}\\
\end{cases},\quad
\eta\left(x^iy^j\right)=
\begin{cases}
1 & j=1\\
0 & \mbox{otherwise}\tag{5}\\
\end{cases}\\
&\xi\left(x^iy^j\right)=
\begin{cases}
1 & i=1,\ j=0\\
0 & \mbox{otherwise}\tag{6-8}
\end{cases}.
\end{align}
\end{remark}
\begin{theorem}\label{centerP1}
Let $H$ be a finite-dimensional connected Hopf algebra with $\dim\ensuremath{\text{\upshape P}}(H)=1$. Then the center of $H$ contains $\ensuremath{\text{\upshape P}}(H)$.
\end{theorem}
\begin{proof}
Suppose $\ensuremath{\text{\upshape P}}(H)$ is spanned by $x$. By Corollary \ref{FCLH}, $H$ has an increasing sequence of normal Hopf subalgebras:
\begin{align*}
\ensuremath{\bold{k}}=N_0\subset N_1\subset N_2\subset \cdots \subset N_n=H
\end{align*}
such that $N_1$ is generated by $x$ and $N_{n-1}\subset H$ is normal of $p$-index one. We show by induction on $n$ that the center of $H$ contains $x$. It is trivial when $n=1$. Assume that $n\ge 2$. Then by Theorem \ref{Cohomologylemma}, we can find some $y\in H\setminus N_{n-1}$ such that $\Delta(y)=y\otimes 1+1\otimes y+u$, where $u\in N_{n-1}\otimes N_{n-1}$; moreover, $y$ together with $N_{n-1}$ generates $H$. Applying Theorem \ref{FREENESS} to $N_{n-1}\subset H$, we have $y^p+\lambda\ y+a=0$ for some $\lambda \in \ensuremath{\bold{k}}$ and $a\in N_{n-1}$.
By induction, $x\in Z(N_{n-1})$. Then it suffices to show $[x,y]=0$. It is easy to check that $[x,y]$ is primitive. Therefore we can write $[x,y]=\mu x$ for some $\mu \in \ensuremath{\bold{k}}$. By rescaling, we can further assume either $x^p=0$ or $x^p=x$. When $x^p=0$, by Theorem \ref{NPLA}, $H$ is local. Then its quotient $H/N_{n-1}^+H$, which is generated by the image of $y$, is local too. Hence the image of $y$ in $H/N_{n-1}^+H$ is nilpotent since it is primitive. Thus in the relation $y^p+\lambda\ y+a=0$, we must have $\lambda=0$ and $y^p+a=0$. A calculation therefore shows that $\mu^px=x(\ensuremath{\text{\upshape ad}} y)^p=[x,y^p]=[x,-a]=0$ which implies that $[x,y]=\mu x=0$. When $x^p=x$, we have $[x,y]=[x^p,y]=(\ensuremath{\text{\upshape ad}} x)^py=0$. This completes the proof.
\end{proof}
\section{Introduction}
The LHCb experiment has been conceived to study CP violation and other rare
phenomena in the B-meson decays with very high precision. The experiment
is at the moment in the last phase of its construction, and is expected
to be fully operational when the LHC machine delivers its
first pp collisions at 14 TeV in the summer of 2008. Figure~\ref{LHCb}
shows the layout of the detector, of which a detailed description can be
found in~\cite{reop}. The detector has been designed to cope with
an instantaneous luminosity up to $\sim 5.10^{32}\rm cm^{-2}s^{-1}$, and
a total radiation dose corresponding to $\sim 20\rm~fb^{-1}$. After an initial
shake down of the detector in 2008, the aim is to look for New Physics (NP)
signatures compatible with luminosities around $\sim 0.5 \rm~fb^{-1}$.
Over the next four to five years, LHCb will accumulate $\sim 10\rm~fb^{-1}$
to exploit the full physics program envisaged for the present detector.
In the next section a selection will be presented of the expected performance
of LHCb within the aforementioned luminosity range.
As mentioned above, LHCb will run at luminosities a factor 20-50 below the
$10^{34}\rm cm^{-2}s^{-1}$ design luminosity of the LHC. The machine optics at the LHCb interaction point do allow the beams to be focused sufficiently to run at luminosities a factor of ten larger.
Hence, the upgrade of LHCb is purely a question of the detector being
able to profit from a higher peak luminosity.
Section~\ref{sect:lumi} will describe the conditions as a function of
the delivered peak luminosity, and the limitations of LHCb to efficiently
exploit an increase in luminosity.
The baseline upgrade scenario of the detector to SuperLHCb will be
discussed in section~\ref{sect:super}, followed by expectations of
yields for some selected physics channels in comparison with the proposed
SuperKEKB performance in section~\ref{sect:yield}. The conclusions will be
presented in section~\ref{sect:conclusions}.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{y-LHCb-reoptimized.eps}
\caption{LHCb detector layout, showing the Vertex Locator (VELO),
the dipole magnet, the two RICH detectors, the four tracking stations
TT, T1-T3, the Scintillating Pad Detector (SPD), Preshower (PS), Electromagnetic (ECAL) and Hadronic (HCAL) calorimeters, and the five muon stations M1-M5.} \label{LHCb}
\end{figure*}
\section{Expected performance of LHCb}
The expected performance of LHCb is determined by generating pp interactions
using the PYTHIA 6.2 generator~\cite{pythia}, with the predefined option
MSEL=2. To extrapolate to 14 TeV CM, the value of the $p_{\rm T}^{min}$
parameter has been tuned as a function of energy to existing data~\cite{ptmin}.
The resulting charged track multiplicities in the acceptance of the
spectrometer are $\sim25\%$ larger than a similar tuning of CDF~\cite{cdf}.
The particles are propagated through a detailed detector description using
GEANT.
Pileup in a bunch crossing, and spill-over from preceding and following bunches, are included. Trigger studies have shown that the events written to
storage are dominated by $\rm b\bar{b}$-events, hence all background is assumed
to originate from $\rm b\bar{b}$-events, of which the equivalent of
about 13 minutes of LHCb running have been fully simulated.
\subsection{BR(B$_s\rightarrow \mu^+\mu^-)$}
The rare loop decay of B$_s\rightarrow \mu^+\mu^-$ is sensitive to extensions
of the Standard Model (SM) through loop corrections. Within the SM the
decay rate has been computed~\cite{brmumu} to be
BR(B$_s\rightarrow \mu^+\mu^-)=(3.4\pm 0.4)10^{-9}$. NP beyond the SM
can increase this BR. In the minimal super-symmetric extension
of the SM (MSSM)
the BR increases as $\rm tan^6\beta$, where $\rm tan\beta$ is the ratio of the
Higgs vacuum expectation values. Hence, this makes the BR sensitive to
models which prefer a relatively large $\rm tan\beta$. As an example
figure~\ref{ellis} shows the expected BR as a function of the gaugino
mass in the framework of a constrained minimal super-symmetric extension
of the SM (CMSSM)\cite{ellis}.
The experimental challenge lies in the rejection of background, which
is predominantly due to two muons which combine to form a good vertex with
a signal mass. The muons originate either from semi-leptonic B-decays, or
are due to misidentification of hadrons. LHCb combines a good
invariant mass resolution, $\sigma(\rm M_{\mu\mu})\approx 20$ MeV, and
excellent vertex resolution. In addition, the trigger can accept events
with $p_{\rm T}^{\mu}\geq 1$ GeV. Figure~\ref{mumu} shows the sensitivity~\cite{mumupap}
to BR(B$_s\rightarrow \mu^+\mu^-$) as a function of integrated luminosity.
Within the first years of running LHCb should be able to probe the whole
CMSSM parameter space for large $\rm tan\beta$ values via this rare loop decay.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{bsmumu-orig2.ps}
\caption{The CMSSM prediction for BR(B$_s\rightarrow \mu^+\mu^-)$ as a function of the gaugino mass $m_{1/2}$ from~\cite{ellis}.}
\label{ellis}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{Bsmm.eps}
\caption{The LHCb reach to observe ($3\sigma$) or discover ($5\sigma$)
the BR(B$_s\rightarrow \mu^+\mu^-)$ as a function of integrated luminosity.}
\label{mumu}
\end{figure}
\subsection{NP effects in $\rm B\rightarrow K^*(K\pi)\mu^+\mu^-$}
While it is shown in the previous section that LHCb is very sensitive to NP
effects at large $\rm tan\beta$ with a modest
integrated luminosity, this section will explore the sensitivity to small
$\rm tan\beta$ parameter space using the second transversity amplitude
$\rm A^{(2)}_T$\cite{matias} in the decay $\rm B\rightarrow K^*(K\pi)\mu^+\mu^-$.
Figure~\ref{f0502060} shows $\rm A^{(2)}_T$ as a function of the dimuon mass for
both the SM expectation, and for a representative choice of NP parameters,
notably $\rm tan\beta=5$,
which do take into account the constraints from present observations.
Note that the whole region between the shown NP curves and the SM is filled by
solutions consistent with the constraints.
The expected LHCb 95$\%$ confidence interval sensitivity for $10\rm~fb^{-1}$
has been superimposed~\cite{ulrik}, assuming our measurements will fall precisely
on the chosen NP expectation.
While $\rm 10~fb^{-1}$ might allow a hint of NP to be observed,
a tenfold increase in statistics would allow a real observation of NP
if nature has chosen this particular constellation.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{fig61.ps}
\caption{$\rm A^{(2)}_T$ as a function of the dimuon mass for the SM
(top curve), and
in the presence of NP contributions to the Wilson coefficients $C_7,~C_9$ and
$C_{10}$ as described in~\cite{matias}. The data points indicate the
expected 95$\%$ confidence level sensitivity of LHCb for an integrated
luminosity of $\rm 10~fb^{-1}$.} \label{f0502060}
\end{figure}
\subsection{NP in $b\rightarrow s\bar{s}s$ transitions}
Arguably the most intriguing hint of NP contributions to virtual loops
in B-decays comes from the discrepancy between
$\rm sin 2\beta$ measured in the time dependent CP asymmetries in $b\rightarrow c\bar{c}s$
and in $b\rightarrow s\bar{s}s$ transitions. The latter cannot proceed via
a tree diagram in the SM, and hence is sensitive to NP contributions in its
loop decay diagrams. The HFAG~\cite{hfag} averages for
$\rm sin 2\beta(B\rightarrow J/\psi K^0_S)=0.668 \pm 0.026 $, and
$\rm sin 2\beta^{eff}(B\rightarrow \phi K^0_S)=0.39 \pm 0.18 $. Although
the discrepancy is not statistically significant, all $b\rightarrow s\bar{s}s$
modes show a value of $\rm sin 2\beta^{eff}$ which is lower than the tree counterpart.
The expected sensitivity of LHCb for $\rm 10~fb^{-1}$ is
$\sigma(\rm sin 2\beta^{eff}(B\rightarrow \phi K^0_S))=\pm0.14$, while
B-factories for a combined integrated luminosity of $2~\rm ab^{-1}$ expect
an error of $\pm 0.12$ in $\rm sin 2\beta^{eff}(B\rightarrow \phi K^0_S)$.
In addition LHCb has access to measuring the time dependent CP asymmetries in
$\rm B_s$-decays, which give access to the CP violating weak phase $\phi$.
While $\phi_d^{\rm SM}({\rm B}\rightarrow J/\psi\rm K^0_S)=2\beta$,
$\phi_s^{\rm SM}({\rm B}\rightarrow J/\psi\phi)=2\chi$, which is
constrained to $-0.035^{+0.014}_{-0.006}$ by a fit to the unitarity triangle
within the SM\cite{ckmfitter}. NP in the $\rm B_s\leftrightarrow \bar{B}_s$
mixing box diagram could enhance $\phi_s$. With a modest integrated luminosity
of $0.5~\rm fb^{-1}$ LHCb is expected to reach a sensitivity of
$\sigma(\phi_s(\rm{B_s}\rightarrow J/\psi\phi))=0.046$~\cite{psiphi}. Already this sensitivity
will constrain the parameters space of many extensions of the SM~\cite{ligeti}.
The golden hadronic
counterpart is the decay $\rm B_s \rightarrow\phi\phi$, which can only proceed
via loop diagrams in the SM. In addition there is a cancellation of the
$\rm B_s$ mixing and decay phase in the SM~\cite{raidal}, so that
$\phi_s(\rm B_s\rightarrow \phi\phi)\approx 0$.
The BR$({\rm B_s}\rightarrow J/\psi\rm (\mu^+\mu^-)\phi(K^+K^-))$ is a factor eight larger than
BR(${\rm B_s}\rightarrow \rm \phi(K^+K^-)\phi(K^+K^-))$. In addition,
as will be explained in the next section, this
channel is much harder to trigger efficiently than channels with
muons in the final state.
As a consequence, LHCb expects $\sigma(\phi_s^{\phi\phi})=0.054$~\cite{phiphi}
for an integrated luminosity of $\rm 10~fb^{-1}$.
Even a factor of twenty increase in statistics will result in an
experimental error on $\phi_s^{\phi\phi}$ which is larger than the theoretical
error.
\section{The Luminosity Upgrade}
\label{sect:lumi}
Before going into detail about what prevents the LHCb detector from
profiting from larger luminosities from day one, what follows
is a brief description of the experimental environment at the LHCb
interaction point as a function of luminosity.
As already mentioned in the introduction, the LHC machine has been designed
to deliver a luminosity up to $10^{34}\rm cm^{-2}s^{-1}$ at a
General Purpose Detector (GPD). The optics
around the LHCb interaction point (P8) allows LHCb to run at a luminosity
up to $50\%$ of the luminosity available at a GPD. Hence, the nominal
LHC machine could deliver luminosities up to $5.10^{33}\rm cm^{-2}s^{-1}$
at P8\footnote{
While the nominal LHC is sufficient for the LHCb upgrade, there is
a proposal to increase the nominal luminosity of the machine to
$8.10^{34}\rm cm^{-2}s^{-1}$, the SLHC, around the middle of the next decade.
The bunch separation for LHCb will remain 25 ns, but there are two
schemes to fill the bunches. The preferred scheme will use large currents
in the even bunches, and low current in the odd bunches. Since P8
is displaced relative to the GPD interaction points by 1.5 bunch spacings,
it will result in colliding odd with even bunches in P8. In the GPD collision
points the collisions are odd$\times$odd and even$\times$even. This will
allow LHCb to choose its luminosity using the current in the odd bunches.
A GPD will ignore the odd$\times$odd interactions, since it will contribute
a luminosity at least a factor 400 smaller than what is obtained in
the even$\times$even collisions.
}.
The bunch crossing rate at P8 is given by the LHC machine to be 40.08 MHz,
while 2622 out of the theoretically possible 3564 crossings~\cite{jorgen}
have protons in both bunches.
Hence, the
maximum rate of crossings with at least one pp interaction is $\sim 30$ MHz.
The expected inelastic pp cross-section is 79.2 mb, of which 63 mb has at
least two charged particles which can be reconstructed, the so-called
visible cross-section. Figure~\ref{poisson} shows the number of crossings
with at least one visible interaction and
the mean number of visible interactions per crossing
as a function of luminosity.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{poisson-1.eps}
\caption{Top plot shows the number of crossings with visible pp-interactions as
a function of luminosity. The bottom plot shows the average number of visible
pp-interactions per crossing, for events with at least one pp-interaction.
} \label{poisson}
\end{figure}
Note that increasing the luminosity from
$(2\rightarrow 10).10^{32}\rm cm^{-2}s^{-1}$ will only increase
the mean number of interactions per crossing by a factor two, since
the number of crossings with at least one interaction increases from
$10\rightarrow 26$ MHz. While the increase in occupancy for
detectors which are only sensitive to pileup is minimal, spill-over
increases linearly with luminosity as is indicated in the bottom plot of
figure~\ref{poisson}.
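The quoted rates follow from simple Poisson statistics. The sketch below is our own illustration (not from the paper), using the 40.08 MHz crossing frequency, the 2622/3564 filled-bunch fraction and the 63 mb visible cross-section quoted above; it reproduces the $10\rightarrow 26$ MHz rise and the factor-two increase in pileup:

```python
import math

F_LHC = 40.08e6            # bunch crossing frequency [Hz]
FILLED = 2622 / 3564       # fraction of crossings with protons in both bunches
SIGMA_VIS = 63e-27         # visible pp cross-section [cm^2] (63 mb)

nu_x = F_LHC * FILLED      # rate of crossings with two filled bunches [Hz]

def mu(lumi):
    """Mean number of visible pp interactions per filled crossing."""
    return lumi * SIGMA_VIS / nu_x

def rate_visible(lumi):
    """Rate of crossings with at least one visible interaction [Hz]."""
    return nu_x * (1.0 - math.exp(-mu(lumi)))

def pileup(lumi):
    """Mean visible interactions per crossing, given at least one."""
    m = mu(lumi)
    return m / (1.0 - math.exp(-m))

for lumi in (2e32, 1e33):
    print(lumi, rate_visible(lumi) / 1e6, pileup(lumi))
# ~10.3 MHz with pileup ~1.23 at 2e32; ~26 MHz with pileup ~2.42 at 1e33
```

The mean pileup thus only roughly doubles over the fivefold luminosity increase, as stated above, while spill-over-sensitive detectors see the full linear rise.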
\subsection{The LHCb Trigger}
LHCb has a two level trigger system, called Level-0 (L0) and the High Level
Trigger (HLT). L0 is a trigger implemented in hardware, and its purpose is
to reduce the rate of crossings with interactions to below 1.1 MHz.
This is the maximum rate at which all LHCb data can be readout by the
front-end (FE) electronics.
L0 reconstructs the highest $E_{\rm T}$ hadron, electron and
photon, and the two highest $p_{\rm T}$ muons. It triggers on events with
a threshold of typically $E_{\rm T}^{\rm hadron}\gtrsim 3.5$ GeV,
$E_{\rm T}^{\rm e,\gamma}\gtrsim 2.5$ GeV, and $p_{\rm T}^{\mu}\gtrsim 1$ GeV
at $2.10^{32}\rm~ cm^{-2}s^{-1}$.
Figure~\ref{l0acc} shows the yield of L0-triggered events, normalized to
their yield at $2.10^{32}\rm~ cm^{-2}s^{-1}$ as a function of the luminosity for a
leptonic and a hadronic B-decay channel.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{acc-nplot.eps}
\caption{The L0-trigger yield as a function of luminosity for two
decay channels: $\mu\mu\rm K^*$ (open points) and $\phi\phi$ (closed points).
The total L0-trigger yield, and the contributions from the L0-hadron and muon
triggers are shown separately.
} \label{l0acc}
\end{figure}
The L0-hadron trigger absorbs $\sim 70\%$ of the L0 bandwidth at
$2.10^{32}\rm~ cm^{-2}s^{-1}$, and its threshold is already larger than
half the B-mass. The increase, mainly in the rate of visible pp interactions,
requires an increase in the threshold, and the resulting loss in efficiency
nullifies the increase in luminosity, resulting in an almost constant yield
for the hadron trigger. In contrast, the muon trigger absorbs only $\sim 15\%$ of
the available bandwidth at $2.10^{32}\rm~ cm^{-2}s^{-1}$, at which rate it already has an efficiency
around $90\%$ for leptonic B-decays. For larger luminosities the loss
in efficiency is minor, showing an almost linear dependence of its yield
on luminosity. Note that at a luminosity of $5.10^{32}\rm~ cm^{-2}s^{-1}$
about half the yield in $\rm B_s\rightarrow\phi\phi$ is due to the
muon trigger on the leptonic decay of the tagging B.
After L0, all detectors are readout, and full event building is performed
on the CPU nodes of the Event Filter Farm (EFF).
The HLT consists of a C++ application which is running
on every CPU of the EFF, which contains between 1000 and 2000 multi-core
computing nodes.
Each HLT application has access to all data in one event, and thus in principle
could execute the off-line selection algorithms, which by definition
would give it a 100$\%$ trigger efficiency. But given the 1 MHz
output rate of L0 and the limited CPU power available, the
HLT aims at rejecting the bulk of the events by using only part of the
full information which is available.
The HLT starts with so-called alleys, where
each alley addresses one of the trigger types of the L0-trigger, enriching
the B-content of the events by refining the L0 objects, and adding
impact parameter information.
If an event is selected by at least one alley,
it is processed by the inclusive triggers, where specific resonances
are reconstructed and selected, and the exclusive triggers, which aim
to fully reconstruct B-hadron final states.
Events will be written to storage with a rate of $\sim 2$ kHz.
As is shown in figure~\ref{l0acc}, even hadronic B-decays receive a
considerable fraction of their L0 efficiency due to the muon trigger which
is usually fired by a leptonic decay of the opposite B. Hence, in the 2 kHz
output rate about half is reserved for events with a large $p_{\rm T}$ muon
with a significant impact parameter.
Simulation shows that
900 Hz of single muon triggers contain $\sim 550$ Hz of true
$\rm B\rightarrow \mu X$ decays.
Figure~\ref{inclmu} illustrates the
yield of B-decays whose decay products are
fully contained in the LHCb acceptance, while the event is triggered on the
semi-leptonic B-decay of the other B-hadron.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{etab.eps}
\caption{Top plot shows the pseudo rapidity ($\eta$) correlation of a
$\rm B\bar{B}$-pair. Of all the produced $\rm B\bar{B}$-pairs (black dots), the single
muon trigger selects the pairs of which one B has its decay-muon inside the
acceptance of LHCb (green squares). The blue squares indicate the rapidity
of the other B, if all its decay products are inside the LHCb acceptance.
The bottom plot shows the fraction of ``other'' B-decays in the LHCb acceptance
as a function of the number of B-decay products.
} \label{inclmu}
\end{figure}
It shows the correlation of the $\rm B\bar{B}$-pair in pseudo rapidity,
which implies that when the lepton from a semi-leptonic decay of one of
the B-mesons is in the acceptance of LHCb, there is a $30-40\%$ probability
that all the decay products of the opposite B are also contained in the spectrometer.
Hence, just the inclusive muon trigger will already provide a rate of
$\sim 10^9/\rm fb^{-1}$ fully contained B-decays, with
a tagging performance of $\epsilon\rm D^2\approx 0.15$ due to the presence
of a large $p_{\rm T}$ muon.
The typical efficiency of the whole trigger chain for hadronic, radiative and
leptonic B-decays is 25-30$\%$, 30-40$\%$ and 60-70$\%$ respectively.
An upgrade of the trigger should not only be able to cope with
larger luminosities, but should be designed to at least gain a factor two
for hadronic B-decays like $\rm B_s\rightarrow\phi\phi$.
\subsection{Tracking and Particle Identification}
The tracking of LHCb commences by reconstructing all tracks inside the VELO.
The VELO is based on silicon (Si)-sensors, and the channel occupancy at
$\rm 2.10^{32}~\rm cm^{-2}s^{-1}$ is $\sim 1\%$, which is kept roughly constant
as a function of the radius due to the layout of the strips.
This occupancy increases to $\sim 3\%$ at $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$.
As a result the tracking performance~\cite{matt} in the VELO loses only
$2.7\%$ in efficiency for this factor ten increase in luminosity, while
using reconstruction code tuned for the low luminosities. This is as
expected from figure~\ref{poisson}, since the VELO electronics has a limited
sensitivity to spill-over, and as a consequence only $27\%$ of its
hits at $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$ are due to spill-over.
To assign momentum to the VELO tracks, every track is combined in turn with
a hit in the T-stations, located behind the magnet. Around the trajectory
defined by the VELO track and a T-hit, a search is performed in the other
tracking stations including TT. In addition there is also a stand-alone
pattern recognition performed in the T-stations, mainly to recover
$\rm K^0_S$ which decay behind the VELO. The outer part of the T-stations (OT) are
constructed
of 5 mm diameter straws, with a drift-time up to 50 ns.
Including the effect of the length of the 2.4 m long wires,
this requires a readout gate of three
consecutive crossings to obtain the maximum efficiency. As a consequence,
the OT occupancy rises from $6\rightarrow 25\%$ for a ten-fold luminosity increase
from $\rm (2\rightarrow 20).10^{32}~\rm cm^{-2}s^{-1}$.
At $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$ $60\%$ of the OT hits are due to spill-over.
Both TT and the inner part of the T-stations
(IT) are made of Si-sensors. At $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$ $44 (25)\%$ of the TT(IT) hits are due to spill-over.
The tracking performance as a function of luminosity is shown
in figure~\ref{teff}.
\begin{figure}[h]
\centering
\includegraphics[width=80mm,clip=]{teff.ps}
\caption{The tracking efficiency as a function of luminosity. Limited
spill-over uses the same spill-over as obtained at a luminosity of
$\rm 2.10^{32}~\rm cm^{-2}s^{-1}$, irrespective of the luminosity.
} \label{teff}
\end{figure}
Above a luminosity of $5.10^{32}~\rm cm^{-2}s^{-1}$ the difference between
the tracking performance with full spill-over and a spill-over equivalent to
the spill-over at $\rm 2.10^{32}~\rm cm^{-2}s^{-1}$ is clearly visible.
The per track loss in tracking efficiency from
$\rm (2\rightarrow 10).10^{32}~\rm cm^{-2}s^{-1}$ is $\sim 5\%$, which is
a small price to be paid,
even for a 4-prong decay, compared to the factor 5 increase in luminosity.
However, an additional increase of a factor two in luminosity would result
in a loss of 36$\%$ of the 4-prong decays, hence would almost eliminate the
additional factor two increase.
The electronics of the RICH detector is virtually insensitive to spill-over.
The increase in occupancy for an event with a $\rm B\bar{B}$-pair and its
pileup events is only a factor 2.5 for the factor ten increase in luminosity
to $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$. This is even a smaller increase than
shown in figure~\ref{poisson}, since pp-interactions producing a
$\rm B\bar{B}$-pair cause about twice the occupancy compared to visible
pp-interactions. The efficiency to positively identify a kaon degrades
by $10\%$ for the 10 fold increase in luminosity. The loss is dominated
by the degradation of the RICH1 performance, which has the higher
backgrounds and occupancies. In the above simulation the effect of
inefficiency due to overflowing buffers at high occupancy has not been
taken into account, since it will be argued in the next section that the
front-end
electronics will have to be replaced to allow the trigger to be upgraded.
The muon chambers and the calorimeters both have negligible sensitivity
to spill-over, and hence the increase in occupancy follows the same trend
as that of the RICH. The development of their performance as a function of
luminosity is related more to dead-time inefficiency and radiation damage than
to occupancy per bunch crossing. These effects are not simulated in the MC, and
hence no reliable performance as a function of luminosity is available.
\section{The SuperLHCb Detector}
\label{sect:super}
In the previous section it was shown that the sub-system of LHCb which
does not scale in performance with an increased luminosity is the trigger, and
in particular the trigger for hadronic B-decays which will not be able to
retain its efficiency for larger luminosities.
Since the trigger efficiency for hadronic B-decays is expected to be
$25-30\%$, the goal of an upgrade of the trigger should also be to improve
on the hadron trigger efficiency by at least a factor two.
At 14 TeV center of mass pp collisions $\sigma_{b\bar{b}}$ is assumed to be
500 $\mu$b. Hence, with a luminosity of $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$
there will be $10^6$ $b\bar{b}$-pairs produced in the LHCb interaction point
per second, of which $43\%$ will have at least one B-hadron with a polar
angle below 400 mrad, i.e. pointing in the direction of the spectrometer.
Hence, an efficient and selective trigger should already at a very large rate
be able to distinguish between wanted and unwanted B-decays.
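As a quick cross-check of these numbers (our own arithmetic, not part of the paper), the production rate is just $L\cdot\sigma_{b\bar{b}}$:

```python
lumi = 2e33               # cm^-2 s^-1
sigma_bb = 500e-30        # sigma_bbbar = 500 microbarn, in cm^2
rate_bb = lumi * sigma_bb
print(rate_bb)            # 1e6 b-bbar pairs per second

# 43% have at least one B-hadron with polar angle below 400 mrad
rate_in_acceptance = 0.43 * rate_bb
print(rate_in_acceptance)
```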
Pilot studies on improving the trigger all show that the only way
to be able to provide adequate selectivity of the trigger, and maintain
large efficiency for hadronic B-decays is to be able to measure both
the momentum and impact parameter of B-decay products simultaneously.
The present FE-architecture imposes that the detectors which do not
participate in the L0-trigger can only be read-out with a maximum event rate of
1.1 MHz, and that the L0-latency available for making the L0 decision, which
is now 1.5$\mu$s, can be stretched to a few $\mu$s at most.
The algorithms required to efficiently
select B-decays require latencies far longer than what is available with
the present architecture.
Hence, SuperLHCb has opted for a FE-architecture which
requires all sub-detectors to read-out their data at the full 40 MHz rate of
the LHC machine. The data should be transmitted over optical fibers to
an interface board (TELL40~\cite{guido}) between the FE and a large EFF. The trigger
algorithm is then executed on the EFF, much like the present HLT.
Technology tracking estimates show that by 2013 SuperLHCb should be able to
acquire sufficient CPU power to be able to perform a HLT like trigger
on a large CPU farm. However, should the EFF be undersized at the start of
SuperLHCb, the TELL40 boards will be equipped with a throttle to
prevent buffer overflows. This throttle should also include an event selection
much like the present L0, based on the data available in the TELL40 boards, to
enrich rather than just pre-scale events. At a luminosity of
$\rm 6.10^{32}~\rm cm^{-2}s^{-1}$
and an assumed CPU power able to process 5 MHz of events, the
trigger efficiency is $66\%$ for the channel
$\rm B_s\rightarrow D_s^\mp K^\pm$. For this simulation the throttle requires
at least one HCAL-cluster with $E_{\rm T}^{\rm hadron}> 3$ GeV,
which has an efficiency of $76\%$ for this signal.
Note that a 2 GeV requirement
would correspond to 10 MHz of input rate into the EFF, and would
increase the efficiency from $76\rightarrow 95\%$ at the start of the EFF;
with LHCb running at $\rm 2.10^{32}~\rm cm^{-2}s^{-1}$
the equivalent efficiency for this channel at the start of the HLT is
only $39\%$.
The upgraded FE-architecture requires that the FE-electronics of
all sub-detectors
needs to be replaced,
with the exception of the muon chambers which already have the 40 MHz capability.
The Si-detectors, which cover the areas close to the beam, will
suffer from a five fold increase in allowed radiation dose,
and hence need to be replaced by more radiation resistant technologies.
For the RICH the photon detection and the FE-electronics is combined
in a Hybrid Photo Detector, which needs to be replaced entirely.
The OT requires the replacement of its FE-boards. Running these detectors with
a slightly faster gas, combined with taking advantage of being able to
pre-process spill-over in the TELL40 boards could
reduce the occupancy from $25\rightarrow 17\%$ at
$\rm 2.10^{33}~\rm cm^{-2}s^{-1}$.
This could be combined with enlarging the coverage of IT, to reduce
the occupancy close to the beam even further.
The M1 muon chamber, which is located just before the Calorimeter, would
suffer from too high an occupancy, and will be removed. It now serves
to provide an improved momentum measurement in L0, which will no longer
be necessary.
The resolution in the inner part of the Calorimeter will degrade with radiation.
It will have to be replaced with a more radiation tolerant technology.
This might also allow LHCb to extend the calorimeter coverage down to
$\eta=5$ from the present maximum pseudo rapidity of 4.2, i.e. an increase
in coverage of 25$\%$.
Last but not least, none of the simulation studies presented attempted to
adapt the algorithms to a higher occupancy environment, hence
the results are considered to be conservative.
\section{Projected Yields of SuperLHCb}
\label{sect:yield}
The LHC machine schedule assumes that first collisions at 14 TeV will be
delivered in the summer of 2008. The top plot of figure~\ref{yield} shows
the expected integrated luminosity profile, which assumes that LHCb will
run at $\rm (2-5).10^{32}~\rm cm^{-2}s^{-1}$, and that the machine and experiment
will only slowly ramp up to the full capability in 2011.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm,clip=, angle=-90.]{yield-cor.eps}
\caption{Top plot shows the projected integrated luminosity for LHCb and
Super LHCb, compared to (Super)KEKB. The middle and bottom plots shows
the expected yield for the channels indicated, after trigger and
strict off-line selection to obtain a good Background/Signal ratio.
The yield of $\rm B_s\rightarrow\phi\phi$ and $\rm B\rightarrow\phi K_S$ has
been multiplied by
$\epsilon D^2$, 0.07 and 0.3 for LHCb and KEKB respectively,
to take into account the better tagging performance at a B-factory.
} \label{yield}
\end{figure*}
LHCb would then
run at maximum luminosity for two years, and then have a one year shutdown
to change over to the new FE-architecture in 2013. In 2014 it is assumed to run at
half of its full capability, after which it accumulates $20~\rm fb^{-1}$ per
year for the rest of the next decade.
For comparison the running scenario
of the proposed SuperKEKB~\cite{skekb} is shown in the same plot.
The closed squares (open circles) show the information of LHCb (KEKB).
The middle and bottom plots show
the expected yield for $\rm B\rightarrow K^*\mu\mu$, $\rm B_s\rightarrow\phi\phi$ and $\rm B\rightarrow\phi K_S$, after trigger and
strict off-line selection to obtain a good Background/Signal ratio.
For the channels which require tagging to perform the time dependent CP asymmetry analysis, the yield has
been multiplied by the effective tagging efficiency
$\epsilon D^2$, 0.07 and 0.3 for LHCb and KEKB respectively,
to take into account the better tagging performance at a B-factory.
\section{Conclusions}
\label{sect:conclusions}
The last few years have seen an impressive progress in precision measurements
of B-decays, notably from the B-factories and the Fermilab collider.
The remarkable agreement between CP conserving and violating observables
indicates that the main source
of CP-violation can be attributed to the KM-mechanism~\cite{km}.
In their quest to discover NP, the new generation of experiments has
to be able to detect small deviations from the SM, which requires increasingly
larger data sets. LHCb is nearing the end of its construction, and will
be ready to look for NP with a projected integrated luminosity of
around $10~\rm fb^{-1}$ in the years to come. This paper describes the
way LHCb can be upgraded to be able to have access to NP even beyond the
possibilities of the first phase of LHCb.
The main component of LHCb which prevents it from profiting from the available
nominal luminosity of the LHC machine is the hadron-trigger. Consequently
SuperLHCb will have a new FE-architecture and trigger which aims at
being able to cope with luminosities around $\rm 2.10^{33}~\rm cm^{-2}s^{-1}$,
and which will have a hadron trigger efficiency twice as large as that of the
present trigger, resulting in a twenty-fold increase in hadronic B-decays
available for analysis. In addition, the leptonic decay channels will profit
from an increase in luminosity at least linearly.
\begin{acknowledgments}
This paper would not have been possible without the work done by many of
my colleagues in LHCb, notably those who contributed to the ``1$^{st}$ LHCb
Collaboration Upgrade Workshop''~\cite{edinburgh}. I would like to thank
them all. I would like to thank Franz Muheim for carefully reading the
manuscript.
\end{acknowledgments}
\bigskip
\section{Introduction}\label{SecIntro}
For $p$ prime and $n\in\mathbb{N}=\{0,1,2,3,\ldots\}$, the exponent of the highest power of $p$ that divides $n$ is called the \textit{$p$-adic valuation of $n$}, which we denote $\nu_{p}(n)$. The valuation of $0$ is defined to be $+\infty$. Formally, the valuation of a positive integer $n$ of the form $n=p^{k}d$, where $k\in\mathbb{N}$ and $d$ is an integer not divisible by $p$, is $\nu_{p}(n)=k$. We can find $p$-adic valuations of sequences by finding the valuation of each successive term. The present work considers 2-adic valuations of sequences generated from the natural numbers by evaluating quadratic functions of the form $f(n)=an^{2}+bn+c$ where $a,b,c\in\mathbb{Z}$ and $a\neq0$.
Information about sequences of valuations can be viewed in two different ways: in terms of sequences and in terms of trees. We let $(\nu_2(f(n)))_{n\geq 0}$ denote the sequence of 2-adic valuations for the quadratic function $f(n)$. Since every positive natural number $n$ can be written in the form $n=2^k d$, where $d$ is not divisible by 2, in many cases, we can determine the valuations of outputs of the quadratic function $f(n)$ using characteristics of the coefficients $a$, $b$, and $c$. The main results are given in Theorems~\ref{MainThm1} and~\ref{MainThm2}; one would anticipate these results can be extended to odd primes with some modifications, which will be addressed in future work.
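As a concrete illustration (our own Python sketch; the function name is not from the paper), the valuation of a nonzero integer can be computed by stripping factors of $p$:

```python
def nu(p, n):
    """p-adic valuation nu_p(n) of a nonzero integer n:
    the exponent of the highest power of p dividing n."""
    assert n != 0, "nu_p(0) is defined to be +infinity"
    n = abs(n)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

# 40 = 2^3 * 5, so nu_2(40) = 3; 7 is odd, so nu_2(7) = 0
print(nu(2, 40), nu(2, 7))
```

For a quadratic $f(n)=an^{2}+bn+c$, the sequence $(\nu_{2}(f(n)))_{n\geq 0}$ is then obtained by applying `nu(2, ...)` to successive values $f(0), f(1), f(2), \ldots$, provided $f(n)\neq 0$.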
\begin{theorem}\label{MainThm1} Let $f(n)=an^{2}+bn+c$ where $a,b,c\in\mathbb{Z}$ with $a\neq 0$ and, without loss of generality, $a,b,c$ are not all even. Then
\begin{enumerate}
\item If $a$ and $b$ are even and $c$ is odd, then $\nu_{2}(f(n))=0$ for all $n\in\mathbb{N}$.
\item If $a$ is even and $b$ is odd, then $(\nu_{2}(f(n)))_{n\geq 0}$ is an unbounded sequence.
\item If $a$ is odd and $b$ is even, then
\begin{enumerate}
\item if $b^{2}-4ac=0$, then $(\nu_{2}(f(n)))_{n\geq 0}$ is an unbounded sequence;
\item if $b^{2}-4ac=4^{\ell}\Delta$ for $\ell\in\mathbb{Z}^{+}$ as large as possible and $\Delta\equiv\modd{1} {8}$, then $(\nu_{2}(f(n)))_{n\geq 0}$ is an unbounded sequence;
\item if $b^{2}-4ac=4^{\ell}\Delta$ for $\ell\in\mathbb{Z}^{+}$ as large as possible,
$\Delta\equiv\modd{m} {8}$, and $m\in\left\{2,3,5,6,7\right\}$, then the sequence $(\nu_{2}(f(n)))_{n\geq 0}$ is bounded and its minimal period length equals $2^{\ell}$.
\end{enumerate}
\item If $a$ and $b$ are odd and $c$ is even, then $(\nu_{2}(f(n)))_{n\geq 0}$ is an unbounded sequence.
\item If $a$, $b$, and $c$ are odd, then $\nu_{2}(f(n))=0$ for all $n\in\mathbb{N}$.
\end{enumerate}
\end{theorem}
Theorem~\ref{MainThm1} is proved in Section~\ref{SecInf}. Henceforward, we will refer to the \textit{minimal period length} simply as the \textit{period}. In Case 3, we use the discriminant to determine whether the roots of $f(n)=0$ lie in the 2-adic numbers $\mathbb{Q}_2$ or in the ring of 2-adic integers $\mathbb{Z}_2$. Corollary~\ref{InfSpecCor} takes care of Case 3(a). Even though the statement of this theorem only classifies these sequences as unbounded, the proofs of Cases 2 and 4 reveal more information about the 2-adic valuations. Theorem~\ref{MainThm1} represents a complete answer to when $(\nu_{2}(f(n)))_{n\geq0}$ is bounded or unbounded using only the coefficients of the quadratic polynomial. Furthermore, Theorem~\ref{MainThm1} gives an explicit period for the bounded sequences, determined by the coefficients of the quadratic polynomial. In the unbounded cases, we describe the possible valuations of certain subsequences. Such statements are easier to frame in the sense of trees, which are discussed in Section~\ref{SecParityandTrees}. Theorem~\ref{MainThm2}, proved in Sections~\ref{BoundedSection} and~\ref{SecStrucTree}, completely determines all valuations in the non-trivial bounded case (3(c) of Theorem~\ref{MainThm1}).
\begin{theorem}\label{MainThm2}
Let $f(n)=an^{2}+bn+c$ where $a,b,c\in\mathbb{Z}$. If $a$ is odd and $b$ is even and $b^{2}-4ac=4^{\ell}\Delta$ for $\ell\in\mathbb{Z}^{+}$ as large as possible with $\ell\geq2$, $\Delta\equiv\modd{m} {8}$, and $m\in\left\{2,3,5,6,7\right\}$, then the sequence $(\nu_{2}(f(n)))_{n\geq 0}$ is bounded with period equal to $2^{\ell}$. Furthermore, we have the following 2-adic valuations:
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
0,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(1-\frac{b}{2}\right)} {2};\\
2(i-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{i-1}-\frac{b}{2}\right)} {2^{i}}\ \text{with}\ 2\leq i<\ell;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=6,2;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=7,3;\\
2\ell,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=5;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=6,2;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=7,5,3;\\
\end{cases}
\end{displaymath}
where $a^{-1}$ is the inverse of $\modd{a} {2^{\ell}}$.
\end{theorem}
The case $\ell=1$ is covered by Lemma~\ref{FinLem1}. In this case, the sequence is periodic with period equal to 2. Theorem~\ref{MainThm2} is proved in Proposition~\ref{FinThm} and Corollary~\ref{FinCor}. Both of these results are an extension of the work by Byrnes et al.~\cite{Byrnes}, which only considered quadratics of the form $f(n)=an^{2}+c$. The work of Medina et al.~\cite{Medina} details conditions under which these sequences are bounded or unbounded for general primes but we extend these results for $p=2$ by providing the exact conditions on the coefficients of quadratic equations. Furthermore, we provide a closed form giving the exact valuation for the bounded sequences relying only on the coefficients of the quadratic function. Boundedness of $p$-adic valuations of polynomial sequences is also discussed in Bell's work \cite{Bell}.
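The closed form of Theorem~\ref{MainThm2} can be checked against direct computation. The sketch below (illustrative, not from the paper) uses the example $f_4(n)=5n^{2}+106n+1125$ that reappears in Section~\ref{BoundedSection}; here $b^{2}-4ac=-11264=4^{5}\cdot(-11)$, so $\ell=5$ and $m=5$. The three-argument form pow(a, -1, m) (Python 3.8+) supplies the inverse of $a$ modulo $2^{\ell}$.

```python
def nu2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

a, b, c = 5, 106, 1125        # b^2 - 4ac = -11264 = 4^5 * (-11), so l = 5, m = 5
l = 5
k = b // 2                    # b = 2k
a_inv = pow(a, -1, 2 ** l)    # inverse of a modulo 2^l

def predicted(n):
    """Valuation given by the closed form of Theorem 2 (specialised to m = 5)."""
    if n % 2 == (a_inv * (1 - k)) % 2:
        return 0
    for i in range(2, l):
        if n % 2 ** i == (a_inv * (2 ** (i - 1) - k)) % 2 ** i:
            return 2 * (i - 1)
    if n % 2 ** l == (a_inv * (2 ** (l - 1) - k)) % 2 ** l:
        return 2 * l          # m = 5 row
    return 2 * (l - 1)        # remaining class n = a^{-1}(2^l - k) mod 2^l, m = 5 row

assert all(nu2(a * n * n + b * n + c) == predicted(n) for n in range(4 * 2 ** l))
print("closed form matches direct computation")
```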
\section{Parity and trees}\label{SecParityandTrees}
Consider a quadratic function of the form $f(n)=an^2+bn+c$, where $a$, $b$, and $c$ are integers and $a$ is nonzero. To prove the results stated in Theorems~\ref{MainThm1} and~\ref{MainThm2}, we consider the eight possible cases based on the parity of the coefficients $a$, $b$, and $c$. In the case where $a$, $b$, and $c$ are all even, there exists an $i\in\mathbb{N}$ such that $2^{i}$ divides $a$, $b$, and $c$ but $2^{i+1}$ does not. Writing $a=2^{i}a_{0}$, $b=2^{i}b_{0}$, and $c=2^{i}c_{0}$, we have $f(n)=2^{i}(a_{0}n^{2}+b_{0}n+c_{0})$ and
it follows that $\nu_{2}(f(n))=i+\nu_{2}(a_{0}n^{2}+b_{0}n+c_{0})$. Hence, this case can be reduced to one of the other seven cases. So we assume, unless stated otherwise, that $a$, $b$, and $c$ are not all even.
Two more cases of Theorem~\ref{MainThm1} are trivial (Case 1 where $a,b$ are even, and Case 5, where $a,b,$ and $c$ are odd), since $\nu_{2}(f(n))=0$ for all $n\in\mathbb{N}$. For the remaining five cases, we classify the behavior using trees. In the case that $a$ is odd and $b$ is even we show, with the help of the discriminant, that $f(n)=0$ has a root in $\mathbb{Q}_2$. We must take some care since some quadratics may not have a zero in $\mathbb{Q}_2$.
As discussed in Section~\ref{SecIntro}, we can present information about the sequence of valuations using a tree. We begin the construction of the tree with the top node representing the valuation of the quadratic $f(n)$ evaluated at any natural number $n$. If the $2$-adic valuation is constant for every $n$ in this node, then we stop the construction, as $\nu_2(f(n))$ is completely determined for the sequence. If $\nu_2(f(n))$ is not constant, this node splits into two branches, where one branch represents all numbers of the form $n=2q$ and the other branch represents all numbers of the form $n=2q+1$, where in both cases $q\in\mathbb{N}$. We then repeat this step as necessary to create the tree. The nodes correspond to the sets $\{2^{i}q+r_{i-1}\mid q\in\mathbb{N}\}$ where
\begin{equation}\label{2adicRemainder}
r_{i-1}=\sum_{k=0}^{i-1}\alpha_{k}2^{k},
\end{equation}
for fixed coefficients $\alpha_{k}\in\{0,1\}$. This process does not always terminate. If it terminates, we say that the tree is \textit{finite}; otherwise, the tree is \textit{infinite}. We say a node is \textit{non-terminating} if $(\nu_{2}(f(n)))_{n\geq0}$ is non-constant for every $n$ in that equivalence class. We say a node is \textit{terminating} if $(\nu_{2}(f(n)))_{n\geq 0}$ is constant for every $n$ in that equivalence class. In practice, we label the node with this constant valuation.
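The construction just described can be prototyped by sampling: split a residue class whenever the sampled valuations disagree. The sketch below is only a heuristic (finitely many samples cannot prove a node constant), but it recovers the trees of the simple examples in this paper.

```python
def nu2(n):
    n = abs(n)
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def valuation_tree(f, depth=6, samples=32):
    """Heuristic tree builder: returns (terminating nodes, still-open nodes)."""
    frontier, leaves = [(1, 0)], []       # a node is (modulus 2^i, residue r)
    for _ in range(depth):
        new_frontier = []
        for mod, r in frontier:
            vals = {nu2(f(mod * q + r)) for q in range(samples)}
            if len(vals) == 1:
                leaves.append((mod, r, vals.pop()))               # constant valuation
            else:
                new_frontier += [(2 * mod, r), (2 * mod, mod + r)]  # split the class
        frontier = new_frontier
    return leaves, frontier

# f(n) = n^2 + 1 gives a one-level tree: valuation 0 on evens, 1 on odds
leaves, open_nodes = valuation_tree(lambda n: n * n + 1)
print(sorted(leaves), open_nodes)   # [(2, 0, 0), (2, 1, 1)] []
```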
\begin{figure}
\begin{center}
\begin{tikzpicture}[level 1/.style={sibling distance=4cm},
level 2/.style={sibling distance=6cm}]
\node at ([shift={(0cm,0cm)}]current page.north)[draw](x){$\nu_{2}(f(n))$}
child{node[draw] (a){$\nu_{2}(f(2q+1))$}
child[grow=south,level distance=4cm] {node[draw] (c){$\nu_{2}(f(4q+3))$}}
child[grow=south west,level distance=4cm] {node[draw] (d){$\nu_{2}(f(4q+1))$}}}
child {node[draw] (b){$\nu_{2}(f(2q))$}
child[grow=south,level distance=4cm] {node[draw] (e){$\nu_{2}(f(4q+2))$}}
child[grow=south east,level distance=4cm] {node[draw] (f){$\nu_{2}(f(4q))$}}}
;
\path (x) edge node[fill=white] [draw=none, midway] {$2q+1$} (a);
\path (x) edge node[fill=white][draw=none, midway]{$2q$} (b);
\path (a) edge node[fill=white] [draw=none, midway] {$4q+3$} (c);
\path (a) edge node[fill=white][draw=none, midway]{$4q+1$} (d);
\path (b) edge node[fill=white] [draw=none, midway] {$4q+2$} (e);
\path (b) edge node[fill=white][draw=none, midway]{$4q$} (f);
\end{tikzpicture}
\end{center}
\caption{Levels 0, 1, and 2 of a tree.}
\end{figure}
For each of the remaining five nontrivial cases on the parity of the coefficients $a$, $b$ and $c$, either $(\nu_{2}(f(n)))_{n\geq0}$ produces a finite tree or an infinite tree. We say a finite tree has \textit{$\ell$ levels} if there exists $\ell\in\mathbb{Z}^{+}$ such that for all $r_{\ell-1}\in\{0,1,2,\ldots,2^{\ell}-1\}$ we have $(\nu_{2}(f(2^{\ell}q+r_{\ell-1})))_{q\geq 0}$ constant for all $q\in\mathbb{N}$, and $\ell$ is the smallest possible value. Every node at level $\ell$ in a finite tree has a constant valuation, which depends on $r_{\ell-1}$.
Each node of a tree represents a subsequence of the sequence of 2-adic valuations. A finite tree of $\ell$ levels represents a sequence with period equal to $2^{\ell}$.
In the literature, these finite trees are also called \textit{finite automata.} The sequences generated via the 2-adic valuation are called \textit{2-automatic sequences} and, in particular, the sequences $f(2^{i}q+r)$ are known as the \textit{2-kernel sequences.} See Allouche and Shallit's book~\cite{AlloucheShallit} and Bell's paper~\cite{Bell} for more details.
\subsection{2-adic numbers and selected lemmas}\label{SecLemmas}
First, we state several well-known lemmas. The first is a well-established fact about the $p$-adic valuation, which can also be defined on the set $\mathbb{Q}$ and extends to $\mathbb{Q}_2$; see Lemma 3.3.2 in~\cite{Gouvea}.
\begin{lemma}\label{SuppLemm1}
Let $x,y\in\mathbb{Q}$, then $\nu_{p}(xy)=\nu_{p}(x)+\nu_{p}(y)$.
\end{lemma}
An element $n$ in $\mathbb{Q}_2$ can be represented in the form
\begin{equation}\label{SuppLemma3.5}
n=\sum_{i=k}^{\infty}\alpha_{i}2^{i}
\end{equation}
where $k=\nu_{2}(n)$ and $\alpha_{i}\in\left\{0,1\right\}$ for all $i$; it is well-known that this representation is unique.
Lemma~\ref{SuppLemm1} and the construction of $\mathbb{Q}_{2}$ are well-known~\cite{Gouvea}. Medina et al.~\cite{Medina} provide a useful characterization of the sequence of 2-adic valuations of a polynomial. Before we state the result, we recall the following characterization of the ring of 2-adic integers
\begin{equation*}
\mathbb{Z}_{2}=\left\{n\in\mathbb{Q}_{2}:n=\sum_{i=0}^{\infty}\alpha_{i}2^{i}\ \text{where}\ \alpha_{i}\in\left\{0,1\right\}\right\}.
\end{equation*}
\begin{lemma}\label{SuppLemm6}
(\cite{Medina}, Theorem 2.1) Let $f(n)\in\mathbb{Z}[n]$ be a polynomial that is irreducible over $\mathbb{Z}$. Then $(\nu_{2}(f(n)))_{n\geq0}$ is either periodic or unbounded. Moreover, $(\nu_{2}(f(n)))_{n\geq0}$ is periodic if and only if $f(n)$ has no zeros in $\mathbb{Z}_{2}$. In the periodic case, the minimal period length is a power of $2$.
\end{lemma}
We assume that the quadratic $f(n)$ is irreducible because, if not, by Lemma~\ref{SuppLemm1}, $$\nu_p(f(n))=\nu_p(g(n)\cdot h(n))=\nu_p(g(n))+\nu_p(h(n)),$$ where $g(n)$ and $h(n)$ are irreducible.
Therefore, to determine whether $(\nu_{2}(f(n)))_{n\geq0}$ is periodic or unbounded, it suffices to determine if $f(n)$ has zeros in $\mathbb{Q}_2$ and then determine whether the zeros are also in $\mathbb{Z}_{2}$.
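The dichotomy of Lemma~\ref{SuppLemm6} is visible numerically. The hypothetical examples below contrast $n^{2}+1$ (no zero in $\mathbb{Z}_{2}$, since $-1\equiv\modd{7} {8}$, so the valuations stay bounded) with $n^{2}-17$ (since $17\equiv\modd{1} {8}$, $\sqrt{17}\in\mathbb{Z}_{2}$ and the valuations are unbounded).

```python
def nu2(n):
    n = abs(n)
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# -1 is not a square in Z_2 (-1 = 7 mod 8): periodic valuations 0,1,0,1,...
print(max(nu2(n * n + 1) for n in range(4096)))    # 1

# 17 = 1 mod 8, so x^2 = 17 is solvable mod every 2^k: unbounded valuations
print(max(nu2(n * n - 17) for n in range(4096)))   # at least 12
```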
The following lemmas will be used in Section~\ref{SecInf} to identify when the square root of a number is in $\mathbb{Z}_{2}$. The version of Hensel's lemma stated below determines when a polynomial in $\mathbb{Z}_{2}[x]$ has zeros in $\mathbb{Z}_{2}$. Lemma~\ref{roots}, which follows from Lemma~\ref{Hensel}, specifically determines whether the polynomial $f(x)=x^{2}-a$ has zeros in $\mathbb{Z}_{2}$.
\begin{lemma}\label{Hensel}
(Hensel's lemma,~\cite[Sec.~6.4]{Robert}) Assume that $P\in\mathbb{Z}_{2}[x]$ and $x_{0}\in\mathbb{Z}_{2}$ satisfies \begin{equation*}
P(x_{0})\equiv\modd{0} {2^{n}}.
\end{equation*}
If $\phi=\nu_{2}(P'(x_{0}))<n/2$, then there exists a unique zero $\xi$ of $P$ in $\mathbb{Z}_{2}$ such that
\begin{equation*}
\xi\equiv\modd{x_{0}} {2^{n-\phi}}\ \text{and}\ \nu_{2}(P'(\xi))=\nu_{2}(P'(x_{0})).
\end{equation*}
\end{lemma}
\begin{lemma}\label{roots}
(\cite[Sec.~6.6]{Robert}) The function $f(x)=x^{2}-a$ has a zero in $\mathbb{Z}_{2}^{\times}$, the set of invertible elements of $\mathbb{Z}_{2}$, if and only if $a\equiv\modd{1} {8}$.
\end{lemma}
\section{Proof of Theorem~\ref{MainThm1}: unbounded cases and infinite trees}\label{SecInf}
We now prove Theorem~\ref{MainThm1}. The main idea is to describe the roots to $f(n)=0$ in $\mathbb{Q}_{2}$ simply using the quadratic formula, the parity of the coefficients, and the lemmas presented in Section~\ref{SecLemmas}. Moreover, according to Lemma~\ref{SuppLemm6}, if a zero exists in $\mathbb{Z}_{2}$, it manifests as an infinite branch in the tree. We begin with Cases 2 and 4.
To this end, note that in Case 2, we can write $a=2r$ and $b=2k+1$ for some $r,k\in\mathbb{Z}$. Then $an^{2}+bn+c=0$ has roots of the form
\begin{equation}\label{rootform1}
x=\frac{-2k-1\pm\sqrt{1-8(rc-\beta)}}{4r},
\end{equation} where $\beta=(k^{2}+k)/2$. Set $j=rc-\beta$.
Also, in Case 4, we can write $a=2r+1$, $b=2k+1$, and $c=2p$. Then $an^{2}+bn+c=0$ has roots of the form
\begin{equation}\label{rootform2}
x=\frac{-2k-1\pm\sqrt{1-8((2r+1)p-\beta)}}{2(2r+1)},
\end{equation} where $\beta=(k^2+k)/2$. Set $j=(2r+1)p-\beta$. Observe that in either case the roots contain $\sqrt{1-8j}$ where $j\in\mathbb{Z}$. Since $\sqrt{1-8j}$ is a zero of the function $g(x)=x^{2}-(1-8j)$, by Lemma~\ref{roots} the zero is in $\mathbb{Z}_{2}$.
Notice that both roots \eqref{rootform1} and \eqref{rootform2} have an even denominator. We still need to check if these roots are in $\mathbb{Q}_{2}$ or $\mathbb{Z}_{2}$. Therefore, in light of Lemma~\ref{SuppLemm6}, Case 2 (Proposition~\ref{InfCase1}) and Case 4 (Proposition~\ref{InfCase2}) are proved by an inductive argument on the behavior of the tree. It turns out that, in Case 2, $f(n)$ has exactly one zero in $\mathbb{Z}_2$ and in Case 4, $f(n)$ has two zeros in $\mathbb{Z}_2$. See Figure~\ref{fig:Ex8} in the Appendix for an example of a tree with one infinite branch and Figure~\ref{fig:Ex7} for an example of a tree with two infinite branches.
\begin{proposition}\label{InfCase1}
If $a$ is even and $b$ is odd, then the 2-adic valuation tree of $f(n)=an^2+bn+c$ has exactly one infinite branch. Furthermore, the valuation of the terminating node at the $i^{th}$ level is $i-1$.
\end{proposition}
\begin{proof}
Note that this Proposition corresponds to Case 2 of Theorem~\ref{MainThm1}. Substituting $a=2r$ and $b=2k+1$ for some $r,k\in\mathbb{Z}$, we get $an^2+bn+c=2(rn^2+kn)+n+c$. Now suppose that $c$ is even. If $n$ is even, then $2(rn^{2}+kn)+n+c$ is divisible by 2 and so $\nu_{2}(f(2n))\geq1$. If $n$ is odd, then $2(rn^{2}+kn)+n+c$ is not divisible by 2 and so $\nu_{2}(f(2n+1))=0$. An analogous argument shows that, for $c$ odd, $\nu_{2}(f(2n))=0$ and $\nu_{2}(f(2n+1))\geq1$. Therefore, the conclusion of the proposition is valid at the initial step.
Now, arguing inductively, suppose that $n=2^{i}q+r_{i-1}$ is the non-terminating node, that is $\nu_{2}(f(n))\geq i$. So $f(n)\equiv\modd{0} {2^{i}}$ or $a(2^{i}q+r_{i-1})^{2}+b(2^{i}q+r_{i-1})+c=2^{i}\beta$ where $\beta\in\mathbb{Z}$. Consider $f(n)$ evaluated at the next level:
$$a(2^{i+1}q+r_{i-1})^{2}+b(2^{i+1}q+r_{i-1})+c\equiv ar_{i-1}^{2}+br_{i-1}+c
\equiv \modd{2^{i}\beta} {2^{i+1}},$$ and
\begin{align*}
a(2^{i+1}q+2^{i}+r_{i-1})^{2} + b(2^{i+1}q+2^{i}+r_{i-1})+c
&\equiv ar_{i-1}^{2}+2^{i}b+br_{i-1}+c \\
& \equiv 2^{i}\beta+2^{i}b \equiv \modd{2^{i}(\beta+b)} {2^{i+1}}.
\end{align*}
Since $b$ is odd, one of $\beta$ and $\beta+b$ is odd and the other is even. Hence the valuation of one node is exactly $i$ and the valuation of the other is greater than $i$, according to whether $\beta$ is odd or even. Therefore one node terminates and the other is non-terminating.
\end{proof}
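Proposition~\ref{InfCase1} can be illustrated with the hypothetical example $f(n)=2n^{2}+n+2$ ($a$ even, $b$ odd). Sampling residue classes modulo $2^{i}$, exactly one class per level carries valuation at least $i$: the numerical shadow of the single infinite branch.

```python
def nu2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

f = lambda n: 2 * n * n + n + 2    # a even, b odd

branch = {}
for i in range(1, 9):
    # residues r mod 2^i whose sampled valuations are all >= i
    branch[i] = [r for r in range(2 ** i)
                 if all(nu2(f(2 ** i * q + r)) >= i for q in range(16))]
print(branch)    # exactly one surviving residue per level
```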
\begin{proposition}\label{InfCase2}
If $a$ and $b$ are odd, and $c$ is even, then the 2-adic valuation tree of $f(n)=an^2+bn+c$ has two infinite branches. Furthermore, the valuation of the terminating node at the $i^{th}$ level is $i$.
\end{proposition}
\begin{proof}
This proposition addresses Case 4 of Theorem~\ref{MainThm1}. Write $a=2r+1$, $b=2k+1$, and $c=2p$ for some integers $r,k,$ and $p$. First note that both $a(2q)^{2}+b(2q)+c$ and $a(2q+1)^{2}+b(2q+1)+c$ are congruent to $\modd{0} {2}$. We now verify that the proposition holds at the initial step.
In the $2q$ case, we check $4q$ and $4q+2$. Note that
$$a(4q)^{2}+b(4q)+c\equiv \modd{c} {4}$$
and
$$a(4q+2)^{2}+b(4q+2)+c\equiv\modd{ 2b+c} {4}.$$
If $c\equiv\modd{0} {4}$, then $2b+c\not\equiv\modd{0} {4}$. If $c\not\equiv\modd{0} {4}$ then $c=2p$ with $p$ odd and $2b+c=2(b+p)\equiv\modd{0} {4}$. That is, either $$\nu_{2}(f(4q))\geq2 \text{ and }\nu_{2}(f(4q+2))=1,\text{ or}$$ $$\nu_{2}(f(4q))=1 \text{ and }\nu_{2}(f(4q+2))\geq2.$$ For the $2q+1$ case, we check $4q+1$ and $4q+3$.
Note that $$a(4q+1)^{2}+b(4q+1)+c\equiv\modd{a+b+c} {4}$$
and $$a(4q+3)^{2}+b(4q+3)+c\equiv\modd{a+3b+c} {4}.$$
By hypothesis, $a+b+c=2(r+k+p)$ and $a+3b+c=2(r+3k+p+2)$. But note that $r+3k+p+2=(r+k+p+1)+(2k+1)$. Now it is clear that $r+3k+p+2$ is even (odd) if and only if $r+k+p+1$ is odd (even). Again, either $$\nu_{2}(f(4q+1))\geq2\text{ and }\nu_{2}(f(4q+3))=1\text{, or}$$ $$\nu_{2}(f(4q+1))=1\text{ and } \nu_{2}(f(4q+3))\geq2.$$
For the inductive step, now suppose that
$n=2^{i}q+r_{i-1}$ and $n=2^{i}q+r_{i-1}^{*}$ are the non-terminating nodes where $r_{i-1}=\sum_{k=1}^{i-1}\alpha_{k}2^{k}+1$ (the odd side branch) and $r_{i-1}^{*}=\sum_{k=1}^{i-1}\alpha_{k}2^{k}$ (the even side branch) where $\alpha_{k}\in\left\{0, 1\right\}$. The fact that these branches are non-terminating follows from the same argument as in the proof of Proposition~\ref{InfCase1}.
\end{proof}
We now consider Case 3(b) of Theorem~\ref{MainThm1}.
\begin{proposition}\label{InfSpecThm}
If $a$ is odd, $b$ is even, and $b^{2}-4ac=4^{\ell}\Delta$ for some $\ell\in\mathbb{Z}^+$ as large as possible with $\Delta\equiv\modd{1} {8}$, then the 2-adic valuation tree of $f(n)=an^2+bn+c$ has two infinite branches.
\end{proposition}
\begin{proof}
Let $a$ be odd and $b=2k$ for some $k\in\mathbb{Z}$. Then $an^{2}+bn+c=0$ has roots of the form $x=\frac{-k\pm\sqrt{k^{2}-ac}}{a}$. By the hypothesis, $4k^{2}-4ac=2^{2\ell}\Delta$ where $\Delta\equiv\modd{1} {8}$.
If $\Delta<0$ then we can naturally write $\Delta=1-8j$ where $j\in\left\{1,2,3,\ldots\right\}$.
If $\Delta>0$, then we can write $\Delta=1+8j=1-8(-j)$ where $j\in\mathbb{N}$.
Thus in either case $\Delta=1-8j$ where $j\in\mathbb{Z}$. Then
it follows that $\sqrt{4k^{2}-4ac}=2^{\ell}\sqrt{1-8j}$. By Lemma~\ref{roots}, $\sqrt{1-8j}$ is in $\mathbb{Z}_{2}$. Furthermore, since the denominator of $x$ is odd this also guarantees that $x\in\mathbb{Z}_{2}$. Therefore, there are two infinite branches, one corresponding to each root.
\end{proof}
\begin{corollary}\label{InfSpecCor}
If $a$ is odd, $b$ is even, and $b^{2}-4ac=0$, then the 2-adic valuation tree of $f(n)$ has exactly one infinite branch.
\end{corollary}
\begin{proof}
In this case (3(a) of Theorem~\ref{MainThm1}) $f$ has the double root $x=-\frac{b}{2a}$. Since $b=2k$ and $a$ is odd, $x=-\frac{k}{a}$ has a 2-adic expansion $x=\sum_{i=0}^{\infty}\alpha_{i}2^{i}$ with $\alpha_{i}\in\{0,1\}$; that is, $x\in\mathbb{Z}_{2}$. This guarantees that one branch is infinite.
\end{proof}
\begin{remark}
Note the connection between subsequences of $(\nu_2(f(n)))_{n\geq 0}$ and the infinite branches of a tree.
Proposition~\ref{InfCase1} asserts that for all $i\in\mathbb{Z}^{+}$ there exists exactly one subsequence of the form $n=2^{i}q+r_{i-1}$ such that $\nu_{2}(f(n))\geq i$ and exactly one subsequence of the form
$n=2^{i}q+r_{i-1}^{*}$ with $\nu_{2}(f(n))=i-1$. Similarly, Proposition~\ref{InfCase2} asserts that for all $i\in\mathbb{Z}^{+}$ there are exactly two subsequences corresponding to $n=2^{i}q+r_{i-1}$ such that $\nu_{2}(f(n))\geq i+1$ and exactly two subsequences of the form $n=2^{i}q+r_{i-1}^{*}$ with $\nu_{2}(f(n))=i$. For $r_{i-1}$ and $r_{i-1}^{*}$, the representations presented in equation (\ref{2adicRemainder}) of Section~\ref{SecParityandTrees} equate the coefficients $\alpha_{k}$ and $\alpha_{k}^{*}$ for all $0\leq k\leq i-2$, and
meanwhile $\alpha_{i-1}^{*} \equiv \modd{\alpha_{i-1}+1} {2}$.
As for the cases of Proposition~\ref{InfSpecThm} and Corollary~\ref{InfSpecCor}, we can apply Lemma~\ref{SuppLemm6} to conclude that these sequences are unbounded. Much like Propositions~\ref{InfCase1} and~\ref{InfCase2}, we can say that the results of Proposition~\ref{InfSpecThm} yield that for all $i\in\mathbb{N}$ there are exactly two subsequences of the form $n=2^{i}q+r_{i-1}$, where $(\nu_{2}(f(n)))_{n\geq 0}$ is not constant, while Corollary~\ref{InfSpecCor} asserts there is exactly one such subsequence.
\end{remark}
\section{Bounded cases and finite trees}\label{BoundedSection}
In this section, we prove Case 3(c) of Theorem~\ref{MainThm1} and the first part of Theorem~\ref{MainThm2}. The coefficients of these quadratics satisfy the following: $a$ is odd and $b$ is even, and $b^{2}-4ac=4^{\ell}\Delta$, where $\ell\in\mathbb{Z}^+$ is as large as possible, $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$. Their trees are finite with $\ell$ levels. We can again apply the reasoning of the proof of Proposition \ref{InfSpecThm}.
If $\Delta<0$ we can naturally write $\Delta=m-8j$ where $j\in\mathbb{N}$, and if $\Delta>0$ then we write $\Delta=m+8j=m-8(-j)$ where $j\in\mathbb{N}$ or $j=0$. Henceforth, we will write $\Delta=m-8j$ where $j\in\mathbb{Z}$. Functions of the form $g(x)=x^{2}-(m-8j)$ have no zero in $\mathbb{Z}_{2}$: for odd $m$ a zero would be a unit of $\mathbb{Z}_{2}$, which Lemma~\ref{roots} rules out since $m\not\equiv\modd{1} {8}$, while for $m\in\{2,6\}$ a zero $x$ would satisfy $2\nu_{2}(x)=\nu_{2}(m-8j)=1$, which is impossible. By Lemma~\ref{SuppLemm6}, the corresponding valuation sequences are periodic. Figures~\ref{fig:Ex5} and~\ref{fig:Ex6} in the Appendix illustrate examples of finite trees arising from the functions $f_3(n)=15n^2+1142n+25559$
and $f_4(n)=5n^2+106n+1125$.
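For these two examples the period predicted by Theorem~\ref{MainThm1} can be observed directly: $f_3$ has $b^{2}-4ac=-229376=4^{7}\cdot(-14)$ with $-14\equiv\modd{2} {8}$, so the period is $2^{7}=128$, while $f_4$ has $b^{2}-4ac=-11264=4^{5}\cdot(-11)$ with $-11\equiv\modd{5} {8}$, so the period is $2^{5}=32$. An empirical check (a sketch, not a proof):

```python
def nu2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def empirical_period(f, max_power=10, window=4):
    """Smallest 2^j that looks like a period of (nu2(f(n)))_n on a sample window."""
    for j in range(max_power + 1):
        p = 2 ** j
        if all(nu2(f(n)) == nu2(f(n + p)) for n in range(window * p)):
            return p
    return None

f3 = lambda n: 15 * n * n + 1142 * n + 25559   # disc = -229376 = 4^7 * (-14)
f4 = lambda n: 5 * n * n + 106 * n + 1125      # disc = -11264  = 4^5 * (-11)
print(empirical_period(f3), empirical_period(f4))   # 128 32
```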
We should take a moment to note why we only need to consider these five values of $m$. First note that in Cases 3(b) and 3(c) of Theorem~\ref{MainThm1}, where $a$ is odd and $b$ is even, we have the condition that $\ell$ is as large as possible. This corresponds to factoring out as many powers of 4 as possible, ruling out the possibilities $m\in\{0,4\}$. Now if $m=1$ (Case 3(b), covered in
Section~\ref{SecInf}), an infinite tree is created. This leaves the cases $m\in\{2,3,5,6,7\}$. As discussed above, the zeros of these quadratic functions are not elements of $\mathbb{Q}_{2}$; therefore, their trees must be finite. The proofs of the next two propositions follow the proofs of Propositions~\ref{InfCase1} and~\ref{InfCase2}.
\begin{proposition}\label{FinThm}
If $a$ is odd and $b$ is even, and $b^{2}-4ac=4^{\ell}\Delta$ where $\ell\in\mathbb{Z}^+$ is as large as possible, $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$, then the 2-adic valuation tree of $f(n)$ is finite with $\ell$ levels.
\end{proposition}
The proof of this proposition is broken down into Lemmas~\ref{FinLem1},~\ref{FinLem2}, and~\ref{FinLem3}. Unless stated otherwise, let $b=2k$ for some $k\in\mathbb{Z}$. Lemma~\ref{FinLem1} covers the case $\ell=1$, in which the 2-adic valuation tree has exactly one level. Lemmas~\ref{FinLem2} and~\ref{FinLem3} describe valuations for finite trees with more than one level; Lemma~\ref{FinLem3} describes the valuation at the final level and Lemma~\ref{FinLem2} describes the other levels. Under the assumptions of Proposition~\ref{FinThm}, with $a$ odd and $b$ even, we complete the square and use properties of the $p$-adic valuation to obtain $\nu_{2}(an^{2}+bn+c)=\nu_{2}((an+k)^{2}-k^{2}+ac)$.
\begin{lemma}\label{FinLem1}
Let $\ell=1$, i.e., $b^{2}-4ac=4\Delta$, $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$. If $m\in\{3,7\}$ and $b\equiv\modd{0} {4}$ or if $m\in\{2,6\}$ and $b\equiv\modd{2} {4}$, then
\begin{displaymath}
\nu_{2}(an^{2}+bn+c)=\begin{cases}
0,&\ \text{if}\ n\ \text{even};\\
1,&\ \text{if}\ n\ \text{odd}.
\end{cases}
\end{displaymath}
If $m\in\left\{2,6\right\}$ and $b\equiv\modd{0} {4}$ or if $m\in\left\{3,7\right\}$ and $b\equiv\modd{2} {4}$, then
\begin{displaymath}
\nu_{2}(an^{2}+bn+c)=\begin{cases}
1,&\ \text{if}\ n\ \text{even};\\
0,&\ \text{if}\ n\ \text{odd}.
\end{cases}
\end{displaymath}
If $m=5$ and $b\equiv\modd{0} {4}$, then
\begin{displaymath}
\nu_{2}(an^{2}+bn+c)=\begin{cases}
0,&\ \text{if}\ n\ \text{even};\\
2,&\ \text{if}\ n\ \text{odd}.
\end{cases}
\end{displaymath}
If $m=5$ and $b\equiv\modd{2} {4}$, then
\begin{displaymath}
\nu_{2}(an^{2}+bn+c)=\begin{cases}
2,&\ \text{if}\ n\ \text{even};\\
0,&\ \text{if}\ n\ \text{odd}.
\end{cases}
\end{displaymath}
\end{lemma}
\begin{proof}
Using the convention that $\Delta=m-8j$ where $j\in\mathbb{Z}$ and $m\in\left\{2,3,5,6,7\right\}$, note that $k^{2}-ac=\Delta$, so that $a\cdot f(n)=(an+k)^{2}-k^{2}+ac=(an+k)^{2}-m+8j$; since $a$ is odd, $\nu_{2}(f(n))=\nu_{2}((an+k)^{2}-m+8j)$. Consider the case where $m=7$ and $b\equiv\modd{2} {4}$. Then, since $b=2k$, we have $k$ odd. If $n$ is even, then $an+k$ is odd, so $(an+k)^{2}\equiv\modd{1} {8}$ and it follows that
$$(an+k)^{2}-7+8j \equiv 1-7 \equiv \modd{2} {8}.$$
Therefore $\nu_{2}(an^{2}+bn+c)=1$ when $n$ is even. Similarly, when $m=7$ and $b\equiv\modd{2} {4}$, if $n$ is odd, then $an+k$ is even, so $(an+k)^{2}-7+8j$ is odd.
Thus $\nu_{2}(an^{2}+bn+c)=0$ when $n$ is odd.
Now consider the case where $m=7$ and $b\equiv\modd{0} {4}$, so that $k$ is even. If $n$ is odd, then $an+k$ is odd and
$$(an+k)^{2}-7+8j \equiv 1-7 \equiv \modd{2} {8}.$$
Thus $\nu_{2}(an^{2}+bn+c)=1$ when $n$ is odd. When $n$ is even, $an+k$ is even, so $(an+k)^{2}-7+8j$ is odd and $\nu_{2}(an^{2}+bn+c)=0$ when $n$ is even.
The cases of $m\in\{2,3,6\}$ when $b\equiv\modd{0} {4}$ or $b\equiv\modd{2} {4}$ can be handled in the same fashion. For $m=5$, the valuations are slightly different.
Consider the case where $m=5$. Recall that $b=2k$ for some $k\in\mathbb{Z}$. Note that
$$
b^2-4ac=4(5-8j),
$$
and hence $k^2-ac=5-8j$. Thus,
$$
(an+k)^2-k^2+ac=(an+k)^2-5+8j.
$$
If $(an+k)$ is even, which is the case when both $n$ and $k$ are even or both $n$ and $k$ are odd, then $(an+k)^2-5+8j$ is odd.
Thus, $\nu_{2}(an^{2}+bn+c)=0$.
Now suppose that $(an+k)$ is odd, which is true when $n$ and $k$ have different parity. Then $(an+k)^2\equiv\modd{1} {4}$, and this implies
$$
(an+k)^2-5+8j \equiv 1-5+8j \equiv -4+8j \equiv\modd{0} {4}.
$$
Thus, $\nu_{2}(an^{2}+bn+c)\geq 2$. \\
Since $(an+k)$ is odd, let $an+k=2d+1$, for some $d\in \mathbb{Z}$. Then,
\begin{align*}
(an+k)^2-5+8j &= (2d+1)^2-5+8j\\
&\equiv\modd{4(d^2+d-1)} {8}.
\end{align*}
Observe that $d^2+d-1$ is odd, regardless of whether $d$ is even or odd. Thus, $\nu_{2}(an^{2}+bn+c)<3$. Therefore,
$\nu_{2}(an^{2}+bn+c)=2$.
\end{proof}
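Two concrete instances of the $\ell=1$ lemma (hypothetical examples, chosen here for illustration): $n^{2}+1$ has $b^{2}-4ac=-4=4\cdot(-1)$ with $-1\equiv\modd{7} {8}$ and $b\equiv\modd{0} {4}$, while $n^{2}+3$ has $b^{2}-4ac=-12=4\cdot(-3)$ with $-3\equiv\modd{5} {8}$.

```python
def nu2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# m = 7, b = 0 (mod 4): valuation 0 on even n, 1 on odd n
assert all(nu2(n * n + 1) == n % 2 for n in range(64))

# m = 5, b = 0 (mod 4): valuation 0 on even n, 2 on odd n
assert all(nu2(n * n + 3) == 2 * (n % 2) for n in range(64))
print("l = 1 valuation patterns confirmed")
```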
\begin{lemma}\label{FinLem2}
Under the assumptions of Proposition~\ref{FinThm} (Case 3(c) of Theorem~\ref{MainThm1}) let $\ell\geq 2$ and suppose $0<i<\ell$.
At the $i^{th}$ level there is one terminal and one non-terminal node. Furthermore, the terminal node has valuation $2(i-1)$ and the non-terminal node has valuation at least $2i$.
\end{lemma}
First we need:
\begin{claim}\label{SuppLemm7}
Let $a,k\in\mathbb{Z}$ with $a$ odd, and let $g(n)=an+k$. Then $(\nu_{2}(g(n)))_{n\geq0}$ is an unbounded sequence.
\end{claim}
\begin{proof}
First note that the root of $ax+k=0$ is $x=-\frac{k}{a}$. Also note that $\nu_{2}(x)=\nu_{2}(-k)-\nu_{2}(a)$. Since $a$ is odd, $\nu_{2}(a)=0$. Therefore $\nu_{2}(x)=\nu_{2}(-k)\geq0$. By equation~\eqref{SuppLemma3.5}, $x\in\mathbb{Z}_{2}$, so Lemma~\ref{SuppLemm6} implies that $(\nu_{2}(g(n)))_{n\geq0}$ is an unbounded sequence.
\end{proof}
\begin{proof}
To prove Lemma~\ref{FinLem2}, we proceed by an inductive argument on $i$. Again, using the convention that $\Delta=m-8j$ where $j\in\mathbb{Z}$, for the base case $i=1$, note that
$b^{2}-4ac = 4^{\ell}(m-8j)$, so that $k^{2}-ac=4^{\ell-1}(m-8j)\equiv\modd{0} {4}$ because $\ell\geq2$.
Recall that $b=2k$. First, assume that $k$ is even. If $n$ is even, then $an+k$ is even and so $(an+k)^{2}-k^{2}+ac \equiv \modd{0} {4}$. Thus $\nu_{2}(an^{2}+bn+c)=\nu_{2}((an+k)^{2}-k^{2}+ac)\geq2$. If $n$ is odd, then $(an+k)^{2}-k^{2}+ac \equiv \modd{1} {2}$,
and again using the technique of completing the square,
$\nu_{2}(an^{2}+bn+c)=0$. If $k$ is odd, a similar argument shows that $\nu_{2}(an^{2}+bn+c)\geq2$ when $n$ is odd and $\nu_{2}(an^{2}+bn+c)=0$ when $n$ is even. Moreover, Claim~\ref{SuppLemm7} shows that $(\nu_{2}(an+k))_{n\geq0}$ is unbounded, so the node with valuation at least $2$ does not have constant valuation; that is, it is non-terminating. Thus, the claim is true for $i=1$.
For the inductive step, notice that since $i<\ell$,
it follows that $$b^{2}-4ac \equiv 4^{\ell}(m-8j) \equiv \modd{0} {2^{2i}}.$$
Suppose there exists an $i-1\geq 0$ such that $n=2^{i-1}q+r_{i-2}$ splits into two nodes: one node terminating with valuation $2(i-1)$ and the other node having valuation of at least $2i$.
We let $n=2^{i}q+r_{i-1}$ denote the non-terminating node,
where $r_{i-1}=\sum_{h=0}^{i-1}\alpha_{h}2^{h}$ with
$\alpha_{h}\in\left\{0,1\right\}$, for all $0\leq h\leq i-2$, and $q\in\mathbb{Z}$.
Then we have
$$(an+k)^{2}-k^{2}+ac \equiv {(a(2^{i}q+r_{i-1})+k)^{2}} \equiv
\modd{0} {2^{2i}} ,$$
so $\nu_{2}(an^{2}+bn+c)\geq2i$. This also implies that
$a(2^{i}q+r_{i-1})+k \equiv \modd {0} {2^{i}}$. Thus $ar_{i-1}+k=2^{i}\beta$ where $\beta\in\mathbb{Z}$. Now suppose that $k$ is even. (The proof for $k$ odd can be handled in the same fashion, and thus is omitted.) Since $k$ is even, then $r_{i-1}$ must be even.
Consider the $(i+1)^{st}$ level where $i+1<\ell$. Here again we have
$$b^{2}-4ac=4^{\ell}(m-8j)\equiv\modd{0} {2^{2(i+1)}}.$$
Moving to the next level, in the case $n=2^{i+1}q+r_{i-1}$ we have
\begin{align*}
\nu_{2}((an+k)^{2}-4^{\ell-1}(m-8j))&=\nu_{2}((a(2^{i+1}q+r_{i-1})+k)^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}((2^{i+1}aq+ar_{i-1}+k)^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}((2^{i+1}aq+2^{i}\beta)^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}(2^{2i}(2aq+\beta)^{2}-2^{2(\ell-1)}(m-8j)),
\end{align*}
and in the case $n=2^{i+1}q+2^{i}+r_{i-1}$ we have
\begin{align*}
&\nu_{2}((an+k)^{2}-4^{\ell-1}(m-8j))\\
&\hspace{30pt}=\nu_{2}((a(2^{i+1}q+2^{i}+r_{i-1})+k)^{2}-4^{\ell-1}(m-8j))\\
&\hspace{30pt}=\nu_{2}((2^{i+1}aq+2^{i}a+ar_{i-1}+k)^{2}-4^{\ell-1}(m-8j))\\
&\hspace{30pt}=\nu_{2}((2^{i+1}aq+2^{i}a+2^{i}\beta)^{2}-4^{\ell-1}(m-8j))\\
&\hspace{30pt}=\nu_{2}(2^{2i}(2aq+a+\beta)^{2}-2^{2(\ell-1)}(m-8j)).
\end{align*}
Since $\beta\in\mathbb{Z}$, either $2aq+\beta$ or $2aq+a+\beta$ is odd and the other is even. As long as $i+1<\ell$, the valuation in the odd case is $2i$ and in the even case it is at least $2(i+1)$.
\end{proof}
\begin{lemma}\label{FinLem3}
If $a$ is odd and $b$ is even with $b=2k$ for $k\in\mathbb{Z}$, and $b^{2}-4ac=4^{\ell}\Delta$ where $\ell\in\mathbb{Z}^+$ is as large as possible, $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$, then at the $\ell^{th}$ level the nodes of the 2-adic valuation tree terminate with valuations of $2(\ell-1)$, $2\ell-1$ or $2\ell$.
Suppose that $n=2^{\ell}q+r_{\ell-2}$.
If $an+k\equiv\modd{0} {2^{\ell}}$, then
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
2(\ell-1),&\ \text{if}\ m=7,5,3;\\
2\ell-1,&\ \text{if}\ m=6,2;
\end{cases}
\end{displaymath}
and if $an+k\not\equiv\modd{0} {2^{\ell}}$, then
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
2(\ell-1),&\ \text{if}\ m=6,2;\\
2\ell-1,&\ \text{if}\ m=7,3;\\
2\ell,&\ \text{if}\ m=5.
\end{cases}
\end{displaymath}
Suppose that $n=2^{\ell}q+2^{\ell-1}+r_{\ell-2}$. If $an+k\equiv\modd{0} {2^{\ell}}$, then
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
2(\ell-1),&\ \text{if}\ m=6,2;\\
2\ell-1,&\ \text{if}\ m=7,3;\\
2\ell,&\ \text{if}\ m=5;
\end{cases}
\end{displaymath}
and if $an+k\not\equiv\modd{0} {2^{\ell}}$, then
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
2(\ell-1),&\ \text{if}\ m=7,5,3;\\
2\ell-1,&\ \text{if}\ m=6,2.
\end{cases}
\end{displaymath}
\end{lemma}
\begin{proof}
By Lemma~\ref{FinLem2} there exists a non-terminating node $n=2^{\ell-1}q+r_{\ell-2}$ with $q\in\mathbb{Z}$ and $$\nu_{2}((an+k)^{2}-k^{2}+ac)\geq2(\ell-1).$$
Consider $n=2^{\ell}q+r_{\ell-2}$ with $q\in\mathbb{Z}$. By the same argument as in Lemma~\ref{FinLem2} and using the convention that $\Delta=m-8j$ where $j\in\mathbb{Z}$, we have $$(an+k)^{2}-k^{2}+ac=(2^{\ell}aq+2^{\ell-1}\beta)^{2}-2^{2(\ell-1)}(m-8j)=2^{2(\ell-1)}((2aq+\beta)^{2}+8j-m),$$ where $\beta\in\mathbb{Z}$. Recall that $a$ is odd. Then depending on whether $\beta$ is even or odd, simple calculations show the first two results.
In the case when $n=2^{\ell}q+2^{\ell-1}+r_{\ell-2}$ with $q\in\mathbb{Z}$ we have $$(an+k)^{2}-k^{2}+ac=2^{2(\ell-1)}((2aq+a+\beta)^{2}+8j-m),$$ where $\beta\in\mathbb{Z}$.
Then again depending on whether $\beta$ is odd or even, it is straightforward to show the last two results.
\end{proof}
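As an illustrative check (not part of the original argument), the terminal valuations in Lemma~\ref{FinLem3} can be verified numerically. For the quadratic $f(n)=5n^{2}+106n+1125$ used in the appendix, $b^{2}-4ac=-11264=4^{5}\cdot(-11)$ with $-11\equiv\modd{5} {8}$, so $\ell=5$ and $m=5$, and the valuation sequence is determined by $n\bmod 2^{\ell}=32$; a short script confirms that the terminal valuations at the $\ell^{th}$ level are $2(\ell-1)=8$ and $2\ell=10$:

```python
def nu2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

# f(n) = 5n^2 + 106n + 1125: b^2 - 4ac = -11264 = 4^5 * (-11), so ell = 5
# and Delta = -11 = 5 (mod 8), i.e. m = 5 (Case 3(c)).
f = lambda n: 5 * n * n + 106 * n + 1125

# The tree terminates at level ell = 5, so nu2(f(n)) depends only on n mod 32.
vals = sorted({nu2(f(n)) for n in range(32)})
assert vals == [0, 2, 4, 6, 8, 10]  # terminal valuations: 2(ell-1) = 8 and 2*ell = 10
```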
\section{Structure of finite trees}\label{SecStrucTree}
This section describes the overall structure of finite trees, continuing the discussion of Case 3(c) of Theorem~\ref{MainThm1}, in which $a$ is odd, $b$ is even, $b^{2}-4ac=4^{\ell}\Delta$ where $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$. Throughout this section, we make use of several operators. These operators allow us to track changes from easily described trees, which we call type $(\ell,1)$, to more complicated trees.
\begin{definition}[Translation operator,~\cite{Grafakos}]
For quadratics of the form $f(n)=an^{2}+bn+c$ we define $\tau^{s}(f)(n)=f(n-s)$ for $s\in\mathbb{R}$, namely $\tau^{s}(f)(n)=a(n-s)^{2}+b(n-s)+c=an^{2}+(b-2as)n+(c+as^{2}-bs)$.
\end{definition}
\begin{proposition}\label{FinPropStruc1}
Let the assumptions of Proposition~\ref{FinThm} hold for the function $f(n)=an^{2}+bn+c$ and suppose $s\in\mathbb{Z}$.
Then we have the following relationship
\begin{equation*}
\nu_{2}(f(2^{i}q+r_{i-1}))
=\nu_{2}(\tau^{s}(f)(2^{i}q+((r_{i-1}+s) \bmod 2^{i}))).
\end{equation*}
That is, the valuations $\nu_{2}(f(n))$ at the node of the form $n=2^{i}q+r_{i-1}$ are moved to the node of the form $n=2^{i}q+((r_{i-1}+s) \bmod 2^{i})$ under the operation $\tau^{s}$. \end{proposition}
\begin{proof}
Note that finite trees with $\ell$ levels correspond to periodic sequences
with a period equal to $2^{\ell}$. Since $\tau^s$ is a translation operator, every element in the sequence $(\nu_2(f(n)))_{n\geq0}$ is moved over $s$ spaces.
\end{proof}
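As a quick numerical illustration of Proposition~\ref{FinPropStruc1} (a sketch, not part of the paper): for $f(n)=5n^{2}+106n+1125$, whose valuation sequence is periodic with period $2^{\ell}=32$, translating by $s=7$ rotates the sequence of valuations by $7$ positions:

```python
def nu2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

a, b, c = 5, 106, 1125               # example quadratic with ell = 5
f = lambda n: a * n * n + b * n + c
s = 7
tau_f = lambda n: f(n - s)           # tau^s(f)(n) = f(n - s)

P = 32                               # period 2^ell of the valuation sequence
seq_f = [nu2(f(n)) for n in range(P)]
seq_tau = [nu2(tau_f(n)) for n in range(P)]
# the valuations of tau^s(f) are those of f, shifted over s positions
assert seq_tau == [seq_f[(n - s) % P] for n in range(P)]
```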
\begin{definition}[$S$-operator]\label{defn-S-operator}
Let $a$ be a positive, odd integer. For quadratics of the form $f(n)=n^{2}+bn+ac$ we define $S^{a}(f)(n)=an^{2}+bn+c$. Likewise, for quadratics of the form $f(n)=an^{2}+bn+c$ define $S^{a^{-1}}(f)(n)=n^{2}+bn+ac$.
\end{definition}
In general, the $S$-operator need not output a quadratic function with an integer constant term. However, the present work only applies $S^a$ to functions whose output has integer coefficients.
\begin{definition}[Dilation operator,~\cite{Grafakos}]
For quadratics of the form $f(n)=an^{2}+bn+c$ we define $\delta^{s}(f)(n)=f(sn)$ for $s\in\mathbb{R}$, namely $\delta^{s}(f)(n)=a(sn)^{2}+b(sn)+c$.
\end{definition}
\begin{lemma}\label{FinLemStruc2}
Under the assumptions of Proposition~\ref{FinThm} the trees created by $f(n)=n^{2}+bn+ac$ and $S^{a}(f)(n)$, where $a$ is a positive, odd integer, have the same number of levels. Similarly, the trees created by $g(n)=an^{2}+bn+c$ and $\tau^{s}(g)(n)$ where $s\in\mathbb{Z}$ have the same number of levels.
\end{lemma}
\begin{proof}
The assumptions of Proposition~\ref{FinThm} represent Cases 3(b) and 3(c) of Theorem~\ref{MainThm1}. Simple calculations show that the discriminants of $f(n)$ and $S^{a}(f)(n)$ are the same, and that the discriminants of $g(n)$ and $\tau^{s}(g)(n)$ are the same. The conclusions then follow directly from Proposition~\ref{FinThm}.
\end{proof}
\begin{proposition}\label{FinPropStruc2}
Let the assumptions of Proposition~\ref{FinThm} hold and suppose $f(n)=n^{2}+bn+ac$. Then we have the following relationship
\begin{equation*}
\nu_{2}(f(2^{i}q+r_{i-1}))=\nu_{2}(S^{a}(f)(2^{i}q+a^{-1}\cdot r_{i-1})).
\end{equation*}
That is, the valuation $\nu_{2}(f(n))$ at the node of the form $n=2^{i}q+r_{i-1}$ is moved to the node of the form $n=2^{i}q+(a^{-1}\cdot r_{i-1} \bmod 2^{i})$ under the operation $S^{a}$. In this context, $a^{-1}$ is the inverse of $\modd{a} {2^{i}}$.
\end{proposition}
\begin{proof}
Since $a$ is odd, note that
$$\nu_{2}(S^{a}(f)(n))=\nu_{2}((an^{2}+bn+c))=\nu_{2}((an)^{2}+b(an)+ac)=\nu_{2}(\delta^{a}(f)(n)),$$
where $\delta^{a}(f)(n)=f(an)$ is the dilation operator. Thus, the valuation of $f(n)$ for $n=2^{i}q+r_{i-1}$ is the same as the valuation of $n'=2^{i}(a^{-1}q)+a^{-1}\cdot r_{i-1}$ after the $S^a$-operator is applied.
\end{proof}
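The node relabeling in Proposition~\ref{FinPropStruc2} can also be checked numerically (an illustrative sketch, not part of the paper). Taking $a=5$ and $b=106$, the monic quadratic $f(n)=n^{2}+106n+5625$ (here $ac=5\cdot1125=5625$) and $S^{5}(f)(n)=5n^{2}+106n+1125$ have the same discriminant and hence $\ell=5$, and the valuation at residue $r$ for $f$ appears at residue $a^{-1}r\bmod 32$ for $S^{5}(f)$:

```python
def nu2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

a, b = 5, 106
f  = lambda n: n * n + b * n + 5625      # monic form n^2 + bn + ac
Sf = lambda n: a * n * n + b * n + 1125  # S^a(f)(n) = an^2 + bn + c

P = 32                                   # 2^ell, with ell = 5 for both quadratics
a_inv = pow(a, -1, P)                    # inverse of a mod 2^ell (a is odd)
for r in range(P):
    assert nu2(f(r)) == nu2(Sf(a_inv * r % P))
```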
Suppose that $f(n)=an^{2}+bn+c$ creates a finite tree. We say that this tree is \textit{type $(\ell,1)$}, for $\ell\geq2$, if the tree has $\ell$ levels and, at every level $i<\ell$, the non-terminating node is of the form $n=2^{i}q+2^{i-1}+\cdots+2^{1}+2^{0}$. We also say that a quadratic function is type $(\ell,1)$ if it creates an $(\ell,1)$ tree. That is, $f(n)$ creates a finite tree of the following form:
\begin{figure}[h!]
\centering
\begin{equation*}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\draw [line width=0.8pt] (0,7)-- (-1,6);
\draw [line width=0.8pt] (0,7)-- (1,6);
\draw [fill=black] (0,7) circle (1pt);
\draw [fill=black] (-1,6) circle (1pt);
\draw [fill=black] (1,6) circle (1pt);
\draw [line width=0.8pt] (-1,6)-- (-2,5);
\draw [line width=0.8pt] (-1,6)-- (0,5);
\draw [fill=black] (0,5) circle (1pt);
\draw [fill=black] (-2,5) circle (1pt);
\draw [line width=0.8pt] (-2,5)-- (-3,4);
\draw [line width=0.8pt] (-2,5)-- (-1,4);
\draw [fill=black] (-3,4) circle (1pt);
\draw [fill=black] (-1,4) circle (1pt);
\draw [line width=0.8pt,dotted] (-3,4)-- (-5,2);
\draw [line width=0.8pt] (-5,2)-- (-6,1);
\draw [line width=0.8pt] (-5,2)-- (-4,1);
\draw [fill=black] (-5,2) circle (1pt);
\draw [fill=black] (-6,1) circle (1pt);
\draw [fill=black] (-4,1) circle (1pt);
\end{tikzpicture}
\end{equation*}
\caption{The form of trees of type $(\ell,1)$.}
\label{fig:ell_1_tree}
\end{figure}
\vspace{0.2in}
Here, we suppose that $\ell\geq2$ because $\ell=1$ creates a tree with one level (see Lemma~\ref{FinLem1}), and the directional behavior we seek to classify is not defined. The conditions $4a^{2}-4ac=4^{\ell}\Delta$ for $\ell\in\mathbb{Z}^+$ as large as possible, $\Delta\equiv\modd{m} {8}$, and $m\in\{2,3,5,6,7\}$ imply that $c$ must be odd.
\begin{proposition}\label{FinPropStruc3}
Under the assumptions of Proposition~\ref{FinThm}, if $c$ is odd and $\ell\geq2$ is an integer, then a quadratic of the form $f(n)=an^{2}+2an+c$ creates a tree that is of type $(\ell,1)$. Furthermore, we have
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
0,&\ \text{if}\ n\equiv\modd{0} {2};\\
2(i-1),&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{i-2}2^{k}} {2^{i}}\ \text{with}\ 2\leq i<\ell;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{\ell-2}2^{k}} {2^{\ell}}\ \text{and}\ m=6,2;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{\ell-2}2^{k}} {2^{\ell}}\ \text{and}\ m=7,3;\\
2\ell,&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{\ell-2}2^{k}} {2^{\ell}}\ \text{and}\ m=5;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{\ell-1}2^{k}} {2^{\ell}}\ \text{and}\ m=6,2;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{\sum_{k=0}^{\ell-1}2^{k}} {2^{\ell}}\ \text{and}\ m=7,5,3.
\end{cases}
\end{displaymath}
\end{proposition}
\begin{proof}
In light of Lemma~\ref{FinLem2}, we know that if a node is non-terminating, then it produces two nodes that either both terminate (i.e., these nodes are at the $\ell^{th}$ level) or one node is non-terminating and the other is terminating. So in order to show that the tree is of type $(\ell,1)$, we only need to confirm that nodes corresponding to $n=2^{i}q+2^{i-1}+\cdots+2^{1}+2^{0}$, where $1\leq i\leq\ell$, are always non-terminating. Since $a$ is odd, completing the square and using the convention that $4a^{2}-4ac=4^{\ell}\Delta$ where $\Delta=m-8j$ where $j\in\mathbb{Z}$ gives
\begin{align*}
\nu_{2}(f(n))&=\nu_{2}(an^{2}+2an+c)=\nu_{2}(a(n+1)^{2}-a+c)\\
&=\nu_{2}(a^{2}(n+1)^{2}-a^{2}+ac)=\nu_{2}(a^{2}(n+1)^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}(a^{2}(2^{i}q+2^{i-1}+2^{i-2}+\cdots+2+1+1)^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}(a^{2}(2^{i}q+2^{i})^{2}-4^{\ell-1}(m-8j))\\
&=\nu_{2}(a^{2}4^{i}(q+1)^{2}-4^{\ell-1}(m-8j)).
\end{align*}
If $q$ is odd, then $n=2^{i}q+2^{i-1}+\cdots+2^{1}+2^{0}$ is the non-terminating node, provided $i<\ell$, and produces two nodes one of which does not terminate. If $i=\ell$, then both nodes terminate by Proposition~\ref{FinThm}.
The nodes that terminate are of the form $n=2^{i}q+2^{i-2}+\cdots+2^{1}+2^{0}$ when $1\leq i<\ell$. The case when $n=2q$ is handled by the proof of Lemma~\ref{FinLemStruc2}. For the case $1<i<\ell$, by the same calculation as above we have $$\nu_{2}(f(n))=\nu_{2}(a^{2}2^{2(i-1)}(2q+1)^{2}-4^{\ell-1}(m-8j)).$$ Since $2q+1$ is odd and $i<\ell$, the valuation must be $2(i-1)$.
In the case when $n=2^{\ell}q+2^{\ell-2}+\cdots+2^{1}+2^{0}$ we have $$\nu_{2}(f(n))=\nu_{2}(a^{2}2^{2(\ell-1)}(2q+1)^{2}-4^{\ell-1}(m-8j)).$$ Thus the valuation must be $2(\ell-1)$ if $m=6,2$; $2\ell-1$ if $m=7,3$; or $2\ell$ if $m=5$.
Finally, if $n=2^{\ell}q+2^{\ell-1}+2^{\ell-2}+\cdots+2^{1}+2^{0}$ we have $\nu_{2}(f(n))=\nu_{2}(a^{2}2^{2\ell}(q+1)^{2}-4^{\ell-1}(m-8j))$. Thus the valuation must be $2(\ell-1)$ if $m=7,5,3$ or $2\ell-1$ if $m=6,2$.
\end{proof}
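The piecewise formula of Proposition~\ref{FinPropStruc3} can be checked numerically on a small example (an illustrative sketch, not part of the proof). For $a=5$ and $c=1125$, the quadratic $f(n)=5n^{2}+10n+1125$ has $4a^{2}-4ac=-22400=4^{3}(2-8\cdot44)$, so $\ell=3$ and $m=2$:

```python
def nu2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

a, c, ell, m = 5, 1125, 3, 2             # 4a^2 - 4ac = -22400 = 4^3 * (2 - 8*44)
f = lambda n: a * n * n + 2 * a * n + c  # f(n) = an^2 + 2an + c with a, c odd

def predicted(n):
    """Valuation predicted by the piecewise formula of the proposition."""
    if n % 2 == 0:
        return 0
    for i in range(2, ell):
        if n % 2**i == 2**(i - 1) - 1:   # n = 1 + 2 + ... + 2^{i-2} (mod 2^i)
            return 2 * (i - 1)
    if n % 2**ell == 2**(ell - 1) - 1:   # first level-ell node
        return {6: 2*ell - 2, 2: 2*ell - 2, 7: 2*ell - 1, 3: 2*ell - 1, 5: 2*ell}[m]
    # remaining class: n = 1 + 2 + ... + 2^{ell-1} (mod 2^ell)
    return {6: 2*ell - 1, 2: 2*ell - 1, 7: 2*ell - 2, 3: 2*ell - 2, 5: 2*ell - 2}[m]

assert all(nu2(f(n)) == predicted(n) for n in range(512))
```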
If the function $f(n)=an^{2}+bn+c$ meets the assumptions of Proposition~\ref{FinThm} (Case 3(c) of Theorem~\ref{MainThm1}), note that if we define the function $g(n)=n^{2}+2n-\left(1-\frac{b}{2}\right)^{2}+2\left(1-\frac{b}{2}\right)+ac$, then
it follows that $S^{a}(\tau^{1-\frac{b}{2}}(g))(n)=f(n)$. Therefore, by Propositions~\ref{FinPropStruc1},~\ref{FinPropStruc2}, and~\ref{FinPropStruc3} we immediately have the following corollary.
\begin{corollary}\label{FinCor}
If $f(n)=an^{2}+bn+c$ meets the assumptions of Proposition~\ref{FinThm} (Case 3(c)) with $\ell\geq 2$, then
\begin{displaymath}
\nu_{2}(f(n))=\begin{cases}
0,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(1-\frac{b}{2}\right)} {2};\\
2(i-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{i-1}-\frac{b}{2}\right)} {2^{i}}\ \text{with}\ 2\leq i<\ell;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}} \text{and}\ m=6,2;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=7,3;\\
2\ell,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell-1}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=5;\\
2\ell-1,&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=6,2;\\
2(\ell-1),&\ \text{if}\ n\equiv\modd{ a^{-1}\left(2^{\ell}-\frac{b}{2}\right)} {2^{\ell}}\ \text{and}\ m=7,5,3;\\
\end{cases}
\end{displaymath}
where $a^{-1}$ is the inverse of $\modd{a} {2^{\ell}}$.
\end{corollary}
\begin{proof}
Simply note that $g$ is type $(\ell,1)$ and recall the ways in which the operators affect the function $g$. Each terminating node, under the operators, moves from $n=2^{i}q+r_{i-2}$ to $n=2^{i}q+\modd{a^{-1}\left(r_{i-2}+1-\frac{b}{2}\right)} {2^{i}}$. In the case of type $(\ell,1)$ we have $r_{i-2}=\sum_{k=0}^{i-2}2^{k}$. Thus $r_{i-2}+1=2^{i-1}$ in each case.
\end{proof}
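Corollary~\ref{FinCor} can likewise be verified numerically (an illustrative sketch, not part of the paper). For $f(n)=5n^{2}+106n+1125$ from the appendix ($\ell=5$, $m=5$), the predicted residues and valuations match the computed $2$-adic valuations on a full period:

```python
def nu2(x):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

a, b, c, ell, m = 5, 106, 1125, 5, 5     # b^2 - 4ac = -11264 = 4^5 * (5 - 8*2)
f = lambda n: a * n * n + b * n + c
hb = b // 2                              # b/2

def predicted(n):
    """Valuation predicted by the corollary (a^{-1} taken mod 2^i at level i)."""
    if (n - pow(a, -1, 2) * (1 - hb)) % 2 == 0:
        return 0
    for i in range(2, ell):
        if (n - pow(a, -1, 2**i) * (2**(i - 1) - hb)) % 2**i == 0:
            return 2 * (i - 1)
    if (n - pow(a, -1, 2**ell) * (2**(ell - 1) - hb)) % 2**ell == 0:
        return {6: 2*ell - 2, 2: 2*ell - 2, 7: 2*ell - 1, 3: 2*ell - 1, 5: 2*ell}[m]
    # remaining class: n = a^{-1}(2^ell - b/2) (mod 2^ell)
    return {6: 2*ell - 1, 2: 2*ell - 1, 7: 2*ell - 2, 3: 2*ell - 2, 5: 2*ell - 2}[m]

assert all(nu2(f(n)) == predicted(n) for n in range(256))
```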
\section{Acknowledgments}
The authors would like to thank Dr.\ Victor Moll for suggesting this topic. We would also like to thank the following institutions for providing support to collaborate: ICERM, AIM, Kentucky Wesleyan College, Ursinus College, and Stephen F. Austin State University. We are grateful to our other colleagues for their ongoing support: Dr.\ Maila Brucal-Hallare, Dr.\ Jean-Claude Pedjeu, and Dr.\ Bianca Thompson. Finally, we are very grateful to the reviewer, who provided many helpful and insightful comments.
\section*{Appendix: figures illustrating trees and tables of values for $2$-adic valuation sequences of some quadratic functions}
In the following tree representations, a closed circle indicates a terminating node and an open circle indicates a non-terminating node.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Ex8JHL.jpg}
\vspace{0.2in}
\small
\begin{tabular}{c|cccc ccccc ccc}
$n$&0&1&2&3&4&5&6&7&8&9&10&11\\ \hline
$f_1(n)$&$-25$&$-8$&17&50&91&140&197&262&335&416&505&602\\
$\nu_{2}(f_1(n))$&0&3&0&1&0&2&0&1&0&5&0&1
\end{tabular}
\vspace{0.2in}
\begin{tabular}{c|cccc ccccc cc}
$n$&12&13&14&15&16&17&18&19&20&21\\ \hline
$f_1(n)$&707&820&941&1070&1207&1352&1505&1666&1835&2012\\
$\nu_{2}(f_1(n))$&0&2&0&1&0&3&0&1&0&2
\end{tabular}
\vspace{0.2in}
\caption{The 2-adic valuation tree for $f_1(n)=4n^{2}+13n-25$. Theorem~\ref{MainThm1} predicts that $(\nu_2(f_1(n)))_{n\geq 0}$ is an unbounded sequence, as it satisfies Case 2.}
\label{fig:Ex8}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Ex7JHL.png}
\vspace{0.2in}
\small
\begin{tabular}{c|cccc ccccc ccc}
$n$&0&1&2&3&4&5&6&7&8&9&10\\ \hline
$f_2(n)$&$-28$&$-3$&48&125&228&357&512&693&900&1133&1392\\
$\nu_{2}(f_2(n))$&2&0&4&0&2&0&9&0&2&0&4\\
\end{tabular}
\vspace{0.2in}
\begin{tabular}{c|cccc ccccc ccc}
$n$&11&12&13&14&15&16&17&18&19\\ \hline
$f_2(n)$&1677&1988&2325&2688&3077&3492&3933&4400&4893\\
$\nu_{2}(f_2(n))$&0&2&0&7&0&2&0&4&0\\
\end{tabular}
\vspace{0.2in}
\caption{The 2-adic valuation tree and data for $f_2(n)=13n^{2}+12n-28$. Notice that Theorem~\ref{MainThm1} predicts that $(\nu_2(f_2(n)))_{n\geq 0}$ is an unbounded sequence, as it satisfies Case 3(a) since $12^{2}-4\cdot13(-28)=4^{3}(1-8(-3))$.}
\label{fig:Ex7}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Ex5JHL.png}
\vspace{0.2in}
\small
\begin{tabular}{c|cccc ccccc}
$n$&0&1&2&3&4&5&6&7\\\hline
$f_3(n)$&25559&26716&27903&29120&30367&31644&32951&34288\\
$\nu_{2}(f_3(n))$&0&2&0&6&0&2&0&4
\end{tabular}
\vspace{0.2in}
\begin{tabular}{c|cccc ccccc}
$n$&8&9&10&11&12&13&14&15\\\hline
$f_3(n)$&35655&37052&38479&39936&41423&42940&44487&46064\\
$\nu_{2}(f_3(n))$&0&2&0&10&0&2&0&4
\end{tabular}
\vspace{0.2in}
\caption{The 2-adic valuation tree and data for $f_3(n)=15n^2+1142n+25559$. Notice that Theorem~\ref{MainThm1} predicts that $(\nu_{2}(f_3(n)))_{n\geq0}$ is a bounded sequence, as it satisfies Case 3(c) since $1142^2-4\cdot 15 \cdot 25559=4^7(2-8\cdot 2)$.}
\label{fig:Ex5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Ex6JHL.jpg}
\vspace{0.2in}
\small
\begin{tabular}{c|cccc ccccc cc}
$n$&0&1&2&3&4&5&6&7&8&9\\\hline
$f_4(n)$&1125&1236&1357&1488&1629&1780&1941&2112&2293&2484\\
$\nu_{2}(f_4(n))$&0&2&0&4&0&2&0&6&0&2
\end{tabular}
\vspace{0.2in}
\begin{tabular}{c|cccc ccccc cc}
$n$&10&11&12&13&14&15&16&17&18&19\\\hline
$f_4(n)$&2685&2896&3117&3348&3589&3840&4101&4372&4653&4944\\
$\nu_{2}(f_4(n))$&0&4&0&2&0&8&0&2&0&4
\end{tabular}
\vspace{0.2in}
\caption{The 2-adic valuation tree and data for $f_4(n)=5n^{2}+106n+1125$. Notice that Theorem~\ref{MainThm1} predicts that $(\nu_{2}(f_4(n)))_{n\geq0}$ is a bounded sequence, as it satisfies Case 3(c) since $106^2-4\cdot 5 \cdot 1125=4^5(5-8\cdot 2)$.}
\label{fig:Ex6}
\end{figure}
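The discriminant factorizations quoted in the captions above can be reproduced with a few lines of code (an illustrative check, not part of the paper):

```python
def split4(d):
    """Write a nonzero integer d as 4**ell * Delta with ell as large as possible."""
    ell = 0
    while d % 4 == 0:
        d //= 4
        ell += 1
    return ell, d

assert split4(13**2 - 4*4*(-25))    == (0, 569)   # f1: odd discriminant (Case 2)
assert split4(12**2 - 4*13*(-28))   == (3, 25)    # f2: Delta = 25 = 1 (mod 8), Case 3(a)
assert split4(1142**2 - 4*15*25559) == (7, -14)   # f3: Delta = -14 = 2 (mod 8), Case 3(c)
assert split4(106**2 - 4*5*1125)    == (5, -11)   # f4: Delta = -11 = 5 (mod 8), Case 3(c)
assert (-14) % 8 == 2 and (-11) % 8 == 5
```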
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{E}{motions} play a large role in our lives, defining our experiences and shaping how we view the world and interact with other humans. Perceiving the emotions of social partners helps us understand their behaviors and decide our actions towards them. For example, people communicate very differently with someone they perceive to be angry and hostile than they do with someone they perceive to be calm and content. Furthermore, the emotions of unknown individuals can also govern our behavior (e.g., the emotions of pedestrians at a road-crossing or of passengers in a train station). Because of the importance of perceived emotion in everyday life, automatic emotion recognition is a critical problem in many fields such as games and entertainment, security and law enforcement, shopping, human-computer interaction, human-robot interaction, etc.
Humans perceive the emotions of other individuals using verbal and non-verbal cues. Robots and AI devices that possess speech understanding and natural language processing capabilities are better at interacting with humans. Deep learning techniques can be used for speech emotion recognition and can facilitate better interactions with humans~\cite{devillers2015inference}.
Understanding the perceived emotions of individuals using non-verbal cues is a challenging problem. \blue{Humans use the non-verbal cues of facial expressions and body movements to perceive emotions.} With a more extensive availability of data, considerable research has focused on using facial expressions to understand emotion~\cite{fabian2016emotionet}. However, recent studies in psychology question the communicative purpose of facial expressions and doubt the quick, automatic process of perceiving emotions from these expressions~\cite{russell2003facialandvocal}. There are situations when facial expressions can be unreliable, such as with ``mock" or ``referential expressions"~\cite{ekman1993facialexpression}. Facial expressions can also be unreliable depending on whether an audience is present~\cite{fernandezdols1995}.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/cover.png}
\caption{\textbf{Identifying Perceived Emotions}: We present a novel algorithm to identify the perceived emotions of individuals based on their walking styles. Given an RGB video of an individual walking (top), we extract his/her walking gait as a series of 3D poses (bottom). We use a combination of deep features learned via an LSTM and affective features computed using posture and movement cues to then classify into basic emotions (e.g., happy, sad, etc.) using a Random Forest Classifier.}
\label{fig:cover}
\end{figure}
Research has shown that body expressions are also crucial in emotion expression and perception~\cite{kleinsmith2013affective}. For example, when presented with bodies and faces that expressed either anger or fear (matched correctly with each other or as mismatched compound images), observers are biased towards body expression~\cite{meeren2005rapid}. Aviezer et al.'s study~\cite{aviezer2012} on positive/negative valence in tennis players showed that faces alone were not a diagnostic predictor of valence, but the body alone or the face and body together can be predictive.
Specifically, body expression in walking, or an individual's gait, has been proven to aid in the perception of emotions. In an early study by Montepare et al.~\cite{montepare1987identification}, participants were able to identify sadness, anger, happiness, and pride at a significant rate by observing affective features such as increased arm swinging, long strides, a greater foot landing force, and erect posture. Specific movements have also been correlated with specific emotions. For example, sad movements are characterized by a collapsed upper body and low movement activity~\cite{wallbott1998}. Happy movements have a faster pace with more arm swaying~\cite{michalak2009embodiment}.
\textbf{Main Results:} We present an automatic emotion identification approach for videos of walking individuals (Figure~\ref{fig:cover}). We classify walking individuals from videos into happy, sad, angry, and neutral emotion categories. These emotions represent emotional states that last for an extended period and are more abundant during walking~\cite{ma2006motion}. We extract gaits from walking videos as 3D poses. We use an LSTM-based approach to obtain deep features by modeling the long-term temporal dependencies in these sequential 3D human poses. We also present spatiotemporal \textit{affective features} representing the posture and movement of walking humans. We combine these affective features with LSTM-based deep features and use a Random Forest Classifier to classify them into four categories of emotion. We observe an improvement of $13.85\%$ in the classification accuracy over other gait-based perceived emotion classification algorithms (Table~\ref{tab:accuracySota}). \blue{We refer to our LSTM-based model, which maps the affective and deep features to the perceived emotion labels, as our novel data-driven mapping.}
We also present a new dataset, \textit{``Emotion Walk (EWalk),"} which contains videos of individuals walking in both indoor and outdoor locations. Our dataset consists of $1384$ gaits and the perceived emotions labeled using Mechanical Turk.
Some of the novel components of our work include:\\
\noindent 1. A novel data-driven mapping between the affective features extracted from a walking video and the perceived emotions.
\noindent 2. A novel emotion identification algorithm that combines affective features and deep features, obtaining $80.07\%$ accuracy.
\noindent 3. A new public domain dataset, \textit{EWalk}, with walking videos, gaits, and labeled emotions.
The rest of the paper is organized as follows. In Section 2, we review the related work in the fields of emotion modeling, bodily expression of emotion, and automatic recognition of emotion using body expressions. In Section 3, we give an overview of our approach and present the affective features. We provide the details of our LSTM-based approach to identifying perceived emotions from walking videos in Section 4. We compare the performance of our method with state-of-the-art methods in Section 5. We present the \textit{EWalk} dataset in Section 6.
\section{\blue{Related Work}}
\label{sec:RelatedWork}
\blue{In this section, we give a brief overview of previous works on emotion representation, emotion expression using body posture and movement, and automatic emotion recognition.}
\subsection{\blue{Emotion Representation}}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/affectSpace.png}
\caption{All discrete emotions can be represented by points on a 2D affect space of Valence and Arousal~\protect\cite{loutfi2003social,ekman1967head}.}
\label{fig:affectSpace}
\end{figure}
\blue{Emotions have been represented using both discrete and continuous representations~\cite{ekman1967head,mehrabian1980basic,kleinsmith2013affective}. In this paper, we focus on discrete representations of the emotions and identify four discrete emotions (happy, angry, sad, and neutral). However, a combination of these emotions can be used to obtain the continuous representation (Section~\ref{sec:affect}). A mapping between continuous representation and the discrete categories developed by Mikels and Morris~\cite{morris1995observations,mikels2005emotional} can be used to predict other emotions.}
\blue{It is important to distinguish between perceived emotions and actual emotions as we discuss the perception of emotions. One of the most obvious cues to another person's emotional state is his or her self-report~\cite{RobinsonClore2002}. However, self-reports are not always available; for example, when people observe others remotely (e.g., via cameras), they do not have the ability to ask about their emotional state. Additionally, self-reports can be imperfect because people can experience an emotion without being aware of it or be unable to translate the emotion into words~\cite{barrett2019emotional}. Therefore, in this paper, we focus on emotions perceived by observers instead of using self-reported measures.}
\blue{Affect expression combines verbal and nonverbal communication, including eye gaze and body expressions, in addition to facial expressions, intonation, and other cues~\cite{picard1998towardagents}. Facial expressions--like any element of emotional communication--do not exist in isolation. There is no denying that in certain cases, such as with actors and caricatures, it is appropriate to assume affect based on the visual cues from the face, however, in day-to-day life, this does not account for body expressions. More specifically, the way a person walks, or their gait, has been proven to aid in the perception of that person’s emotion~\cite{montepare1987identification}.}
\blue{With the increasing availability of technologies that capture body expression, there is considerable work on the automatic recognition of emotions from body expressions. Most works use a feature-based approach to identify emotion from body expressions automatically. These features are either extracted using purely statistical techniques or using techniques that are inspired by psychological studies. Karg et al.~\cite{karg2013body} surveyed body movement-based methods for automatic recognition and generation of affective expression. Many techniques in this area have focused on activities such as knocking~\cite{gross2010methodology}, dancing~\cite{camurri2003recognizing}, games~\cite{savva2012continuous}, etc. A recent survey discussed various gesture-based emotion recognition techniques~\cite{noroozi2018survey}. These approaches model gesture features (either handcrafted or learned) and then classify these gestures into emotion classes. For example, Piana et al.~\cite{piana2016adaptive,piana2014real,piana2013set} presented methods for emotion recognition from motion-captured data or RGB-D videos obtained from Kinect cameras. Their method is focused on characters that are stationary and are performing various gestures using hands and head joints. They recognize emotions using an SVM classifier that classifies features (both handcrafted and learned) from 3D coordinate and silhouette data. Other approaches have used PCA to model non-verbal movement features for emotion recognition~\cite{de2004modeling,glowinski2011toward}.}
\blue{Laban movement analysis (LMA)~\cite{von1970principles} is a framework for representing human movement that has been widely used for emotion recognition. Many approaches have formulated gesture features based on LMA and used them to recognize emotions~\cite{zacharatos2013emotion,camurri2003recognizing}. The Body Action and Posture Coding System (BAP) is another framework for coding body movement~\cite{dael2012body,huis2014body,van2014body}. Researchers have used BAP to formulate gesture features and used these features for emotion recognition~\cite{dael2012emotion}.}
\blue{Deep learning models have also been used to recognize emotions from non-verbal gesture cues. Sapinski et al.~\cite{sapinski2019emotion} used an LSTM-based network to identify emotions from body movement. Savva et al.~\cite{savva2012continuous} proposed an RNN-based emotion identification method for players of fully-body computer games. Butepage et al.~\cite{butepage2017deep} presented a generative model for human motion prediction and activity classification. Wang et al.~\cite{wang2019learning} proposed an LSTM-based network to recognize pain-related behavior from body expressions. Multimodal approaches that combine cues such as facial expressions, speech, and voice with body expressions have also been proposed~\cite{meeren2005rapid,wagner2011exploring,balomenos2004emotion,caridakis2007multimodal}.}
\blue{In this work, we focus on pedestrians and present an algorithm to recognize emotions from walking using gaits. Our approach uses both handcrafted features (referred to as the affective features), and deep features learned using an LSTM-based network for emotion recognition.}
\subsection{\blue{Automatic Emotion Recognition from Walking}}
\blue{As shown by Montepare et al.~\cite{montepare1987identification}, gaits have the potential to convey emotions. Previous research has shown that gaits can be used to recognize emotions. These approaches have formulated features using gaits obtained as 3D positions of joints from Microsoft Kinect or motion-captured data. For example, Li et al.~\cite{li2016identifying,li2016emotion} used gaits extracted from Microsoft Kinect and recognized the emotions using Fourier Transform and PCA-based features. Roether et al.~\cite{roether2009critical,roether2009features} identified posture and movement features from gaits by conducting a perception experiment. However, their goal was to formulate a set of expressive features and not emotion recognition. Crenn et al.~\cite{crenn2016body} used handcrafted features from gaits and classified them using SVMs. In a follow-up work, they generated neutral movements from expressive movements for emotion recognition. They introduced a cost function that is optimized using the Particle Swarm Optimization method to generate a neutral movement. They used the difference between the expressive and neutral movement for emotion recognition. Karg et al.~\cite{karg2010recognition,karg2009two,karg2009comparison} examined gait information for person-dependent affect recognition using motion capture data of a single walking stride. They formulated handcrafted gait features, projected them to a lower-dimensional space using PCA, and then classified them using SVM, Naive Bayes, and fully-connected neural network classifiers. Venture et al.~\cite{venture2014recognizing} used an auto-correlation matrix of the joint angles at each frame of the gait and used similarity indices for classification. Janssen et al.~\cite{janssen2008recognition} used a neural network for emotion identification from gaits and achieved an accuracy of more than $80\%$. 
However, their method requires special devices to compute 3D ground reaction forces during gait, which may not be available in many situations. Daoudi et al.~\cite{daoudi2017emotion} used a manifold of symmetric positive definite matrices to represent body movement and classified them using the Nearest Neighbors method. Kleinsmith et al.~\cite{kleinsmith2011automatic} used handcrafted posture features and a multilayer perceptron to automatically recognize affect. Omlor and Giese~\cite{omlor2007extraction} identified spatiotemporal features that are specific to different emotions in gaits using a novel blind source separation method. Gross et al.~\cite{gross2012effort-shape} performed an Effort-Shape analysis to identify the characteristics associated with positive and negative emotions. They observed that features such as walking speed, increased amplitude of joints, and thoracic flexion were correlated with emotions. However, they did not present any method to use these features for automatic identification of emotions.}
\blue{Researchers have also attempted to synthesize gaits with different styles. Ribet et al.~\cite{ribet2019survey} surveyed the literature on style generation and recognition using body movements. Tilmanne et al.~\cite{tilmanne2010expressive} presented methods for generating stylistic gaits using a PCA-based data-driven approach. In the principal component space, they modeled the variability of gaits using Gaussian distributions. In a subsequent work~\cite{tilmanne2012stylistic}, they synthesized stylistic gaits using a method based on Hidden Markov Models (HMMs). This method is based on using neutral walks and modifying them to generate different styles. Troje~\cite{troje2002decomposing} proposed a framework to classify gender and also used it to synthesize new motion patterns. However, the model is not used for emotion identification. Felis et al.~\cite{felis2013modeling} used objective functions to add an emotional aspect to gaits. However, their results only showed sad and angry emotions.}
\blue{Similar to these approaches of emotion recognition from gaits, we use handcrafted features (referred to as affective features). In contrast to previous methods that either use handcrafted features or deep learning methods, we combine the affective features with deep features extracted using an LSTM-based network. We use the resulting joint features for emotion recognition.}
\section{Approach}
In this section, we describe our algorithm (Figure~\ref{fig:overview}) for identifying perceived emotions from RGB videos.
\subsection{Notation}
For our formulation, we represent a human with a set of $16$ joints, as shown in~\cite{dabral2018learning} (Figure~\ref{fig:skeleton}). A pose $P \in \mathbb{R}^{48}$ of a human is a set of 3D positions of each joint $j_i, i \in \{1,2, ..., 16\}$. For any RGB video $V$, we represent the gait extracted using 3D pose estimation as $G$. The gait $G$ is a set of 3D poses $\{P_1, P_2,\ldots, P_{\tau}\}$ where $\tau$ is the number of frames in the input video $V$. We represent the extracted affective features of a gait $G$ as $F$. Given the gait features $F$, we represent the predicted emotion by $e \in \{ happy, angry, sad, neutral\}$. \blue{While neutral is not an emotion, it is still a valid state to be used for classification. In the rest of the paper, we refer to these four categories, including neutral, as four emotions for convenience.} The four basic emotions represent emotional states that last for an extended period and are more abundant during walking~\cite{ma2006motion}. \blue{A combination of these four emotions can be used to predict affective dimensions of valence and arousal and also other emotions~\cite{mikels2005emotional}.}
\begin{figure*}[t]
\centering
\includegraphics[width =\linewidth]{images/diagram.png}
\caption{\textbf{Overview:} Given an RGB video of an individual walking, we use a state-of-the-art 3D human pose estimation technique~\protect\cite{dabral2018learning} to extract a set of 3D poses. These 3D poses are passed to an LSTM network to extract deep features. We train this LSTM network using multiple gait datasets. We also compute affective features consisting of both posture and movement features using psychological characterization. We concatenate these affective features with deep features and classify the combined features into $4$ basic emotions using a Random Forest classifier.}
\label{fig:overview}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.20\textwidth]{images/skeleton.png}
\caption{\textbf{Human Representation}: We represent a human by a set of $16$ joints. The overall configuration of the human is defined using these joint positions and is used to extract the features.}
\label{fig:skeleton}
\end{figure}
\subsection{Overview}
Our real-time perceived emotion prediction algorithm is based on a data-driven approach. We present an overview of our approach in Figure~\ref{fig:overview}. During the offline training phase, we use multiple gait datasets \blue{(described in Section~\ref{sec:mocapDatasets})} and extract affective features. These affective features are based on psychological characterization~\cite{crenn2016body,karg2010recognition} and consist of both posture and movement features. We also extract deep features by training an LSTM network. We combine these deep and affective features and train a Random Forest classifier. At runtime, given an RGB video of an individual walking, we extract his/her gait in the form of a set of 3D poses using a state-of-the-art 3D human pose estimation technique~\cite{dabral2018learning}. We extract affective and deep features from this gait and identify the perceived emotion using the trained Random Forest classifier. We now describe each component of our algorithm in detail.
\subsection{Affective Feature Computation}\label{sec:featureExtraction}
\begin{table}[t]
\caption{\textbf{Posture Features}: We extract posture features from an input gait using emotion characterization in visual perception and psychology literature~\protect \cite{karg2010recognition,crenn2016body}.}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{Type} & \multicolumn{1}{c|}{Description} \\ \hline
Volume & Bounding box \\ \cline{1-2}
\multirow{5}{*}{Angle} & At neck by shoulders \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}At right shoulder by \\ neck and left shoulder\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}At left shoulder by \\ neck and right shoulder\end{tabular}\\ \cline{2-2}
& At neck by vertical and back \\ \cline{2-2}
& At neck by head and back \\ \cline{1-2}
\multirow{5}{*}{Distance} & \begin{tabular}[c]{@{}l@{}}Between right hand \\ and the root joint\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}Between left hand \\ and the root joint\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}Between right foot \\ and the root joint\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}Between left foot \\ and the root joint\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}Between consecutive\\ foot strikes (stride length)\end{tabular} \\ \cline{1-2}
\multirow{2}{*}{Area} & \begin{tabular}[c]{@{}l@{}}Triangle between \\ hands and neck\end{tabular} \\ \cline{2-2}
& \begin{tabular}[c]{@{}l@{}}Triangle between \\ feet and the root joint\end{tabular} \\ \hline
\end{tabular}
\label{tab:posturefeatures}
\end{table}
For an accurate prediction of an individual's affective state, both posture and movement features are essential~\cite{kleinsmith2013affective}. Features in the form of joint angles, distances, and velocities, and space occupied by the body have been used for recognition of emotions and affective states from gaits~\cite{crenn2016body}. Based on these psychological findings, we compute affective features that include both the posture and the movement features.
We represent the extracted affective features of a gait $\textbf{G}$ as a vector $F \in \mathbb{R}^{29}$.
For feature extraction, we use a single stride from each gait corresponding to consecutive foot strikes of the same foot. We used a single cycle in our experiments because in some of the datasets (CMU, ICT, EWalk) only a single walk cycle was available. When multiple walk cycles are available, they can be used to increase accuracy.
\subsubsection{Posture Features}
We compute the features $F_{p, t} \in \mathbb{R}^{12}$ related to the posture $P_t$ of the human at each frame $t$ using the skeletal representation (computed using TimePoseNet Section~\ref{sec:timeposenet}). We list the posture features in Table~\ref{tab:posturefeatures}. \blue{These posture features are based on prior work by Crenn et al.~\cite{crenn2016body}. They used upper body features such as the area of the triangle between hands and neck, distances between hand, shoulder, and hip joints, and angles at neck and back. However, their formulation does not consider features related to foot joints, which can convey emotions in walking~\cite{roether2009features}. Therefore, we also include areas, distances, and angles of the feet joints in our posture feature formulation.}
We define posture features of the following types:
\begin{itemize}
\item Volume: According to Crenn et al.~\cite{crenn2016body}, body expansion conveys positive emotions while a person has a more compact posture during negative expressions. We model this by the volume $F_{volume, t} \in \mathbb{R}$ occupied by the bounding box around the human.
\item Area: We also model body expansion by areas of triangles between the hands and the neck and between the feet and the root joint $F_{area, t} \in \mathbb{R}^2$.
\item Distance: Distances between the feet and the hands can also be used to model body expansion $F_{distance, t} \in \mathbb{R}^4$.
\item Angle: Head tilt is used to distinguish between happy and sad emotions~\cite{crenn2016body,karg2010recognition}. We model this by the angles extended by different joints at the neck $F_{angle, t} \in \mathbb{R}^5$.
\end{itemize}
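As a concrete illustration, the posture descriptors above (bounding-box volume, triangle area, and joint angle) could be computed as in the following sketch. The \texttt{(16, 3)} joint-array layout and the function names are our own assumptions, not the paper's implementation.

```python
import numpy as np

def bbox_volume(pose):
    """Volume of the axis-aligned bounding box enclosing all joints.

    `pose` is an (n_joints, 3) array of 3D joint positions
    (hypothetical layout; n_joints = 16 in the paper)."""
    extent = pose.max(axis=0) - pose.min(axis=0)
    return float(np.prod(extent))

def triangle_area(a, b, c):
    """Area of the triangle spanned by three 3D points
    (e.g. the two hands and the neck)."""
    return 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))

def joint_angle(center, p1, p2):
    """Angle (radians) subtended at `center` by points p1 and p2
    (e.g. the angle at the neck by the two shoulders)."""
    u, v = p1 - center, p2 - center
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The distance features are plain Euclidean norms between joint positions and follow the same pattern.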
We also include stride length as a posture feature. Longer stride lengths convey anger and happiness and shorter stride lengths convey sadness and neutrality~\cite{karg2010recognition}. Suppose we represent the positions of the left foot joint $j_{lFoot}$ and the right foot joint $j_{rFoot}$ in frame $t$ as $\vec{p}(j_{lFoot}, t)$ and $\vec{p}(j_{rFoot}, t)$ respectively. Then the stride length $s \in \mathbb{R}$ is computed as:
\begin{eqnarray}
s = \max\limits_{t \in 1..\tau}||\vec{p}(j_{lFoot}, t) - \vec{p}(j_{rFoot}, t)||
\end{eqnarray}
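A minimal sketch of this stride-length computation, assuming the gait is stored as a \texttt{(tau, 16, 3)} array; the foot-joint indices are hypothetical and depend on the skeleton layout:

```python
import numpy as np

def stride_length(gait, l_foot=5, r_foot=8):
    """Maximum distance between the left and right foot joints over the gait.

    `gait` is a (tau, 16, 3) array of 3D poses; the default foot-joint
    indices are placeholders, not the paper's actual skeleton indices."""
    d = np.linalg.norm(gait[:, l_foot] - gait[:, r_foot], axis=1)
    return float(d.max())
```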
We define the posture features $F_{p}\in \mathbb{R}^{13}$ as the average of $F_{p, t}, t \in \{1,2,..,\tau\}$ combined with the stride length:
\begin{eqnarray}
F_{p} = \frac{\sum_{t} F_{p, t}}{\tau} \cup s.
\end{eqnarray}
\subsubsection{Movement Features}
\begin{table}[t]
\caption{\textbf{Movement Features}: We extract movement features from an input gait using emotion characterization in visual perception and psychology literature~\protect \cite{karg2010recognition,crenn2016body}.}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{Type} & \multicolumn{1}{c|}{Description} \\ \hline
\multirow{5}{*}{Speed} & Right hand \\ \cline{2-2}
& Left hand \\ \cline{2-2}
& Head \\ \cline{2-2}
& Right foot \\ \cline{2-2}
& Left foot \\ \cline{1-2}
\multirow{5}{*}{Acceleration Magnitude} & Right hand \\ \cline{2-2}
& Left hand \\ \cline{2-2}
& Head \\ \cline{2-2}
& Right foot \\ \cline{2-2}
& Left foot \\ \cline{1-2}
\multirow{5}{*}{Movement Jerk} & Right hand \\ \cline{2-2}
& Left hand \\ \cline{2-2}
& Head \\ \cline{2-2}
& Right foot \\ \cline{2-2}
& Left foot \\ \cline{1-2}
Time & One gait cycle \\ \hline
\end{tabular}
\label{tab:movementfeatures}
\end{table}
Psychologists have shown that motion is an important characteristic for the perception of different emotions~\cite{gross2012effort-shape}. High-arousal emotions are more associated with rapid and increased movements than low-arousal emotions. We compute the movement features $F_{m, t} \in \mathbb{R}^{15}$ at frame $t$ by considering the magnitude of the velocity, acceleration, and movement jerk of the hand, foot, and head joints using the skeletal representation. For each of these five joints $j_i, i \in \{1,...,5\}$, we compute the magnitude of the first, second, and third finite derivatives of the position vector $\vec{p}(j_i, t)$ at frame $t$. We list the movement features in Table~\ref{tab:movementfeatures}. \blue{These movement features are based on prior work by Crenn et al.~\cite{crenn2016body}. Similar to the posture features, we combine the upper body features from Crenn et al.~\cite{crenn2016body} with lower body features related to feet joints.}
Since faster gaits are perceived as happy or angry whereas slower gaits are considered sad~\cite{karg2010recognition}, we also include the time taken for one walk cycle ($gt\in \mathbb{R}$) as a movement feature. We define the movement features $F_{m}\in \mathbb{R}^{16}$ as the average of $F_{m, t}, t \in \{1,2,..,\tau\}$:
\begin{eqnarray}
F_{m} = \frac{\sum_{t} F_{m, t}}{\tau} \cup gt.
\end{eqnarray}
We combine posture and movement features and define \textbf{affective features} $F$ as: $F = F_{m} \cup F_{p}$.
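The per-joint speed, acceleration, and jerk magnitudes above amount to averaged finite differences of the joint trajectory. A sketch for a single joint, assuming uniform frame spacing (function name and signature are ours):

```python
import numpy as np

def movement_magnitudes(traj, dt=1.0):
    """Mean magnitudes of velocity, acceleration, and jerk for one joint.

    `traj` is a (tau, 3) array of 3D joint positions over time; first,
    second, and third finite differences approximate the derivatives."""
    vel = np.diff(traj, n=1, axis=0) / dt
    acc = np.diff(traj, n=2, axis=0) / dt ** 2
    jerk = np.diff(traj, n=3, axis=0) / dt ** 3
    mag = lambda a: float(np.linalg.norm(a, axis=1).mean()) if len(a) else 0.0
    return mag(vel), mag(acc), mag(jerk)
```

Applying this to the two hands, two feet, and the head gives the $15$ movement magnitudes, which are then combined with the gait-cycle time $gt$.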
\section{Perceived Emotion Identification}
We use a \textit{vanilla} LSTM network~\cite{greff2017lstm} with a cross-entropy loss that models the temporal dependencies in the gait data. We chose an LSTM network to model deep features of walking because it captures the geometric consistency and temporal dependency among video frames for gait modeling~\cite{luo2018lstm}. We describe the details of the training of the LSTM in this section.
\subsection{Datasets}\label{sec:mocapDatasets}
We used the following publicly available datasets for training our perceived emotion classifier:
\begin{itemize}
\item \textbf{Human3.6M}~\cite{h36m_pami}: This dataset consists of $3.6$ million 3D human images and corresponding poses. It also contains video recordings of $5$ female and $6$ male professional actors performing actions in $17$ scenarios including taking photos, talking on the phone, participating in discussions, etc. The videos were captured at 50 Hz with four calibrated cameras working simultaneously. We used motion-captured gaits from $14$ videos of the subjects walking from this dataset.
\item \textbf{CMU}~\cite{CMUGait}: The CMU Graphics Lab Motion Capture Database contains motion-captured videos of humans interacting among themselves (\textit{e.g.}, talking, playing together), interacting with the environment (\textit{e.g.}, playgrounds, uneven terrains), performing physical activities (\textit{e.g.}, playing sports, dancing), enacting scenarios (\textit{e.g.}, specific behaviors), and locomoting (\textit{e.g.}, running, walking). In total, there are motion captured gaits from $49$ videos of subjects walking with different styles.
\item \textbf{ICT}~\cite{narang2017motion}: This dataset contains motion-captured gaits from walking videos of $24$ subjects. The videos were annotated by the subjects themselves, who were asked to label their own motions as well as motions of other subjects familiar to them.
\item \textbf{BML}~\cite{ma2006motion}: This dataset contains motion-captured gaits from $30$ subjects (15 male and 15 female). The subjects were nonprofessional actors, ranging between $17$ and $29$ years of age with a mean age of $22$ years. For the walking videos, the actors walked in a triangle for 30 sec, turning clockwise and then counterclockwise in two individual conditions. Each subject provided $4$ different walking styles in two directions, resulting in $240$ different gaits.
\item \textbf{SIG}~\cite{xia2015realtime}: This is a dataset of $41$ synthetic gaits generated using local mixtures of autoregressive (MAR) models to capture the complex relationships between the different styles of motion. The local MAR models were developed in real-time by obtaining the nearest examples of given pose inputs in the database. The trained models were able to adapt to the input poses with simple linear transformations. Moreover, the local MAR models were able to predict the timings of synthesized poses in the output style.
\item \textbf{EWalk (Our novel dataset)}: We also collected videos and extracted $1136$ gaits using 3D pose estimation. We present details about this dataset in Section~\ref{sec:dataset}.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images/gaitVideo.png}
\caption{\textbf{Gait Visualizations:} We show the visualization of the motion-captured gaits of four individuals with their classified emotion labels. Gait videos from $248$ motion-captured gaits were displayed to the participants in a web based user study to generate labels. We use that data for training and validation.}
\label{fig:gaitVideo}
\end{figure}
The wide variety of these datasets includes acted as well as non-acted, natural-walking datasets (CMU, ICT), where the subjects were not told to assume an emotion. These datasets provide a good sample of real-world scenarios. \blue{For the acted video datasets, we did not use the acted emotion labels for the gaits, but instead obtained the perceived emotion labels with a user study (Section~\ref{sec:labeling}).}
\subsection{Perceived Emotion Labeling}\label{sec:labeling}
We obtained the perceived emotion labels for each gait using a web-based user study.
\subsubsection{Procedure}
We generated visualizations of each motion-captured gait using a skeleton mesh (Figure~\ref{fig:gaitVideo}). For the \textit{EWalk} dataset, we presented the original videos to the participants when they were available. We hid the faces of the actors in these videos to ensure that the emotions were perceived from the movements of the body and gaits, not from the facial expressions.
\subsubsection{Participants}
We recruited $688$ participants ($279$ female, $406$ male, $\overline{age} = 34.8$) from Amazon Mechanical Turk and the participant responses were used to generate perceived emotion labels. Each participant watched and rated $10$ videos from one of the datasets. The videos were presented randomly and for each video we obtained a minimum of $10$ participant responses. \blue{Participants who provided a constant rating to all gaits or responded in a very short time were ignored for further analysis ($3$ participants total).}
\subsubsection{Analysis}
We asked each participant whether he/she perceived the gait video as happy, angry, sad, or neutral on 5-point Likert items ranging from Strongly Disagree to Strongly Agree. For each gait $\textbf{G}_i$ in the datasets, we calculated the mean of all participant responses ($r^{e}_{i, j}$) to each emotion:
\begin{eqnarray}
r^{e}_i = \frac{\sum_{j=1}^{n_p} r^{e}_{i,j}}{n_p},
\end{eqnarray}
where $n_p$ is the number of participant responses collected and $e$ is one of the four emotions: angry, sad, happy, neutral.
We analyzed the correlation between participants' responses to the questions relating to the four emotions (Table~\ref{tab:correl}). A correlation value closer to $1$ indicates that the two variables are positively correlated and a correlation value closer to $-1$ indicates that the two variables are negatively correlated. A correlation value closer to $0$ indicates that two variables are uncorrelated. As expected, \textit{happy} and \textit{sad} are negatively correlated and \textit{neutral} is uncorrelated with the other emotions.
\begin{table}[t]
\centering
\caption{\textbf{Correlation Between Emotion Responses}: We present the correlation between participants' responses to questions relating to the four emotions.}
\begin{tabular}{|c|c|c|c|c|}
\hline
& Happy & Angry & Sad & Neutral \\ \hline
Happy & 1.000 & -0.268 & -0.775 & -0.175 \\ \hline
Angry & -0.268 & 1.000 & -0.086 & -0.058 \\ \hline
Sad & -0.775 & -0.086 & 1.000 & -0.036 \\ \hline
Neutral & -0.175 & -0.058 & -0.036 & 1.000 \\ \hline
\end{tabular}
\label{tab:correl}
\end{table}
Previous research in the psychology literature suggests that social perception is affected by the gender of the observer~\cite{carli1995nonverbal,forlizzi2007interface,kramer2016closing}. To verify that our results do not significantly depend on the gender of the participants, we performed a t-test for differences between the responses by male and female participants. We observed that the gender of the participant did not affect the responses significantly ($t = -0.952, p = 0.353$).
We obtained the emotion label $e_i$ for $\textbf{G}_i$ as follows:
\begin{eqnarray}
e_i = e \mid r^{e}_i > \theta,
\end{eqnarray}
where $\theta = 3.5$ is an experimentally determined threshold for emotion perception.
If multiple emotions have an average participant response greater than $\theta$, the gait is not used for training.
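This thresholding rule could be sketched as follows. Returning \texttt{None} when no emotion exceeds the threshold (in addition to the multiple-emotion case) is our assumption for marking a gait as unusable:

```python
def emotion_label(mean_responses, theta=3.5):
    """Assign a label only when exactly one emotion's mean rating exceeds theta.

    `mean_responses` maps an emotion name to the mean participant rating
    r^e_i. Gaits with zero or multiple emotions above the threshold yield
    None and are excluded from training (the zero case is our assumption)."""
    above = [e for e, r in mean_responses.items() if r > theta]
    return above[0] if len(above) == 1 else None
```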
\subsection{Long Short-Term Memory (LSTM) Networks}
LSTM networks~\cite{greff2017lstm} are neural networks with special units known as ``memory cells'' that can store data values from particular time steps in a data sequence for arbitrarily long time steps. Thus, LSTMs are useful for capturing temporal patterns in data sequences and subsequently using those patterns in prediction and classification tasks. To perform supervised classification, LSTMs, like other neural networks, are trained with a set of training data and corresponding class labels. However, unlike traditional feedforward neural networks that learn structural patterns in the training data, LSTMs learn feature vectors that encode temporal patterns in the training data.
LSTMs achieve this by training one or more ``hidden'' cells, where the output at every time step at every cell depends on the current input and the outputs at previous time steps. These inputs and outputs to the LSTM cells are controlled by a set of gates. LSTMs commonly have three kinds of gates: the input gate, the output gate, and the forget gate, represented by the following equations:
\begin{align}
\textit{Input Gate $(i)$:} \qquad & i_t = \sigma(W_ix_t + U_ih_{t-1} + b_i) \\
\textit{Output Gate $(o)$:} \qquad & o_t = \sigma(W_ox_t + U_oh_{t-1} + b_o) \\
\textit{Forget Gate $(f)$:} \qquad & f_t = \sigma(W_fx_t + U_fh_{t-1} + b_f)
\end{align}
where $\sigma(\cdot)$ denotes the activation function and $W_g$, $U_g$ and $b_g$ denote the weight \blue{matrix} for the input at the current time step, the weight matrix for the hidden cell at the previous time step, and the bias, on gate $g\in\{i, o, f\}$, respectively. Based on these gates, the hidden cells in the LSTMs are then updated using the following equations:
\begin{align}
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma(W_cx_t + U_ch_{t-1} + b_c) \\
h_t &= \sigma(o_t \circ c_t)
\end{align}
where $\circ$ denotes the Hadamard or elementwise product, $c$ is referred to as the cell state, and $W_c$, $U_c$ and $b_c$ are the weight matrix for the input at the current time step, the weight matrix for the hidden cell at the previous time step, and the bias, on $c$, respectively.
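Taking the update equations above literally (with $\sigma$ taken as the logistic sigmoid throughout), a single LSTM time step could be sketched in NumPy as follows; the dictionary layout of the parameters is our own convention:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM time step following the gate equations above.

    `params` maps each of 'i', 'o', 'f', 'c' to a (W, U, b) triple:
    the input weight matrix, the recurrent weight matrix, and the bias.
    Sigma is the logistic sigmoid here (an assumption about the paper)."""
    gate = lambda g: sigmoid(params[g][0] @ x_t + params[g][1] @ h_prev + params[g][2])
    i_t, o_t, f_t = gate('i'), gate('o'), gate('f')
    c_t = f_t * c_prev + i_t * gate('c')   # elementwise (Hadamard) products
    h_t = sigmoid(o_t * c_t)
    return h_t, c_t
```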
\subsection{Deep Feature Computation}
We used the LSTM network shown in Figure~\ref{fig:overview}. We obtained deep features from the final layer of the trained LSTM network. We used the $1384$ gaits from the various public datasets (Section~\ref{sec:mocapDatasets}). We also analyzed the extracted deep features using an LSTM encoder-decoder architecture with reconstruction loss. We generated synthetic gaits and observed that our LSTM-based deep features correctly model the 3D positions of joints relative to each other at each frame. The deep features also capture the periodic motion of the hands and legs.
\subsubsection{Implementation Details}
The training procedure of the LSTM network that we followed is laid out in Algorithm~\ref{algo:LSTM_net}. For training, we used a mini-batch size of $8$ (\textit{i.e.}, $b=8$ in Algorithm~\ref{algo:LSTM_net}) and $500$ training epochs. We used the Adam optimizer~\cite{adam} with an initial learning rate of $0.1$, decreasing it to $\frac{1}{10}$-th of its current value after $250$, $375$, and $438$ epochs. We also used a momentum of $0.9$ and a weight-decay of $5\times 10^{-4}$. The training was carried out on an NVIDIA GeForce GTX 1080 Ti GPU.
\begin{algorithm}
\caption{LSTM Network for Emotion Perception}\label{algo:LSTM_net}
\hspace*{\algorithmicindent} \textbf{Input:} $N$ training gaits $\{\textbf{G}_i\}_{i=1\dots N}$ and corresponding emotion labels $\{\text{L}_i\}_{i=1\dots N}$.\\
\hspace*{\algorithmicindent} \textbf{Output:} Network parameters $\theta$ such that the loss $\sum_{i=1}^N\lVert\text{L}_i - f_{\theta}(\textbf{G}_i)\rVert^2$ is minimized, where $f_\theta(\cdot)$ denotes the network.
\begin{algorithmic}[1]
\Procedure{Train}{}
\For {number of training epochs}
\For {number of iterations per epoch}
\State Sample mini-batch of $b$ training gaits and corresponding labels
\State Update the network parameters $\theta$ w.r.t. the $b$ samples using backpropagation.
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Classification}\label{sec:classifier}
We concatenate the deep features with affective features and use a Random Forest classifier to classify these concatenated features. Before combining the affective features with the deep features, we normalize them to a range of $[-1, 1]$. We use a Random Forest classifier with $10$ estimators and a maximum depth of $5$. We use this trained classifier to classify perceived emotions. \blue{We refer to this trained classifier as our novel data-driven mapping between affective features and perceived emotion.}
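The normalization-and-concatenation step could look like the following sketch; column-wise min-max scaling over the training set is our assumption about how the $[-1, 1]$ range is obtained:

```python
import numpy as np

def normalize_and_concat(deep, affective):
    """Scale each affective feature column to [-1, 1] and concatenate
    with the deep features (a sketch; column-wise min-max scaling is
    assumed, with constant columns mapped to -1)."""
    lo, hi = affective.min(axis=0), affective.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    scaled = 2.0 * (affective - lo) / span - 1.0
    return np.hstack([deep, scaled])
```

The resulting matrix is what a downstream Random Forest classifier would be fit on.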
\subsection{Realtime Perceived Emotion Recognition}\label{sec:timeposenet}
At runtime, we take an RGB video as input and use the trained classifier to identify the perceived emotions. \blue{We exploit a real-time 3D human pose estimation algorithm, \textit{TimePoseNet}~\cite{dabral2018learning} to extract 3D joint positions from RGB videos. \textit{TimePoseNet} uses a semi-supervised learning method that utilizes the more widely available 2D human pose data~\cite{lin2014microsoft} to learn the 3D information.}
\textit{TimePoseNet} is a single-person model and expects a sequence of images cropped closely around the person as input. Therefore, we first run a real-time person detector~\cite{cao2017realtime} on each frame of the RGB video and extract a sequence of images cropped closely around the person in the video $V$. The frames of the input video $V$ are sequentially passed to \textit{TimePoseNet}, which computes a 3D pose output for each input frame. The resultant poses $\{P_1, P_2,..., P_{\tau}\}$ represent the extracted output gait $G$. We normalize the output poses so that the root position always coincides with the origin of the 3D space. We extract features of the gait $G$ using the trained LSTM model. We also compute the affective features and classify the combined features using the trained Random Forest classifier.
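The root-centering of the estimated poses can be sketched as below; taking joint $0$ as the root is an assumption about the skeleton layout:

```python
import numpy as np

def center_at_root(gait, root=0):
    """Translate every pose so the root joint sits at the origin.

    `gait` is a (tau, 16, 3) array of 3D poses; using joint 0 as the
    root is a placeholder, not the paper's actual joint index."""
    return gait - gait[:, root:root + 1, :]  # broadcast over joints
```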
\section{Results}
We provide the classification results of our algorithm in this section.
\subsection{Analysis of Different Classification Methods}
\blue{We evaluate different classifiers on our combined deep and affective features and compare the resulting accuracy values in Table~\ref{tab:accuracyMethods}. These results are computed using 10-fold cross-validation on $1384$ gaits in the gait datasets described in Section~\ref{sec:mocapDatasets}. We compared Support Vector Machines (SVMs) with both linear and RBF kernels and Random Forest methods. The SVMs were implemented using the one-vs-one approach for multi-class classification. The Random Forest classifier was implemented with $10$ estimators and a maximum depth of $5$.} We use the Random Forest classifier in the subsequent results because it provides the highest accuracy ($80.07\%$) of all the classification methods. Additionally, our algorithm achieves $79.72\%$ accuracy on the non-acted datasets (CMU and ICT), indicating that it performs equally well on acted and non-acted data.
\begin{table}[t]
\caption{\textbf{Performance of Different Classification Methods}: We analyze different classification algorithms to classify the concatenated deep and affective features. We observe an accuracy of $80.07\%$ with the Random Forest classifier.}
\label{tab:accuracyMethods}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Algorithm (Deep + Affective Features)}} & \multicolumn{1}{c|}{\textbf{Accuracy}} \\ \hline
LSTM + Support Vector Machines (SVM RBF) & 70.04\% \\ \hline
LSTM + Support Vector Machines (SVM Linear) & 71.01\% \\ \hline
\textit{LSTM + Random Forest} & \textbf{80.07}\% \\ \hline
\end{tabular}
\end{table}
\subsection{Comparison with Other Methods}
In this section, we present the results of our algorithm and compare it with other state-of-the-art methods:
\begin{itemize}
\item Karg et al.~\cite{karg2010recognition}: This method is based on using gait features related to shoulder, neck, and thorax angles, stride length, and velocity. These features are classified using PCA-based methods. This method only models the posture features for the joints and does not model the movement features.
\item Venture et al.~\cite{venture2014recognizing}: This method uses the auto-correlation matrix of the joint angles at each frame and uses similarity indices for classification. The method provides good intra-subject accuracy but performs poorly for the inter-subject databases.
\item Crenn et al.~\cite{crenn2016body}: This method uses affective features from both posture and movement and classifies these features using SVMs. This method is trained for more general activities like knocking and does not use information about feet joints.
\item Daoudi et al.~\cite{daoudi2017emotion}: This method uses a manifold of symmetric positive definite matrices to represent body movement and classifies them using the Nearest Neighbors method.
\item Crenn et al.~\cite{crenn2017toward}: This method synthesizes a neutral motion from an input motion and uses the difference between the input and the neutral emotion as the feature for classifying emotions. This method does not use the psychological features associated with walking styles.
\item Li et al.~\cite{li2016identifying}: This method uses a Kinect to capture the gaits and identifies whether an individual is angry, happy, or neutral using four walk cycles using a feature-based approach. \blue{These features are obtained using Fourier Transform and Principal Component Analysis.}
\end{itemize}
We also compare our results to a baseline where we use the LSTM to classify the gait features into the four emotion classes. Table~\ref{tab:accuracySota} provides the accuracy results of our algorithm and shows comparisons with other methods. These methods require input in the form of 3D human poses and then they identify the emotions perceived from those gaits. For this experiment, we extracted gaits from the RGB videos of the \textit{EWalk} dataset and then provided them as input to the state-of-the-art methods along with the motion-captured gait datasets. \blue{Accuracy results are obtained using 10-fold cross-validation on $1384$ gaits from the various datasets (Section~\ref{sec:mocapDatasets}). For this evaluation, the gaits were randomly distributed into training and testing sets, and the accuracy values were obtained by averaging over $1000$ random partitions.}
\begin{table}[t]
\caption{\textbf{Accuracy}: Our method with combined deep and affective features classified with a Random Forest classifier achieves an accuracy of $80.07\%$. We observe an improvement of $13.85\%$ over state-of-the-art emotion identification methods and an improvement of $24.60\%$ over a baseline LSTM-based classifier. \blue{All methods were compared on $1384$ gaits obtained from the datasets described in Section~\ref{sec:mocapDatasets}.}}
\label{tab:accuracySota}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Method}} & \textbf{Accuracy} \\ \hline
Baseline (Vanilla LSTM) & 55.47\% \\ \hline
Affective Features Only & 68.11\% \\ \hline
Karg et al.~\cite{karg2010recognition} & 39.58\% \\ \hline
Venture et al.~\cite{venture2014recognizing} & 30.83\% \\ \hline
Crenn et al.~\cite{crenn2016body} & 66.22\% \\ \hline
Crenn et al.~\cite{crenn2017toward} & 40.63\% \\ \hline
Daoudi et al.~\cite{daoudi2017emotion} & 42.52\% \\ \hline
Li et al.~\cite{li2016identifying} & 53.73\% \\ \hline
\textit{Our Method (Deep + Affective Features)} & \textbf{80.07\%} \\ \hline
\end{tabular}
\end{table}
We also show the percentage of gaits that the LSTM+Random Forest classifier correctly classified for each emotion class in Figure~\ref{fig:cm_lstm}. As we can see, for every class, around $80\%$ of the gaits are correctly classified, implying that the classifier learns to recognize each class equally well. Further, when the classifier does make mistakes, it tends to confuse neutral and sad gaits more often than any other pair of classes.
\subsection{Analysis of the Learned Deep Features}
We visualize the scatter of the deep feature vectors learned by the LSTM network. \blue{To visualize the $32$ dimensional deep features, we convert them to a 3D space. We use Principal Component Analysis (PCA) and project the features in the top three principal component directions.} This is shown in Figure~\ref{fig:scatter_plot}. We observe that the data points are well-separated even in the projected dimension. By extension, this implies that the deep features are at least as well separated in their original dimension. Therefore, we can conclude that the LSTM network has learned meaningful representations of the input data that help it distinguish accurately between the different emotion classes.
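The projection onto the top three principal components can be sketched via an SVD of the centered feature matrix; the function name is our own:

```python
import numpy as np

def project_top3(features):
    """Project feature vectors onto their top three principal components.

    `features` is an (n, d) array (d = 32 for our deep features). The
    right singular vectors of the centered data are the principal
    directions, ordered by decreasing explained variance."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T
```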
\blue{Additionally, we show the saliency maps given by the network in Figure~\ref{fig:saliency}. We selected one sample per emotion (angry, happy, and sad) and presented the postures in each row. For each gait (each row), we use eight sample frames corresponding to eight timesteps. In each row, going from left to right, we show the evolution of the gait with time. Each posture in the row shows the activation on the joints at the corresponding time step, as assigned by our network. Here, activation refers to the magnitude of the gradient of the loss w.r.t. an input joint (the joint's ``saliency") upon backpropagation through the learned network. Since all input data for our network are normalized to lie in the range $[0, 1]$, and the gradient of the loss function is smooth w.r.t. the inputs, the activation values for the saliency maps are within the $[0, 1]$ range. The joints are colored according to a gradient with $activation = 1$ denoting red and $activation = 0$ denoting black. The network uses these activation values of activated nodes in all the frames to determine the class label for the gait.}
\blue{We can observe from Figure~\ref{fig:saliency} that the network focuses mostly on the hand joints (observing arm swinging), the feet joints (observing stride), and the head and neck joints (observing head jerk). Based on the speed and frequency of the movements of these joints, the network decides the class labels. For example, the activation values on the joints for anger (Figure~\ref{fig:saliency} top row) are much higher than the ones for sadness (Figure~\ref{fig:saliency} bottom row), which matches with the psychological studies of how angry and sad gaits typically look. This shows that the features learned by the network are representative of the psychological features humans tend to use when perceiving emotions from gaits~\cite{karg2010recognition,michalak2009embodiment}. Additionally, as time advances, we can observe the pattern in which our network shifts attention to the different joints, i.e., the pattern in which it considers different joints to be more salient. For example, if the right leg is moved ahead, the network assigns high activation on the joints in the right leg and left arm (and vice versa).}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/cm_lstm_deep.png}
\caption{\textbf{Confusion Matrix}: For each emotion class, we show the percentage of gaits belonging to that class that were correctly classified by the LSTM+Random Forest classifier (green background) and the percentage of gaits that were misclassified into other classes (red background).}
\label{fig:cm_lstm}
\end{figure}
\section{Emotional Walk \textit{(EWalk)} Dataset}\label{sec:dataset}
In this section, we describe our new dataset of videos of individuals walking. We also provide details about the perceived emotion annotations of the gaits obtained from this dataset.
\subsection{Data}
The EWalk dataset contains $1384$ gaits with emotion labels from four basic emotions: happy, angry, sad, and neutral (Figure~\ref{fig:eWalk}). These gaits are either motion-captured or extracted from RGB videos. We also include synthetically generated gaits using state-of-the-art algorithms~\cite{xia2015realtime}. In addition to the emotion label for each gait, we also provide values of affective dimensions: valence and arousal.
\subsection{Video Collection}
We recruited $24$ subjects from a university campus. The subjects were from a variety of ethnic backgrounds and included $16$ male and $8$ female subjects. We recorded the videos in both indoor and outdoor environments. We requested that they walk multiple times with different walking styles. Previous studies show that non-actors and actors are both equally good at walking with different emotions~\cite{roether2009critical}. Therefore, to obtain different walking styles, we suggested that the subjects could assume that they are experiencing a certain emotion and walk accordingly. The subjects started $7$m from a stationary camera and walked towards it. The videos were later cropped to include a single walk cycle.
\subsection{Data Generation}
Once we collect walking videos and annotate them with emotion labels, we can also use them to train generator networks to generate annotated synthetic videos. Generator networks have been applied for generating videos and joint-graph sequences of human actions such as walking, sitting, running, jumping, etc. Such networks are commonly based on either Generative Adversarial Networks (GANs)~\cite{gan} or Variational Autoencoders (VAEs)~\cite{vae}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/gan.png}
\caption{\textbf{Generative Adversarial Networks (GANs)}: The network consists of a generator that generates synthetic data from random samples drawn from a latent distribution space. This is followed by a discriminator that attempts to discriminate between the generated data and the real input data. The objective of the generator is to learn the latent distribution space of the real data whereas the objective of the discriminator is to learn to discriminate between the real data and the synthetic data generated by the generator. The network is said to be learned when the discriminator fails to distinguish between the real and the synthetic data.}
\label{fig:gan}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/vae.png}
\caption{\textbf{Variational Autoencoders (VAEs)}: The network consists of an encoder that transforms the input data to a latent distribution space. This is followed by a decoder that draws random samples from the latent distribution space to generate synthetic data. The objective of the overall network is then to learn the latent distribution space of the real data, so that the synthetic data generated by the decoder belongs to the same distribution space as the real data.}
\label{fig:vae}
\end{figure}
GANs (Figure~\ref{fig:gan}) are comprised of a generator that generates data from random noise samples and a discriminator that discriminates between real data and the data generated by the generator. The generator is considered to be trained when the discriminator fails to discriminate between the real and the generated data.
VAEs (Figure~\ref{fig:vae}), on the other hand, are comprised of an encoder followed by a decoder. The encoder learns a latent embedding space that best represents the distribution of the real data. The decoder then draws random samples from the latent embedding space to generate synthetic data.
For temporal data such as human action videos or joint-graph sequences, two different approaches are commonly taken. One approach is to generate each point in the temporal sequence (frames in a video or graphs in a graph sequence) individually and then fuse the points together in a separate network to generate the complete sequence. The methods in~\cite{twostep_method1, twostep_method2}, for example, use this approach. The network generating the individual points only considers the spatial constraints of the data, whereas the network fusing the points into the sequence only considers the temporal constraints. The alternate approach is to train a single network by providing it with both the spatial and temporal constraints of the data, as done, for example, by Sijie et al.~\cite{stgcn}. The first approach is relatively more lightweight, but it does not explicitly consider spatial-temporal inter-dependencies in the data, such as the differences in the arm-swinging speeds between angry and sad gaits. While the latter approach does take these inter-dependencies into account, it is also harder to train because of these additional constraints.
\subsection{Analysis}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/suppl_Results.png}
\caption{\textbf{EWalk Dataset:} We present the EWalk dataset containing RGB videos of pedestrians walking and the perceived emotion label for each pedestrian.}
\label{fig:eWalk}
\end{figure}
We presented the recorded videos to MTurk participants and obtained perceived emotion labels for each video using the method described in Section~\ref{sec:labeling}. Our data is widely distributed across the four categories with the \textit{Happy} category containing \blue{the largest number of gaits} ($32.07\%$) and the \textit{Neutral} category containing the smallest number of gaits with $16.35\%$ (Figure~\ref{fig:dataDistribution}).
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/dataDistribution.PNG}
\caption{\textbf{Distribution of Emotion in the Datasets:} We present the percentage of gaits that are perceived as belonging to each of the emotion categories (happy, angry, sad, or neutral). We observe that our data is widely distributed.}
\label{fig:dataDistribution}
\end{figure}
\subsubsection{Affective Dimensions}\label{sec:affect}
We performed an analysis of the affective dimensions (i.e., valence and arousal). For this purpose, we used the participant responses to the questions about the happy, angry, and sad emotions. We did not use the responses to the question about the neutral emotion because it corresponds to the origin of the affective space and does not contribute to the valence and arousal dimensions. We performed a Principal Component Analysis (PCA) on the participant responses $[r^{happy}_i, r^{angry}_i, r^{sad}_i]$ and observed that the following two principal components describe $94.66\%$ of the variance in the data:
\begin{eqnarray}
\begin{bmatrix} PC1 \\ PC2 \end{bmatrix} = \begin{bmatrix}
0.67 & -0.04 & -0.74 \\
-0.35 & 0.86 & -0.37
\end{bmatrix}\label{eq:affectiveDimensions}
\end{eqnarray}
We observe that the first component with high values of the \textit{Happy} and \textit{Sad} coefficients represents the \textit{valence} dimension of the affective space. The second principal component with high values of the \textit{Anger} coefficient represents the \textit{arousal} dimension of the affective space. Surprisingly, this principal component also has a negative coefficient for the \textit{Happy} emotion. This is because a calm walk was often rated as happy by the participants, resulting in low arousal.
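The PCA step above can be sketched as follows. The responses here are synthetic stand-ins (random numbers with an assumed happy/sad anti-correlation), not the actual MTurk ratings; the sketch only shows how the components and explained-variance fractions are obtained from a centered response matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the participant responses [r_happy, r_angry, r_sad]
# (one row per gait); the actual study used the MTurk ratings.
R = rng.normal(size=(200, 3))
R[:, 2] -= 0.8 * R[:, 0]              # assumed anti-correlation of sad vs. happy

Rc = R - R.mean(axis=0)               # center the responses before PCA
# Principal components via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Rc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)   # fraction of variance per component
pc1, pc2 = Vt[0], Vt[1]               # analogues of PC1/PC2 in the text
```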
\subsubsection{Prediction of Affect}
We use the principal components from Equation~\ref{eq:affectiveDimensions} to predict the values of the \textit{arousal} and \textit{valence} dimensions. Suppose the probabilities predicted by the Random Forest classifier are $p(h), p(a)$, and $p(s)$ corresponding to the emotion classes $happy$, $angry$, and $sad$, respectively. Then we can obtain the values of $valence$ and $arousal$ as:
\begin{eqnarray}
valence = \begin{bmatrix}0.67 & -0.04 & -0.74 \end{bmatrix} \begin{bmatrix} p(h) & p(a) & p(s)\end{bmatrix} ^T \\
arousal = \begin{bmatrix}-0.35 & 0.86 & -0.37 \end{bmatrix} \begin{bmatrix} p(h) & p(a) & p(s)\end{bmatrix} ^T
\end{eqnarray}
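The mapping above is a direct dot product of the component coefficients with the class probabilities; a minimal transcription (the probability values below are illustrative, not outputs of our classifier):

```python
import numpy as np

# Coefficients of the two principal components in the equations above.
VALENCE_W = np.array([0.67, -0.04, -0.74])
AROUSAL_W = np.array([-0.35, 0.86, -0.37])

def affect_from_probs(p_happy, p_angry, p_sad):
    """Map the Random Forest class probabilities to (valence, arousal)."""
    p = np.array([p_happy, p_angry, p_sad])
    return float(VALENCE_W @ p), float(AROUSAL_W @ p)

v, a = affect_from_probs(0.8, 0.1, 0.1)  # an illustrative mostly-happy output
```

A mostly-happy prediction thus maps to positive valence and slightly negative arousal, consistent with the calm-walk observation above.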
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{images/deep_features_scatter_marked.png}
\caption{\textbf{Scatter Plot of the Learned Deep Features}: These are the deep features learned by the LSTM network from the input data points, projected in the 3 principal component directions. The different colors correspond to the different input class labels. We can see that the features for the different classes are well-separated in the 3 dimensions. This implies that the LSTM network learns meaningful representations of the input data for accurate classification.}
\label{fig:scatter_plot}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{images/saliency.png}
\caption{\textbf{\blue{Saliency Maps}}: \blue{We present the saliency maps for one sample per emotion (angry, happy, and sad) as learned by the network for a single walk cycle.} The maps show activations on the joints during the walk cycle. Black represents no activation and red represents high activation. For all the emotion classes, the hand, feet and head joints have high activations, implying that the network deems these joints to be more important for determining the class. Moreover, the activation values on these joints for a high arousal emotion (\textit{e.g.}, angry) are higher than those for a low arousal emotion (\textit{e.g.}, sad), implying the network learns that higher arousal emotions lead to more vigorous joint movements.}
\label{fig:saliency}
\end{figure*}
\section{Application: Virtual Character Generation}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/Ryan.png}
\caption{\textbf{Application}: Our gaits and their perceived emotion labels can be used to generate virtual characters with different emotions. We show a character that is generated using our approach to convey basic emotions: angry, happy, sad, and neutral.}
\label{fig:ryan}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/ApplicationOverview.png}
\caption{\textbf{Virtual Character Generation}: We provide an overview of our end-to-end approach for simulating virtual characters. We represent the behavioral state of the virtual characters in a Behavioral Finite State Machine (BFSM) and use it to control their behavior based on the state of the environment, which consists of static and dynamic obstacles. We use our perceived emotion prediction to generate gaits for the virtual characters based on their desired emotions.}
\label{fig:ApplicationOverview}
\end{figure}
In this section, we present an application of our method that generates virtual characters with given desired emotions (Figure~\ref{fig:ryan}).
\subsection{Overview}
We provide an overview of our end-to-end approach to simulating virtual characters in Figure~\ref{fig:ApplicationOverview}. We assume that the environment consists of static and dynamic obstacles. At the start of the simulation, we initialize the environment state with positions and dimensions of the static obstacles and the current positions and velocities of the dynamic obstacles. We also initialize a Behavioral Finite State Machine (BFSM) based on the user input and the intended tasks. We set up a 3D model for each virtual character that is rigged using automatic rigging software and associated with a hierarchical skeleton with appropriate joint values.
\subsection{Behavioral Finite State Machine}
We represent the behavioral state of the virtual characters in a BFSM and use it to control their behaviors. At runtime, we consider the environment state and the context of the current task and update the state of the BFSM that determines the virtual characters' behavior. This state also computes a goal position for each virtual character.
\subsection{Global and Local Navigation}
If the goal positions of virtual characters are different from their current positions, then a navigation algorithm is used to compute the trajectories to the new positions. To provide collision-free navigation in the presence of obstacles or other virtual characters, we utilize the multi-agent simulation framework, \textit{Menge}~\cite{curtis2016menge}. In this framework, a global navigation step first breaks down the goal positions into intermediate goals that avoid collisions with the static obstacles in the environment. Next, a local navigation step uses a reciprocal collision avoidance (RVO) approach to avoid collisions with dynamic obstacles and provide navigation to the intermediate goals~\cite{van2011reciprocal}.
In this approach, we represent each agent on the 2D ground plane and generate smooth, stable, collision-free velocities. RVO is an agent-based approach that computes a collision-free 2D velocity for an agent given its preferred velocity, time horizon ($t_{max}$), and the current positions and velocities of all the virtual agents in the environment. In other words, it computes a velocity that can generate a collision-free trajectory at time $t_{max}$. We update the character's location in the virtual world according to this collision-free trajectory at each frame.
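To make the local-navigation step concrete, the following is a deliberately simplified toy sketch (goal attraction plus pairwise repulsion). It is a hypothetical stand-in and not Menge's actual RVO solver, which instead solves for collision-free velocities over the horizon $t_{max}$; the parameter values are illustrative.

```python
import numpy as np

def local_step(pos, goals, dt=0.1, v_pref=1.4, r_avoid=1.0):
    """One simplified local-navigation step: steer each agent toward its
    intermediate goal at the preferred speed, plus a pairwise repulsion
    whenever two agents come closer than r_avoid (toy stand-in for RVO)."""
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    vel = v_pref * to_goal / np.maximum(dist, 1e-9)
    vel = np.where(dist < v_pref * dt, to_goal / dt, vel)  # land on the goal
    for i in range(len(pos)):
        for j in range(len(pos)):
            d = pos[i] - pos[j]
            dn = float(np.linalg.norm(d))
            if i != j and dn < r_avoid:
                vel[i] += 2.0 * (r_avoid - dn) * d / max(dn, 1e-9)
    return pos + dt * vel

agents = np.array([[0.0, 0.0], [10.0, 0.0]])   # two agents on crossing paths
targets = np.array([[10.0, 1.0], [0.0, 0.0]])  # illustrative goal positions
agents = local_step(agents, targets)           # one simulation step
```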
\subsection{Gait Generation}\label{sec:gaitGeneration}
In addition to the goal position for each virtual character, the BFSM state also determines the desired emotion that each virtual character must convey. To achieve this, we use our gait-based approach to identify the perceived emotion. For each virtual character, we obtain a set of gaits that correspond to the desired emotion using our gait dataset and associated labels. We choose one of the gaits from this set and use it to update the joint positions of the agent in the virtual world. The selection of a gait can be made according to many criteria (such as personality or preferred walking speed).
\section{Conclusion, Limitations, and Future Work}
We presented a novel method for classifying perceived emotions of individuals based on their walking videos. Our method is based on learning deep features computed using LSTM and exploits psychological characterization to compute affective features. The mathematical characterization of computing gait features also has methodological implications for psychology research. This approach explores the basic psychological processes used by humans to perceive emotions of other individuals using multiple dynamic and naturalistic channels of stimuli. We concatenate the deep and affective features and classify the combined features using a Random Forest Classification algorithm. Our algorithm achieves an absolute accuracy of $80.07\%$, which is an improvement of $24.60\%$ over vanilla LSTM (i.e., using only deep features) and offers an improvement of $13.85\%$ over state-of-the-art emotion identification algorithms. Our approach is the first approach to provide a real-time pipeline for emotion identification from walking videos by leveraging state-of-the-art 3D human pose estimation. We also present a dataset of videos (EWalk) of individuals walking with their perceived emotion labels. The dataset is collected with subjects from a variety of ethnic backgrounds in both indoor and outdoor environments.
There are some limitations to our approach. The accuracy of our algorithm depends on the accuracy of the 3D human pose estimation and gait extraction algorithms. Therefore, emotion prediction may not be accurate if the estimated 3D human poses or gaits are noisy. Our affective computation requires joint positions from the whole body, but the whole body pose data may not be available if there are occlusions in the video. We assume that the walking motion is natural and does not involve any accessories (e.g., suitcase, mobile phone, etc.). As part of future work, we would like to collect more datasets and address these issues. We will also attempt to extend our methodology to consider more activities such as running, gesturing, etc. Finally, we would like to combine our method with other emotion identification algorithms that use human speech and facial expressions. \blue{We want to explore the effect of individual differences in the perception of emotions. We would also like to explore applications of our approach for simulating virtual agents with desired emotions using gaits.}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\input{1_introduction.tex}
\input{2_related.tex}
\input{3_approach.tex}
\input{4_LSTM.tex}
\input{5_results.tex}
\input{6_dataset.tex}
\input{8_conclusion.tex}
\bibliographystyle{IEEEtran}
\section*{Appendix}
The polarization function $\Pi_{\mathrm{v}}(q)$ shown in Fig.2 is
defined as
\begin{eqnarray}
\Pi_{\mathrm{v}}(q)
&=&-N\int\frac{d^3k}{(2\pi)^3}\int\frac{d^3p}{(2\pi)^3}
\mathrm{Tr}\left[\gamma_{0}G_{0}(k+q)\gamma_{0}G_{0}(k+p+q)\gamma_{0}\right.
\nonumber \\
&&\left.\times G_{0}(k+p)\gamma_{0}D(p)G_{0}(k)\right].
\end{eqnarray}
To calculate this function, we will follow the method of Franz \emph{et al.} \cite{Franz03}. We are mainly interested in the
leading behavior of $\Pi_{\mathrm{v}}(q)$ in the $q \rightarrow 0$
limit. In this limit, the above integral has singularities as
$k\rightarrow 0$ and $k\rightarrow -p$. Thus, we may evaluate the
whole integral by expanding the regular parts of the integrand near
these two singular points. Keeping only the leading terms, we have
\begin{eqnarray}
\Pi_{\mathrm{v}}(q)&=&-N\int\frac{d^3k}{(2\pi)^3}\int\frac{d^3p}{(2\pi)^3}
\mathrm{Tr}\left[\gamma_{0}G_{0}(k+q)\gamma_{0}G_{0}(p+q)
\gamma_{0}\right.\nonumber \\
&& \left.\times G_{0}(p)\gamma_{0} D(p)G_{0}(k)\right] \nonumber \\
&& -N\int\frac{d^3k}{(2\pi)^3}\int\frac{d^3p}{(2\pi)^3}
\mathrm{Tr}\left[\gamma_{0}G_{0}(-p+q)
\gamma_{0}G_{0}(k+p+q)\gamma_{0}\right.\nonumber \\
&& \left.\times G_{0}(k+p)\gamma_{0}D(p)G_{0}(-p)\right]
\end{eqnarray}
Performing the variable shift $k \rightarrow k-p$ in the second term, we obtain
\begin{eqnarray}
\Pi_{\mathrm{v}}(q) &=& 2N\mathrm{Tr}
\left[X(q)\int\frac{d^3k}{(2\pi)^3}
G_{0}(k)\gamma_{0}G_{0}(k+q)\right], \label{eqn:VertexPolarization}
\end{eqnarray}
where
\begin{eqnarray}
X(q)=-\int\frac{d^3p}{(2\pi)^3}
\gamma_{0}G_{0}(p+q)\gamma_{0}G_{0}(p)\gamma_{0}D(p).
\end{eqnarray}
The most leading term is found to be
\begin{eqnarray}
X(q) &=&-\frac{\gamma_{0}}{4\pi^2}\ln\left(\frac{\Lambda}{q}\right)
\left(\frac{e^2}{2\epsilon_{0}v_{F}}\right)\int_{0}^{\pi}d\theta
\frac{\cos^2\theta-\sin^2\theta}{1+\frac{Ne^2}{16\epsilon_{0}v_{F}}\sin\theta}.
\end{eqnarray}
In the large $N$ limit, it is possible to use the approximation
\begin{eqnarray}
\int_{0}^{\pi}d\theta \frac{\cos^2\theta -
\sin^2\theta}{1+\frac{Ne^2}{16\epsilon_{0}v_{F}}\sin\theta} \approx
\frac{32\epsilon_{0}v_{F}}{Ne^2}
\ln\left(N\frac{e^2}{8\epsilon_{0}v_{F}}\right),
\end{eqnarray}
so that
\begin{eqnarray}
X(q) &=& -\frac{4\gamma_{0}}{\pi^2}
\frac{\ln\left(N\frac{e^2}{8\epsilon_{0}v_{F}}\right)
\ln\left(\frac{\Lambda}{q}\right)}{N}. \label{eqn:Xq}
\end{eqnarray}
After substituting Eq.(\ref{eqn:Xq}) to
Eq.(\ref{eqn:VertexPolarization}), we finally get
\begin{eqnarray}
\Pi_{\mathrm{v}}(q) &=& -\frac{8}{\pi^2}
\frac{\ln\left(N\frac{e^2}{8\epsilon_{0}v_{F}}\right)
\ln\left(\frac{\Lambda}{q}\right)}{N} \Pi(q).
\end{eqnarray}
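The large-$N$ approximation used above, which in terms of $a \equiv Ne^2/16\epsilon_{0}v_{F}$ reads $\int_{0}^{\pi}d\theta\,(\cos^2\theta-\sin^2\theta)/(1+a\sin\theta)\approx (2/a)\ln(2a)$, can be verified numerically. The sketch below is a sanity check, not part of the derivation; the relative deviation from the leading logarithm shrinks slowly (like $1/\ln a$) as $a$ grows.

```python
import numpy as np

def angular_integral(a, n=1_000_001):
    """Evaluate I(a) = int_0^pi (cos^2 t - sin^2 t)/(1 + a sin t) dt
    with the composite trapezoidal rule (note cos^2 t - sin^2 t = cos 2t)."""
    t = np.linspace(0.0, np.pi, n)
    f = np.cos(2.0 * t) / (1.0 + a * np.sin(t))
    h = t[1] - t[0]
    return float(h * (f.sum() - 0.5 * (f[0] + f[-1])))

def leading_log(a):
    """Leading large-a behavior quoted in the text, (2/a) ln(2a)."""
    return float((2.0 / a) * np.log(2.0 * a))
```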
\section{Introduction}
\label{sec:intro}
Several decades of investment in Earth observation have made available many satellite and aerial sensors with ever finer resolution and shorter revisit times, establishing remote sensing as a very important tool for remote Earth investigation.
Synthetic Aperture Radar (SAR) systems are active sensors that acquire images continuously, day and night, making them a useful tool for several tasks such as monitoring, classification, and segmentation.
Unfortunately, SAR images are affected by a multiplicative noise called speckle, which is due to the interference among the backscattered returns of the objects inside a single sensor resolution cell. Hence, the resulting SAR image is composed of an alternation of bright and dark points that makes its interpretation more challenging \cite{Argenti2013}.
In order to ease further tasks such as detection, classification, segmentation and 3D reconstruction \cite{Hossi2018}, \cite{ambrosanio2017},\cite{Mazza2019}, \cite{Budillon2019}, many studies on the design of despeckling filters have been conducted in the last decades.
Despeckling filters fall into three main categories: local, non-local (NL), and CNN-based filters. NL filters look for similarities among patches across the whole image according to a certain criterion \cite{Hossi2019b}; individual NL filters differ in the chosen similarity and combination criteria \cite{Deledalle2014}, \cite{Ferraioli2019bis},\cite{vitale2019bis}. Thanks to this rationale, NL filters ensure a better trade-off between noise reduction and edge preservation with respect to local filters, which combine only adjacent pixels \cite{Argenti2013}.
In the last years, deep learning has set a breakthrough in many fields of image processing, such as classification, segmentation and detection, reaching state-of-the-art performance \cite{He2017}. This progress has attracted attention in all research fields, not least in the remote sensing community.
Several despeckling filters based on convolutional neural networks have been proposed, such as \cite{Wang2017}, \cite{Chierchia2017}, \cite{Vitale2019}, \cite{Ferraioli2019},\cite{vitale2020d}. In our last proposal \cite{Vitale2019} and its improved version proposed in \cite{vitale2020}, we focused on the implementation of a new cost function. In this work, instead, we pay attention to the network's complexity (in the sense of the amount of trainable parameters), given by the number of layers and the number of extracted features.
Obviously, the network's complexity and the dataset dimension are related to each other: the aim is to explore how performance depends on the network complexity for a fixed dataset. Hence, we train the same architecture varying the number of layers and features, and an analysis of the results is carried out on both simulated and real images.
\section{Method}
In order to carry out the aforementioned complexity analysis, we consider as baseline the solution proposed in \cite{vitale2020}, which is a CNN of ten layers trained on simulated data. Starting from this architecture, we train several variations with different numbers of layers and features.
\subsection{Baseline}
In \cite{vitale2020}, the simulation process was carried out under the fully developed speckle hypothesis: we simulated the speckle $N$ with a Gamma distribution $p(N)$ and number of looks $L=1$
$$p(N) = \frac{1}{\Gamma(L)} L^L N^{L-1} e^{-NL}$$
Once the speckle has been simulated, we multiplied it by a noise-free intensity image $X$ in order to obtain a simulated SAR image $Y = X \cdot N$.
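A minimal sketch of this simulation step (unit-mean Gamma speckle multiplied onto a flat toy reflectivity patch; the image below is a placeholder, not a Merced Land Use patch):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_speckle(X, L=1):
    """Multiply a noise-free intensity image X by unit-mean Gamma speckle
    with L looks (fully developed hypothesis); L=1 gives p(N) = e^{-N}."""
    N = rng.gamma(shape=L, scale=1.0 / L, size=X.shape)
    return X * N, N

X = np.full((64, 64), 5.0)       # toy flat reflectivity patch (placeholder)
Y, N = simulate_speckle(X, L=1)  # simulated single-look SAR intensity
```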
In \cite{vitale2020}, we focused mainly on the definition of a cost function that takes care of the spatial and spectral properties of SAR images by combining three terms: $$\mathcal{L} = \mathcal{L}_2 + \lambda_{edge} \mathcal{L}_{edge} + \lambda_{KL} \mathcal{L}_{KL} $$
$$ \mathcal{L}_{2} = || \hat{X} - X ||^2$$
$$\mathcal{L}_{edge} =\left( \de{X}{u} - \de{\hat{X}}{u} \right) ^2 + \left( \de{X}{v} - \de{\hat{X}}{v} \right) ^2 $$
$$ \mathcal{L}_{KL} = D_{KL} \left( \frac{Y}{\hat{X}}, \frac{Y}{X} \right) = D_{KL} ( \hat N, N ) $$
where $X$,$\hat{X}$ and $\hat{N}$ are respectively the noise-free reference, the estimated noise-free image and the estimated noise. The couple $(u,v)$ indicate the horizontal and vertical direction of the image.
$\mathcal{L}_{2}$ is the MSE between the reference and the estimated noise-free image.
$\mathcal{L}_{edge}$ is a measure of the difference of the gradient along the horizontal and vertical directions of $X$ and $\hat{X}$.
$\mathcal{L}_{KL}$ is the Kullback-Leibler divergence (KLD) between the statistical distribution of the estimated noise $\hat{N}$ and the theoretical one.
The first two terms are responsible of preserving spatial details and edges, respectively.
The last one involves the estimated noise in order to preserve statistical properties of the SAR image. The reader is invited to refer to \cite{vitale2020} for more details and a deeper insight.
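The three terms above can be transcribed as follows. This is a toy numpy version: the paper evaluates them on network outputs during training, and the weights $\lambda$, the histogram range, and the histogram-based KLD estimate below are illustrative choices.

```python
import numpy as np

def despeckling_loss(X_hat, X, Y, lam_edge=1.0, lam_kl=1.0, bins=50, eps=1e-12):
    """Toy transcription of L = L2 + lam_edge*L_edge + lam_kl*L_KL."""
    l2 = np.mean((X_hat - X) ** 2)
    # edge term: squared differences of horizontal/vertical gradients
    du = np.diff(X, axis=1) - np.diff(X_hat, axis=1)
    dv = np.diff(X, axis=0) - np.diff(X_hat, axis=0)
    l_edge = np.mean(du ** 2) + np.mean(dv ** 2)
    # KL term between the estimated noise Y/X_hat and the true noise Y/X
    n_hat, edges = np.histogram(Y / X_hat, bins=bins, range=(0.0, 5.0), density=True)
    n_true, _ = np.histogram(Y / X, bins=bins, range=(0.0, 5.0), density=True)
    l_kl = np.sum(np.diff(edges) * n_hat * np.log((n_hat + eps) / (n_true + eps)))
    return l2 + lam_edge * l_edge + lam_kl * l_kl

rng = np.random.default_rng(1)
X = rng.uniform(1.0, 2.0, size=(32, 32))    # toy clean image
Y = X * rng.gamma(1.0, 1.0, size=(32, 32))  # its 1-look speckled version
perfect = despeckling_loss(X, X, Y)         # zero when X_hat equals X
```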
\subsection{Proposed Analysis}
In this work we focus on the network architecture, more precisely on its depth and width: starting from the architecture proposed in \cite{vitale2020}, we train different networks on a fixed simulated dataset.
We consider the Merced Land Use dataset \cite{MercedLandUse} and use ($ 57526 \times 64 \times 64 $) patches for training and ($14336 \times 64 \times 64$) for validation.
\begin{figure}[ht]
\centering
\caption{Baseline Network: it is composed of a first convolutional layer followed by ReLU, several inner convolutional layer followed by Batch Normalization and ReLU and the output layer is a convolutional layer.}
\pgfuseimage{net}
\label{fig:net}
\end{figure}
In Fig.\ref{fig:net}, the basic architecture of the network is depicted. Generally, each trained network is composed of a first convolutional layer extracting $n_f$ features, followed by a ReLU activation function. Then come several inner convolutional layers with $n_f$ features, each followed by batch normalization and a ReLU as well. Finally, the last layer is a simple convolutional layer with a single-channel output.
The kernel size ($K \times K$) of each convolutional layers is fixed to ($3 \times 3$).
The number of layers varies from a minimum of ten to a maximum of seventeen. We were not able to train deeper networks.
For each network, we explore the influence of the number of extracted features,
training the same architecture once with $n_f=32$ features and once again with $n_f=64$ features. Table \ref{tab: arcs} lists the trained solutions:
\begin{itemize}
\item $\mathbf{M\#_{t}}$ stands for a thin network trained with $n_f=32$ features
\item $\mathbf{M\#_{l}}$ stands for a large network trained with $n_f=64$ features
\end{itemize}
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{2.5pt}
\caption{Architectures under test: networks from 10 to 17 layers are considered. For each network M\#, two versions are trained: a thin version with 32 features, named M\#$_{t}$, and a larger version with 64 features, named M\#$_{l}$. The last row reports the number of trainable parameters of each network, expressed in units of $10^3$.}
\begin{tabular}{l |c|c|c|c|c|c|c|c}
& \labsty{$\mathbf{M1_t}$} & \labsty{$\mathbf{M1_l}$} & \labsty{$\mathbf{M2_t}$} & \labsty{$\mathbf{M2_l}$} & \labsty{$\mathbf{M3_t}$} &\labsty{$\mathbf{M3_l}$} & \labsty{$\mathbf{M4_t}$} & \labsty{$\mathbf{M4_l}$} \\
\hline
\# layers &10 &10 & 12 &12 &15 &15 &17 &17 \\
\# features &32 &64 &32 &64 & 32 &64 &32 &64\\
\# parameters &3.4 & 6.8& 4.2 & 8.3& 5.3 & 10.6 & 6.1 & 12.1 \\
\hline
\end{tabular}
\label{tab: arcs}
\end{table}
The losses evaluated on the validation dataset are shown in Fig. \ref{fig:loss}.
According to Fig. \ref{fig:loss}, the M\#$_{l}$ networks are faster and show a better optimization process than the M\#$_{t}$ ones: regarding the KLD, all the networks have similar performance; a slight gain is visible on the MSE; and a bigger improvement appears on the edge loss, with M4$_{l}$ showing the best performance on both the MSE and the edge loss. It seems that a deeper and wider network is able to better catch the properties of the image and to filter the noise while preserving spatial details.
\begin{figure*}[ht]
\centering
\caption{Loss function evaluated on validation dataset, from left to right: Mean Square Error, Kullback-Leibler Divergence, Edge Loss. Dashed line correspond to M\#$_{t}$ features network. Solid to M\#$_{l}$ features network. }
\begin{tabular}{ccc}
\pgfuseimage{l2} & \pgfuseimage{kl} & \pgfuseimage{edge}\\
\end{tabular}
\label{fig:loss}
\end{figure*}
\section{Experiments and Results}
In order to compare the performance of these solutions, we carry out experiments on both simulated and real data. Numerical and visual results are shown in Figs. \ref{fig:sim-results}-\ref{fig:real-results}. We select ($50 \times 256 \times 256$) images from the simulated dataset and one real image from COSMO-SkyMed for testing.
The networks listed in Tab.\ref{tab: arcs} are trained for 130 epochs using the Adam optimizer \cite{Kingma14} with learning rate $\eta = 0.0003$.
Tab. \ref{tab:simulated results} summarizes the numerical assessment on simulated and real data. Reference metrics, such as SNR, SSIM and MSE, are averaged on the full testing dataset. For the assessment on real data, we consider the M-index \cite{Gomez2017}, which is a combination of the ENL and the Haralick homogeneity. The ENL measures the ability of suppressing noise. The homogeneity quantifies the structures left in the estimated noise, so it gives an indication of the ability of the filter to remove noise without suppressing useful information. In fact, pure speckle should be random and should not highlight structures, producing a homogeneity equal to 0.
Therefore, we also report the homogeneity extracted from the M-index in order to highlight this behaviour of the filters.\\
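For reference, the ENL over a presumably homogeneous region is simply $\mathrm{mean}^2/\mathrm{variance}$; a minimal sketch on synthetic 1-look and 16-look speckle regions (illustrative data, not our test images):

```python
import numpy as np

def enl(region):
    """Equivalent Number of Looks over a (presumed) homogeneous intensity
    region: mean^2 / variance; for L-look Gamma speckle, ENL is close to L."""
    region = np.asarray(region, dtype=float)
    return float(region.mean() ** 2 / region.var())

rng = np.random.default_rng(0)
single_look = 10.0 * rng.gamma(shape=1.0, scale=1.0, size=(200, 200))
multi_look = 10.0 * rng.gamma(shape=16.0, scale=1.0 / 16.0, size=(200, 200))
```

The stronger the noise suppression of a filter, the higher the ENL measured on its output over a homogeneous area.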
Regarding the simulated evaluation, Tab. \ref{tab:simulated results} shows how the wider M\#$_{l}$ networks clearly outperform the thinner M\#$_{t}$ ones on all the indexes, with only M1$_{t}$ achieving competitive results. This behaviour is fully in line with the validation loss.
Among the wider networks, M3$_{l}$ is the best one on all the indexes, even if all the solutions are very close to each other.
In order to properly evaluate the quality of the filters, visual inspection needs to accompany the numerical assessment (only the most competitive models are shown).
In Fig.\ref{fig:sim-results}, it is clear how all the M\#$_{l}$ networks are very close to each other. It is important to notice that, visually, the best solution is M4$_{l}$, which shows good edge preservation, while M3$_{l}$ and M2$_{l}$ tend to smooth the results and M1$_{l}$ presents small artefacts. Indeed, M4$_{l}$ is the solution that best reconstructs the antenna on the top of the storage tank (first clip) and best recovers the lines of the tennis court (second clip).
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{3.5pt}
\caption{Numerical results: on the left, reference metrics for simulated images (SSIM, SNR, MSE); on the right, no-reference metrics for real results (M-index, homogeneity (H)); computational time for an image of size $512 \times 512$. Best value in blue, second best in red.}
\begin{tabular}{l||ccc||cc||c}
& \labsty{SSIM} & \labsty{SNR} & \labsty{MSE} & \labsty{M-index} & \labsty{H} \tiny ($10^{-1})$ & \textbf{Time}(sec) \\
\hline
\labsty{$\mathbf{M1_t}$} & 0.7279 & 8.2276 & 294 & 9.89 & 0.293 & \aa{4} \\
\labsty{$\mathbf{M2_t}$} & 0.7238 & 7.9600 & 311 & 10.47 & 0.276 & \bb{4.1} \\
\labsty{$\mathbf{M3_t}$} & 0.7180 & 7.4201 & 352 & 10.30 & 0.218 & 4.7 \\
\labsty{$\mathbf{M4_t}$} & 0.7114 & 7.2222 & 366 & 8.76 & 0.117 & 4.9 \\
\hline
\labsty{$\mathbf{M1_l}$} & 0.7344 & 8.3662 & 287 & 9.30 & 0.201 & 5.2 \\
\labsty{$\mathbf{M2_l}$} & 0.7341 & \bb{8.5677} & 273 & \bb{8.29} & \bb{0.139}& 5.4 \\
\labsty{$\mathbf{M3_l}$} & \aa{0.7389} & \aa{8.6098} & \aa{270}& 9.73 & 0.247 & 5.7 \\
\labsty{$\mathbf{M4_l}$} & \bb{0.7375} & 8.5635 & \bb{272}& \aa{7.46} &\aa{0.003} & 5.8 \\
\hline
\end{tabular}
\label{tab:simulated results}
\end{table}
\begin{figure}[ht]
\centering
\caption{Simulated results: detail of testing images selected by Merced Land Use dataset}
\begin{tabular}{cccccc}
\labsty{Noisy} & \labsty{Reference} & \labsty{$\mathbf{M4_l}$} & \labsty{$\mathbf{M3_l}$} & \labsty{$\mathbf{M2_l}$} & \labsty{$\mathbf{M1_l}$} \\
\pgfuseimage{sim1_noisy} & \pgfuseimage{sim1_ref} & \pgfuseimage{sim1_M4l} & \pgfuseimage{sim1_M3l} &\pgfuseimage{sim1_M2l} & \pgfuseimage{sim1_M1l}\\
\pgfuseimage{sim2_noisy} & \pgfuseimage{sim2_ref} & \pgfuseimage{sim2_M4l} & \pgfuseimage{sim2_M3l} &\pgfuseimage{sim2_M2l} & \pgfuseimage{sim2_M1l}\\
\end{tabular}
\label{fig:sim-results}
\end{figure}
In order to have a full view of the performance of these filters, a comparison on real SAR images is shown in Fig.~\ref{fig:real-results}. Also in this case, the solutions produce very close results with slight differences. Observing the detail at the bottom of Fig.~\ref{fig:real-results}, all the filters produce a very similar effect on homogeneous areas. The real difference lies in the treatment of objects that produce strong backscattering. Indeed, M4$_{l}$ is the solution that best preserves such objects, at the cost of a slight increase in computational time, while the others tend to smooth them, making the detection of single scatterers difficult and impairing the properties of adjacent areas. For a comparison with other methods, the reader is invited to refer to \cite{vitale2020}, where a visual and numerical comparison of the M1$_l$ network with other methods has been carried out.
\begin{figure}[ht]
\centering
\caption{Real results: on the top image of Naples from CosmoSky-Med, in the bottom a detail of the full image}
\begin{tabular}{ccccc}
\labsty{Noisy} & \labsty{$\mathbf{M4_l}$} & \labsty{$\mathbf{M3_l}$} & \labsty{$\mathbf{M2_l}$} & \labsty{$\mathbf{M1_l}$} \\
\pgfuseimage{real_noisy} & \pgfuseimage{real_M4l} & \pgfuseimage{real_M3l} &\pgfuseimage{real_M2l} & \pgfuseimage{real_M1l}\\
\pgfuseimage{real_det_noisy} & \pgfuseimage{real_det_M4l} & \pgfuseimage{real_det_M3l} &\pgfuseimage{real_det_M2l} & \pgfuseimage{real_det_M1l}\\
\end{tabular}
\label{fig:real-results}
\end{figure}
\section{Conclusion}
In this work an analysis of the complexity of a network for SAR despeckling has been carried out.
Starting from our previous solution, we train different variations of the model, changing its depth (number of layers) and width (number of features).
As expected, deeper and wider networks perform better.
Generally, even if there is no big difference in performance on the simulated results, the deepest and widest network is the one that best deals with real data: the bigger the level of abstraction, the better the generalization performance.
\label{sec:ref}
\bibliographystyle{IEEE}
\section{Introduction}
Let $H\subseteq \mathbb{R}^n$ be a finite set and denote by
\begin{equation}
\mathrm{int.cone}(H):=\{\lambda_1x_1+\cdots+\lambda_kx_k\mid x_1,\ldots,x_k\in H, \lambda_1,\ldots,\lambda_k\in \mathbb{Z}_{\geq 0}\}
\end{equation}
the integer cone generated by $H$. The \emph{Carath\'eodory rank} of $H$, denoted $\mathrm{cr}(H)$, is the least integer $t$ such that every element in $\mathrm{int.cone}(H)$ is the nonnegative integer combination of $t$ elements from $H$.
The set $H$ is called a \emph{Hilbert base} if $\mathrm{int.cone}(H)=\mathrm{cone}(H)\cap \mathrm{lattice}(H)$, where $\mathrm{cone}(H)$ and $\mathrm{lattice}(H)$ are the convex cone and the lattice generated by $H$, respectively.
Cook et al.~\cite{CookFonluptSchrijver} showed that when $H$ is a Hilbert base generating a pointed cone, the bound $\mathrm{cr}(H)\leq 2n-1$ holds. This bound was improved to $2n-2$ by Seb\H o \cite{Sebo}. In the same paper, Seb\H o conjectured that $\mathrm{cr}(H)\leq n$ holds for any Hilbert base generating a pointed cone. A counterexample to this conjecture was found by Bruns et al.~\cite{Brunsetal}.
Here we consider the case that $H$ is the set of incidence vectors of the bases of a matroid on $n$ elements. In his paper on testing membership in matroid polyhedra, Cunningham \cite{Cunningham} first asked for an upper bound on the number of different bases needed in a representation of a vector as a nonnegative integer sum of bases. It follows from Edmonds' matroid partitioning theorem \cite{Edmonds} that the incidence vectors of matroid bases form a Hilbert base for the pointed cone they generate. Hence the upper bound of $2n-2$ applies. This bound was improved by de Pina and Soares \cite{dePina} to $n+r-1$, where $r$ is the rank of the matroid. Chaourar \cite{Chaourar} showed that an upper bound of $n$ holds for a certain minor-closed class of matroids.
In this paper we show that the conjecture of Seb\H o holds for the bases of (poly)matroids. That is, the Carath\'eodory rank of the set of bases of a matroid is upper bounded by the cardinality of the ground set. More generally, we show that for an integer valued submodular function $f$, the Carath\'eodory rank of the set of bases of $f$ equals the maximum number of affinely independent bases of $f$.
\section{Preliminaries}
In this section we introduce the basic notions concerning submodular functions. For background and more details, we refer the reader to \cite{Fujishige,Schrijver}.
Let $E$ be a finite set and denote its power set by $\mathcal{P}(E)$. A function $f:\mathcal{P}(E)\to \mathbb{Z}$ is called \emph{submodular} if $f(\emptyset)=0$ and for any $A,B\subseteq E$ the inequality $f(A)+f(B)\geq f(A\cup B)+f(A\cap B)$ holds. The set
\begin{equation}
EP_f:=\{x\in \mathbb{R}^E\mid x(U)\leq f(U)\text{ for all $U\subseteq E$}\}
\end{equation}
is called the \emph{extended polymatroid} associated to $f$, and
\begin{equation}
B_f=\{x\in EP_f\mid x(E)=f(E)\}
\end{equation}
is called the \emph{base polytope} of $f$. Observe that $B_f$ is indeed a polytope, since for $x\in B_f$ and $e\in E$, the inequalities $f(E)-f(E-e)\leq x(e)\leq f(\{e\})$ hold, showing that $B_f$ is bounded.
A submodular function $f:\mathcal{P}(E)\to \mathbb{Z}$ is the rank function of a matroid $M$ on $E$ if and only if $f$ is nonnegative, nondecreasing and $f(U)\leq |U|$ for every set $U\subseteq E$. In that case, $B_f$ is the convex hull of the incidence vectors of the bases of $M$.
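For small ground sets, the defining submodular inequality can be checked by brute force. The following sketch (our illustration, not part of the paper) verifies it for the rank function of the uniform matroid $U_{2,4}$:

```python
from itertools import combinations

def subsets(E):
    """Yield all subsets of the ground set E as frozensets."""
    for r in range(len(E) + 1):
        for c in combinations(E, r):
            yield frozenset(c)

def is_submodular(f, E):
    """Brute-force check that f(empty) = 0 and
    f(A) + f(B) >= f(A | B) + f(A & B) for all A, B subseteq E."""
    if f(frozenset()) != 0:
        return False
    S = list(subsets(E))
    return all(f(A) + f(B) >= f(A | B) + f(A & B) for A in S for B in S)

E = frozenset(range(4))
rank = lambda U: min(len(U), 2)  # rank function of the uniform matroid U_{2,4}
print(is_submodular(rank, E))    # True
```

The same checker rejects, for instance, $f(U)=|U|^2$, which is supermodular rather than submodular.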
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular. We will construct new submodular functions from $f$. The \emph{dual} of $f$, denoted $f^*$, is defined by
\begin{eqnarray}
f^*(U):=f(E\setminus U)-f(E).
\end{eqnarray}
It is easy to check that $f^*$ is again submodular, that $(f^*)^*=f$ and that $B_{f^*}=-B_f$. For $a:E\to \mathbb{Z}$, the function $f+a$ given by $(f+a)(U):=f(U)+a(U)$ is submodular and $B_{f+a}=a+B_f$. The \emph{reduction of $f$ by $a$}, denoted $f|a$ is defined by
\begin{equation}
(f|a)(U):=\min_{T\subseteq U}(f(T)+a(U\setminus T)).
\end{equation}
It is not hard to check that $f|a$ is submodular and that $EP_{f|a}=\{x\in EP_f\mid x\leq a\}$. Hence we have that $B_{f|a}=\{x\in B_f\mid x\leq a\}$ when $B_f\cap\{x\mid x\leq a\}$ is nonempty. We will only need the following special case. Let $e_0\in E$ and $c\in \mathbb{Z}$ and define $a:E\to \mathbb{Z}$ by
\begin{equation}
a(e):=\begin{cases}c&\text{ if $e=e_0$,}\\f(\{e\})&\text{ if $e\neq e_0$.}\end{cases}
\end{equation}
Denote $f|(e_0,c):=f|a$. If $x_{e_0}\leq c$ for some $x\in B_f$, we obtain
\begin{equation}
B_{f|(e_0,c)}=\{x\in B_f\mid x(e_0)\leq c\}.
\end{equation}
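For small ground sets, the reduction $f|a$ can be evaluated directly from its definition as a minimum over subsets. The sketch below (ours, for illustration only) computes $f|a$ and checks the special case $f|(e_0,c)$ for the uniform matroid rank function $f(U)=\min(|U|,2)$ on three elements:

```python
from itertools import combinations

def reduce_by(f, a):
    """Reduction f|a of a submodular f by a vector a:
    (f|a)(U) = min over T subseteq U of f(T) + a(U - T)."""
    def fa(U):
        U = frozenset(U)
        return min(
            f(frozenset(T)) + sum(a[e] for e in U - frozenset(T))
            for r in range(len(U) + 1)
            for T in combinations(U, r)
        )
    return fa

f = lambda U: min(len(U), 2)   # rank function of the uniform matroid U_{2,3}
# f|(e0, c) with e0 = 0 and c = 0: a(e0) = c, a(e) = f({e}) elsewhere
a = {0: 0, 1: 1, 2: 1}
fa = reduce_by(f, a)
print(fa({0}), fa({0, 1, 2}))  # 0 2
```

As expected, the singleton value drops to $c=0$ while $(f|a)(E)$ stays at $f(E)=2$, so $B_{f|(e_0,c)}$ is the slice of $B_f$ with $x(e_0)\leq 0$.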
Our main tool is Edmonds' \cite{Edmonds} polymatroid intersection theorem, which we state here for the base polytope.
\begin{theorem}\label{edmonds}
Let $f,f':\mathcal{P}(E)\to \mathbb{Z}$ be submodular. Then $B_f\cap B_{f'}$ is an integer polytope.
\end{theorem}
We will also use the following corollary (see \cite{Edmonds}).
\begin{theorem}\label{idp}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular. Let $k$ be a positive integer and let $x\in (kB_f)\cap \mathbb{Z}^E$. Then there exist $x_1,\ldots,x_k\in B_f\cap \mathbb{Z}^E$ such that $x=x_1+\cdots+x_k$.
\end{theorem}
\begin{proof}
By the above constructions, the polytope $x-(k-1)B_f$ is the base polytope of the submodular function $f'=x+(k-1)f^*$. Consider the polytope $P:=B_f\cap B_{f'}$. It is nonempty, since $\frac{1}{k}x\in P$, and it is integer by Theorem \ref{edmonds}. Let $x_k\in P$ be an integer point. Then $x-x_k$ is an integer point in $(k-1)B_f$ and we can apply induction.
\end{proof}
Important in our proof will be the fact that faces of the base polytope of a submodular function are themselves base polytopes as the following proposition shows.
\begin{proposition}\label{faces}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular and let $F\subseteq B_f$ be a face of dimension $|E|-t$. Then there exist a partition $E=E_1\cup\cdots\cup E_t$ and submodular functions $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ such that $F=B_{f_1}\oplus\cdots\oplus B_{f_t}$. In particular, $F$ is the base polytope of a submodular function.
\end{proposition}
A proof was given in \cite{Schrijver}, but for convenience of the reader, we will also give a proof here.
\begin{proof}
Let $\mathcal{T}\subseteq \mathcal{P}(E)$ correspond to the tight constraints on $F$:
$$
\mathcal{T}=\{U\subseteq E\mid x(U)=f(U) \text{ for all $x\in F$}\}.
$$
It follows from the submodularity of $f$ that $\mathcal{T}$ is closed under taking unions and intersections.
Observe that the characteristic vectors $\{\chi^A\mid A\in \mathcal{T}\}$ span a $t$-dimensional space $V$.
Let $\emptyset=A_0\subset A_1\subset\cdots\subset A_{t'}=E$ be a maximal chain of sets in $\mathcal{T}$. We claim that $t'=t$. Observe that the characteristic vectors $\chi^{A_1},\ldots, \chi^{A_{t'}}$ are linearly independent and span a $t'$-dimensional subspace $V'\subseteq V$. Hence $t'\leq t$.
To prove equality, suppose that there exists an $A\in \mathcal{T}$ such that $\chi^A\not\in V'$. Take such an $A$ that is inclusionwise maximal. Now let $i\geq 0$ be maximal such that $A_i\subseteq A$. Then $A_i\subseteq A_{i+1}\cap A\subsetneq A_{i+1}$. Hence by maximality of the chain, $A_{i+1}\cap A=A_i$. By maximality of $A$, we have $\chi^{A\cup A_{i+1}}\in V'$ and hence $\chi^A=\chi^{A\cap A_{i+1}}+\chi^{A\cup A_{i+1}}-\chi^{A_{i+1}}\in V'$, contradicting the choice of $A$. This shows that $t'=t$.
Define $E_i=A_i\setminus A_{i-1}$ for $i=1,\ldots, t$. Define $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ by $f_i(U):=f(A_{i-1}\cup U)-f(A_{i-1})$ for all $U\subseteq E_i$. We will show that
\begin{equation}
F=B_{f_1}\oplus\cdots\oplus B_{f_t}.
\end{equation}
To see the inclusion `$\subseteq$', let $x=(x_1,\ldots,x_t)\in F$. Then $x(A_i)=f(A_i)$ holds for $i=0,\ldots,t$. Hence for any $i=1,\ldots,t$ and any $U\subseteq E_i$ we have
\begin{equation}
x_i(U)=x(A_{i-1}\cup U)-x(A_{i-1})\leq f(A_{i-1}\cup U)-f(A_{i-1})=f_i(U),
\end{equation}
and equality holds for $U=E_i$.
To see the converse inclusion `$\supseteq$', let $x=(x_1,\ldots,x_t)\in B_{f_1}\oplus\cdots\oplus B_{f_t}$. Clearly
\begin{equation}
x(A_k)=\sum_{i=1}^k x_i(E_i)=\sum_{i=1}^k (f(A_i)-f(A_{i-1}))=f(A_k),
\end{equation}
in particular $x(E)=f(E)$. To complete the proof, we have to show that $x(U)\leq f(U)$ holds for all $U\subseteq E$. Suppose for contradiction that $x(U)>f(U)$ for some $U$. Choose such a $U$ inclusionwise minimal. Now take $k$ minimal such that $U\subseteq A_k$. Then we have
\begin{eqnarray}
x(U\cup A_{k-1})&=&x(A_{k-1})+x_k(E_k\cap U)\nonumber\\
&\leq& f(A_{k-1})+f_k(E_k\cap U)=f(U\cup A_{k-1}).
\end{eqnarray}
Since $x(A_{k-1}\cap U)\leq f(A_{k-1}\cap U)$ by minimality of $U$, we have
\begin{eqnarray}
x(U)&=&x(A_{k-1}\cup U)+x(A_{k-1}\cap U)-x(A_{k-1})\nonumber\\
&\leq &f(A_{k-1}\cup U)+f(A_{k-1}\cap U)-f(A_{k-1})\leq f(U).
\end{eqnarray}
This contradicts the choice of $U$.
\end{proof}
\section{The main theorem}
In this section we prove our main theorem. For $B_f\subseteq \mathbb{R}^E$, denote $\mathrm{cr} (B_f):=\mathrm{cr} (B_f\cap \mathbb{Z}^E)$.
\begin{theorem}\label{main}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be a submodular function. Then $\mathrm{cr} (B_f)=\dim B_f+1$.
\end{theorem}
We will need the following lemma.
\begin{lemma}\label{directsum}
Let $B_{f_1}, \ldots, B_{f_t}$ be base polytopes. Then $\mathrm{cr}(B_{f_1}\oplus\cdots\oplus B_{f_t})\leq \mathrm{cr}(B_{f_1})+\cdots+\mathrm{cr}(B_{f_t})-(t-1)$.
\end{lemma}
\begin{proof}
It suffices to show the lemma in the case $t=2$.
Let $k$ be a positive integer and let $w=(w_1,w_2)$ be an integer vector in $k\cdot(B_{f_1}\oplus B_{f_2})$. Let $w_1=\sum_{i=1}^r m_ix_i$ and $w_2=\sum_{i=1}^s n_iy_i$, where the $m_i,n_i$ are positive integers and the $x_i\in B_{f_1}$, $y_i\in B_{f_2}$ are integer vectors. Denote
\begin{eqnarray}
\{0,m_1,m_1+m_2,\ldots,m_1+\cdots+m_r\}&\cup&\nonumber\\
\{0,n_1,n_1+n_2,\ldots,n_1+\cdots+n_s\}&=&\{l_0,l_1,\ldots,l_q\},
\end{eqnarray}
where $0=l_0<l_1<\cdots<l_q=k$. Since $m_1+\cdots+m_r=n_1+\cdots+n_s=k$, we have $q\leq r+s-1$. For any $i=1,\ldots,q$, there exist unique $j,j'$ such that $m_1+\cdots+m_{j-1}<l_i\leq m_1+\cdots+m_j$ and $n_1+\cdots+n_{j'-1}<l_i\leq n_1+\cdots+n_{j'}$. Denote
$z_i:=(x_j,y_{j'})$. We now have the decomposition $w=\sum_{i=1}^q (l_i-l_{i-1})z_i$.
\end{proof}
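The construction in this proof is effectively a two-pointer merge of the two weight sequences, which can be sketched as follows (our illustration; the pair $(x, y)$ stands for the concatenated base $z_i$):

```python
def merge_decompositions(parts1, parts2):
    """Merge decompositions [(m_i, x_i)] and [(n_j, y_j)] with equal
    total weight k into a decomposition [(l, (x, y))] of the direct
    sum, using at most r + s - 1 pairs."""
    a, b = list(parts1), list(parts2)
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        (m, x), (n, y) = a[i], b[j]
        l = min(m, n)                 # length of the common interval
        out.append((l, (x, y)))
        a[i], b[j] = (m - l, x), (n - l, y)
        if a[i][0] == 0:              # advance whichever part is used up
            i += 1
        if b[j][0] == 0:
            j += 1
    return out

d = merge_decompositions([(2, 'x1'), (3, 'x2')], [(1, 'y1'), (4, 'y2')])
print(d)  # [(1, ('x1', 'y1')), (1, ('x1', 'y2')), (3, ('x2', 'y2'))]
```

Here $r=s=2$ and $k=5$, and the merged decomposition indeed uses $3 = r+s-1$ pairs of total weight $k$.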
We conclude this section with a proof of Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}.]
The inequality $\mathrm{cr} (B_f)\geq \dim B_f+1$ is clear. We will prove the converse inequality by induction on $\dim B_f+|E|$, the case $|E|=1$ being clear. Let $E$ be a finite set, $|E|\geq 2$ and let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular.
Let $k$ be a positive integer and let $w\in kB_f\cap \mathbb{Z}^E$. We have to prove that $w$ is the positive integer combination of at most $\dim B_f+1$ integer points in $B_f$. We may assume that
\begin{equation}
\dim B_f=|E|-1.
\end{equation}
Indeed, suppose that $\dim B_f=|E|-t$ for some $t\geq 2$. Then by Proposition \ref{faces}, there exist a partition $E=E_1\cup\cdots\cup E_t$ and submodular functions $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ such that $B_f=B_{f_1}\oplus\cdots\oplus B_{f_t}$. By induction, $\mathrm{cr} (B_{f_i})=\dim B_{f_i}+1$ for every $i$. Hence by Lemma \ref{directsum}
\begin{eqnarray}
\mathrm{cr} (B_f)&\leq&\mathrm{cr} (B_{f_1})+\cdots+\mathrm{cr} (B_{f_t})-(t-1)\nonumber\\
&=&\dim B_{f_1}+\cdots+\dim B_{f_t}+1=\dim B_f+1.
\end{eqnarray}
Fix an element $e\in E$. Write $w(e)=kq+r$ where $r,q$ are integers and $0\leq r\leq k-1$. Let $f'=f|(e,q+1)$.
By Theorem \ref{idp}, we can find integer vectors $y_1,\ldots, y_k\in B_{f'}$ such that $w=y_1+\cdots+y_k$. We may assume that $y_i(e)=q+1$ for $i=1,\ldots,r$. Indeed, if $y_i(e)\leq q$ would hold for at least $k-r+1$ values of $i$, then we would arrive at the contradiction $w(e)\leq (k-r+1)q+(r-1)(q+1)\leq kq+r-1<w(e)$.
Let $f'':=f|(e,q)$. Denote $w':=y_1+\cdots+y_r$. So we have decomposed $w$ into integer vectors
\begin{eqnarray}
w'&\in &rB_{f'}=B_{rf'}\nonumber\\
w-w'&=&y_{r+1}+\cdots+y_k\in (k-r)B_{f''}=B_{(k-r)f''}.
\end{eqnarray}
We may assume that $r\neq 0$, since otherwise $w\in kF$, where $F$ is the face $B_{f''}\cap \{x\mid x(e)=q, x(E)=f(E)\}$ of dimension $\dim F\leq |E|-2$ (since $|E|\geq 2$). Then by induction we could write $w$ as a nonnegative integer linear combination of at most $1+(\dim F)<\dim B_f+1$ integer vectors in $B_{f''}\subseteq B_f$.
Consider the intersection
\begin{equation}
P:=B_{rf'}\cap B_{w+(k-r)(f'')^*}.
\end{equation}
Observe that $P$ is nonempty, since it contains $w'$. Furthermore, by Theorem \ref{edmonds}, $P$ is an integer polytope. Hence taking an integer vertex $x'$ of $P$ and denoting $x'':=w-x'$, we have that $x'$ is an integer vector of $B_{rf'}$ and $x''$ is an integer vector of $B_{(k-r)f''}$.
Let $F'$ be the inclusionwise minimal face of $B_{rf'}$ containing $x'$ and let $F''$ be the inclusionwise minimal face of $B_{w+(k-r)(f'')^*}$ containing $x'$. Denote $H':=\mathrm{aff.hull}(F')$ and $H'':=\mathrm{aff.hull}(F'')$. Since $x'$ is a vertex of $P$, we have
\begin{equation}
H'\cap H''=\{x'\}.
\end{equation}
Indeed, every supporting hyperplane of $B_{rf'}$ containing $x'$ also contains $F'$ by minimality of $F'$, and hence contains $H'$. Similarly, every supporting hyperplane of $B_{w+(k-r)(f'')^*}$ containing $x'$ also contains $H''$. Since $x'$ is the intersection of supporting hyperplanes for the two polytopes, the claim follows.
Observe that both $F'$ and $F''$ are contained in the affine space
\begin{equation}
\{x\in\mathbb{R}^n\mid x(E)=rf(E),\ x(e)=r(q+1)\},
\end{equation}
which has dimension $n-2$, where $n:=|E|\geq 2$. It follows that
\begin{eqnarray}
\dim F'+\dim F''&=&\dim H'+\dim H''\nonumber\\
&=&\dim(\mathrm{aff.hull}(H'\cup H''))+\dim(H'\cap H'')\nonumber\\
&\leq& n-2.
\end{eqnarray}
Since $F''$ is a face of $B_{w+(k-r)(f'')^*}$ containing $x'$, we have that $w-F''$ is a face of $B_{(k-r)f''}$ containing $x''$. By induction we see that
\begin{eqnarray}
\mathrm{cr} (F')+\mathrm{cr} (w-F'')&\leq& (\dim F'+1)+(\dim (w-F'')+1)\nonumber\\
&=&\dim F'+\dim F''+2\leq n.
\end{eqnarray}
This gives a decomposition of $w=x'+x''$ using at most $n$ different bases of $B_f$, completing the proof.
\end{proof}
\section{Introduction}
\label{sec:intro}
With the rapid advancement of deep learning technologies,
various end-to-end learned image/video codecs have been developed~\cite{balle2018, minnen, cheng2020} to rival their handcrafted
counterparts such as JPEG, High Efficiency Video Coding (HEVC)~\cite{hevc}, and Versatile Video Coding (VVC)~\cite{VVC}. For instance, in the seminal work by Ball\'{e} \textit{et al.}~\cite{balle2018}, the variational autoencoders (VAE) were used to construct an end-to-end learned image compression system based on a context-adaptive entropy model. This model incorporates a hyperprior as side information to effectively capture dependencies in the latent representation, thereby improving entropy modeling. Many follow-up VAE-based models were then developed to further improve compression performance~\cite{minnen, lee, cheng2020}. A popular one
by Cheng \textit{et al.}~\cite{cheng2020} used discretized Gaussian mixture likelihoods to parameterize the latent distribution
for
entropy modeling, achieving high rate-distortion (RD) performance.
In fact, the results in~\cite{cheng2020} show that this model achieves superior performance on both PSNR and MS-SSIM quality metrics over
JPEG, JPEG2000, and HEVC (Intra), and comparable performance with
VVC (Intra).
Although VAEs have been proven to be effective for image compression, their ability to provide high-quality input reconstruction has been called into question~\cite{ae_limit}.
To address this issue, a learned lossy image compression method was proposed in~\cite{ae_limit} based on normalizing flows. Using augmented normalizing flows (ANF), Ho \textit{et al.}~\cite{anfic} developed ANF for image compression (ANFIC),
which combines both VAEs and normalizing flows to achieve the state-of-the-art performance for image compression, even better than \cite{cheng2020}.
Building on the success of learned image compression, learned video compression is catching up quickly. Lu \textit{et al.}~\cite{dvc} presented deep video compression (DVC)
as the first end-to-end learned video codec based on temporal predictive coding. Agustsson \textit{et al.}~\cite{ssf} proposed an end-to-end video coding model based on a learning-based motion compensation framework, in which a warped frame produced by a learned flow map is used as a predictor for coding the current video frame. Liu \textit{et al.}~\cite{liu} used feature-domain warping in a coarse-to-fine manner for video compression. Hu \textit{et al.}~\cite{hu} employed deformable convolutions for feature warping.
\begin{figure*}
\centering
\includegraphics[scale=0.45]{flowchart1_new}
\caption{The block diagram of the proposed learned video compression system. AE and AD are the encoder and decoder of the HyperPrior coder from~\cite{balle2018}, respectively, and $\odot$ is point-wise multiplication. What is shown is the encoding of the first P frame where we use the HyperPrior coder for motion coding. We use PWC-Net~\cite{pwc} as the FlowNet.}
\label{fig:flowchart1}
\end{figure*}
Most of the existing video codecs rely on residual coding. However, Ladune \textit{et al.}~\cite{theo2,ladune} argued that conditional coding relative to a predictor is more efficient than residual coding using the same predictor.
Building on this idea, Ho \textit{et al.}~\cite{canf} proposed conditional augmented normalizing flows for video coding (CANF-VC),
which achieves state-of-the-art performance among learned video codecs. CANF-VC uses conditional coding for both motion and inter-frame coding.
In this paper we extend these ideas further. First, we provide a more comprehensive theoretical justification for conditional coding relative to multiple predictors/coding modes. Then, using conditional coding engines from CANF-VC, we construct a codec called learned conditional coding modes for video coding (LCCM-VC). The results show that LCCM-VC outperforms CANF-VC on three commonly used video test sets, and even outperforms the HM 16.22~\cite{hm} implementation of HEVC on two out of the three datasets.
The paper is organized as follows.
The proposed system is presented in Section \ref{sec:proposed}, including its theoretical motivation in Section~\ref{sec:motivation} and implementation description in Section~\ref{sec:codec_description}. Experiments are described and analyzed in Section~\ref{sec:experiments}. Finally, the conclusions are drawn in Section~\ref{sec:conclusions}.
\section{Proposed Method}
\label{sec:proposed}
\subsection{Motivation}
\label{sec:motivation}
We motivate the proposed codec via information theory. Let $X$ and $Y$ be two random variables and let $R=X-Y$, then
\begin{equation}
\begin{split}
H(X|Y) = H(R+Y|Y) &\stackrel{\text{(a)}}{=}
H(R|Y) \\
&\stackrel{\text{(b)}}{\leq} H(R) = H(X-Y),
\end{split}
\label{eq:conditional_vs_residual}
\end{equation}
where $H(\cdot)$ is the entropy, $H(\cdot|\cdot)$ is the conditional entropy, (a) follows from the fact that given $Y$, the only uncertainty in $R+Y$ is due to $R$, and (b) follows from the fact that conditioning does not increase entropy~\cite{Cover_Thomas_2006}.
Now consider the following Markov chain $X \to Y \to f(Y)$ where $f(\cdot)$ is an arbitrary function. By the data processing inequality~\cite{Cover_Thomas_2006}, we have $I(X;f(Y))\leq I(X;Y)$, where $I(\cdot;\cdot)$ is the mutual information. Expanding the two mutual informations as follows: $I(X;f(Y)) = H(X) - H(X|f(Y))$ and $I(X;Y)=H(X)-H(X|Y)$, and applying the data processing inequality, we conclude
\begin{equation}
H(X|Y) \leq H(X|f(Y)).
\label{eq:cond_entropy_f}
\end{equation}
In video compression, coding modes are constructed via predictors, for example inter coding modes use other frames to predict the current frame, while intra coding modes use information from the same frame to form a prediction. Let $X=\mathbf{X}_t$ be the current frame, $Y=\{\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)}\}$ be a set of $n$ candidate predictors, and $\mathbf{X}_p=f(\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)})$ be a predictor for $\mathbf{X}_t$ from $\{\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)}\}$. Function $f(\cdot)$ could, for example, use different combinations of $\{\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)}\}$ in different regions of the frame. Then, based on~(\ref{eq:conditional_vs_residual}) and~(\ref{eq:cond_entropy_f}),
\begin{equation}
\begin{split}
H(\mathbf{X}_t|\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)}) &\leq H(\mathbf{X}_t|f(\mathbf{X}^{(1)}, ..., \mathbf{X}^{(n)})) \\
&= H(\mathbf{X}_t|\mathbf{X}_p) \leq H(\mathbf{X}_t - \mathbf{X}_p).
\end{split}
\label{eq:cond_entropy_n_pred}
\end{equation}
In conventional video coding, frame prediction $\mathbf{X}_p$ is formed by using different predictors in different parts of the frame, and then coding the prediction residual $\mathbf{X}_t - \mathbf{X}_p$. However,~(\ref{eq:cond_entropy_n_pred}) says that a more efficient approach is coding $\mathbf{X}_t$ conditionally relative to the candidate predictors. In the next section we describe a codec that is built on these principles, utilizing conditional coding and a variety of predictors.
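The inequality $H(X|Y)\leq H(X-Y)$ can be sanity-checked numerically on a toy joint distribution (a self-contained illustration of ours; any joint pmf works):

```python
import math
import random

def H(pmf):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(p * math.log2(p) for p in pmf if p > 0.0)

# random toy joint pmf P[x][y] over {0,..,3} x {0,..,3}
random.seed(1)
P = [[random.random() for _ in range(4)] for _ in range(4)]
s = sum(sum(row) for row in P)
P = [[p / s for p in row] for row in P]

H_joint = H(p for row in P for p in row)
H_y = H(sum(P[x][y] for x in range(4)) for y in range(4))
H_cond = H_joint - H_y                      # H(X|Y) = H(X,Y) - H(Y)

# pmf of the residual X - Y
diff = {}
for x in range(4):
    for y in range(4):
        diff[x - y] = diff.get(x - y, 0.0) + P[x][y]
H_diff = H(diff.values())                   # H(X - Y)

print(H_cond <= H_diff + 1e-12)             # True: conditioning beats residual
```

The gap $H(X-Y)-H(X|Y)$ quantifies the rate that residual coding leaves on the table relative to conditional coding.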
\subsection{Codec description}
\label{sec:codec_description}
Fig.~\ref{fig:flowchart1} depicts the structure of our proposed video compression system. It consists of three major components: 1) the motion coder, 2) the mode generator, and 3) the inter-frame coder. The exact functionality of each of these components is described below.
\textbf{The motion coder:} Given the current frame $\mathbf{X}_{t}$ and its reconstructed reference frame $\widehat{\mathbf{X}}_{t-1}$, we first feed them to a learned optical flow estimation network like PWC-Net~\cite{pwc}, to obtain a motion flow map $\mathbf{F}_{t}$. The obtained flow is then encoded by the encoder (AE) of the HyperPrior-based coder from~\cite{balle2018}, and the obtained motion bitstream is transmitted to the decoder. At the decoder side, the transmitted flow map is reconstructed by the decoder (AD) of the HyperPrior coder to obtain $\widehat{\mathbf{F}}_{t}$. Then, $\widehat{\mathbf{X}}_{t-1}$ is warped by $\widehat{\mathbf{F}}_{t}$ using bilinear sampling \cite{canf} to obtain a motion-compensated frame $\overline{\mathbf{X}}_{t}$.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{extrapolation.png}
\caption{The overall structure of the motion extrapolation network for producing a conditional flow, $\mathbf{F}_c$, for encoding $\mathbf{F}_t$.}
\label{fig:extrapolation}
\end{figure}
The above-described motion coder is only used for the first P frame in each group of pictures (GOP). For the subsequent P frames, we use the CANF-based motion coder shown in Fig.~\ref{fig:extrapolation}. Here, the extrapolation network is used to extrapolate a flow map $\mathbf{F}_c$ from the three previously-decoded frames $\widehat{\mathbf{X}}_{t-3}$, $\widehat{\mathbf{X}}_{t-2}$, $\widehat{\mathbf{X}}_{t-1}$ and two decoded-flow maps $\widehat{\mathbf{F}}_{t-2}$, $\widehat{\mathbf{F}}_{t-1}$. $\mathbf{F}_c$ is then used as a conditioner for coding $\mathbf{F}_t$. The architecture of the motion extrapolation network is the same as the one used in CANF-VC~\cite{canf}.
\textbf{The mode generator:} The goal of the mode generator is to produce additional coding modes, which can then be used by the CANF-based conditional inter-frame coder to improve RD performance. For this purpose, the previous reconstructed frame $\widehat{\mathbf{X}}_{t-1}$, the motion-compensated frame $\overline{\mathbf{X}}_{t}$, and the decoded flow map $\widehat{\mathbf{F}}_t$ are concatenated and fed to the mode generator (implemented as a convolutional network, details in Section~\ref{sec:implementation}) to produce two weight maps $\boldsymbol{\alpha}_t$ and $\boldsymbol{\beta}_t$. Since these maps are produced from previously (de)coded data, they can be regenerated at the decoder without any additional bits.
These two maps are then fed to a sigmoid layer to bound their values between 0 and 1. After that,
a frame predictor $\widetilde{\mathbf{X}}_{t}$ is generated as:
\begin{equation}
\widetilde{\mathbf{X}}_{t} = \boldsymbol{\beta}_t \odot \overline{\mathbf{X}}_{t} + (\mathbf{1}-\boldsymbol{\beta}_t) \odot \widehat{\mathbf{X}}_{t-1},
\end{equation}
where $\odot$ denotes Hadamard (element-wise) product and $\mathbf{1}$ is the all-ones matrix.
Moreover, $\boldsymbol{\alpha}_t$ is multiplied by both $\widetilde{\mathbf{X}}_{t}$ and $\mathbf{X}_{t}$, and the resultant two frames, $\boldsymbol{\alpha}_t \odot \widetilde{\mathbf{X}}_{t}$ and $\boldsymbol{\alpha}_t \odot \mathbf{X}_{t}$, are fed to the CANF-based conditional inter-frame coder for coding $\boldsymbol{\alpha}_t \odot \mathbf{X}_{t}$ conditioned on $\boldsymbol{\alpha}_t \odot \widetilde{\mathbf{X}}_{t}$. Note that $\boldsymbol{\alpha}_t$, $\boldsymbol{\beta}_t$, $\widehat{\mathbf{X}}_{t-1}$ and $\widetilde{\mathbf{X}}_{t}$ are available at the decoder.
\textbf{The inter-frame coder:} The inter-frame coder codes $\boldsymbol{\alpha}_t \odot \mathbf{X}_{t}$ conditioned on $\boldsymbol{\alpha}_t \odot \widetilde{\mathbf{X}}_{t}$ using the inter-frame coder of CANF-VC to obtain $\widecheck{\mathbf{X}}_t$ at the decoder. The final reconstruction of the current frame, i.e. $\widehat{\mathbf{X}}_{t}$, is then obtained by:
\begin{equation}
\widehat{\mathbf{X}}_{t} = \widecheck{\mathbf{X}}_t + (\mathbf{1}-\boldsymbol{\alpha}_t) \odot \widetilde{\mathbf{X}}_{t}.
\end{equation}
In the limiting case when $\boldsymbol{\beta}_t \to \mathbf{0}$, the predictor $\widetilde{\mathbf{X}}_{t}$ becomes equal to $\widehat{\mathbf{X}}_{t-1}$, but when $\boldsymbol{\beta}_t \to \mathbf{1}$, $\widetilde{\mathbf{X}}_{t}$ becomes equal to the motion-compensated frame $\overline{\mathbf{X}}_{t}$. For $\mathbf{0}<\boldsymbol{\beta}_t<\mathbf{1}$, the predictor $\widetilde{\mathbf{X}}_{t}$ is a pixel-wise mixture of $\widehat{\mathbf{X}}_{t-1}$ and $\overline{\mathbf{X}}_{t}$. Hence, $\boldsymbol{\beta}_t$ provides the system with more flexibility for choosing the predictor for each pixel within the current frame being coded. Also, for pixels where $\boldsymbol{\alpha}_t \to 0$, $\widehat{\mathbf{X}}_{t}$ becomes equal to $\widetilde{\mathbf{X}}_{t}$, so the inter-frame coder does not need to code anything. This resembles the SKIP mode in conventional coders, and depending on the value of $\boldsymbol{\beta}_t$, the system can directly copy from $\widehat{\mathbf{X}}_{t-1}$, $\overline{\mathbf{X}}_{t}$, or a mixture of these two, to obtain $\widehat{\mathbf{X}}_{t}$. When $\boldsymbol{\alpha}_t \to \mathbf{1}$, only the inter-frame coder is used to obtain $\widehat{\mathbf{X}}_{t}$.
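The two blending equations can be sketched per pixel as follows (flat lists stand in for frames; the function names are ours, not from the codec implementation):

```python
def blend_predictor(x_prev, x_mc, beta):
    """Predictor: beta -> 1 selects the motion-compensated frame,
    beta -> 0 copies the previous reconstruction; intermediate
    values mix the two per pixel."""
    return [b * m + (1.0 - b) * p for p, m, b in zip(x_prev, x_mc, beta)]

def reconstruct(x_dec, x_tilde, alpha):
    """Final reconstruction: decoded coder output plus the
    (1 - alpha)-weighted predictor. Where alpha -> 0 the coder
    output is ~0 and the predictor is copied (SKIP-like mode)."""
    return [d + (1.0 - a) * t for d, t, a in zip(x_dec, x_tilde, alpha)]

x_prev, x_mc = [1.0, 2.0], [3.0, 4.0]
pred = blend_predictor(x_prev, x_mc, beta=[1.0, 0.0])
print(pred)                                             # [3.0, 2.0]
print(reconstruct([0.0, 0.0], pred, alpha=[0.0, 0.0]))  # [3.0, 2.0]
```

With $\boldsymbol{\beta}=[1,0]$ the first pixel is predicted from the motion-compensated frame and the second from the previous reconstruction; with $\boldsymbol{\alpha}=\mathbf{0}$ the predictor passes through untouched, mimicking SKIP.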
In the limiting case when $\boldsymbol{\alpha}_t \to \mathbf{1}$ and $\boldsymbol{\beta}_t \to \mathbf{1}$, the proposed method would reduce to CANF-VC~\cite{canf}. Hence, the proposed system has more flexibility and a larger number of conditional coding modes than CANF-VC.
Note that a somewhat similar approach was proposed in~\cite{theo2}. However,~\cite{theo2} used only one weight map, which is similar to our~$\boldsymbol{\alpha}$ map, and this map was coded and transmitted to the decoder.
In our proposed system, two maps, $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$, are used to create a larger number of modes. Moreover, these two maps can be constructed using previously (de)coded information, so they can be regenerated at the decoder without any additional bits to signal
the coding modes.
\begin{figure*}
\centering
\includegraphics[scale=0.32]{results}
\caption{Comparing various methods on three datasets: HEVC Class B, UVG, and MCL-JCV.}
\label{fig:results}
\vspace{-5pt}
\end{figure*}
\vspace{-5pt}
\subsection{Implementation details}
\label{sec:implementation}
To implement the proposed system, conditional CANF-based coders from~\cite{canf} are used to implement the motion coder and the inter-frame coder. The proposed mode generator is implemented as a simple convolutional network with structure $[C_1,R_1,C_2]$, where $C_1$ and $C_2$ are convolutional layers with 32 kernels of size $3\times3$ (stride=1, padding=1), and $R_1$ is a \texttt{LeakyReLU} layer whose negative slope is $0.1$. Similar to CANF-VC, we use ANFIC \cite{anfic} for encoding the I-frames.
\section{Experiments}
\label{sec:experiments}
\vspace{-5pt}
\subsection{Training}
We trained the proposed LCCM-VC on the VIMEO-90K septuplet dataset~\cite{vimeo}, which consists of 91,701 7-frame sequences with fixed resolution $448\times 256$, extracted from 39K selected video clips. We randomly cropped
these clips into $256\times 256$ patches, and used them for training LCCM-VC with a GOP of $N=5$ frames. We employed the Adam~\cite{adam} optimizer with a batch size of 4. We adopted a two-stage training scheme. In the first stage, we froze the CANF-based conditional coders with their pre-trained weights, and optimized the remainder of the model for 5 epochs with an initial learning rate of $10^{-4}$. In the second stage, we trained the entire system end-to-end for 5 more epochs with an initial learning rate of $10^{-5}$. Four separate models were trained for four different bitrates using the following loss function:
\begin{equation}
\mathcal{L}= \sum_{i=1}^N \frac{\eta_i}{\sum_{j} \eta_j} \cdot \mathcal{L}_i,
\end{equation}
where $\eta_i = i$, and $\mathcal{L}_i$ is the RD loss of the $i$-th training frame defined in~\cite{canf} with $\lambda_1 \in \{256, 512, 1024, 2048\}$ and $\lambda_2 = 0.01 \cdot \lambda_1$. Note that~\cite{canf} used $\mathcal{L}=\sum_i \mathcal{L}_i$ as the training loss, without weighting. In our experiments, we first trained the model with $\lambda_1=2048$ (highest rate), and all lower-rate models were then initialized from this model.
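A minimal sketch of this weighted loss: with $\eta_i = i$, later frames in the training GOP carry larger weight, and the normalized weights $\eta_i / \sum_j \eta_j$ sum to one. Here the per-frame RD losses $\mathcal{L}_i$ are passed in as plain numbers.

```python
def weighted_rd_loss(frame_losses):
    # eta_i = i (1-indexed), so later frames in the GOP are weighted
    # more heavily; weights eta_i / sum_j eta_j are normalized to one.
    etas = [i + 1 for i in range(len(frame_losses))]
    total = sum(etas)
    return sum(eta / total * loss for eta, loss in zip(etas, frame_losses))
```

Up-weighting later frames penalizes error propagation through the GOP, which plain unweighted summation (as in~\cite{canf}) does not.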
\vspace{-5pt}
\subsection{Evaluation methodology}
We evaluate the performance of LCCM-VC on three datasets commonly used in learning-based video coding: UVG \cite{uvg} (7 sequences), MCL-JCV \cite{mcl} (30 sequences), and HEVC Class B \cite{classb} (5 sequences). Following the common test protocol used in the recent literature~\cite{canf}, we encoded only the first 96 frames of the test videos, with a GOP size of 32.
We used the following video codecs as benchmarks: x265 (`very slow' mode)~\cite{x265}, HEVC Test Model (HM 16.22) with the LDP profile~\cite{hm}, M-LVC~\cite{mlvc}, DCVC~\cite{dcvc}, and CANF-VC~\cite{canf}. Note that CANF-VC can be considered the current state-of-the-art learned video codec. As the quality of the I-frames plays a significant role in the RD performance of video codecs, in order to have a fair comparison, we used ANFIC~\cite{anfic} as the I-frame coder for all learned codecs in the experiment. Note that ANFIC achieves state-of-the-art performance for static image coding~\cite{anfic}. For HM and x265, we used the data points reported in~\cite{canf}, while for the other codecs we used their original public code.
Similar to the existing practice in the learned video coding literature~\cite{canf, mlvc, dcvc}, to evaluate the RD performance of various methods, the bitrates were measured in bits per pixel (BPP) and the reconstruction quality was measured by RGB-PSNR. The RD performance is then summarized by the BD-Rate~\cite{Bjontegaard}. However, unlike related works that use x265 as the anchor, we used HM 16.22 as the anchor for computing BD-Rates, because HM 16.22 is a much stronger codec than x265, as will be seen in the results.
\vspace{-7pt}
\subsection{Results}
\label{sec:results}
In Fig. \ref{fig:results}, we plot RGB-PSNR vs. BPP curves of various codecs on the three datasets. It is notable that both LCCM-VC and CANF-VC achieve better performance than HM 16.22 at higher bitrates/qualities, whereas HM 16.22 has a slight advantage at lower bitrates. All three codecs offer comparable performance at medium bitrates.
Table~\ref{tab:bd} shows BD-rate (\%) relative to the HM 16.22 anchor, with negative values showing an average bit reduction (i.e., coding gain) relative to the anchor. The best result in each row of the table is shown in bold.
First, note that x265, even in the `very slow' mode used here, is 75--82\% less efficient than HM 16.22. This is why we did not use it as the anchor, but opted for HM 16.22 instead.
The proposed LCCM-VC shows the best performance among the learning-based video codecs on all three datasets. In fact, on the HEVC Class B and MCL-JCV datasets, it is even better than HM 16.22. The BD-Rate gains of LCCM-VC over CANF-VC, the second-best learned codec in this comparison, are 4.01\%, 0.67\%, and 3.60\% on HEVC Class B, UVG, and MCL-JCV, respectively. Averaging these according to the number of videos in each dataset gives the average BD-Rate gain of 3.16\%. Although LCCM-VC uses the same conditional coding engines as CANF-VC, it benefits from more flexible coding modes, as explained in Section~\ref{sec:proposed}, and thereby achieves better RD performance.
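The 3.16\% average follows from weighting each dataset's BD-Rate gain by its number of sequences (5, 7, and 30 for HEVC Class B, UVG, and MCL-JCV, respectively); a one-line check:

```python
def size_weighted_average(gains, n_videos):
    # Average per-dataset BD-Rate gains, weighted by dataset size.
    return sum(g * n for g, n in zip(gains, n_videos)) / sum(n_videos)
```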
\begin{table}[t]
\centering
\caption{BD-Rate (\%) relative to HM 16.22.}
\vspace{7pt}
\footnotesize
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Dataset & x265 & DCVC & M-LVC & CANF-VC & LCCM-VC \\
\hline
\hline
HEVC-B & +80.80 & +27.11 & +28.73 & +1.23 & \textbf{--2.78} \\
\hline
UVG & +82.21 & +52.56 & +73.88 & +4.04 & \textbf{+3.37}\\
\hline
MCL-JCV & +75.62 & +23.83 & +61.16 & +2.51 & \textbf{--1.09} \\
\hline
\end{tabular}
\label{tab:bd}
\end{table}
\vspace{-5pt}
\section{Conclusions}
\label{sec:conclusions}
\vspace{-5pt}
In this paper, we proposed learned conditional coding modes for video coding (LCCM-VC), an end-to-end learned video codec that, to our knowledge, achieves state-of-the-art results among learning-based codecs. We also gave a theoretical justification for why conditional coding relative to multiple coding modes should be better than residual coding. LCCM-VC outperforms other learning-based video codecs on three commonly used test datasets, and even outperforms HM 16.22 on two of these datasets.
\vspace{-5pt}
\section{Acknowledgment}
We would like to thank the authors of CANF-VC~\cite{canf} for fruitful discussions and for sharing their code, which has formed the basis of the proposed method.
\small
\bibliographystyle{IEEEbib}
\section{Introduction}\label{sec:intro}
QCD is the accepted underlying theory of the strong interactions, and the properties of the spectrum and interactions of hadrons should be calculable from it using a suitable regularization scheme for the quark and gluon fields. A particularly convenient approach is to consider the theory on a finite lattice of space-time points so as to admit a numerical method of solution.
While significant progress has been made recently in determining the single particle spectrum of hadrons, describing the resonances seen in scattering experiments in terms of eigenstates of QCD has remained a challenge to lattice calculations. Direct access to the matrix elements related to decays is missing in the Euclidean formulations of lattice QCD. In principle, the relevant hadronic matrix elements can be inferred indirectly through a detailed study of the spectrum in a finite-volume lattice box \cite{DeWitt:1956be,Luscher:1991cf}. Within this approach, one can map the discrete spectrum of eigenstates of the finite volume theory to the infinite volume scattering parameters, and if present, observe resonant behavior.
Crucial to this approach is the high-precision determination of multiple excited eigenstate energies with a given quantum number. Determination of the discrete spectrum of finite-volume eigenstates follows from analysis of the time-dependence of two-point correlation functions featuring operators of the desired quantum numbers constructed from quark and gluon fields. For creation and annihilation at times $0$ and $t$, respectively, we have
\begin{equation}
C_{ij}(t) = \big\langle 0 \big| {\cal O}_i(t) {\cal O}_j^\dag(0) \big| 0
\big\rangle . \nonumber
\end{equation}
Inserting a complete set of eigenstates of the Hamiltonian, this correlator has a spectral decomposition
\begin{equation}
C_{ij}(t) = \sum_\mathfrak{n} \big\langle 0 \big| {\cal O}_i \big| \mathfrak{n} \big\rangle
\big\langle \mathfrak{n} \big| {\cal O}_j^\dag \big| 0 \big\rangle\, e^{- E_\mathfrak{n} t} , \label{spectral}
\end{equation}
where the sum is over all states that have the same quantum numbers as the interpolating operators $ {\cal O}_i, \, \mathcal{O}_j$. Note that in a finite volume, this yields a discrete set of energies, $E_\mathfrak{n}$. It is these finite-volume energies that are related to infinite volume scattering amplitudes through the L\"uscher method \cite{Luscher:1991cf}.
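A simple numerical illustration of Eq.~(\ref{spectral}): for a diagonal correlator, the effective mass $\ln[C(t)/C(t+1)]$ plateaus at the lowest energy $E_0$ once the excited-state contributions have decayed. The energies and overlaps below are arbitrary illustrative values.

```python
import math

def correlator(t, energies, overlaps):
    # C(t) = sum_n |<0|O|n>|^2 exp(-E_n t), the diagonal case of Eq. (spectral)
    return sum(z * math.exp(-E * t) for E, z in zip(energies, overlaps))

def effective_mass(energies, overlaps, t):
    # m_eff(t) = log[C(t)/C(t+1)] -> E_0 as t -> infinity
    return math.log(correlator(t, energies, overlaps)
                    / correlator(t + 1, energies, overlaps))
```

The variational method described below improves on this single-correlator picture by extracting several $E_\mathfrak{n}$ from a matrix of such correlators.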
A relatively straightforward sector in which to study hadron scattering in finite-volume is $\pi\pi$ in isospin-2. At low energies, this channel is experimentally observed to be non-resonant in low partial-waves \cite{Hoogland:1977kt,Cohen:1973yx,Zieminski:1974ex,Durusoy:1973aj} and this lack of resonances ensures a slow variation of phase-shifts with energy. This makes the problem of determining the phase-shift as a function of energy somewhat easier. A difficulty of this choice of channel is that the interaction between pions in isospin 2 is weak so that the discrete energies in finite-volume are shifted relatively little from the values relevant for non-interacting pions. This will require us to make precision measurements of the energy spectrum in order to resolve the differences. Within the field-theory, the correlators for this channel do not contain any annihilation contributions; the only Wick contractions featuring are those in which the four quark fields in the creation operator (at $t=0$) propagate in time to the annihilation operator (at $t$). The absence of quark propagation from $t$ to $t$ reduces the computational overhead for the calculation.
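Because the $I=2$ interaction is weak, a useful baseline for the finite-volume spectrum is the non-interacting one: two pions with back-to-back lattice momenta $\vec{k} = \tfrac{2\pi}{L}\vec{n}$ have total energy $2\sqrt{m_\pi^2 + |\vec{k}|^2}$. The following sketch evaluates these free-particle levels; the mass and box size used in the usage check are arbitrary illustrative values in consistent units, not the ensemble parameters.

```python
import math

def free_pipi_energy(m, L, n):
    # Two free pions with back-to-back momenta k = (2*pi/L) * n,
    # where n = (n1, n2, n3) is a triplet of integers.
    ksq = (2.0 * math.pi / L) ** 2 * sum(c * c for c in n)
    return 2.0 * math.sqrt(m * m + ksq)
```

The measured interacting levels differ from these values only by small shifts, which is why precise energy determinations are required.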
In a previous publication \cite{Dudek:2010ew} we presented the first lattice QCD study of the energy-dependence of $S$ and $D$-wave $\pi\pi$ scattering in isospin-2. We limited ourselves to the $\pi\pi$ system overall at rest and found only a handful of points below the $4\pi$ inelastic threshold on the lattice volumes considered. In this paper we will also consider the $\pi\pi$ system ``in-flight", that is with an overall momentum (satisfying the periodic boundary conditions of the finite cubic lattice). This allows us to determine the phase-shifts at a larger number of discrete energies below the $4\pi$ inelastic threshold and to map out the energy dependence of the scattering in more detail. The price to be paid is that the relevant symmetry group in the lattice calculation is significantly reduced. At rest the lattice has a cubic symmetry whose irreducible representations (``irreps") contain multiple angular momenta, e.g. the ``scalar" representation, $A_1^+$ contains as well as $\ell=0$, also $\ell=4$ and higher. In-flight, with two pions having total momentum, $\vec{P}$, the symmetry is restricted to rotations and reflections which leave the cubic lattice and the axis defined by $\vec{P}$ invariant. The irreps of this symmetry group are even less sparse in $\pi\pi$ scattering angular momentum; the ``scalar" representations typically contain $\ell=0,2,4\ldots$. In this work we will consider the effect these higher partial waves have on the determination of scattering phase-shifts for the lowest $\ell$ values.
In \cite{Dudek:2010ew}, we used only the simplest $\bar{\psi}\gamma_5\psi$ interpolators in construction of $\pi\pi$ correlators. Single-pion correlators constructed with these operators are saturated by the ground state only at rather large times, and similarly the $\pi\pi$ correlators receive significant contributions from excited $\pi^\star$ states. The need to consider correlators at large times increases the degree to which we feel the systematic effect of the finite temporal extent of the lattice ($T$). Limited account was taken of these effects in \cite{Dudek:2010ew}. In this paper we take steps to address finite-$T$ effects, firstly by using ``optimised" pion operators which are saturated by the ground state pion at earlier times, and secondly by explicitly attempting to remove the leading effects of finite-$T$ from the measured correlators. While these effects are small in absolute terms, determination of the rather weak $I=2$ interaction relies upon precise measurement of small energy shifts, and as such it is important to account for even small systematic effects.
Our approach to determining the finite-volume spectrum is to use a large basis of operators in each symmetry channel with which we form a matrix of correlation functions having all relevant operators at the source \emph{and} sink. This matrix can be analysed variationally\cite{Michael:1985ne,Luscher:1990ck,Blossier:2009kd}, extracting a spectrum of energy eigenstates which are orthogonal in the space of operators used. This orthogonality is particularly useful in cases where levels are close to degenerate and to extract relatively high-lying states whose contribution to any single correlation function may be small relative to the ground state. The excited single-hadron spectrum of isovector and isoscalar mesons \cite{Dudek:2009qf,Dudek:2010wm,Dudek:2011tt} and baryons \cite{Edwards:2011jj,Dudek:2012ag} has been extracted with some success using this procedure. In the present case we require a basis of operators capable of interpolating a pair of pions from the vacuum, constructed to transform irreducibly in the relevant symmetry group. The fact that $I=2$ is expected to have only relatively weak inter-pion interaction strength suggests a natural basis might be one resembling pairs of non-interacting pions, i.e. pions of definite momentum.
In general our $\pi\pi$ creation operators have the form
\begin{equation}
\big( \pi\pi \big)_{\vec{P}, \Lambda,\mu}^{[\vec{k}_1, \vec{k}_2]\dag} = \sum
_{\substack{\vec{k}_1, \vec{k}_2 \\ \vec{k}_1 + \vec{k}_2 = \vec{P} }} \mathcal{C}(\vec{P},\Lambda,\mu; \vec{k}_1; \vec{k}_2 )\; \pi^\dag(\vec{k}_1)\, \pi^\dag(\vec{k}_2) \nonumber
\end{equation}
where $\mathcal{C}$ are the Clebsch-Gordan coefficients for combining the two pion operators of definite momentum $\vec{k}_1$, $\vec{k}_2$ so that the operator overall transforms in the irrep $\Lambda$ of the relevant symmetry group for total momentum $\vec{P} = \vec{k}_1 + \vec{k}_2$. This involves summing over multiple values of momenta $\vec{k}_1$, $\vec{k}_2$ with the same magnitudes, $|\vec{k}_1|$, $|\vec{k}_2|$ and related by allowed lattice rotations. The basis is built up out of different magnitudes of pion momenta that can sum to give the same $\vec{P}$. Much greater detail will be presented later in this paper.
Using this basis we compute correlators within various irreps $\Lambda$ for various $\vec{P}$,
\begin{equation}
C^{\vec{P}, \Lambda, \mu}_{[\vec{k}'_1,\vec{k}'_2],[\vec{k}_1,\vec{k}_2]}\big( t \big) = \left\langle \big(\pi\pi\big)_{\vec{P},\Lambda,\mu}^{[\vec{k}'_1, \vec{k}'_2]}(t) \cdot \big(\pi\pi\big)_{\vec{P},\Lambda,\mu}^{[\vec{k}_1, \vec{k}_2]\dag}(0) \right\rangle \nonumber
\end{equation}
and for a fixed $\vec{P},\Lambda,\mu$ we perform variational analysis in a basis of operators labeled by $[\vec{k}_1,\vec{k}_2]$ leading to a finite-volume spectrum, $E_\mathfrak{n}(\vec{P},\Lambda; L)$. This spectrum, determined in the rest frame of the lattice, corresponds to a discrete set of scattering momenta, $p_\mathsf{cm}$, in the center-of-momentum frame.
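The map from a finite-volume level $E_\mathfrak{n}$ measured with total momentum $\vec{P}$ to the corresponding scattering momentum uses the relativistic relations $E_\mathsf{cm} = \sqrt{E^2 - |\vec{P}|^2}$ and $E_\mathsf{cm} = 2\sqrt{m_\pi^2 + p_\mathsf{cm}^2}$. A sketch, with all quantities in consistent (e.g. temporal-lattice) units:

```python
import math

def cm_momentum(E_lab, P, m_pi):
    # Boost the finite-volume level to the centre-of-momentum frame:
    #   E_cm^2 = E_lab^2 - |P|^2,  then  E_cm = 2 sqrt(m_pi^2 + p_cm^2)
    E_cm_sq = E_lab ** 2 - sum(p * p for p in P)
    return math.sqrt(E_cm_sq / 4.0 - m_pi ** 2)
```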
The finite-volume spectrum so obtained is related through the L\"uscher formalism \cite{Luscher:1990ck,Luscher:1991cf} (as extended in \cite{Rummukainen:1995vs,Kim:2005gf,Christ:2005gi} to the case of moving frames) to the phase-shifts, $\delta_\ell(p_\mathsf{cm})$, for elastic $\pi\pi$ scattering in partial waves of angular momentum, $\ell$. As discussed earlier, a given irrep $\Lambda$ of momentum $\vec{P}$, contains multiple angular momenta, $\ell$, and the formalism relates the finite-volume spectrum to the scattering amplitudes for all relevant $\ell$ though the following formula:
\begin{equation}
\det\left[ e^{2i \boldsymbol{\delta}(p_\mathsf{cm})}- \mathbf{U}^{(\vec{P},\Lambda)}\big( p_\mathsf{cm}\tfrac{L}{2\pi} \big) \right] = 0 \label{luescher_intro}
\end{equation}
Here $\mathbf{U}^{(\vec{P},\Lambda)}\big( p_\mathsf{cm}\tfrac{L}{2\pi} \big)$ is a matrix of known functions and $e^{2i \boldsymbol{\delta}(p_\mathsf{cm})}$ is a diagonal matrix featuring the scattering phase-shifts $\{ \delta_\ell \}$. In both cases the rows and columns of the matrices are labelled by the angular momenta, $\ell$, relevant for the irrep $(\vec{P},\Lambda)$. These matrices are formally infinite, but we may take advantage of the hierarchy $\delta_0 \gg \delta_2 \gg \delta_4 \ldots$ relevant at low energies\footnote{near threshold, angular momentum conservation requires $\delta_\ell \sim p_\mathsf{cm}^{2\ell + 1}$}, which tends to reduce the effect of higher $\ell$ in Equation \ref{luescher_intro}.
We will explore two methods to extract the phase shifts. The first method, similar to the one used in \cite{Dudek:2010ew}, exploits the above hierarchy to determine the phase-shift in the lowest contributing partial wave and estimates a systematic uncertainty from plausible variation of the higher partial waves.\footnote{we note that Ref.~\cite{Leskovec:2012gb} has recently discussed a similar approach} The second method parameterizes the momentum-dependence of the phase-shifts in $\ell=0,2\ldots$ using effective range expansions, then by performing a global fit which attempts to describe many finite-volume momentum points in many irreps, finds the values of the effective range expansion parameters.
Computations were performed on anisotropic lattices with three dynamical flavors of Clover fermions~\cite{Edwards:2008ja,Lin:2008pr} with spatial lattice spacing $a_s\sim 0.12\,$fm, and a temporal lattice spacing approximately $3.5$ times smaller, corresponding to a temporal scale $a_t^{-1}\sim 5.6$ GeV. This fine temporal lattice spacing has proven useful in determining the spectrum of mesons and baryons, as well as the previous $\pi\pi$ $I=2$ results~\cite{Dudek:2010ew}. In this work, results are presented for the light quark $I=2$ spectrum at quark mass parameter $a_t m_l=-0.0840$ and $a_t m_s=-0.0743$ corresponding to a pion mass of $396$ MeV, and at lattice sizes of $16^3\times 128$, $20^3\times 128$ and $24^3\times 128$ with corresponding spatial extents $L \sim 2\,$fm, $\sim 2.5\,$fm and $\sim 3\,$fm. Some details of the lattices and propagators used for correlation constructions are provided in Table~\ref{tab:lattices}.
Recently, the NPLQCD collaboration \cite{Beane:2011sc} has determined the $\ell=0$ scattering phase-shift on the same ensembles as used in this study, plus an additional larger lattice volume $\sim 4 \,\mathrm{fm}$. Their calculation is limited in scope by the fact that their approach does not project pion operators of definite relative momentum at the source. We will compare the results of the different approaches later in this paper. Other studies of $\pi\pi$ $I=2$ scattering in lattice QCD (\cite{Beane:2007xs, Sasaki:2008sv, Feng:2009ij}) have largely limited themselves to the threshold behavior of the scattering amplitude in $S$-wave, as expressed by the scattering length.
Readers who are not concerned with the details of the calculation can skip to Section \ref{sec:results} where the results for elastic scattering are presented. The remainder of the paper is structured as follows:
Section \ref{sec:multi} outlines the construction of a basis of irreducible $\pi\pi$ operators at rest and in-flight from products of pion operators of definite momentum. Section \ref{sec:dist} describes the construction of correlators using the distillation framework. Section \ref{sec:projection} presents ``optimised" single pion operators constructed as linear combinations of composite QCD operators with pion quantum numbers. Section \ref{sec:dispersion} discusses the determination of the pion mass and anisotropy from measurements of the pion dispersion relation. Section \ref{sec:finiteT} considers the effects of the finite temporal extent of the lattice on $\pi\pi$ correlators and presents mechanisms for reducing the role of these effects in the determination of the discrete energy spectrum. Section \ref{sec:spectrum} presents the finite-volume spectrum obtained on three volumes. Section \ref{sec:luescher} discusses the extraction of elastic scattering phase-shifts from finite-volume spectra using the L\"uscher formalism including a study of the possible effect of sub-leading partial waves within a toy model. Section \ref{sec:results} presents our results for $\delta_{0,2}$ in the region of elastic scattering for 396 MeV pions. Section \ref{sec:summary} summarises our approach and results and suggests future applications of the methodology.
\begin{table}[t]
\begin{tabular}{c|ccc}
$(L/a_s)^3 \times (T/a_t)$ &$N_{\mathrm{cfgs}}$ & $N_{\mathrm{t_{srcs}}}$ & $N_{\mathrm{vecs}}$ \\
\hline
$16^3\times 128$ & 479 & 12 & 64 \\
$20^3\times 128$ & 601 & $\substack{5\; (\vec{P}=\vec{0}) \\ 3\; (\vec{P}\neq\vec{0}) }$ & 128 \\
$24^3\times 128$ & 553 & 3 & 162 \\
\end{tabular}
\caption{The lattice ensembles and propagators used in this paper. The light and strange quark mass are $a_t m_l=-0.0840$ and $a_t m_s=-0.0743$ described in Ref.~\cite{Lin:2008pr}, corresponding to a pion mass of $396$ MeV. The lattice size and number of configurations are listed, as well as the number of time-sources and the number of distillation vectors $N_{\mathrm{vecs}}$ (to be described in Sec \ref{sec:dist}) featuring in the correlator construction.}
\label{tab:lattices}
\end{table}
\section{Operator construction}\label{sec:multi}
In order to calculate scattering amplitudes we must extract multi-hadron energy levels with high precision and so need interpolating operators that efficiently interpolate these multi-hadron states. To achieve this we consider operators constructed from the product of two hadron operators projected onto definite total momentum, $\vec{P}$, and transforming as a definite irreducible representation of the appropriate symmetry group, \emph{lattice irrep}\footnote{we use `lattice irrep' to refer to the octahedral group irrep for a particle at rest and the irrep of the appropriate little group, discussed later, for a particle at non-zero momentum} $\Lambda$, with irrep row, $\mu$,
\begin{eqnarray}
\left[\mathbb{O}_{\Lambda\mu}(\vec{P})\right]^{\dagger} &=&
\sum_{\substack{\mu_1,\mu_2 \\ \vec{k}_1 , \vec{k}_2 \\ \vec{k}_1 + \vec{k}_2 = \vec{P}}}
~\mathcal{C}(\vec{P}\Lambda\mu; \vec{k}_1\Lambda_1\mu_1; \vec{k}_2\Lambda_2\mu_2 ) \nonumber \\
&&\quad \times \left[\mathbb{O}_{\Lambda_1\mu_1}(\vec{k}_1)\right]^{\dagger}~
\left[\mathbb{O}_{\Lambda_2\mu_2}(\vec{k}_2)\right]^{\dagger} ~. \label{lat_two_part}
\end{eqnarray}
Here $\mathbb{O}_{\Lambda_1,\mu_1}(\vec{k}_1)$ and $\mathbb{O}_{\Lambda_2,\mu_2}(\vec{k}_2)$ are hadron operators (for example, fermion bilinear operators), each projected onto definite momentum, irrep and irrep row. The Clebsch-Gordan coefficients, $\mathcal{C}$, and the momenta appearing in the sum over $\vec{k}_1$ and $\vec{k}_2$ will be discussed later.
A conventional infinite volume continuum analogue of this construction (for total momentum zero with $\vec{p} = \vec{k}_1 = -\vec{k}_2$) would be
\begin{eqnarray}
\left[\mathcal{O}^{[S,\ell]}_{J,M}\right]^\dagger &\sim& \sum_{\lambda_1 \lambda_2} \vspace{-4mm} \int d\hat{p} ~\; C(J \ell S M; \vec{p}\,S_1 \lambda_1; -\vec{p}\,S_2 \lambda_2) \nonumber \\
&& \quad \times \left[\mathcal{O}^{S_1\lambda_1}(\vec{p})\right]^\dagger \left[\mathcal{O}^{S_2\lambda_2}(-\vec{p})\right]^\dagger ~ ,
\label{equ:cont_two_part}
\end{eqnarray}
with
\begin{equation*}
C = \big\langle S_1\lambda_1; S_2 -\!\!\lambda_2 \big| S\lambda\big\rangle \big\langle \ell0;S\lambda\big|J\lambda\big\rangle\, D_{M \lambda}^{(J)*}(\hat{p}) ~,
\end{equation*}
where $D(\hat{p})$ is a Wigner-$D$ matrix and $S_{1,2}$ and $\lambda_{1,2}$ are respectively the spins and helicities of hadron 1,2. The spins are coupled to $S = S_1 \otimes S_2$, $\ell$ is the partial wave, $J = \ell \otimes S$ is the total angular momentum and $M$ is its $z$ component. However, in all but the simplest cases, multi-hadron operators constructed by subducing Eq.~(\ref{equ:cont_two_part}) into irreducible representations of the lattice symmetry can mix single-hadron operators transforming in different lattice irreps. Therefore, we prefer Eq.~(\ref{lat_two_part}) where such mixings do not occur. The single-hadron operators transforming in definite lattice irreps can be optimised variationally, as shown for the pion in Section \ref{sec:projection}.
Here we concentrate on the operators to be used to study two-pion states; the generalisation to other multi-hadron states is given in Appendix \ref{app:operators}. The flavor structure of the operators, for example the projection of $\pi\pi$ onto definite overall isospin, $I$, determines which combinations of Wick contractions appear in the calculation of the correlators (Section \ref{sec:dist}). Because this flavor structure generally factorises from the spin and spatial structure we will not discuss it in detail here. However, because we are considering two identical pions, Bose symmetry requires the overall wavefunction to be symmetric under the interchange of the two pions. Therefore, in the $I=2$ case we are considering here or $I=0$, the symmetric flavor part requires a symmetric spatial part (even partial waves with positive parity). In contrast, $I=1$ requires an antisymmetric spatial piece (odd partial waves with negative parity). In addition, these operators have definite charge-conjugation parity, $C = +1$, for neutral combinations, generalising to $G$-parity for charged combinations; for brevity, in the following we omit the $C$-parity labels.
\subsection{Single-hadron operators}
\label{sec:singleops}
Respecting the reduced symmetry of a finite cubic lattice, the $J^{P} = 0^-$ pion at rest \emph{subduces} onto the one-dimensional $\Lambda^{P} = A_1^-$ irrep of the double-cover octahedral group with parity, $\text{O}^{\text{D}}_h$. In Refs.~\cite{Dudek:2009qf,Dudek:2010wm} we discussed how operators with a definite continuum $J^P$ and $J_z$-component $M$, $\mathcal{O}^{J^P,M}(\vec{k}=\vec{0})$, can be constructed out of fermion bilinears featuring gauge-covariant derivatives and Dirac gamma matrices; the extension to baryons was described in Ref.~\cite{Edwards:2011jj}. The appropriate lattice operators were formed by \emph{subducing} these continuum operators into octahedral group irreps. Table \ref{table:latticeirreps} summarises how different integer continuum $J$ subduce into octahedral group irreps -- here we focus on the irreps relevant for mesons but the discussion applies equally to the irreps appropriate for half-integer spin. In the case of a $J^P=0^-$ operator subducing to $\Lambda^P=A_1^-$ this subduction is trivial, $$\mathcal{O}^{[0^-]}_{A_1^-}(\vec{0}) = \mathcal{O}^{0^-}(\vec{0}) ~.$$
At non-zero momentum, $\vec{k}$, the symmetry is reduced further: the relevant symmetry group is the \emph{little group}, the subgroup of allowed transformations which leave $\vec{k}$ invariant~\cite{Moore:2005dw}. In an infinite volume continuum the little group is the same for each $\vec{k}$; with only the constraints arising from rotational symmetry, states are now labelled by the magnitude of helicity, $|\lambda|$, rather than $J$. On a finite cubic lattice with periodic boundary conditions the allowed momenta are quantised, $\vec{k} = \tfrac{2\pi}{L}(n,m,p)$ where $n,m,p$ are integers, and in general there are different little groups for different types of momentum. We denote the little group for $\vec{k}$ by $\mathrm{LG}(\vec{k})$ and for convenience define $\mathrm{LG}(\vec{0}) = \text{O}^{\text{D}}_h$. The pion subduces onto the one-dimensional $\Lambda = A_2$ irrep of the appropriate little group (at least for all $|\vec{k}|^2 < 14 \left(\tfrac{2\pi}{L}\right)^2$). Table \ref{table:latticeirreps} shows the pattern of subductions of the helicities into the little group irreps. In Ref.~\cite{Thomas:2011rh} we presented a method to construct subduced helicity operators, $\mathbb{O}^{[J^P,|\lambda|]}_{\Lambda,\mu}(\vec{k})$, and showed that these are useful for studying mesons with non-zero momentum on the lattice. For a $J^P=0^-$ operator subduced into the $A_2$ irrep the construction is again trivial,
$$\mathbb{O}^{[0^-,0]}_{A_2}(\vec{k}) = \mathcal{O}^{0^-}(\vec{k}) ~.$$
\begin{table}
\begin{ruledtabular}
\begin{tabular}{c c | c l}
$\vec{P}$ & $\mathrm{LG}(\vec{P})$ & $\Lambda^{P}$ & \multicolumn{1}{c}{$J^{P}$} \\
\hline \hline
\multirow{5}{*}{$[0,0,0]$} & \multirow{5}{*}{$\text{O}^{\text{D}}_h$}
& $A_1^{\pm}$ & $0^{\pm},~ 4^{\pm},~ \ldots$ \\
& & $T_1^{\pm}$ & $1^{\pm},~ 3^{\pm},~ 4^{\pm},~ \ldots$ \\
& & $T_2^{\pm}$ & $2^{\pm},~ 3^{\pm},~ 4^{\pm},~ \ldots$ \\
& & $E^{\pm}$ & $2^{\pm},~ 4^{\pm},~ \ldots$ \\
& & $A_2^{\pm}$ & $3^{\pm},~ \ldots$ \\
\end{tabular}
\end{ruledtabular}
\vspace{.5cm}
\begin{ruledtabular}
\begin{tabular}{c c | c l}
$\vec{P}$ & $\mathrm{LG}(\vec{P})$ & $\Lambda$ & \multicolumn{1}{c}{$|\lambda|^{(\tilde{\eta})}$} \\
\hline \hline
\multirow{5}{*}{$[0,0,n]$} & \multirow{5}{*}{$\text{Dic}_4$}
& $A_1$ & $0^+,~ 4,~ \ldots$ \\
& & $A_2$ & $0^-,~ 4,~ \ldots$ \\
& & $E_2$ & $1,~ 3,~ \ldots$ \\
& & $B_1$ & $2,~ \ldots$ \\
& & $B_2$ & $2,~ \ldots$ \\
\hline
\multirow{4}{*}{$[0,n,n]$} & \multirow{4}{*}{$\text{Dic}_2$}
& $A_1$ & $0^+,~ 2,~ 4,~ \ldots$ \\
& & $A_2$ & $0^-,~ 2,~ 4,~ \ldots$ \\
& & $B_1$ & $1,~ 3,~ \ldots$ \\
& & $B_2$ & $1,~ 3,~ \ldots$ \\
\hline
\multirow{3}{*}{$[n,n,n]$} & \multirow{3}{*}{$\text{Dic}_3$}
& $A_1$ & $0^+,~ 3,~ \ldots$ \\
& & $A_2$ & $0^-,~ 3,~ \ldots$ \\
& & $E_2$ & $1,~ 2,~ 4,~ \ldots$ \\
\hline
$[n,m,0]$ & \multirow{2}{*}{$\text{C}_4$} & $A_1$ & $0^+,~ 1,~ 2,~ 3,~ 4,~ \ldots$ \\
$[n,n,m]$ & & $A_2$ & $0^-,~ 1,~ 2,~ 3,~ 4,~ \ldots$ \\
\end{tabular}
\end{ruledtabular}
\caption{The pattern of subductions of the continuum spin, $J \le 4$, (for $\vec{P} = \vec{0}$) and helicity, $|\lambda| \le 4$, (for $\vec{P} \ne \vec{0}$) into lattice irreps, $\Lambda$~\cite{Moore:2005dw}. Here $\tilde{\eta} \equiv P(-1)^J$, $\vec{P}$ is given in units of $\tfrac{2\pi}{L}$ and $n,m$ are non-zero integers with $n \ne m$. We show the double-cover groups but only give the irreps relevant for integer spin.}
\label{table:latticeirreps}
\end{table}
When we use the variational method to find the optimal linear combination of operators to interpolate a pion, we will include in the basis operators of other $J$ subduced into $A_1^-$ (for $\vec{k} = \vec{0}$) and other helicity $\lambda$ subduced into $A_2$ (for $\vec{k} \ne \vec{0}$). The pattern of subductions is given in Table \ref{table:latticeirreps}; the subduction coefficients for zero momentum are given in Ref.~\cite{Dudek:2010wm} and those for non-zero momentum are given in Appendix \ref{app:operators}. Henceforth, we will use $\pi(\vec{k})$ as a shorthand to represent $\mathcal{O}_{A_1^-}(\vec{k} = \vec{0})$ or $\mathbb{O}_{A_2}(\vec{k} \ne \vec{0})$ as appropriate.
\subsection{Multi-hadron operators}
In general, a $\pi\pi$ creation operator can be constructed from the product of two single-pion creation operators,
\begin{equation}
\label{equ:twopionop}
\left(\pi\pi\right)^{\left[\vec{k}_1,\vec{k}_2\right]\dagger}_{\vec{P},\Lambda,\mu} = \sum_{\substack{\vec{k}_1 \in \{\vec{k}_1\}^{\star} \\ \vec{k}_2 \in \{\vec{k}_2\}^{\star} \\ \vec{k}_1 + \vec{k}_2 = \vec{P}}}
\mathcal{C}(\vec{P},\Lambda,\mu;\; \vec{k}_1;\,\vec{k}_2)\; \pi^{\dagger}(\vec{k}_1)\; \pi^{\dagger}(\vec{k}_2) ~,
\end{equation}
where $\pi(\vec{k})$ is a single-pion operator and $\mathcal{C}$ is a Clebsch-Gordan coefficient for $\Lambda_1 \otimes \Lambda_2 \rightarrow \Lambda$ with $\Lambda_{1,2} = A_1^-$ of $\text{O}^{\text{D}}_h$ if $\vec{k}_{1,2} = \vec{0}$ and $\Lambda_{1,2} = A_2$ of $\mathrm{LG}(\vec{k}_{1,2})$ if $\vec{k}_{1,2} \neq \vec{0}$, and where $\Lambda$ is an irrep of $\mathrm{LG}(\vec{P})$. For present purposes, the particular construction of $\pi(\vec{k})$ from quark and gluon fields is not important. It is only necessary that $\pi(\vec{k})$ transforms in the appropriate lattice irrep.
The sum over $\vec{k}_{1,2}$ is a sum over all momenta in the \emph{stars} of $\vec{k}_{1,2}$, which we denote by $\{\vec{k}_{1,2}\}^\star$, and by which we mean all momenta related to $\vec{k}_{1,2}$ by an allowed lattice rotation. In other words, the sum is over $R\,\vec{k}_{1,2} ~ \forall ~ R \in \text{O}^{\text{D}}_h$; the restriction that $\vec{k}_1 + \vec{k}_2 = \vec{P}$ is equivalent to requiring $R \in \mathrm{LG}(\vec{P})$. We will write $\vec{k}_1$, $\vec{k}_2$ and $\vec{P}$ in units of $\tfrac{2\pi}{L}$, using square brackets to indicate the suppression of the dimensionful factor, i.e. $\vec{P}=[1,0,0]$ denotes a momentum of $\tfrac{2\pi}{L}(1,0,0)$.
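The stars entering this sum can be enumerated directly by acting on a momentum with the 24 proper rotations of the cube. The following sketch (an illustration written for this discussion, not the production code; all names are ours) reproduces the familiar multiplicities, e.g. six momenta in $\{[0,0,1]\}^\star$ and eight in $\{[1,1,1]\}^\star$:

```python
from itertools import permutations, product

def perm_sign(p):
    # parity of a permutation of (0, 1, 2) via inversion count
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

def octahedral_rotations():
    # the 24 proper rotations of the cube: signed permutation matrices with det = +1
    rots = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            if perm_sign(perm) * signs[0] * signs[1] * signs[2] == 1:
                M = [[0] * 3 for _ in range(3)]
                for i in range(3):
                    M[i][perm[i]] = signs[i]
                rots.append(M)
    return rots

def star(k):
    # all momenta related to k by an allowed lattice rotation
    def act(M, k):
        return tuple(sum(M[i][j] * k[j] for j in range(3)) for i in range(3))
    return {act(R, k) for R in octahedral_rotations()}

assert len(star((0, 0, 1))) == 6
assert len(star((0, 1, 1))) == 12
assert len(star((1, 1, 1))) == 8
```

Restricting the rotations to $\mathrm{LG}(\vec{P})$ instead of the full group would implement the momentum-conserving constraint in Eq.~\ref{equ:twopionop}.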
The Clebsch-Gordan coefficients, $\mathcal{C}$, can be determined by a group theoretic construction. When $\vec{P} = \vec{k}_1 = \vec{k}_2 = \vec{0}$, there is only one momentum direction in the sum and $\mathcal{C}$ are just the usual Clebsch-Gordan coefficients for $\text{O}^{\text{D}}_h$~\cite{Basak:2005aq}. In the case of two pions the only relevant Clebsch-Gordan is the trivial $A_1^- \otimes A_1^- \rightarrow A_1^+$, $\mathcal{C}=1$, giving a two-pion operator in the $A_1^+$ irrep.
For the two-pion system with $\vec{k}_1 \ne \vec{0}$ but overall at rest, $\vec{P} = \vec{k}_1 + \vec{k}_2 = \vec{0}$, $\vec{k}_2=-\vec{k}_1$, the Clebsch-Gordans required are those for $A_2 (\{\vec{k}_1\}^\star) \otimes A_2 (\{\vec{k}_2\}^\star) \rightarrow \Lambda^P$ with $A_2$ of $\mathrm{LG}(\vec{k}_1)$ and $\Lambda^P$ of $\text{O}^{\text{D}}_h$. The irreps, $\Lambda^P$, arising are given in Ref.~\cite{Moore:2006ng} and summarised in Table \ref{table:twopionops}. We discuss how to calculate the corresponding explicit Clebsch-Gordan coefficients using the induced representation and give values in Appendix \ref{app:operators}.
For the remaining case, $\vec{P} \ne \vec{0}$, we require the Clebsch-Gordan coefficients for $A_2 (\{\vec{k}_1\}^\star) \otimes A_2(\{\vec{k}_2\}^\star) \rightarrow \Lambda$, or if $\vec{k}_2 = \vec{0}$, $A_2 (\{\vec{k}_1\}^\star) \otimes A_1^- (\vec{0}) \rightarrow \Lambda$ and correspondingly for $\vec{k}_1 = \vec{0}$. Again, these are calculated using the induced representation as discussed in Appendix \ref{app:operators} and we give the irreps which arise in Table \ref{table:twopionops}.
In this work we restrict ourselves to $\vec{P} = [0,0,0]$, $[0,0,1]$, $[0,1,1]$ and $[1,1,1]$, and the various combinations of $\vec{k}_1$ and $\vec{k}_2$ used are given in Table \ref{table:twopionops}. Because the two pions are identical bosons, Bose symmetry requires them to be symmetric under interchange and we only use operators with the correct symmetry for isospin-2. However, for completeness, those operators with the wrong symmetry are shown in parentheses in the table.
We want to use these operator constructions at both the source and the sink of correlation functions. This requires us to be able to project single-pion operators onto a given momentum at arbitrary times, something that can be achieved efficiently using the \emph{distillation} methodology~\cite{Peardon:2009gh}.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{c | l l l}
$\vec{P}$ & \quad$\vec{k}_1$ & \quad$\vec{k}_2$ & \multicolumn{1}{c}{$\Lambda^{(P)}$} \\
\hline \hline
\multirow{5}{*}{$\begin{matrix}[0,0,0]\\ \text{O}^{\text{D}}_h \end{matrix}$}
& $[0,0,0]$ & $[0,0,0]$ & $A_1^+$ \\
& $[0,0,1]$ & $[0,0,\text{-}1]$ & $A_1^+$, $E^+$, ($T_1^-$) \\
& $[0,1,1]$ & $[0,\text{-}1,\text{-}1]$ & $A_1^+$, $T_2^+$, $E^+$, ($T_1^-$, $T_2^-$) \\
& $[1,1,1]$ & $[\text{-}1,\text{-}1,\text{-}1]$ & $A_1^+$, $T_2^+$, ($T_1^-$, $A_2^-$) \\
& $[0,0,2]$ & $[0,0,\text{-}2]$ & $A_1^+$, $E^+$, ($T_1^-$) \\
\hline
\multirow{7}{*}{$\begin{matrix}[0,0,1]\\ \text{Dic}_4 \end{matrix}$}
& $[0,0,0]$ & $[0,0,1]$ & $A_1$ \\
& $[0,\text{-}1,0]$ & $[0,1,1]$ & $A_1$, $E_2$, $B_1$ \\
& $[\text{-}1,\text{-}1,0]$ & $[1,1,1]$ & $A_1$, $E_2$, $B_2$ \\
& $[0,0,\text{-}1]$ & $[0,0,2]$ & $A_1$ \\
& $[0,\text{-}1,\text{-}1]$ & $[0,1,2]$ & $A_1$, $E_2$, $B_1$ \\
& $[\text{-}2,0,0]$ & $[2,0,1]$ & $A_1$, $E_2$, $B_1$ \\
& $[\text{-}1,\text{-}1,\text{-}1]$ & $[1,1,2]$ & $A_1$, $E_2$, $B_2$ \\
\hline
\multirow{8}{*}{$\begin{matrix}[0,1,1]\\ \text{Dic}_2 \end{matrix}$}
& $[0,0,0]$ & $[0,1,1]$ & $A_1$ \\
& $[0,1,0]$ & $[0,0,1]$ & $A_1$, ($B_1$) \\
& $[\text{-}1,0,0]$ & $[1,1,1]$ & $A_1$, $B_2$ \\
& $[1,1,0]$ & $[\text{-}1,0,1]$ & $A_1$, $A_2$, ($B_1$, $B_2$) \\
& $[0,1,\text{-}1]$ & $[0,0,2]$ & $A_1$, $B_1$ \\
& $[0,\text{-}1,0]$ & $[0,2,1]$ & $A_1$, $B_1$ \\
& $[1,\text{-}1,1]$ & $[\text{-}1,2,0]$ & $A_1$, $A_2$, $B_1$, $B_2$ \\
& $[1,\text{-}1,0]$ & $[\text{-}1,2,1]$ & $A_1$, $A_2$, $B_1$, $B_2$ \\
\hline
\multirow{5}{*}{$\begin{matrix}[1,1,1]\\ \text{Dic}_3 \end{matrix}$}
& $[0,0,0]$ & $[1,1,1]$ & $A_1$ \\
& $[1,0,0]$ & $[0,1,1]$ & $A_1$, $E_2$ \\
& $[2,0,0]$ & $[\text{-}1,1,1]$ & $A_1$, $E_2$ \\
& $[1,\text{-}1,0]$ & $[0,2,1]$ & $A_1$, $A_2$, $2E_2$ \\
& $[\text{-}1,0,0]$ & $[2,1,1]$ & $A_1$, $E_2$ \\
\end{tabular}
\end{ruledtabular}
\caption{The two-pion operators for each $\vec{P}$; also shown is $\mathrm{LG}(\vec{P})$ -- we show the double-cover groups but only give the irreps relevant for integer spin. Example momenta $\vec{k}_1$ and $\vec{k}_2$ are shown; all momenta in $\{\vec{k}_1\}^{\star}$ and $\{\vec{k}_2\}^{\star}$ are summed over in Eq.~\ref{equ:twopionop}. Swapping around $\vec{k}_1$ and $\vec{k}_2$ gives the same operators up to an overall phase. The irreps given in parentheses do not occur for two identical bosons with a symmetric flavour coupling (e.g. $\pi\pi$ in $I=0$ or $2$) because of the constraints arising from Bose symmetry.}
\label{table:twopionops}
\end{table}
\section{Distillation and correlator construction}\label{sec:dist}
Within \emph{distillation}~\cite{Peardon:2009gh}, we construct operators capable of interpolating a single pion of momentum $\vec{k}$ from the vacuum as
\begin{equation}
\pi^\dag(\vec{k},t) = \sum_{\vec{x}} e^{i \vec{k}\cdot\vec{x}}\left[\bar\psi\Box_\sigma\boldsymbol{\Gamma}^\dagger_t\Box_\sigma\psi\right](\vec{x},t),
\label{eq:distop}
\end{equation}
where the $\boldsymbol{\Gamma}_t$ are, in general, operators acting in space, color and Dirac spin on a time slice, $t$, whose explicit construction is described in detail in Ref.~\cite{Thomas:2011rh}. The quark fields $\psi$ in Equation \ref{eq:distop} are acted upon by a distillation smearing operator $\Box_\sigma$ that emphasizes the low momentum quark and gluon modes that dominate low mass hadrons. This smearing operator is defined as
\begin{equation}
\Box^{ij}_\sigma(\vec{x},\vec{y};t) = \sum_{n=1}^{N_{\mathrm{vecs}}} e^{\sigma^2\lambda_n/4}\xi^i_n(\vec{x},t)\xi^{j\dagger}_n(\vec{y},t)
\label{eq:box}
\end{equation}
where $\lambda_n$, $\xi^i_n(\vec{x},t)$ are the $n^\mathrm{th}$ eigenvalue and eigenvector (in color, $i$, and position, $\vec{x}$) of the gauge-covariant three-dimensional Laplacian operator on a time-slice, $t$. In the present study, the smearing weight $\sigma$ is set to 0 and the number of vectors used is $N_\mathrm{vecs}=64,\,128,\,162$ on the $L/a_s = 16,\,20,\,24$ lattices respectively (a shorthand $\Box$ is used to represent $\Box_{\sigma=0}$).
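Since all weights $e^{\sigma^2\lambda_n/4}$ reduce to unity at $\sigma=0$, the smearing operator is simply a rank-$N_\mathrm{vecs}$ projector built from orthonormal eigenvectors. A toy numerical check (random orthonormal vectors standing in for the Laplacian eigenvectors; the sizes are invented) illustrates the hermiticity and idempotence that follow from the outer-product form:

```python
import numpy as np

rng = np.random.default_rng(0)
V, N_vecs = 32, 8   # toy sizes: V ~ (color x site) degrees of freedom on a time slice
# random orthonormal vectors standing in for the Laplacian eigenvectors xi_n
xi, _ = np.linalg.qr(rng.normal(size=(V, N_vecs)) + 1j * rng.normal(size=(V, N_vecs)))
# sigma = 0: all weights e^{sigma^2 lambda_n / 4} -> 1, so Box is a pure outer product
box = xi @ xi.conj().T
assert np.allclose(box, box.conj().T)    # hermitian
assert np.allclose(box @ box, box)       # idempotent: a rank-N_vecs projector
```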
The outer-product nature of the distillation smearing operator is such that correlators can be factorized into products of factors containing only propagation and factors containing only operator construction. The propagation factors, $\tau$ (called ``perambulators''), and momentum projected operators, $\Phi$, are constructed as matrices in the space of the eigenvectors (the distillation space): where $\tau_{nm}(t',t) = \xi^\dagger_n(t') M^{-1}(t',t)\xi_m(t)$ and $\Phi_{nm}(t)=\xi^\dagger_n(t) \boldsymbol{\Gamma}_t\xi_m(t)$, and $M$ is the lattice representation of the Clover-Dirac operator for the light quarks used in this study.
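The factorization can be checked in a toy model: smearing with $\Box = \xi\xi^\dagger$ collapses a full-volume trace into a trace over the small distillation space. Everything below is schematic (random real matrices in place of $\boldsymbol{\Gamma}_t$ and $M^{-1}$, invented sizes), but the algebraic identity it verifies is exact:

```python
import numpy as np

rng = np.random.default_rng(2)
V, N = 20, 5                                       # toy full-space and distillation-space sizes
xi_t  = np.linalg.qr(rng.normal(size=(V, N)))[0]   # "eigenvectors" at source time t
xi_tp = np.linalg.qr(rng.normal(size=(V, N)))[0]   # "eigenvectors" at sink time t'
G, Gp = rng.normal(size=(V, V)), rng.normal(size=(V, V))   # stand-ins for Gamma_t, Gamma_t'
Minv_fwd = rng.normal(size=(V, V))                 # stand-in for M^{-1}(t', t)
Minv_bwd = rng.normal(size=(V, V))                 # stand-in for M^{-1}(t, t')
box_t, box_tp = xi_t @ xi_t.T, xi_tp @ xi_tp.T

# full-volume trace of the smeared bilinear contraction
full = np.trace(box_tp @ Gp @ box_tp @ Minv_fwd @ box_t @ G @ box_t @ Minv_bwd)

# factorized form: perambulators tau and operator matrices Phi in the N-dim space
tau_fwd = xi_tp.T @ Minv_fwd @ xi_t                # tau_{nm}(t', t)
tau_bwd = xi_t.T @ Minv_bwd @ xi_tp                # tau_{nm}(t, t')
Phi_t, Phi_tp = xi_t.T @ G @ xi_t, xi_tp.T @ Gp @ xi_tp
fact = np.trace(Phi_tp @ tau_fwd @ Phi_t @ tau_bwd)
assert np.isclose(full, fact)
```

The factors $\tau$ and $\Phi$ are $N\times N$ rather than $V\times V$, which is the source of the computational saving.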
As outlined in Section~\ref{sec:multi}, two-hadron operators
are constructed from sums over products of two single-hadron operators of definite momentum, as in Eq.~\ref{equ:twopionop}. The resulting correlators for the $\pi\pi$ operators are of the generic form
\begin{equation}
C_{ij}(t',t) = \langle 0|\big(\pi\pi\big)_i(t')\cdot \big(\pi\pi\big)^\dagger_j(t)|0\rangle,
\label{eq:corr}
\end{equation}
where each operator $\pi$ is of the bilinear form given in Equation \ref{eq:distop}. For isospin-2, quark integration leads to only those Wick contractions featuring quark propagation from source time $t$ to sink time $t'$; there are no annihilation contributions. The resulting traces are over the set of eigenvectors used in Equation \ref{eq:box}, which is much smaller than the full lattice space, allowing for the efficient computation of the correlation functions. In particular, it is the factorization of the smearing operator that allows for the projection of both the source and sink operators onto definite inter-pion momentum, something that is not possible in the traditional ``point-all'' method. This factorization allows for the construction of the full hermitian correlation matrix among source and sink operators in Eq.~\ref{eq:corr}, and hence makes possible the application of the variational method \cite{Michael:1985ne,Luscher:1990ck,Blossier:2009kd}. In this method, the manifest orthogonality among states provides the essential key for determining high-lying excited states and separating nearly degenerate states.
To increase statistics, the correlation functions in Eq.~\ref{eq:corr} are averaged over multiple time sources. The number of time sources, along with the number of eigenvectors of the Laplacian, $N_\mathrm{vecs}$, and the number of configurations for each of the three volumes used in this study are shown in Table~\ref{tab:lattices}.
\section{Optimised pion operators}\label{sec:projection}
In our previous study of $\pi\pi$ isospin-2 scattering~\cite{Dudek:2010ew} we made use only of the simplest composite QCD operators capable of interpolating a pion, $\sim \bar{\psi} \Box_\sigma \gamma_5 \Box_\sigma \psi$ (Eq.~\ref{eq:distop} with $\boldsymbol{\Gamma} = \gamma_5$) where the distillation smearing operator $\Box_\sigma$ in Eq.~\ref{eq:box} took on two different values of the smearing weight $\sigma$. As well as interpolating the ground-state pion from the vacuum, this operator has significant amplitudes to interpolate various excited mesons with pion quantum numbers ($\pi^\star$).
In correlation functions, the contribution of the excited states will die away more rapidly than the ground-state (see the decomposition in Equation \ref{spectral}), but at modest times, the excited states are present to some degree, as shown in Figure \ref{pion_recon}. For consideration of $\pi\pi$ scattering, these excited-state contributions are an unwanted pollution in our correlators that ideally we would like to be absent. Their presence forces us to analyse $\pi\pi$ correlators only at large times where effects of the finite-temporal extent of the lattice are more keenly felt.
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{recon_pi_alt.pdf}
\caption{Contributions of ground state ($\mathfrak{n}=0$) pion (red) and excited pion states (other colors) to the single pion correlator at zero momentum, $C(t) = \big\langle \big(\bar{\psi} \Box \gamma_5 \Box \psi\big)(t) \cdot \big(\bar{\psi} \Box \gamma_5 \Box \psi\big)(0) \big\rangle$ and $N_\mathrm{vecs}=162$ on the $24^3$ lattice. The summed contribution of all states is indicated by the grey curve. Excited state pions are observed to contribute significantly until $t \gtrsim 20 \,a_t$. (Excited state contributions determined from the results of variational analysis using a large operator basis, see the text.)
\label{pion_recon}}
\end{figure}
In principle, if we could find an operator which has increased overlap onto the ground-state pion and reduced overlap onto low-lying excited states, its use would lead to $\pi\pi$ correlators that are truly dominated by $\pi\pi$ at smaller times, with the contribution of unwanted $\pi \pi^\star$ being reduced. Our approach to finding such an ``optimised" single-pion operator is to variationally diagonalise a matrix of single-hadron correlators in a basis of operators, taking as our optimised operator the linear combination of basis operators having lowest energy.
The basis of operators used is as described in Section \ref{sec:singleops} and presented in detail in Refs.~\cite{Dudek:2009qf,Dudek:2010wm,Thomas:2011rh}. It corresponds to fermion bilinears with Dirac gamma matrices and gauge-covariant derivatives\footnote{in this work we use all operators with the correct quantum numbers constructed from any possible gamma matrix and up to three derivatives (for an operator at rest) or up to one derivative (for an operator at non-zero momentum)} between them, constructed to be of definite spin or helicity in a continuum theory and then subduced into irreducible representations of the octahedral group or the appropriate little group. For the pion this is $A_1^{-}$ for zero momentum and $A_2$ for all the non-zero momenta that we consider.
The variational analysis corresponds to solution of the generalised eigenvalue problem \cite{Michael:1985ne,Luscher:1990ck,Blossier:2009kd}
\begin{equation}
C(t) v^{(\mathfrak{n})} = \lambda_\mathfrak{n}(t) C(t_0) v^{(\mathfrak{n})} \label{GEVP}
\end{equation}
where the state energies are obtained from fits to $\lambda_\mathfrak{n}(t) \sim e^{-E_\mathfrak{n}(t-t_0)}$. The optimal combination of operators, $\mathcal{O}_i$, to interpolate state $|\mathfrak{n}\rangle$ from the vacuum is $\Omega_\mathfrak{n}^\dag = \sum_i v^{(\mathfrak{n})}_i \mathcal{O}_i^\dag$. Our implementation of the variational method is described in Ref.~\cite{Dudek:2010wm}.
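As a sketch of the method (with invented energies and overlaps, and a correlator matrix saturated by exactly as many states as operators so that the diagonalisation is exact), the principal correlators reduce to $e^{-E_\mathfrak{n}(t-t_0)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.array([0.15, 0.28, 0.40])     # toy energies (lattice units)
Z = rng.normal(size=(3, 3))          # toy overlaps Z_i^n

def C(t):
    # C_ij(t) = sum_n Z_i^n Z_j^n exp(-E_n t)
    return (Z * np.exp(-E * t)) @ Z.T

t0, t = 5, 12
L = np.linalg.cholesky(C(t0))
Linv = np.linalg.inv(L)
# symmetrised GEVP: eigenvalues of L^{-1} C(t) L^{-T} are the lambda_n(t)
lam = np.sort(np.linalg.eigvalsh(Linv @ C(t) @ Linv.T))[::-1]
E_eff = -np.log(lam) / (t - t0)      # principal correlators give e^{-E_n (t - t0)}
assert np.allclose(np.sort(E_eff), E)
```

In realistic data the matrix is not exactly saturated and the $\lambda_\mathfrak{n}(t)$ are fitted over a time window rather than read off at a single $t$.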
In Figure \ref{pion_projection} we show, for a range of momenta, the improvement obtained using the ``optimised" pion operators alongside the simple $\bar{\psi}\Box \gamma_5 \Box \psi$ operators, where clearly the correlators computed with the optimised operators relax to the ground state more rapidly than the simpler operators, typically at or before $10 a_t$ from the source (a time comparable with the values of $t_0$ found to be optimal in solution of equation \ref{GEVP}).
Use of these optimised operators will lead to some confidence when dealing with $\pi\pi$ correlators where for times $\gtrsim 10 a_t$ away from the source, we will be able to largely neglect the contribution of $\pi \pi^\star$ states.
\begin{figure}
\includegraphics[width=0.5\textwidth]{pion_projection_meff.pdf}
\caption{
Effective masses\footnote{throughout this paper we define the effective mass of a correlator $C(t)$ to be $m_\mathrm{eff} = \tfrac{1}{3a_t}\log \left[ \frac{C(t)}{C(t+3a_t)} \right] $} of single-pion correlators computed using $\bar{\psi}\Box \gamma_5 \Box \psi$ (darker shades, squares) and ``optimised" operators, $\Omega_{\mathfrak{n}=0}$ (lighter shades, circles). Shown for a range of momenta on the $L/a_s =24$ lattice.
\label{pion_projection}}
\end{figure}
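The effective mass used in these figures, $m_\mathrm{eff} = \tfrac{1}{3a_t}\log\left[C(t)/C(t+3a_t)\right]$, can be illustrated on a toy two-state correlator (the energies and amplitudes below are invented): the early-time excited-state contamination relaxes onto the ground-state plateau.

```python
import numpy as np

t = np.arange(0, 40)
E0, E1 = 0.069, 0.30       # toy ground- and excited-state energies (lattice units)
C = 1.0 * np.exp(-E0 * t) + 0.5 * np.exp(-E1 * t)
dt = 3                     # the 3 a_t step in the definition above
m_eff = np.log(C[:-dt] / C[dt:]) / dt
assert m_eff[0] > m_eff[-1]            # excited-state contamination at early times
assert abs(m_eff[-1] - E0) < 1e-3      # late times relax to the ground-state energy
```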
\section{Pion mass and dispersion relation}\label{sec:dispersion}
As well as the volume dependence of energies of multi-hadron states owing to hadron interactions suggested by the L\"uscher formalism, there can also be exponential dependence of single-hadron energies on $L$. We can attempt to determine any such behavior for the pion by computing its mass on the three volumes at our disposal. In Figure \ref{pion_mass} we show the pion mass extracted on our three lattice volumes where there is seen to be very little volume dependence $\left(\frac{m_\pi(L/a_s =24)}{m_\pi(L/a_s=16)} = 0.990(4)\right)$. In \cite{Beane:2011sc}, NPLQCD suggest a $\chi$PT motivated form for the $L$ dependence,
\begin{equation}
m_\pi(L) = m_\pi + c \frac{e^{-m_\pi L}}{\left(m_\pi L \right)^{3/2}}. \label{voldep}
\end{equation}
Fitting to this form we find $a_t m_\pi = 0.06906(13)$ and $a_t c = 0.24(10)$ in good agreement with NPLQCD's $0.069073(63)(62)$, $0.23(12)(7)$ respectively. We use $a_t m_\pi = 0.06906(13)$ as our best estimate for the pion mass in all subsequent calculations\footnote{fitting the same data to a constant leads to $a_t m_\pi = 0.06928(18)$ with a somewhat poorer fit, $\chi^2/N_\mathrm{dof} = 3.0$.}.
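As a consistency check of the quoted numbers (using the anisotropy $\xi = 3.444$, determined in the next section, to convert $L/a_s$ into the dimensionless product $m_\pi L$), evaluating Equation (\ref{voldep}) at the fitted values reproduces the mass ratio between the largest and smallest volumes:

```python
import math

# fitted values from the text; xi converts the spatial extent L/a_s into m_pi * L
m, c, xi = 0.06906, 0.24, 3.444

def m_pi(L_over_as):
    mL = m * xi * L_over_as               # dimensionless m_pi * L
    return m + c * math.exp(-mL) / mL**1.5

ratio = m_pi(24) / m_pi(16)
assert 0.989 < ratio < 0.992              # consistent with the quoted 0.990(4)
```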
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{pion_mass_vol_dep.pdf}
\caption{Pion mass as a function of lattice spatial volume. Volume dependence fitted with Equation (\ref{voldep}).
\label{pion_mass}}
\end{figure}
A complication which arises from our use of an anisotropic lattice is the need to determine the precise value of the anisotropy, $\xi$, which relates the spatial lattice spacing $a_s$ to the temporal lattice spacing $a_t = a_s / \xi$. The anisotropy appears in the dispersion relation of a free-particle, where the periodic boundary conditions in space lead to allowed momenta $\vec{p} = \frac{2\pi}{L}\big(n_x, n_y, n_z\big)$ for integer $n_x,\,n_y,\,n_z$, so that
\begin{equation}
\big(a_t E_{n^2} \big)^2 = \big( a_t m \big)^2 + \frac{1}{\xi^2} \left( \frac{2\pi}{L/a_s} \right)^2 n^2, \label{disp}
\end{equation}
if we assume that mesons on the lattice have a continuum-like dispersion relation. Whether this is a good description will be determined by explicit fits to extracted pion energies at various momenta. In Figure \ref{pion_dispersion} we show pion energies on the three volumes along with a fit to Equation \ref{disp} with $\xi$ as a free parameter. The fit is acceptable, leading to $\xi = 3.444(6)$. Using other parameterisations of the dispersion relation (adding a $p^4$ term, using cosh/sinh etc.) leads to fits which are indistinguishable within the thickness of the line in Figure \ref{pion_dispersion} and to compatible values of $\xi$. In the remainder of the paper we use $\xi = 3.444(6)$ as our best estimate\footnote{in correlated fitting to obtain $a_t m_\pi$ and $\xi$ simultaneously we find a relatively small correlation between the parameters and for error propagations in the remainder of the calculation we treat them as independent variables.}.
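The estimator plotted in the lower panel of Figure \ref{pion_dispersion} can be written down directly; by construction it returns $\xi$ exactly whenever the energies satisfy Equation \ref{disp} (the numerical values below are the fitted ones, inserted purely for illustration):

```python
import math

m, xi, L = 0.06906, 3.444, 24     # fitted a_t m_pi, anisotropy, and L/a_s

def E(n2):
    # continuum-like dispersion relation, Eq. (disp)
    return math.sqrt(m**2 + (2*math.pi / L)**2 * n2 / xi**2)

def xi_hat(n2):
    # estimator shown in the lower panel of the dispersion figure
    return (2*math.pi / L) * math.sqrt(n2) / math.sqrt(E(n2)**2 - m**2)

assert all(abs(xi_hat(n2) - xi) < 1e-12 for n2 in (1, 2, 3, 4))
```

Deviations of $\hat{\xi}(n^2,L)$ from a constant in the actual data would signal discretisation effects beyond the continuum-like form.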
\begin{figure}
\includegraphics[width=0.5\textwidth]{dispersion2.pdf}
\caption{
Pion dispersion relation. Fit as described in the text. Lower plot shows $\hat{\xi}(n^2,L) \equiv \frac{ \tfrac{2\pi}{L/a_s} \sqrt{n^2} }{\sqrt{(a_t E_{n^2})^2 - (a_t m)^2 }}$ and the fitted value of $\xi=3.444(6)$.
\label{pion_dispersion}}
\end{figure}
\section{Effects of finite temporal extent}\label{sec:finiteT}
Our extractions of finite-volume $\pi\pi$ energy spectra follow from analysis of the time-dependence of correlation functions, and the form of these time-dependencies is affected by the finite temporal extent of the lattice. The size of finite-$T$ effects is generically determined by the size of $e^{-m_\pi T}$, which, while small on these lattices, is large enough for its effects to be visible, particularly in the $\pi\pi$ sector.
As an explicit example of a systematic effect whose origin will turn out to be the finite temporal extent of the lattice, we show in Figure \ref{corr_rest_finiteT_shifting} the effective mass of a very simple ``$\pi\pi$" correlator. The same ``$\pi\pi$" operator, $\sum_{\vec{x}} \big[\bar{\psi}\Box \gamma_5 \Box \psi\big](\vec{x}) \cdot \sum_{\vec{y}} \big[\bar{\psi}\Box \gamma_5 \Box \psi\big](\vec{y})$, appears at source ($t_\mathrm{src}=0$) and sink ($t$). The effective mass of the raw correlator is observed to continue falling after appearing to briefly plateau near an energy equal to twice the pion mass. This behavior can occur if the correlator contains, in addition to a sum of exponentially decaying time-dependencies corresponding to discrete energy eigenstates (as in Equation \ref{spectral}), a contribution that is \emph{constant in time}. Such a term can be eliminated by considering the \emph{shifted} correlator, $\widehat{C}_{\delta t}(t) \equiv C(t) - C(t+\delta t)$. An effective mass of this construction with $\delta t = 3 a_t$ is also shown in Figure \ref{corr_rest_finiteT_shifting}, where it is observed to plateau to an energy slightly above twice the pion mass\footnote{the very slow relaxation to the plateau is mainly due to not using optimised pion operators in this construction.}. A direct estimate of the size of the constant term comes from fitting $C(t)$ to the form
\begin{equation}
\sum_\mathfrak{n} A_\mathfrak{n} e^{- E_\mathfrak{n} t} + c, \label{exp_and_const}
\end{equation}
where $\{ A_\mathfrak{n}\},\, \{E_\mathfrak{n} \}$ and $c$ are the fit parameters. A fit to the raw correlator over a time region $15 \to 43$ with two exponentials and a constant gives a $\chi^2/N_\mathrm{dof} = 0.7$ and $a_t E_0 = 0.13966(4)$, $A_0 = 8.76(8) \times 10^4$ and $c = 26.4(13)$, indicating a statistically significant constant term. We propose that the origin of the constant term is the finite temporal extent of the lattice and notice that $2 A_0 e^{- m_\pi T} = 25.4(2)$ is very close to the fitted value of $c$. In the remainder of this section we will attempt to describe the effect of finite-$T$ on our computed $\pi\pi$ correlators at rest and in-flight.
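The numerical closeness claimed here is easy to verify; we take $T = 128\,a_t$ for the temporal extent, a value inferred from the quoted $2A_0 e^{-m_\pi T}$ since $T$ is not restated in this section:

```python
import math

A0, m_pi = 8.76e4, 0.06906   # fitted amplitude and pion mass from the text
T = 128                      # temporal extent in units of a_t (inferred, see lead-in)
c_pred = 2 * A0 * math.exp(-m_pi * T)
assert abs(c_pred - 25.4) < 0.2     # reproduces the quoted 2*A0*exp(-m_pi*T) = 25.4(2)
assert abs(c_pred - 26.4) < 1.3     # within one sigma of the fitted constant c = 26.4(13)
```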
\begin{figure}
\includegraphics[width=0.5\textwidth]{meff_pipi_gamma5_P000_shifting.pdf}
\caption{Effective masses of a ``$\pi\pi$" correlator as described in the text. Raw correlator (red squares) and shifted correlator (green diamonds).
\label{corr_rest_finiteT_shifting}}
\end{figure}
\subsection{Finite-$T$ effects for correlators at rest}
Let us begin by considering a correlator constructed using pion interpolating fields of definite momentum,
\begin{equation}
C^{\vec{k}_1',\vec{k}_2'}_{\vec{k}_1,\vec{k}_2}(t) = \big\langle \pi^-_{\vec{k}_1'}(t)\pi^-_{\vec{k}_2'}(t) \cdot \pi^+_{\vec{k}_1}(0)\pi^+_{\vec{k}_2}(0) \big\rangle, \nonumber
\end{equation}
where in this section the operator $\pi^+$ interpolates a positively charged pion from the vacuum. In practice we will always project these products into definite little group irreps, $\Lambda$, for a given $\vec{P}=\vec{k}_1+\vec{k}_2 = \vec{k}_1'+\vec{k}_2'$ as described in Section \ref{sec:multi}. With anti-periodic boundary conditions in the finite time direction, two-point correlators have the decomposition\footnote{see \cite{Beane:2009kya} for a discussion of these finite-$T$ effects on the spectrum of single particle systems and \cite{Detmold:2011kw} for discussion of many-hadron states.}
\begin{align}
C(t) &= \big\langle \mathcal{O}'(t) \mathcal{O}(0) \big\rangle \nonumber \\
&= \mathrm{tr}\big[ e^{-H T} \mathcal{O}'(t) \mathcal{O}^\dag(0) \big] / \mathrm{tr}\big[ e^{-H T} \big] \nonumber \\
&\propto \sum_{\mathfrak{n}, \mathfrak{m}} e^{-E_\mathfrak{n} T} e^{ (E_\mathfrak{n} - E_\mathfrak{m})t} \big\langle \mathfrak{n} \big| \mathcal{O}'(0) \big| \mathfrak{m} \big\rangle \big\langle \mathfrak{m} \big| \mathcal{O}^\dag(0) \big| \mathfrak{n} \big\rangle , \label{finT}
\end{align}
in terms of eigenstates of the Hamiltonian, $H|\mathfrak{n}\rangle = E_\mathfrak{n} |\mathfrak{n}\rangle$, which will be discrete in a finite spatial volume. The contribution to this sum we are interested in is the only one to survive in the limit $T\to \infty$ and is of the form
\begin{eqnarray}
\sum_\mathfrak{n} \big\langle 0 \big| \pi^- \pi^- \big| (\pi^+\pi^+)_\mathfrak{n} \big\rangle \big\langle (\pi^+\pi^+)_\mathfrak{n} \big| \pi^+ \pi^+ \big|0 \big\rangle e^{-E^\mathfrak{n}_{\pi\pi} t} \nonumber\\
= \sum_\mathfrak{n}\left(Z^\mathfrak{n}_{\pi\pi}\right)^2 e^{-E^\mathfrak{n}_{\pi\pi} t}\label{eq:wanted}
\end{eqnarray}
where $\big|(\pi^+\pi^+)_\mathfrak{n}\big\rangle$ are $I=2$ eigenstates.
At finite $T$ there are other terms in the sum in Equation \ref{finT}, the largest being of form
\begin{equation}
\sum_{\vec{p},\vec{q}} e^{-E_\pi(\vec{p}) \,T} \big\langle \pi^-_{\vec{p}} \big | \pi^-_{\vec{k}_1'}(t) \pi^-_{\vec{k}_2'}(t) \big| \pi^+_{\vec{q}} \big\rangle \big\langle \pi^+_{\vec{q}} \big| \pi^+_{\vec{k}_1}(0) \pi^+_{\vec{k}_2}(0) \big| \pi^-_{\vec{p}} \big\rangle \nonumber
\end{equation}
which has a time-dependence of
\begin{align}
z_{\vec{k}_1}^2 & z_{\vec{k}_2}^2 \big[\delta_{\vec{k}_1',\vec{k}_1} \delta_{\vec{k}_2', \vec{k}_2} + \delta_{\vec{k}_1',\vec{k}_2} \delta_{\vec{k}_2', \vec{k}_1} \big] \nonumber\\
&\times \big[ e^{-E_\pi(\vec{k}_1')\, T} e^{- \left(E_\pi(\vec{k}_2') - E_\pi(\vec{k}_1') \right) t} \nonumber \\
&\quad\quad + e^{-E_\pi(\vec{k}_2')\, T} e^{- \left(E_\pi(\vec{k}_1') - E_\pi(\vec{k}_2') \right) t} \big]
\end{align}
where $z_{\vec{k}} \equiv \big\langle \pi^+_{\vec{k}} \big| \pi^+_{\vec{k}} \big| 0 \big\rangle$. As a first example, consider the case of correlators in the $\pi\pi$ rest frame, $\vec{P}=\vec{0}$, $C^{\vec{k},-\vec{k}}_{\vec{k},-\vec{k}}(t)$, where this term becomes
\begin{equation}
2\, \left(z_{\vec{k}}\right)^4\, e^{-E_\pi(\vec{k})\, T} \label{constant}
\end{equation}
which is simply a constant in time.
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{corr_P000_Ep_finiteT.pdf}
\includegraphics[width=0.4\textwidth]{corr_P000_T2p_finiteT.pdf}
\caption{
Fits to diagonal $\pi\pi$ correlators with $\vec{P}=[0,0,0]$ using the lowest allowed $|\vec{k}|$ that gives rise to irrep $\Lambda^P$. Correlator is plotted via $e^{2 E_\pi(\vec{k}) t} \, C(t)$ such that in the limit of non-interacting pions and $T\to \infty$ we would have a horizontal line. The solid red line shows the result of the fit using equation (\ref{exp_and_const}) while the orange dashed line shows the result of excluding the constant contribution, which should correspond to the $T\to \infty$ behavior. Fit parameters given in Table \ref{tab:corr_rest_finiteT}.
\label{corr_rest_finiteT}}
\end{figure}
We may now address the observation made at the start of this section that the correlator constructed with $\vec{k}=\vec{k}'=[0,0,0]$ has a clear constant term. Our analysis above suggests that its magnitude would be $2\, \left(z_{[000]}\right)^4 \,e^{-m_\pi T}$, while the leading $T$-independent term is of form $\left(Z^{(0)}_{\pi\pi}\right)^2 e^{-E^{(0)}_{\pi \pi} t }$. In the limit of weakly-interacting pions we would expect $Z^{(0)}_{\pi\pi} \to \left(z_{[000]}\right)^2$ and as such $c \to 2 A_0 e^{-m_\pi T}$. This appears to hold true to a rather good approximation in the data.
We expect other finite-$T$ terms to be negligibly small in practice; in particular a term often considered in single-particle analysis, $Z^2 e^{-E T} e^{E t}$, which turns exponentially decaying time-dependence into cosh-like time-dependence can be ignored here. It is suppressed by at least $e^{- 2 m_\pi T}$ and only becomes relevant close to $t = T/2$ while we consider correlators only at earlier times\footnote{In practical terms, while the constant term could contribute $\sim 10\%$ of the correlator at $t=48$, the extra term ``in the cosh" would only be at the $1\%$ level. Other contributions featuring $\langle \pi^+ \pi^+ \pi^\pm | \pi^+ \pi^+ |\pi^\pm\rangle$ formally appear at $\mathcal{O}(e^{-m_\pi T})$, but their $t$-dependence ensures that they provide negligible contributions to the correlators.}.
In Figure \ref{corr_rest_finiteT} we show further evidence for the presence of the constant term in $\pi\pi$ correlators. These correlators, evaluated on the $L/a_s=24$ lattice, using optimised pion operators, have $\vec{P}=[0,0,0]$ and use $\pi_{\vec{k}_1}\pi_{\vec{k}_2}$ products projected into definite irreps $\Lambda^P$ constructed from the lowest allowed $|\vec{k}_1|,\,|\vec{k}_2|$ as detailed in Section \ref{sec:multi} ($E^+ \to \vec{k} = [1,0,0]$, $T_2^+ \to \vec{k} = [1,1,0]$). The results of the correlator fits (of the form given in equation (\ref{exp_and_const})) are presented in Table \ref{tab:corr_rest_finiteT}, where we see that the size of the constant term is in rather good agreement with $4 A_0 e^{-E_\pi(\vec{k}) T}$, the value in a non-interacting theory (see the Clebsch-Gordan coefficients in Appendix \ref{app:operators} for the appropriate combination of pion momenta). Clearly the ``polluting" constant term plays a significant role in the correlator as early as $t \sim 25 a_t$ and if we want to use timeslices beyond this point in variational analysis, we will need to take some account of its presence.
\begin{table}[t]
\begin{tabular}{cc|c| ccc}
irrep. & $\vec{k}$ & $\chi^2/N_\mathrm{dof}$ & $E_0/2E_\pi(\vec{k})$ & $c$ & $4 A_0 e^{-E_\pi(\vec{k}) T}$ \\
\hline
$E^+$ & $[1,0,0]$ & $0.9$ & $1.0014(17)$ & $7.9(5)\times 10^{-6}$ & $ 7.8 \times 10^{-6} $ \\
$T_2^+$ & $[1,1,0]$ & $0.9$ & $1.0002(16)$ & $6.9(11)\times 10^{-7}$ & $ 4.9 \times 10^{-7} $
\end{tabular}
\caption{
Fits, using two exponentials in equation (\ref{exp_and_const}), to diagonal $\pi\pi$ correlators with $\vec{P}=[0,0,0]$ using the lowest allowed $|\vec{k}|$ that gives rise to irrep $\Lambda^P$.
\label{tab:corr_rest_finiteT}}
\end{table}
Our solution is to completely remove the effect of all time-independent terms from correlators, by instead of considering $C(t)$, using \emph{shifted} correlators,
\begin{equation}
\widehat{C}_{\delta t}(t) = C(t) - C(t+\delta t), \label{shift}
\end{equation}
which exactly cancel contributions constant in time for any choice of $\delta t \neq 0$. The desired $\pi\pi$ contributions, equation (\ref{eq:wanted}), are changed only by a rescaling of the $Z^\mathfrak{n}_{\pi\pi}$ to
\begin{equation}
\widehat{Z}^\mathfrak{n}_{\pi\pi} = Z^\mathfrak{n}_{\pi\pi} \left[1 - e^{-E_{\pi\pi}^\mathfrak{n} \delta t} \right]^{1/2}. \label{Zscale}
\end{equation}
This is just a change in scale of overlaps that for a given state, $\mathfrak{n}$, is common to all operators.
Shifting then does not violate any of the conditions for carrying out a variational analysis and we can proceed with use of $\widehat{C}_{\delta t}(t)$ in equation (\ref{GEVP}) to yield the finite-volume energy spectrum $E^\mathfrak{n}_{\pi\pi}$.
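A toy correlator (invented amplitude and energy, with a constant of the size found above) shows the shift removing the constant exactly while only rescaling the exponential amplitude; since the amplitude is $\sim Z^2$, it picks up the factor $[1-e^{-E\delta t}]$, the square of the rescaling in Equation (\ref{Zscale}):

```python
import numpy as np

t = np.arange(0, 40)
E, A, c = 0.1397, 8.76e4, 25.4     # toy pi-pi energy, amplitude and finite-T constant
C = A * np.exp(-E * t) + c
dt = 3
Chat = C[:-dt] - C[dt:]            # shifted correlator, Eq. (shift): the constant cancels
# surviving exponential: amplitude rescaled by [1 - e^{-E dt}] (square of Eq. (Zscale))
A_hat = A * (1 - np.exp(-E * dt))
assert np.allclose(Chat, A_hat * np.exp(-E * t[:-dt]))
m_raw = np.log(C[:-dt] / C[dt:]) / dt
m_shift = np.log(Chat[:-dt] / Chat[dt:]) / dt
assert abs(m_shift[-1] - E) < 1e-9      # shifted effective mass plateaus at E
assert m_raw[-1] < E                    # raw effective mass keeps falling below E
```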
\subsection{Finite-$T$ effects for correlators in-flight}
The unwanted contributions to correlators ``in-flight" ($\vec{P} \neq \vec{0}$) are not time-independent and cannot be removed by simply shifting in time. Following equation (\ref{finT}), they take the generic form
\begin{align}
(z_{\vec{k}_1})^2 \,& (z_{-\vec{k}_1+\vec{P}})^2 \,\big[\delta_{\vec{k}_1',\vec{k}_1} + \delta_{\vec{k}_1',-\vec{k}_1+\vec{P}} \big] \nonumber \\
&\times \big[ e^{-E_\pi(\vec{k}_1')\, T} e^{-\left( E_\pi(-\vec{k}_1'+\vec{P}) - E_\pi(\vec{k}_1') \right)t} \nonumber \\
&\quad\quad + e^{-E_\pi(-\vec{k}_1'+\vec{P})\, T} e^{-\left( E_\pi(\vec{k}_1') - E_\pi(-\vec{k}_1'+\vec{P}) \right)t} \big],\nonumber
\end{align}
where the contributions of largest magnitude occur if either $\vec{k}_1'$ or $-\vec{k}_1'+\vec{P}$ are equal to zero as then the finite-$T$ suppression factor is only $e^{-m_\pi T}$. The largest ``polluting" term in this case would not be a constant but rather have a time dependence $\sim e^{- \Delta E_\pi \, t}$ where $\Delta E_\pi $ is the energy gap between a single pion of momentum $\vec{k}$ and one with momentum $\vec{P}-\vec{k}$. In the case $\vec{P}=\vec{0}$ this reverts to a constant in time as expected.
\begin{figure}
\includegraphics[width=0.5\textwidth]{in_flight_finiteT.pdf}
\caption{Simulated contributions to a correlator ($\vec{k}_1=[0,0,0],\, \vec{k}_2 = [1,0,0]$) of the desired ($T \to \infty$, red) term, equation \ref{flight_good}, and two ``polluting" (finite-$T$) terms from equation \ref{flight_bad}: the first term (leading, green dashed) and the sum of the two terms (leading plus subleading, blue, dot-dashed). Observe that in the time region we will consider, the leading term dominates over the subleading term.
\label{corr_flight_finiteT_model}}
\end{figure}
Consider the concrete example of a correlator with $\pi^+_{[000]} \pi^+_{[100]}$ at the source and $\pi^-_{[000]} \pi^-_{[100]}$ at the sink. In this case, as well as the desired term which is approximately\footnote{this would be exact for non-interacting pions, in $\pi\pi$ $I=2$ scattering the interaction is weak so the approximation should be a reasonable guide.}
\begin{equation}
\sim \left(z_{[000]}\right)^2 \left(z_{[100]}\right)^2 e^{-(m_\pi + E_\pi^{[100]}) t}, \label{flight_good}
\end{equation}
we would have ``polluting" terms
\begin{align}
\sim \left(z_{[000]}\right)^2 \left(z_{[100]}\right)^2 \Big( &e^{-m_\pi T} e^{-(E_\pi^{[100]} - m_\pi )t} \nonumber \\
&+ e^{-E_\pi^{[100]} T} e^{-(m_\pi - E_\pi^{[100]} )t} \Big),\label{flight_bad}
\end{align}
where, as shown in Figure \ref{corr_flight_finiteT_model}, the first of these polluting terms is expected to dominate the pollution for the time regions we consider. We can observe the effect of this leading pollution term in fits to correlators having $\vec{P}=[0,0,1]$ computed on the $L/a_s = 24$ lattice using optimised pion operators - in Figure \ref{corr_flight_finiteT} we show the irreps $\Lambda = A_1,\, B_1,\, B_2$, constructed using the smallest allowed magnitudes of pion momentum. The fit form (which neglects the subleading pollution) is
\begin{equation}
\sum_\mathfrak{n} A_\mathfrak{n} e^{-E_\mathfrak{n} t} + c\, e^{- \big(E_\pi(\vec{k}_\mathrm{max}) - E_\pi(\vec{k}_\mathrm{min}) \big) t} \label{exp_flight}
\end{equation}
with $\{A_\mathfrak{n}\}$, $\{E_\mathfrak{n}\}$ and $c$ as fit variables, using fixed $E_\pi(\vec{k})$ obtained from the dispersion relation (equation \ref{disp}).
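As a rough numerical illustration of why this pollution term matters, the following sketch (Python, with invented parameter values rather than the fitted lattice ones) evaluates a model correlator containing one desired exponential plus the leading finite-$T$ pollution term; when ``flattened'' by the weight $e^{(E_\pi(\vec{k}_1)+E_\pi(\vec{k}_2))t}$, the pollution produces the upward drift at large $t$ seen in Figure \ref{corr_flight_finiteT_model}:

```python
import math

# Hedged sketch: one desired two-pion exponential plus the leading
# finite-T pollution term; all numbers are illustrative placeholders,
# not fitted lattice values.
a_t_m_pi, a_t_E_100 = 0.070, 0.110   # assumed pion energies in lattice units
T = 128                               # assumed temporal extent in units of a_t
A = 1.0                               # signal amplitude
dE = a_t_E_100 - a_t_m_pi             # energy gap in the pollution term
c = math.exp(-a_t_m_pi * T)           # finite-T suppression factor e^{-m_pi T}

def C(t):
    """Desired exponential plus leading finite-T pollution term."""
    return A * math.exp(-(a_t_m_pi + a_t_E_100) * t) + c * math.exp(-dE * t)

# "Flattened" correlator: would be exactly constant without the pollution,
# but here it drifts upward with t since the pollution decays more slowly.
flat = [math.exp((a_t_m_pi + a_t_E_100) * t) * C(t) for t in (5, 20, 35)]
```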
\begin{figure}
\includegraphics[width=0.4\textwidth]{corr_P100_A1_finiteT.pdf}
\includegraphics[width=0.4\textwidth]{corr_P100_B1_finiteT.pdf}
\includegraphics[width=0.4\textwidth]{corr_P100_B2_finiteT.pdf}
\caption{
Fits to diagonal $\pi\pi$ correlators with $\vec{P}=[0,0,1]$ using the lowest allowed $|\vec{k}_1|,\, |\vec{k}_2|$ that gives rise to irrep $\Lambda$. The correlator is plotted via $e^{(E_\pi(\vec{k}_1) + E_\pi(\vec{k}_2) ) t} \, C(t)$ such that in the limit of non-interacting pions and $T\to \infty$ we would have a horizontal line. The solid red line shows the result of the fit using equation \ref{exp_flight} while the orange dashed line shows the result of excluding the contribution proportional to $c$, which should correspond to the $T\to \infty$ behavior.
\label{corr_flight_finiteT}}
\end{figure}
It would appear that these diagonal correlators can be reasonably well described by the fit form proposed, indicating a small but statistically significant impact of finite-$T$ effects on the correlators. We will need to address these terms in any variational extraction of the in-flight $\pi\pi$ spectrum. Our approach is to remove the worst of the pollution exactly and settle for approximate reduction of less acute terms. The largest polluting term has a time-dependence $\propto e^{-E_\pi(\vec{k}_\mathrm{min}) T} e^{-\Delta E_\mathrm{min} \, t}$ where $\vec{k}_\mathrm{min}$ is the lowest momentum that appears in \emph{any} of the correlators making up our correlator matrix, and $\Delta E_\mathrm{min}$ is whatever positive energy gap appears in the corresponding time-dependence. This term can be converted into a constant by forming the following \emph{weighted} correlator,
\begin{align}
\widetilde{C}(t) &= e^{\Delta E_\mathrm{min} \, t} C(t) ,
\end{align}
and the constant term can be removed by then \emph{shifting} the weighted correlator,
\begin{align}
\widehat{\widetilde{C}}_{\delta t}(t) &= \widetilde{C}(t) - \widetilde{C}(t+ \delta t).
\end{align}
We refer to these as \emph{weighted-shifted} correlators. The exact same weighting and shifting procedure is applied to every element of the matrix of correlators such that the effect of the weighting is to shift \emph{all} energies down by a common $\Delta E_\mathrm{min}$. This can be corrected for by adding $\Delta E_\mathrm{min}$ to the variationally obtained spectrum.
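The effect of the weighting-and-shifting can be demonstrated with a minimal sketch (Python; the energies and amplitudes are placeholder values, not measured ones): a model correlator containing one signal exponential plus a pollution term $c\, e^{-\Delta E_\mathrm{min} t}$ becomes, after weighting and shifting, a pure exponential in the shifted energy $E - \Delta E_\mathrm{min}$:

```python
import math

# Hedged sketch of the weighting-and-shifting procedure; the numbers
# below are illustrative placeholders, not measured lattice values.
E, dE_min = 0.18, 0.04     # signal energy and pollution energy gap
A, c = 1.0, 5e-3           # signal and pollution amplitudes
dt = 3                     # shift interval in units of a_t

def C(t):
    # signal + leading finite-T pollution term ~ e^{-dE_min t}
    return A * math.exp(-E * t) + c * math.exp(-dE_min * t)

def weighted_shifted(t):
    def Cw(s):
        return math.exp(dE_min * s) * C(s)   # weight: pollution -> constant
    return Cw(t) - Cw(t + dt)                # shift: constant -> removed

# The result is a pure exponential with the shifted energy E - dE_min,
# so successive-time ratios are exactly e^{-(E - dE_min)}:
r = [weighted_shifted(t + 1) / weighted_shifted(t) for t in (10, 20, 30)]
```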
In summary, while finite-$T$ effects are modest in our two-pion correlators, precision extraction of a $\pi\pi$ energy spectrum requires that we account for them in our analysis. Through appropriate weighting and shifting of correlators before applying the variational method, we believe that we are able to remove the leading systematic effects leaving only sub-leading effects that we find to be smaller than our level of statistical uncertainty.
\section{Finite-volume spectrum}\label{sec:spectrum}
We compute correlator matrices in each irrep $\vec{P},\,\Lambda$ using the basis of operators defined in Section \ref{sec:multi}. After modifying the correlator matrix with the appropriate weighting and/or shifting as described in the previous section, the spectrum is obtained by solution of the generalised eigenvalue problem, equation \ref{GEVP}. Each irrep is considered independently and the entire procedure is repeated on each of the three lattice volumes. The two-pion operators used are given in Table \ref{table:opsused} and the number of operators for each $\vec{P}$ and irrep is given in Table \ref{table:numopsirreps}. We illustrate the method here with the example of the $\vec{P}=[0,0,1]$, $\Lambda = A_1$ irrep on the $L/a_s = 24$ lattice.
\begin{table}[b]
\begin{ruledtabular}
\begin{tabular}{c | c | l l l}
$\vec{P}$ & Volumes & \quad $\vec{k}_1$ & \quad$\vec{k}_2$ & \multicolumn{1}{c}{$\Lambda^{(P)}$} \\
\hline \hline
\multirow{5}{*}{$\begin{matrix}[0,0,0] \\ \text{O}^{\text{D}}_h\end{matrix}$}
& \multirow{5}{*}{$16^3, 20^3, 24^3$}
& $[0,0,0]$ & $[0,0,0]$ & $A_1^+$ \\
& & $[0,0,1]$ & $[0,0,\text{-}1]$ & $A_1^+, E^+$ \\
& & $[0,1,1]$ & $[0,\text{-}1,\text{-}1]$ & $A_1^+, T_2^+, E^+$ \\
& & $[1,1,1]$ & $[\text{-}1,\text{-}1,\text{-}1]$ & $A_1^+, T_2^+$ \\
& & $[0,0,2]$ & $[0,0,\text{-}2]$ & $A_1^+, E^+$ \\
\hline
\multirow{7}{*}{$\begin{matrix}[0,0,1] \\ \text{Dic}_4 \end{matrix}$}
& \multirow{4}{*}{$16^3, 20^3, 24^3$}
& $[0,0,0]$ & $[0,0,1]$ & $A_1$ \\
& & $[0,\text{-}1,0]$ & $[0,1,1]$ & $A_1, E_2, B_1$ \\
& & $[\text{-}1,\text{-}1,0]$ & $[1,1,1]$ & $A_1, E_2, B_2$ \\
& & $[0,0,\text{-}1]$ & $[0,0,2]$ & $A_1$ \\
\cline{2-5}
& \multirow{3}{*}{$20^3, 24^3$}
& $[0,\text{-}1,\text{-}1]$ & $[0,1,2]$ & $A_1, E_2, B_1$ \\
& & $[\text{-}2,0,0]$ & $[2,0,1]$ & $A_1, E_2, B_1$ \\
& & $[\text{-}1,\text{-}1,\text{-}1]$ & $[1,1,2]$ & $A_1, E_2, B_2$ \\
\hline
\multirow{8}{*}{$\begin{matrix}[0,1,1] \\ \text{Dic}_2 \end{matrix}$}
& \multirow{5}{*}{$16^3, 20^3, 24^3$}
& $[0,0,0]$ & $[0,1,1]$ & $A_1$ \\
& & $[0,1,0]$ & $[0,0,1]$ & $A_1$ \\
& & $[\text{-}1,0,0]$ & $[1,1,1]$ & $A_1, B_2$ \\
& & $[1,1,0]$ & $[\text{-}1,0,1]$ & $A_1, A_2$ \\
& & $[0,1,\text{-}1]$ & $[0,0,2]$ & $A_1, B_1$ \\
\cline{2-5}
& \multirow{3}{*}{$20^3, 24^3$}
& $[0,\text{-}1,0]$ & $[0,2,1]$ & $A_1, B_1$ \\
& & $[1,\text{-}1,1]$ & $[\text{-}1,2,0]$ & $A_1, A_2, B_1, B_2$ \\
& & $[1,\text{-}1,0]$ & $[\text{-}1,2,1]$ & $A_1, A_2, B_1, B_2$ \\
\hline
\multirow{5}{*}{$\begin{matrix}[1,1,1] \\ \text{Dic}_3 \end{matrix}$}
& \multirow{3}{*}{$16^3, 20^3, 24^3$}
& $[0,0,0]$ & $[1,1,1]$ & $A_1$ \\
& & $[1,0,0]$ & $[0,1,1]$ & $A_1$ \\
& & $[2,0,0]$ & $[\text{-}1,1,1]$ & $A_1$ \\
\cline{2-5}
& \multirow{2}{*}{$20^3, 24^3$}
& $[1,\text{-}1,0]$ & $[0,2,1]$ & $A_1$ \\
& & $[\text{-}1,0,0]$ & $[2,1,1]$ & $A_1$ \\
\end{tabular}
\end{ruledtabular}
\caption{The two-pion operators used, presented for each $\vec{P}$ on the various volumes; also shown is $\mathrm{LG}(\vec{P})$. We give only the irreps that we considered in this work. Example momenta $\vec{k}_1$ and $\vec{k}_2$ are shown; all momenta in $\{\vec{k}_1\}^{\star}$ and $\{\vec{k}_2\}^{\star}$ are summed over in Eq.~\ref{equ:twopionop}.}
\label{table:opsused}
\end{table}
\begin{table}[t]
\begin{ruledtabular}
\begin{tabular}{c | c c c}
$\vec{P}$ & $\Lambda^{(P)}$ & $16^3$ & $20^3,24^3$ \\
\hline \hline
\multirow{3}{*}{$[0,0,0]$}
& $A_1^+$ & 5 & 5 \\
& $E^+$ & 3 & 3 \\
& $T_2^+$ & 2 & 2 \\
\hline
\multirow{4}{*}{$[0,0,1]$}
& $A_1$ & 4 & 7 \\
& $E_2$ & 2 & 5 \\
& $B_1$ & 1 & 3 \\
& $B_2$ & 1 & 2 \\
\hline
\multirow{4}{*}{$[0,1,1]$}
& $A_1$ & 5 & 8 \\
& $A_2$ & 1 & 3 \\
& $B_1$ & 1 & 4 \\
& $B_2$ & 1 & 3 \\
\hline
\multirow{1}{*}{$[1,1,1]$}
& $A_1$ & 3 & 5 \\
\end{tabular}
\end{ruledtabular}
\caption{The number of two-pion operators used for each $\vec{P}$ and irrep on the various lattice volumes.}
\label{table:numopsirreps}
\end{table}
\subsection{Example of $\vec{P}=[0,0,1]$, $\Lambda = A_1$}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{P100_A1_prin_corr.pdf}
\caption{
Principal correlators from solution of equation \ref{GEVP} applied to the weighted-shifted correlator matrix $\widehat{\widetilde{C}}(t)$ for $\vec{P}=[0,0,1]$, $\Lambda = A_1$ with $t_0 = 14 a_t$. Plotted is $e^{E (t-t_0)}\lambda(t)$ against $t/a_t$ along with fits to the time-dependence according to equation \ref{lambda_fit}. Also plotted in the bottom-right are the effective masses of the principal correlators (with the energy weighting $\Delta E_\mathrm{min}$ corrected) and the fit values $E$ superimposed as horizontal bands. All energies are those in the frame in which the lattice is at rest.
\label{P100_A1_prin_corr}}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{P100_recon_alt.pdf}
\caption{
Diagonal elements of the weighted-shifted correlation matrix for $\vec{P}=[0,0,1]$, $\Lambda = A_1$: $\widehat{\widetilde{C}}_{[\vec{k}_1, \vec{k}_2]}^{[\vec{k}_1, \vec{k}_2]}(t)$ and their reconstruction using terms in the sum over states in equation \ref{eq:wanted}. Plotted is $e^{\widetilde{E}_{\pi\pi}^{(0)} t} \, \widehat{\widetilde{C}}(t)$.
\label{P100_A1_recon}}
\end{figure*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{P100_Zmatrix.pdf}
\caption{
``Matrix" plot of values of $Z^{(\mathfrak{n})}_{[\vec{k}_1, \vec{k}_2]}$ normalised according to $\frac{Z^{(\mathfrak{n})}_{[\vec{k}_1, \vec{k}_2]} }{ \mathrm{max}_\mathfrak{n}\left[ Z^{(\mathfrak{n})}_{[\vec{k}_1, \vec{k}_2]} \right]}$ so that the largest overlap across all states for a given operator $[\vec{k}_1, \vec{k}_2]$ is unity.
\label{Z_P100}}
\end{figure}
As an explicit example of our variational fitting procedure consider the $\vec{P}=[0,0,1]$, $A_1$ irrep evaluated on the $L/a_s = 24$ lattice. Our basis of operators here is obtained by applying equation \ref{equ:twopionop} and is thus of the form
\begin{equation}
\big( \pi\pi \big)_{[001], A_1}^{[\vec{k}_1, \vec{k}_2]\dag} = \sum_{\substack{\vec{k}_1 \in \{\vec{k}_1\}^\star \\ \vec{k}_2 \in \{\vec{k}_2\}^\star \\\vec{k}_1+\vec{k}_2 = [001] }} \mathcal{C}([0,0,1],A_1; \vec{k}_1; \vec{k}_2 )\; \pi^\dag(\vec{k}_1)\, \pi^\dag(\vec{k}_2) \label{p100ops}
\end{equation}
with constructions using the pion momenta given in Table \ref{table:opsused}. This gives a correlation matrix, $C(t)$, of dimension 7. In order to remove the largest finite-$T$ effects, as discussed in the previous section, the \emph{weighted-shifted} correlation matrix, $\widehat{\widetilde{C}}(t)$, is formed, using $\Delta E_\mathrm{min} = E_\pi([0,0,1]) - m_\pi$. This matrix is analysed using equation \ref{GEVP} - for $t_0 = 14 a_t$, the obtained principal correlators, $\lambda_\mathfrak{n}(t)$, are shown in Figure \ref{P100_A1_prin_corr} along with fits of the form,
\begin{equation}
\lambda(t) = (1 - A) e^{- E (t -t _0)} + A e^{-E' (t-t_0)} \label{lambda_fit}
\end{equation}
where $E$, $E'>E$ and $A \ll 1$ are the fit parameters. The second exponential allows for the excited state\footnote{by ``excited states" here we might have several types, including $\pi\pi$ with large relative momenta, $\pi \pi^\star$ and other inelastic contributions.} pollution expected to be present for $t \lesssim t_0$ (our reported spectra are just the values of $E$, $E'$ is discarded). The fits are very good and the absence of any significant upward curvature at larger $t$ (as in Figure \ref{corr_flight_finiteT_model}) suggests that our weighting-shifting procedure has removed the bulk of the finite-$T$ pollution\footnote{such upward curvature \emph{is} seen in variational analysis of the raw correlator matrix, $C(t)$.}.
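The behaviour of this two-exponential form can be seen in a short sketch (Python; the parameter values are invented for illustration, not fit results): the effective energy of $\lambda(t)$ relaxes onto $E$ once $t-t_0$ is large compared with $1/(E'-E)$, which is why the reported spectra are just the values of $E$:

```python
import math

# Hedged illustration of the principal-correlator fit form
# lambda(t) = (1-A) e^{-E(t-t0)} + A e^{-E'(t-t0)}; parameter values
# below are invented for illustration only.
t0, E, Ep, A = 14, 0.25, 0.45, 0.05

def lam(t):
    return (1 - A) * math.exp(-E * (t - t0)) + A * math.exp(-Ep * (t - t0))

def E_eff(t):
    """One-step effective energy log[lambda(t)/lambda(t+1)]."""
    return math.log(lam(t) / lam(t + 1))

# Excited-state pollution decays away: E_eff -> E as t grows.
early, late = E_eff(t0 + 2), E_eff(t0 + 25)
```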
The solution of equation \ref{GEVP} also provides eigenvectors $v^{(\mathfrak{n})}$ which can be converted into overlaps, $Z^{(\mathfrak{n})}_{[\vec{k}_1, \vec{k}_2]} \equiv \big\langle (\pi\pi)_\mathfrak{n};\,[0,0,1], \, A_1 \big| \big( \pi\pi \big)_{[001], A_1}^{\dag[\vec{k}_1, \vec{k}_2]} \big| 0 \big\rangle$ using $\widehat{Z}^{(\mathfrak{n})}_{[\vec{k}_1, \vec{k}_2]} = \big(\widehat{v}^{(\mathfrak{n})\dag} \widehat{\widetilde{C}}(t_0)\big)_{[\vec{k}_1, \vec{k}_2]} \,e^{\widetilde{E}_\mathfrak{n} t_0 / 2}$. Our method of solution of the generalised eigenvalue problem treats each timeslice independently such that we actually obtain $v^{(\mathfrak{n})}(t)$ and thus $\widehat{Z}(t)$. This time-dependence is fitted to a constant (or a constant plus an exponential if that is required to get a good fit) and the resulting constant is rescaled to undo the effect of the shifting of the correlators in the manner prescribed by equation \ref{Zscale}.
The overall quality of description of the correlators by the variational solution can be seen in Figure \ref{P100_A1_recon} along with an indication of how much each $\big|(\pi\pi)_\mathfrak{n}\big\rangle$ state contributes to each of the diagonal correlators. These contributions are reconstructed from the results of the variational analysis by building the sum in equation \ref{eq:wanted} state-by-state. The description, as one would expect, is excellent for $t > t_0$; indeed the ability to get a good description of the correlators using only the number of states equal to the basis size is our condition to determine an appropriate value of $t_0$ \cite{Dudek:2007wv}. That we are able to countenance a value as low as $t_0 = 14 a_t$ is due to our use of optimised pion operators so that $\pi\pi^\star$ contributions to the correlators are much reduced.
It is apparent in Figure \ref{P100_A1_recon} that the basis of operators, defined by equation \ref{p100ops}, is rather close to a diagonalising basis and this can be clearly seen in Figure \ref{Z_P100} which shows the $Z$ values for each state $\mathfrak{n}$ and each operator $[\vec{k}_1, \vec{k}_2]$. This indicates that the finite-volume $\pi\pi$ eigenstates are close to being states of definite pion momentum which agrees with the expectation that the $I=2$ interpion interaction strength is weak and the observation of only small shifts from non-interacting $\pi\pi$ energies. It is interesting to note that the largest deviations from diagonal behaviour, i.e. the largest mixing of the non-interacting state basis, occurs for levels which are very close in energy. This is precisely what we would expect from perturbation theory, where small energy denominators enhance mixing of near-degenerate states. That we are able to resolve this mixing with a high degree of confidence is an advantage of our use of a variational approach.
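The perturbation-theory expectation can be made concrete with a two-level sketch (Python; the energies and coupling below are invented for illustration): for a fixed small off-diagonal coupling, the mixing angle of a $2\times 2$ Hamiltonian is far larger for a near-degenerate pair of levels than for a well-separated pair:

```python
import math

# Hedged two-level sketch of why near-degenerate levels mix most
# strongly: for H = [[E1, v], [v, E2]] the mixing angle satisfies
# tan(2*theta) = 2v / |E1 - E2|. All numbers are illustrative.
def mixing_angle(E1, E2, v):
    return 0.5 * math.atan2(2 * v, abs(E1 - E2))

v = 0.005                                   # fixed small coupling
well_separated = mixing_angle(0.40, 0.50, v)
near_degenerate = mixing_angle(0.40, 0.402, v)
# The small energy denominator strongly enhances mixing of the nearby pair.
```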
\subsection{Volume dependence of $\pi\pi$ spectra}
We perform this analysis procedure independently for each $\vec{P},\Lambda$ on each volume. The energies obtained are in the frame in which the lattice is at rest, and can be more usefully expressed in the $\pi\pi$ center-of-momentum frame,
\begin{align}
E_\mathsf{cm} &= \sqrt{ E_\mathsf{lat}^2 - |\vec{P}|^2} \nonumber \\
\big( a_t E_\mathsf{cm} \big) &= \bigg[ \big(a_t E_\mathsf{lat}\big)^2 - \tfrac{1}{\xi^2} \left(\tfrac{2\pi}{L/a_s} \right)^2 n^2_{\vec{P}} \bigg]^{1/2} \label{kinematics}
\end{align}
where we use the anisotropy, $\xi$, determined from the pion dispersion relation in Section \ref{sec:dispersion}. In Figures \ref{P000}, \ref{P100}, \ref{P110} and \ref{P111} we show the volume dependence of the extracted center-of-momentum frame energy spectrum along with the energies of pairs of non-interacting pions carrying various allowed lattice momenta.
In all cases we observe small energy shifts, with the largest shifts in $A_1$ irreps, reflecting the expected strongest interaction in $S$-wave scattering.
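The conversion in equation \ref{kinematics} is a one-line computation; the following sketch (Python) implements it with illustrative inputs ($\xi \approx 3.5$ is only the order of magnitude of the anisotropy, not the fitted value from Section \ref{sec:dispersion}):

```python
import math

# Hedged sketch of equation (kinematics): boost a lattice-frame energy
# to the pi pi center-of-momentum frame. Input values are illustrative.
def a_t_E_cm(a_t_E_lat, n_sq, L_over_as, xi):
    """a_t E_cm = sqrt[(a_t E_lat)^2 - (1/xi^2)(2 pi / (L/a_s))^2 n^2]."""
    p_sq = (1.0 / xi**2) * (2 * math.pi / L_over_as)**2 * n_sq
    return math.sqrt(a_t_E_lat**2 - p_sq)

# e.g. P = [0,0,1] (n^2 = 1) on the L/a_s = 24 lattice:
E_cm = a_t_E_cm(0.16, n_sq=1, L_over_as=24, xi=3.5)
```

For $\vec{P}=\vec{0}$ the boost is trivial and $E_\mathsf{cm}=E_\mathsf{lat}$.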
\begin{figure}
\includegraphics[width=0.5\textwidth]{P000_all.pdf}
\caption{
Extracted center-of-momentum frame energy spectra for $\vec{P}=[0,0,0]$ irreps $A_1^+,E^+,T_2^+$. Also shown (in red) are non-interacting pion pair energies, $\sqrt{m_\pi^2 + |\vec{k}_1|^2} + \sqrt{m_\pi^2 + |\vec{k}_2|^2}$, whose uncertainty is determined by the uncertainty on $a_t m_\pi$ and $\xi$ determined in Section \ref{sec:dispersion}. The grey area represents the opening of the inelastic ($4\pi$) threshold.
\label{P000}}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{P100_all.pdf}
\caption{
As Figure \ref{P000} for $\vec{P}=[0,0,1]$.
\label{P100}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{P110_all.pdf}
\caption{
As Figure \ref{P000} for $\vec{P}=[0,1,1]$.
\label{P110}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{P111_A1.pdf}
\caption{
As Figure \ref{P000} for $\vec{P}=[1,1,1]$.
\label{P111}}
\end{figure}
\clearpage
\section{Phase-shifts from finite-volume spectra}\label{sec:luescher}
The formalism to relate the amplitude for two-particle elastic scattering in partial waves labelled by angular momentum $\ell$, to the spectrum of states in a finite cubic spatial volume, is laid down in \cite{Luscher:1991cf}, with extensions to the case of a moving frame presented in \cite{Rummukainen:1995vs,Kim:2005gf,Christ:2005gi}. Because we are considering $\pi\pi$ scattering in isospin-2 where only even $\ell$ occur, there is a one-to-one correspondence between the irreps of the symmetry group relevant for the L\"uscher formalism in a moving frame and the little group irreps\footnote{We note that the symmetry group relevant for the L\"uscher formalism here is the subgroup of $\text{O}^{\text{D}}_h$ under which $\vec{P} \rightarrow \pm\vec{P}$ rather than the constraint for little groups that $\vec{P} \rightarrow \vec{P}$. The irreps are similar to those of the little groups but have an additional ``parity'' label.}
and so we will refer to the little group irreps.
The formalism can be compactly expressed in a single equation,
\begin{equation}
\det\left[ \mathbf{E}(p_\mathsf{cm}) - \mathbf{U}^{(\vec{P},\Lambda)}\Big( \big(\tfrac{p_\mathsf{cm} L}{2\pi} \big)^2 \Big) \right] = 0.\label{luescher}
\end{equation}
$\mathbf{U}$ is a formally infinite-dimensional matrix of known functions whose rows and columns are each labelled by the pair $(\ell, n)$, $U_{\ell n;\ell' n'}$. $\{\ell\}$ are the angular momenta which subduce into the irrep, $\Lambda$, and $n$ is an index indicating the $n^\mathrm{th}$ embedding of that $\ell$ into this irrep; the pattern of these subductions is given in Table \ref{table:pipiirreps}. $\mathbf{U}$ is a function of the dimensionless variable $q^2 = \big(\tfrac{p_\mathsf{cm} L}{2\pi} \big)^2$, featuring the center-of-momentum frame scattering momentum and the spatial length of the cubic lattice, $L$.
$\mathbf{E}$ is a diagonal matrix, independent of $L$, which encodes the scattering amplitude through the elastic scattering phase-shifts, $\delta_\ell(p_\mathsf{cm})$, as $E_{\ell n;\ell' n'} = e^{2i \delta_\ell(p_\mathsf{cm})} \delta_{\ell' \ell} \delta_{n'n}$.
$\mathbf{U}$ is conveniently expressed in terms of a matrix $\mathbf{M}$ as $\mathbf{U} = \big( \mathbf{M} +i \mathbf{1} \big) \big( \mathbf{M} -i \mathbf{1} \big)^{-1}$ where we can obtain the elements of $\mathbf{M}$ using
\begin{widetext}
\begin{equation}
\mathcal{M}^{(\vec{P},\Lambda, \mu)}_{\ell n; \ell' n'}(q^2) \;\delta_{\Lambda,\Lambda'} \delta_{\mu,\mu'} =
\sum_{\substack{\hat{\lambda}=\pm |\lambda| \\ m=-\ell \ldots \ell }} \;
\sum_{\substack{\hat{\lambda}'=\pm |\lambda'| \\ m'=-\ell' \ldots \ell' }} \;
\mathcal{S}_{\vec{P},\Lambda,\mu}^{\tilde{\eta},\lambda*} \, D^{(\ell)*}_{m \lambda}(R) \cdot \mathcal{M}^{(\vec{P})}_{\ell m; \ell' m'}(q^2) \cdot \mathcal{S}_{\vec{P},\Lambda,\mu'}^{\tilde{\eta},\lambda'} \, D^{(\ell')}_{m' \lambda'}(R).
\end{equation}
\end{widetext}
In this equation, $R$ is a rotation carrying the $J_z$ quantisation axis $(0,0,P)$ into $\vec{P}$, with $D^{(\ell)}_{m \lambda}(R)$ relating $J_z$ values, $m$, to helicities, $\lambda$. A convention for constructing $R$ is given in \cite{Thomas:2011rh}. $\mathcal{S}_{\vec{P},\Lambda,\mu}^{\tilde{\eta},\lambda}$ is the subduction from helicity $\lambda$ to the $\mu^\mathrm{th}$ row of the lattice irrep $\Lambda$ (see Appendix \ref{app:operators}). Different magnitudes of helicity, $|\lambda|,|\lambda'|$ give rise to the different embeddings $n,n'$. The ``reflection parity", $\tilde{\eta} \equiv P(-1)^\ell = +$ for a system of two pseudoscalars.
$\mathcal{M}^{(\vec{P})}_{\ell m; \ell' m'}$ is the same object defined in equation (89) of \cite{Rummukainen:1995vs} where it is expressed in terms of a known linear combination of generalised zeta functions of argument $q^2$.
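As a simple consistency check of these definitions, note that if the basis is truncated to a single partial wave (e.g.\ keeping only $\ell=0$ and neglecting $\ell \geq 4$ in the $\vec{P}=\vec{0}$, $A_1^+$ irrep), the determinant condition of equation \ref{luescher} collapses to a one-dimensional relation,
\begin{equation*}
e^{2i\delta_0(p_\mathsf{cm})} = \frac{\mathcal{M} + i}{\mathcal{M} - i}
\quad\Longleftrightarrow\quad
\cot \delta_0(p_\mathsf{cm}) = \mathcal{M}\Big( \big(\tfrac{p_\mathsf{cm} L}{2\pi}\big)^2 \Big),
\end{equation*}
which is the familiar single-channel form of the L\"uscher quantisation condition.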
One potential use of equation \ref{luescher} is to take a scattering problem where the amplitudes are known and find the corresponding spectrum of states in a certain finite-volume box. For a known set of scattering phase-shifts, $\{\delta_\ell(p_\mathsf{cm})\}$, the finite-volume spectrum on an $L\times L \times L$ spatial lattice can be obtained by solving equation \ref{luescher} for discrete values of $p_\mathsf{cm}$ which give discrete values of $E_\mathsf{lat}$. Of course in practice, for any given lattice irrep, $\Lambda$, we need to truncate the infinite $(\ell, n)$ basis to the set of phase-shifts $\{\delta_\ell(p_\mathsf{cm})\}$ known to us. Fortunately, at low scattering momentum there is a hierarchy in $\delta_\ell(p_\mathsf{cm})$ which follows from angular momentum conservation, $\delta_\ell(p_\mathsf{cm}) \sim p_\mathsf{cm}^{2\ell +1}$, such that $\delta_0 \gg \delta_2 \gg \delta_4 \ldots$, and we may be justified in making a finite truncation in $\ell$.
\begin{table}[!h]
\begin{ruledtabular}
\begin{tabular}{c c | c l }
$\vec{P}$ & $\mathrm{LG}(\vec{P})$ & $\Lambda^{(P)}$ & $\pi\pi ~ \ell^N$\\
\hline \hline
\multirow{5}{*}{$[0,0,0]$} & \multirow{5}{*}{$\text{O}^{\text{D}}_h$}
& $A_1^+$ & $0^1,~4^1$\\
& & $T_1^+$ & $4^1$\\
& & $T_2^+$ & $2^1,~4^1$\\
& & $E^+$ & $2^1,~4^1$\\
\hline
\multirow{5}{*}{$[0,0,n]$} & \multirow{5}{*}{$\text{Dic}_4$}
& $A_1$ & $0^1,~2^1,~4^2$\\
& & $A_2$ & $4^1$\\
& & $E_2$ & $2^1,~4^2$\\
& & $B_1$ & $2^1,~4^1$\\
& & $B_2$ & $2^1,~4^1$\\
\hline
\multirow{4}{*}{$[0,n,n]$} & \multirow{4}{*}{$\text{Dic}_2$}
& $A_1$ & $0^1,~2^2,~4^3$\\
& & $A_2$ & $2^1,~4^2$ \\
& & $B_1$ & $2^1,~4^2$\\
& & $B_2$ & $2^1,~4^2$\\
\hline
\multirow{3}{*}{$[n,n,n]$} & \multirow{3}{*}{$\text{Dic}_3$}
& $A_1$ & $0^1,~2^1,~4^2$\\
& & $A_2$ & $4^1$\\
& & $E_2$ & $2^2,~4^3$\\
\hline
$[n,m,0]$ & \multirow{2}{*}{$\text{C}_4$}
& $A_1$ & $0^1,~2^3,~4^5$\\
$[n,n,m]$ & & $A_2$ & $2^2,~4^4$\\
\end{tabular}
\end{ruledtabular}
\caption{The pattern of subductions of $I=2$ $\pi\pi$ partial waves, $\ell \leq 4$, into lattice irreps, $\Lambda$, where $N$ is the number of embeddings of this $\ell$ in this irrep. This table is derived from Table~\ref{table:latticeirreps} by considering the subductions of the $\ell$ for $\vec{P}=\vec{0}$ and the various helicity components for each $\ell$ for $\vec{P}\neq\vec{0}$. Here $\vec{P}$ is given in units of $\tfrac{2\pi}{L}$ and $n,m$ are non-zero integers with $n \ne m$. We show the double-cover groups but only give the irreps relevant for integer spin.
}
\label{table:pipiirreps}
\end{table}
\clearpage
\subsection{A toy model of $\pi\pi$ scattering}
\begin{figure*}[!t]
\includegraphics[width=0.9\textwidth]{P100_A1_rev_eng.pdf}
\caption{
Finite-volume spectrum for the toy model of effective-range parameterisations in the irrep $\vec{P}=[0,0,1]$, $A_1$. Green squares indicate the spectrum including only $\ell=0$ scattering, blue circles include $\ell=0,2$ and black diamonds include $\ell=0,2,4$. Note that for many of the energy levels the squares, circles and diamonds lie on top of each other. Red curves show non-interacting energies of pion pairs with momenta $\vec{k}_1, \vec{k}_2$.
\label{P100_A1_rev_eng}}
\end{figure*}
In order to demonstrate the formalism, we will briefly break away from the analysis of finite-volume spectra obtained from lattice QCD to consider a simple toy model of $\pi\pi$ scattering in which the scattering amplitudes are known to us.
The toy model is built from an effective range parameterisation of elastic scattering in $\ell = 0,2,4$ partial waves. We have
\begin{equation}
p_\mathsf{cm}^{2\ell +1} \cot \delta_\ell(p_\mathsf{cm}) = \frac{1}{a_\ell} + \frac{1}{2} r_\ell \,p_\mathsf{cm}^2, \label{effrange}
\end{equation}
with parameters
\begin{align}
a_0 &= -0.8\, \mathrm{GeV}^{-1}, \, r_0 = +2.5 \, \mathrm{GeV}^{-1}, \nonumber \\
a_2 &= -2.4 \, \mathrm{GeV}^{-5},\, r_2 \equiv 0, \nonumber \\
a_4 &= -5.0 \, \mathrm{GeV}^{-9}, \, r_4 \equiv 0, \nonumber \label{effrangevalues}
\end{align}
which happens to reasonably describe the experimental $\pi\pi$ $I=2$ scattering data up to a momentum $p_\mathsf{cm} \sim 0.6 \, \mathrm{GeV}$ \cite{Hoogland:1977kt,Cohen:1973yx,Zieminski:1974ex,Durusoy:1973aj}.
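As a sketch of how these parameters translate into phase shifts, the following fragment (Python) evaluates equation \ref{effrange} for the toy-model values quoted above; it makes explicit both the repulsive (negative) sign of the phase shifts and the angular-momentum hierarchy $|\delta_0| \gg |\delta_2| \gg |\delta_4|$ at low momentum:

```python
import math

# Toy-model effective-range parameters (a_ell, r_ell), in the GeV units
# quoted in the text.
PARAMS = {0: (-0.8, 2.5), 2: (-2.4, 0.0), 4: (-5.0, 0.0)}

def delta_deg(ell, p_cm):
    """Phase shift in degrees from p^{2l+1} cot(delta_l) = 1/a_l + r_l p^2/2."""
    a, r = PARAMS[ell]
    return math.degrees(math.atan(p_cm**(2 * ell + 1) / (1.0 / a + 0.5 * r * p_cm**2)))

# Evaluate at p_cm = 0.1 GeV: all waves repulsive, S-wave dominant.
d0, d2, d4 = (delta_deg(ell, 0.1) for ell in (0, 2, 4))
```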
Given this parameterisation and the choice $m_\pi = 0.396 \,\mathrm{GeV}$ we solve equation \ref{luescher} for the finite-volume spectrum in several irreps, $(\vec{P},\Lambda)$, over a range of volumes, $L = 2.0\to 5.0 \, \mathrm{fm}$. In Figure \ref{P100_A1_rev_eng} we show the center-of-momentum frame finite-volume energy spectrum for one example irrep $\vec{P}=[0,0,1],\, \Lambda = A_1$.
At each volume we show the spectrum obtained from three different scattering parameterisations:
the green squares show the spectrum with only $S$-wave scattering ($\delta_2 = \delta_4 \equiv 0$), the blue circles include also $D$-wave scattering ($\delta_4 \equiv 0$), and the black diamonds correspond to all of $\delta_{0,2,4}$ being described by the effective range parameterisations given above. We observe that the contribution of higher partial waves to determining the finite-volume energy varies with excitation level.
The problem to be solved in lattice QCD is actually the inverse of that just described - we start with the finite-volume spectrum determined through analysis of correlation functions and want to find the phase-shifts as a function of scattering momentum. If a given irrep received contributions from only a single $\ell$ this would be relatively simple - we would solve equation \ref{luescher} for unknown $\delta_\ell(p_\mathsf{cm})$ by inputting the determined value of $p_\mathsf{cm}$ extracted from $E_\mathsf{lat}$ using equation \ref{kinematics}. The toy model construction indicates the potential difficulty with such a naive approach - equation \ref{luescher} depends on the value of many $\delta_\ell$ simultaneously and on the face of it this is an underconstrained problem.
Within the toy model we can explore the effect of the simplest possible assumption that higher partial waves contribute only negligibly - consider the spectrum in $\vec{P}=[0,0,1],\, \Lambda=A_1$ for $L = 3.5\,\mathrm{fm}$. In Figure \ref{delta2_sensitivity}(a) we show the extracted $\delta_0$ for the lowest four energy levels as a function of a supplied value\footnote{included in equation \ref{luescher} in $\mathbf{E}$ as a fixed parameter} of $\delta_2$ (and with $\delta_4 = 0$). The naive assumption of $\delta_2 = 0$ is seen to give reasonable estimates of $\delta_0$ for the lowest two levels, but to be significantly discrepant for the next two levels. Varying $\delta_2$ between $\pm2|\delta_2^\mathrm{exact}|$ (which we know because this is a toy model) gives the curves shown. Figure \ref{delta2_sensitivity}(b) shows the sensitivity to $\delta_4$ assuming that $\delta_2$ is known exactly.
In this exercise we explicitly see that the influence of higher partial waves can vary significantly between levels; for $\mathfrak{n}=0,1,3$ the influence of $\delta_{2,4}$ is modest and given a ``reasonable" estimate of their magnitude we could assign a systematic error on $\delta_0$ that would encompass the exact result. On the other hand, no information can be obtained from level $\mathfrak{n}=2$ without very precise knowledge of both $\delta_2$ and $\delta_4$ at the corresponding scattering momentum. This will not be possible in any practical calculation and we must be careful to identify those cases where an energy level shows such extreme sensitivity.
\begin{figure*}
\includegraphics[width=0.47\textwidth]{delta2_sensitivity.pdf}
\includegraphics[width=0.47\textwidth]{delta4_sensitivity.pdf}
\caption{Lowest four energy levels ($\mathfrak{n} = 0,1,2,3$) in toy model with volume $L=3.5\,\mathrm{fm}$ in irrep $\vec{P}=[0,0,1]$, $A_1$.\\
\hspace{0.5cm}(a) Sensitivity of $\delta_0$ extracted from equation \ref{luescher} as a function of assumed values of $\delta_2$ in range $\pm 2|\delta^\mathrm{exact}_2|$ with $\delta_4=0$\\
\hspace{0.5cm}(b) Sensitivity of $\delta_0$ extracted from equation \ref{luescher} as a function of assumed values of $\delta_4$ in range $\pm 2|\delta^\mathrm{exact}_4|$ with $\delta_2=\delta_2^\mathrm{exact}$\\
Boxes on the far left indicate exact values of $\delta_0$ at the corresponding scattering momenta. Arrows on the $x$-axis indicate exact values of $\delta_{2,4}$.
\label{delta2_sensitivity}}
\end{figure*}
Thus even if our main aim is only to determine $\delta_0(p_\mathsf{cm})$ we see that it is incumbent upon us to also estimate $\delta_{\ell > 0}$. The easiest way to do this is to analyse the finite-volume spectra of irreps which receive no contribution from $\ell=0$, see Table \ref{table:pipiirreps}. Typically any irrep that features $\ell = 2$ will also feature $\ell = 4$ so we have a similar problem of estimating $\delta_2$ given no knowledge of $\delta_4$. Fortunately in the case under consideration where the interactions are weak we encounter situations in which energy levels in two different irreps have very similar energy values. For example with $\vec{P}=[0,0,1]$, the lowest level in $E_2$ and the lowest level in $B_1$ are both very close to the non-interacting $\big( \vec{k}_1= [0,\text{-}1,0],\,\vec{k}_2= [0,1,1] \big)$ level and correspond to $p_\mathsf{cm}$ values of $0.03934, 0.03950\, \mathrm{GeV}$ respectively. In this case, to the extent that $\delta_{2,4}(p_\mathsf{cm})$ do not change significantly over the small difference in $p_\mathsf{cm}$, and the functions in $\mathbf{U}$ are not rapidly varying over the corresponding range in $q^2$, we can solve the coupled system of two equations \ref{luescher} (for $E_2$ and $B_1$) for the two unknowns $\delta_2, \delta_4$. This is demonstrated in Figure \ref{coupled}(a) where the simultaneous solution of the two equations is seen to be reasonably close to the exact values. A similar extraction for two levels in $E_2, B_2$ is shown in Figure \ref{coupled}(b).
Several level pairings of this type can be identified and an estimate of a few discrete values of $\delta_2, \delta_4$ can be made as shown by the purple points in Figure \ref{toy_result}. Fitting these points with an effective-range parameterisation, or using some other method to interpolate between the discrete points, we obtain our desired estimates of $\delta_2, \delta_4$ for use in determination of $\delta_0$.
The extracted values of $\delta_0$ shown in Figure \ref{toy_result} correspond to solving equation \ref{luescher} for each energy level in the $A_1$ representation for $\vec{P}=[0,0,0],\,[0,0,1],\,[0,1,1],\,[1,1,1]$ with $\delta_2, \delta_4$ fixed at our best estimate from interpolation between the determined $\delta_2, \delta_4$ points. The error bars indicate the uncertainty in $\delta_0$ obtained by varying $\delta_2, \delta_4$ within an assumed $100\%$ uncertainty.
We observe that following such a procedure leads to a reasonable reproduction of the originally input toy-model phase-shifts. Note that we used only a single volume to obtain this result - using multiple volumes will further improve the determination.
\begin{figure*}
\includegraphics[width=0.95\textwidth]{P100_toy_E2B1_E2B2.pdf}
\caption{Simultaneous solution of two equations \ref{luescher} for $\delta_{2,4}$. Open black square shows exact values with uncertainty indicating the variation in $\delta_{2,4}^\mathrm{exact}$ over the momentum region between the two determined $p_\mathsf{cm}$.\\
(a) $\vec{P}=[0,0,1]$, $\mathfrak{n}=0$ in $E_2$ and $\mathfrak{n}=0$ in $B_1$. (b) $\vec{P}=[0,0,1]$, $\mathfrak{n}=1$ in $E_2$ and $\mathfrak{n}=0$ in $B_2$.
\label{coupled}}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.85\textwidth]{rev_eng_delta.pdf}
\caption{Phase shifts, $\delta_{0,2,4}(p_\mathsf{cm})$ extracted from $L=3.5\,\mathrm{fm}$ spectrum using the method described in the text. Uncertainty in $\delta_0$ indicates the effect of a conservative assumed uncertainty on $\delta_{2,4}$. Some points with very large uncertainty not shown. Toy model input phase shifts shown by the curves.
\label{toy_result}}
\end{figure*}
An alternative approach to dealing with the contribution of higher partial waves is to parameterise all $\delta_\ell(p_\mathsf{cm})$ one expects to contribute significantly in terms of a relatively small number of variable parameters. By performing a global fit to all energy levels simultaneously, varying the parameters, one can attempt to find a description of the finite-volume spectrum that is best in a least-squares sense.
Clearly in the case of this toy model, one could use the parameterisation given in equation \ref{effrange} and by varying parameters $a_0, r_0, a_2, a_4$ come to an \emph{exact} description of the spectrum. We do not present this trivial result here, but we will return to this ``global fitting" method in the next section when we consider the finite-volume spectrum obtained from lattice QCD computations.
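For reference, an effective-range parameterisation can be inverted for the phase shift directly. A sketch, assuming equation \ref{effrange} has the standard form $p_\mathsf{cm}^{2\ell+1} \cot\delta_\ell = 1/a_\ell + \tfrac{1}{2} r_\ell\, p_\mathsf{cm}^2$ (setting $r_\ell = 0$ recovers the pure scattering-length form):

```python
import math

def delta_ell(p, ell, a, r=0.0):
    # Assumed effective-range expansion:
    #   p^(2*ell+1) * cot(delta_ell) = 1/a + 0.5 * r * p^2.
    # Solving for delta_ell; atan keeps the result in (-pi/2, pi/2),
    # appropriate for a weakly repulsive phase shift.
    cot = (1.0 / a + 0.5 * r * p * p) / p ** (2 * ell + 1)
    return math.atan(1.0 / cot)
```

With a negative scattering length this gives a small negative phase shift at low momentum, as for the repulsive $I=2$ channel considered here.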
\clearpage
\subsection{Lattice QCD data}
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{cl c c c}
$L/a_s$ & \multicolumn{1}{c}{levels} & \multicolumn{1}{c}{$a_t p_\mathsf{cm}$} & $\delta_2\,/\,^\circ$ & $\delta_4\,/\,^\circ$\\
\hline \hline
\multirow{2}{*}{$24$} & $[0,0,0],\, E^+,\, \mathfrak{n}=1$ & $0.10766(23)(8)$ & \multirow{2}{*}{$-0.39(82)(67)$} & \multirow{2}{*}{$-0.17(32)(22)$} \\
& $[0,0,0],\, T_2^+,\, \mathfrak{n}=0$ & $0.10764(23)(8)$ & & \\
\hline
\multirow{2}{*}{$24$} & $[0,0,1],\, B_1,\, \mathfrak{n}=0$ & $0.08427(25)(11)$ & \multirow{2}{*}{$-0.40(47)(39)$} & \multirow{2}{*}{$-0.05(26)(16)$} \\
& $[0,0,1],\, E_2,\, \mathfrak{n}=0$ & $0.08418(25)(11)$ & & \\
\hline
\multirow{2}{*}{$24$} & $[0,0,1],\, B_2,\, \mathfrak{n}=0$ & $0.11412(29)(8)$ & \multirow{2}{*}{$-1.60(80)(64)$} & \multirow{2}{*}{$-0.78(69)(55)$} \\
& $[0,0,1],\, E_2,\, \mathfrak{n}=1$ & $0.11393(28)(8)$ & & \\
\hline
\multirow{2}{*}{$20$} & $[0,0,1],\, B_1,\, \mathfrak{n}=0$ & $0.10174(35)(9)$ & \multirow{2}{*}{$-1.59(54)(36)$} & \multirow{2}{*}{$-0.018(36)(17)$} \\
& $[0,0,1],\, E_2,\, \mathfrak{n}=0$ & $0.10131(37)(9)$ & & \\
\end{tabular}
\end{ruledtabular}
\caption{Levels with very similar $p_\mathsf{cm}$ values used in simultaneous solution of equations \ref{luescher}.
\label{tab:simult}}
\end{table*}
We now return to consideration of the finite-volume spectrum presented in Section \ref{sec:spectrum}. The first step in our ``level-by-level" approach is to solve for $\delta_{2,4}$ using pairs of simultaneous equations \ref{luescher}. Pairs of levels below inelastic threshold that can be used to yield estimates for $\delta_{2,4}$ are presented in Table \ref{tab:simult} and are displayed by the filled points in Figures \ref{delta2pointwise}, \ref{delta4pointwise}. $\delta_4$ is observed to be statistically compatible with zero throughout the elastic region. There are also levels, in irreps whose leading contribution is from $\ell=2$, that do not pair up and cannot be analysed using a simultaneous solution; these are considered in isolation, with the (small) role of $\delta_4$ estimated and included as a systematic error. They are shown by the open points in Figure \ref{delta2pointwise}.
Each of these $\delta_2$, $\delta_4$ data sets can be described well by a scattering length fit, $p_\mathsf{cm}^{2\ell + 1} \cot \delta_\ell(p_\mathsf{cm}) = 1/a_\ell$, and the resulting fit function is used to estimate the size of $\delta_{2,4}$ at any $p_\mathsf{cm}$ in the elastic region when determining $\delta_0$ values from $A_1$ irreps. As indicated in the previous subsection, a systematic error on $\delta_0$ due to imperfect knowledge of $\delta_{2,4}$ is assigned by
assuming a $100\%$ error on the estimated values of $\delta_{2,4}$.
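A scattering-length fit of this kind has a closed form: with the single parameter $1/a_\ell$, the least-squares estimate is the average of $p_\mathsf{cm}^{2\ell+1}\cot\delta_\ell$ over the determined points. A sketch (unweighted here for simplicity; the fits performed in this paper account for the correlated uncertainties of the points):

```python
import math

def fit_scattering_length(points, ell):
    # points: list of (p_cm, delta_ell) pairs in the elastic region.
    # Unweighted least squares for the one-parameter model
    #   p^(2*ell+1) * cot(delta) = 1/a
    # reduces to 1/a = mean of the left-hand side over the points.
    vals = [p ** (2 * ell + 1) / math.tan(d) for p, d in points]
    return len(vals) / sum(vals)  # the scattering length a_ell
```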
The resulting $\delta_0$ points are displayed in Figure \ref{delta0pointwise} where it is observed that the uncertainty from imperfect knowledge of $\delta_{2,4}$ is typically much smaller than the statistical uncertainty.
\begin{figure}[h]
\includegraphics[width=.5\textwidth]{delta_2_pointwise.pdf}
\caption{$\delta_2$ values in elastic scattering region determined from finite-volume spectra. Filled points determined by simultaneous solution of equations \ref{luescher} (innermost errorbar statistical uncertainty, outermost errorbar reflects combined statistical uncertainty and uncertainty in $a_t m_\pi$, $\xi$ with all errors added in quadrature). Open points determined from single levels, with effect of $\delta_4$ estimated (innermost errorbar statistical uncertainty, outermost errorbar reflects combined statistical uncertainty and uncertainty in $a_t m_\pi$, $\xi$ and $\delta_4$).
\label{delta2pointwise}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.5\textwidth]{delta_4_pointwise.pdf}
\caption{$\delta_4$ values in elastic scattering region determined from finite-volume spectra. Filled points determined by simultaneous solution of equations \ref{luescher} (innermost errorbar statistical uncertainty, outermost errorbar reflects combined statistical uncertainty and uncertainty in $a_t m_\pi$, $\xi$ with errors added in quadrature).
\label{delta4pointwise}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.5\textwidth]{delta_0_pointwise.pdf}
\caption{$\delta_0$ values in elastic scattering region determined from finite-volume spectra. Innermost errorbar is the statistical uncertainty, middle errorbar combined statistical uncertainty and uncertainty in ($a_t m_\pi$, $\xi$), outermost errorbar reflects total uncertainty including imperfect knowledge of $\delta_{2,4}$ (all errors added in quadrature). Some points with very large uncertainty not shown.
\label{delta0pointwise}}
\end{figure}
\pagebreak
We now consider the second approach described above where the $\delta_\ell(p_\mathsf{cm})$ are parameterised and by varying a small number of parameters a best description of all the finite volume spectra is obtained in a ``global fit". Our procedure is to minimise a $\chi^2$ with respect to the variable parameters in the parameterisation, which we denote collectively by $\{a_i\}$. The $\chi^2$ describes the similarity between the extracted finite-volume spectrum and the spectrum predicted by the parameterisation on the appropriate volumes,
\begin{widetext}
\begin{equation}
\chi^2(\{a_i\}) =
\sum_L \sum_{\substack{\vec{P} \Lambda \mathfrak{n} \\ \vec{P}' \Lambda'\mathfrak{n}'}}
\left[ p_\mathsf{cm}(L;\vec{P}\Lambda\mathfrak{n}) - p^{\det}_\mathsf{cm}(L; \vec{P}\Lambda\mathfrak{n}; \{a_i \}) \right] \mathbb{C}^{-1}(L; \vec{P} \Lambda \mathfrak{n}; \vec{P}' \Lambda' \mathfrak{n}') \left[ p_\mathsf{cm}(L;\vec{P}'\Lambda'\mathfrak{n}') - p^{\det}_\mathsf{cm}(L; \vec{P}'\Lambda'\mathfrak{n}'; \{a_i \}) \right]. \label{chisq}
\end{equation}
\end{widetext}
Here we have $p^{\det}_\mathsf{cm}(L; \vec{P}\Lambda\mathfrak{n}; \{a_i \})$ which is the particular solution of equation \ref{luescher}
which is nearest to $p_\mathsf{cm}(L;\vec{P}\Lambda\mathfrak{n})$ (with the parameters set to the particular values $\{a_i\}$). The data covariance, $\mathbb{C}$, accounts for the correlation between determined energies computed on the same lattice configurations; different volumes correspond to independently generated lattice ensembles and hence are not correlated.
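The $\chi^2$ of equation \ref{chisq} is a standard correlated quadratic form. A sketch of its evaluation (the predicted momenta $p^{\det}_\mathsf{cm}$ must be obtained separately by solving the quantisation condition for each level):

```python
import numpy as np

def chi2(p_cm, p_det, cov):
    # Correlated chi^2 of equation (chisq): resid^T C^{-1} resid, where
    # resid collects p_cm - p_cm^det over all (L, P, Lambda, n) and C is
    # the data covariance (block-diagonal across independent volumes).
    resid = np.asarray(p_cm, dtype=float) - np.asarray(p_det, dtype=float)
    return float(resid @ np.linalg.solve(np.asarray(cov, dtype=float), resid))
```

Minimising this over the parameters $\{a_i\}$, with parameter errors from $\Delta\chi^2 = 1$, gives the global fits quoted below.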
Statistical errors on the parameters, $\{a_i\}$, are determined by $\Delta \chi^2 = 1$. Errors from the imperfect knowledge of $a_t m_\pi$ and $\xi$ are estimated by repeating the $\chi^2$ minimisation varying the mass and anisotropy within their respective errors. We treat these as independent systematic errors, although they would naturally be reduced with increased numbers of gauge-field configurations at each lattice volume.
Fits with effective range and scattering length parameterisations (equation \ref{effrange}) were attempted. These fits never indicated the need to include significant strength in the $\ell=4$ wave. A successful fit to all energy levels with an effective range parameterisation of $\ell=0$ and scattering length in $\ell=2$ gives the following parameter values and correlations,
\nopagebreak
\vspace{0.25cm}
\begin{tabular}{rlr}
$a_{\ell=0}$ & $= (-4.45\pm0.18\pm0.06 )\, \cdot \,a_t$ & \multirow{3}{*}{ \;\;\;$\begin{bmatrix} 1 & 0.9 & 0.4 \\ & 1 & 0.2 \\ & & 1 \end{bmatrix}$ }\\
$r_{\ell=0}$ & $= (-3.7\pm1.8\pm0.7) \, \cdot \, a_t$ & \\
$a_{\ell=2}$ & $= (-1.20\pm0.29\pm0.17) \times 10^3 \, \cdot \, a_t^5$ & \\
\\
& $\chi^2/N_\mathrm{dof} = 116/46$, & \\
\end{tabular}
\vspace{0.25cm}
\\*
where the second set of uncertainties reflects variation of $a_t m_\pi$ and $\xi$ within their uncertainties. We see that the effective range in $\ell=0$ is barely significant and is strongly correlated with the scattering length. The degree of correlation between $\ell=0$ and $\ell=2$ is mild. Given the lack of significance for $r_0$, a fit with just a scattering length was attempted, yielding
\vspace{0.25cm}
\begin{tabular}{rlr}
$a_{\ell=0}$ & $= (-4.13\pm0.07\pm0.06 )\, \cdot \,a_t$ & \multirow{2}{*}{ \;\;\;$\begin{bmatrix} 1 & 0.5 \\ & 1 \end{bmatrix}$ }\\
$a_{\ell=2}$ & $= (-1.08\pm0.28\pm0.19) \times 10^3 \, \cdot \, a_t^5$ & \\
\\
& $\chi^2/N_\mathrm{dof} = 121/47$, & \\
\end{tabular}
\vspace{0.25cm}
\\*
where the quality of fit is insignificantly degraded. Clearly there is no need to invoke higher terms in the effective range expansion to describe the data. A fit to only those irreps where $\ell=2$ is leading yields $a_{\ell=2} = (-1.51\pm0.31\pm0.21) \times 10^3 \, \cdot \, a_t^5$ with $\chi^2/N_\mathrm{dof} = 31/16$, in reasonable agreement with the values obtained above. These various fits are shown in Figure \ref{delta0global}
along with the points determined using the ``level-by-level" approach described earlier where good agreement between the two methods is observed.
\begin{figure}
\includegraphics[width=.5\textwidth]{delta_0_global.pdf}
\includegraphics[width=.5\textwidth]{delta_2_global.pdf}
\caption{Upper: $\delta_0(p_\mathsf{cm})$ obtained through ``global fits" to finite-volume spectra using effective range and scattering length parameterisations.
Lower: $\delta_2(p_\mathsf{cm})$ obtained through ``global fits" to finite-volume spectra using a scattering length parameterisation.
Also shown for comparison, the ``level-by-level" analysis previously presented in Figures \ref{delta0pointwise}, \ref{delta2pointwise}.
\label{delta0global}}
\end{figure}
\pagebreak
\section{Results}\label{sec:results}
In Figure \ref{delta0summary} we show the $\pi\pi$ $\ell=0$ elastic scattering phase shift for $m_\pi = 396 \,\mathrm{MeV}$ as a function of center-of-momentum frame scattering momentum as extracted from finite-volume spectra. Discrete points correspond to a ``level-by-level" analysis in which the L\"uscher equation is solved for $\delta_0(p_\mathsf{cm})$ at each obtained $p_\mathsf{cm}$ with some justified assumptions made about the size of $\delta_{2,4}$ at this scattering momentum, and with the degree of uncertainty about the higher $\ell$ partial waves reflected in a systematic error. The curves are the result of ``global fits" to all the finite-volume energy levels assuming either an effective range parameterisation or just a scattering length, either of which are able to describe the energy spectrum well. The best estimates for the scattering length and effective range expressed in units of the pion mass on this lattice are
\begin{align}
m_\pi \cdot a_{\ell=0} &= -0.307 \pm 0.013 \nonumber \\
m_\pi \cdot r_{\ell=0} &= -0.26 \pm 0.13, \nonumber
\end{align}
but there is a very high degree of correlation ($0.9$) between these values, and a pure scattering length of $m_\pi \cdot a_{\ell=0} = -0.285 \pm 0.006$ can describe the data just as well.
Figure \ref{delta2summary} shows the $\pi\pi$ $\ell=2$ elastic scattering phase shift which can be well described by a scattering length of $m_\pi^5 \cdot a_{\ell=2} = (-1.89 \pm 0.53)\times 10^{-6}$. Statistically significant signals for elastic scattering in the $\ell=4$ wave were not observed and we estimate that $ m_\pi^9 \cdot |a_{\ell=4}| \lesssim 1 \times 10^{-4}$.
\begin{figure*}
\includegraphics[width=0.8\textwidth]{delta_0_summary.pdf}
\caption{Extracted $I=2$ $\pi\pi$ elastic scattering phase-shift in $S$-wave, $\delta_0(p_\mathsf{cm})$, as obtained from analysis of finite-volume spectra with $m_\pi = 396\,\mathrm{MeV}$. Center-of-momentum frame scattering momentum expressed in units of the temporal lattice spacing. The momentum region plotted is entirely elastic, with the $4\pi$ threshold opening at $(a_t p_\mathsf{cm})^2 = 0.014$. Colored points correspond to an analysis treating each energy level independently. The innermost errorbar is the statistical uncertainty, the middle errorbar reflects combined statistical uncertainty and uncertainty in $(a_t m_\pi$, $\xi)$ and the outermost errorbar shows the total uncertainty including imperfect knowledge of $\delta_{2,4}$ (all errors added in quadrature). Curves indicate a global analysis of all energy levels describing the phase-shift by a scattering length or an effective range parameterisation.
\label{delta0summary}}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.8\textwidth]{delta_2_summary.pdf}
\caption{Extracted $I=2$ $\pi\pi$ elastic scattering phase-shift in $D$-wave, $\delta_2(p_\mathsf{cm})$, as obtained from analysis of finite-volume spectra with $m_\pi = 396\,\mathrm{MeV}$. Center-of-momentum frame scattering momentum expressed in units of the temporal lattice spacing. Momentum region plotted is entirely elastic, with the $4\pi$ threshold opening at $(a_t p_\mathsf{cm})^2 = 0.014$. Colored points correspond to an analysis treating energy regions locally as described earlier in the manuscript. The inner errorbar is the statistical uncertainty, and the outer errorbar reflects the combined statistical uncertainty and uncertainty in $a_t m_\pi$, $\xi$ and the value of $\delta_4$ (errors added in quadrature). Curves indicate a global analysis of all energy levels describing the phase-shift by a scattering length.
\label{delta2summary}}
\end{figure*}
We note here that the same $L/a_s = 16,20,24$ lattice ensembles (plus a larger $L/a_s = 32$ ensemble) were used by NPLQCD to extract $\delta_0(p_\mathsf{cm})$ in \cite{Beane:2011sc}. They considered many of the same frames, but limited themselves to the ``scalar" irreps ($A_1^+$ for $\vec{P}=[0,0,0]$ and $A_1$ for $\vec{P} \neq [0,0,0]$), and they did not use a variational basis of operators. A comparison of results is shown in Figure \ref{NPLQCD} where low-lying levels are observed to have energies (and hence phase-shifts) that agree well, but where discrepancies appear at higher energies. The most significantly discrepant points (at $(a_t p_\mathsf{cm})^2 \sim 0.0017$ and $\sim 0.008$) in the NPLQCD analysis correspond to levels which are either nearly degenerate with another level (the $\vec{P}=[0,1,1]$ ground state\footnote{see Figure \ref{P110}}) or are highly excited ($\vec{P}=[0,0,0]$ second excited level). Since in our analysis we see no such discrepancies it may be that the variational method more reliably determines energies in cases where orthogonality of states is important.
\begin{figure*}
\includegraphics[width=\textwidth]{NPLQCD.pdf}
\caption{Our $\delta_0$ extraction (colored points) compared with those of NPLQCD (grey points) over the elastic region (left) and zoomed in to small scattering momentum (right).
\label{NPLQCD}}
\end{figure*}
In our somewhat limited previous analysis of $\pi\pi$ $I=2$ scattering \cite{Dudek:2010ew}, we considered three pion masses and observed no significant dependence of the energy variation of $\delta_0$ on the pion mass, which appeared to agree rather well with the experimental data. In Figure \ref{expt} we show our $\delta_{0,2}(p_\mathsf{cm})$ obtained at $m_\pi = 396\,\mathrm{MeV}$ along with the experimental data taken from \cite{Hoogland:1977kt,Cohen:1973yx,Zieminski:1974ex,Durusoy:1973aj}. Our data points have the absolute energy scale of the scattering momentum set using the $\Omega$-baryon mass procedure suggested in \cite{Lin:2008pr}, $p_\mathsf{cm} = (a_t p_\mathsf{cm}) \frac{m_\Omega^\mathrm{phys}}{(a_t m_\Omega)}$ with $(a_t m_\Omega) = 0.2951(22)$ on these lattices \cite{Edwards:2011jj}. Also shown is the $\pi\pi$ $I=2$ $\ell=0$ phase-shift obtained using experimental information in multiple channels from a constrained analysis provided by the Roy equations, which manifestly implement crossing symmetry and the chiral behavior of the scattering amplitudes \cite{Roy:1971tc,Colangelo:2001df}.
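The scale-setting relation above is a one-line conversion. A sketch (the physical $\Omega$ mass value of $1672.45\,\mathrm{MeV}$ is an assumed input, not quoted in this paper):

```python
def to_physical_momentum(at_p, at_m_omega=0.2951, m_omega_phys_mev=1672.45):
    # Omega-baryon scale setting: p_cm = (a_t p_cm) * m_Omega^phys / (a_t m_Omega).
    # Returns the scattering momentum in MeV.
    return at_p * m_omega_phys_mev / at_m_omega
```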
\begin{figure*}
\includegraphics[width=0.8\textwidth]{delta_summary_phys.pdf}
\caption{Extracted $I=2$ $\pi\pi$ elastic $S$-wave(red), $D$-wave(blue) scattering phase shift (for $m_\pi = 396\,\mathrm{MeV}$, all errors combined). Shown in grey the experimental data from \cite{Hoogland:1977kt,Cohen:1973yx,Zieminski:1974ex,Durusoy:1973aj} and the constrained analysis using Roy equations \cite{Roy:1971tc,Colangelo:2001df} (black line, grey band). For the heavy pion mass the entire region plotted is elastic while for the experimental data only $p_\mathsf{cm}^2 < 0.058\,\mathrm{GeV}^2$ is elastic.
\label{expt}}
\end{figure*}
\section{Summary}\label{sec:summary}
A crucial step in the extraction of hadronic resonance properties is the determination of their resonant scattering behavior. Within a Euclidean quantum field theory, the relevant elastic scattering matrix elements can be inferred indirectly through a systematic study of the spectrum within a finite volume.
In this paper, we extend our previous study~\cite{Dudek:2010ew} determining the $\ell=0$ and $\ell=2$ wave phase shifts in the $\pi\pi$ $I=2$ system, investigating more thoroughly the effects of the operator basis and the finite temporal extent, as well as the role of higher partial waves.
With access to only modest lattice volumes, in order to map out the energy dependence with a significant number of points, we determined the excited state spectrum in moving frames. This was achieved by constructing a basis of $\pi\pi$ operators transforming irreducibly under the reduced symmetry of a moving particle in a cubic box. Variational analysis of matrices of correlators built in this operator basis leads to extraction of excited state energies with high precision.
The construction of a basis of operators with suitable quantum numbers corresponding to the $\pi\pi$ system in-flight
is a significant extension beyond the previous work, and has allowed for the determination of the phase shifts at many discrete energies below the $4\pi$ inelastic threshold. This increased operator basis, covering many irreducible representations, allows for more constraints on the contributions of higher partial waves.
However, the weakness of $\pi\pi$ scattering in the isospin-2 channel presents a particular challenge to extraction from finite-volume methods. The changes in energy with respect to non-interacting pions determine the phase-shift and since these are small it is important to take care over systematic effects that may be small in absolute terms but which could be large on the scale of the energy shifts.
We reduced the contribution of excited pion-like meson states to our $\pi\pi$ correlators by using optimised pion operators. These operators are constructed from a linear combination of composite QCD operators with pion quantum numbers and their important property is that they relax to the ground state faster than any single simple operator construction. The reduced contribution of $\pi\pi^\star$ states to our correlators allows analysis at earlier Euclidean times.
At larger Euclidean times, the effect of the finite temporal extent of the lattice can be observed, distorting the time-dependence from the desired sum of exponentials corresponding to discrete state energies. We have explicitly accounted for the largest unwanted finite-$T$ effects leaving sub-leading effects which are somewhat smaller than the statistical uncertainty.
The reduced symmetry of a cubic box at rest is such that $\delta_0$ always appears with some sensitivity to $\delta_4$, but the very small value of $\delta_4$ throughout the elastic region is such that the rest-frame spectrum is mostly independent of $\delta_4$. On the other hand, the symmetry of a cubic box is further reduced when placed in-flight and $\delta_0$ extractions become sensitive to the value of $\delta_2$, which is not necessarily negligibly small. We investigated the effects that non-zero values of $\delta_{2,4}$ can have on the finite-volume spectrum using a toy model showing that some energy levels can show significant sensitivity.
We attempted to account for the effects of higher partial waves on the extraction of $\delta_{0,2}$, finding that they are generally small (except in a limited number of sensitive cases identified in the toy model analysis). We associated a systematic error with our imperfect knowledge of them that was found to be always smaller than the statistical uncertainty. We found that the finite volume energies could be well described by a scattering length parameterization in both $\ell=0$ and $\ell=2$ over the elastic region. The fit could be moderately improved by adding an effective range in $\ell=0$, albeit with a significant correlation between the effective range and scattering length. The fits did not indicate the need for significant strength in the $\ell=4$ wave.
The calculations reported in this paper were performed at only a single pion mass of 396 MeV. While they demonstrate that the procedure outlined can indeed determine scattering phase shifts with a high degree of confidence, the obtained results cannot be directly compared with experimental data. Future calculations using lighter pion masses will be required, as will eventual consideration of other systematic effects such as the lattice spacing dependence. The results presented in this paper supersede those presented in \cite{Dudek:2010ew} which considered only rest-frame correlators using un-optimised pion operators and where finite-$T$ effects were not fully accounted for.
The techniques developed in this calculation are a necessary ingredient to future investigations of resonances in hadron-hadron scattering that arise from the strong interactions. At unphysical pion masses, the phase space available for decays can be small as seen in studies of the $I=1$ $\pi\pi$ sector~\cite{Aoki:2007rd, Feng:2010es, Lang:2011mn, Aoki:2011yj} giving rise to a rapid variation of phase-shift with energy. Thus, the formalism and construction of operators in-flight developed in this work will be necessary to compute a sufficient number of energies within the resonance region to allow for a reliable determination of resonance parameters. To compute these energies, the operator basis used in the variational method will feature both single and multi-hadron constructions. Annihilation diagrams will arise, which as shown in the isoscalar meson sector~\cite{Dudek:2011tt}, can be efficiently constructed using the ``distillation'' method.
\begin{acknowledgments}
We thank our colleagues within the Hadron Spectrum Collaboration. {\tt Chroma}~\cite{Edwards:2004sx} and {\tt QUDA}~\cite{Clark:2009wm,Babich:2010mu} were used to perform this work on clusters at Jefferson Laboratory under the USQCD Initiative and the LQCD ARRA project. Gauge configurations were generated using resources awarded from the U.S. Department of Energy INCITE program at Oak Ridge National Lab, the NSF Teragrid at the Texas Advanced Computer Center and the Pittsburgh Supercomputer Center, as well as at Jefferson Lab. RGE and JJD acknowledge support from U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Laboratory. JJD also acknowledges the support of the Jeffress Memorial Fund and the U.S. Department of Energy Early Career award contract DE-SC0006765. CET acknowledges support from a Marie Curie International Incoming Fellowship, PIIF-GA-2010-273320, within the 7th European Community Framework Programme.
\end{acknowledgments}
\section{Introduction}
Sampling zero-one constrained contingency tables finds its applications
in combinatorics \cite{Huber06}, statistics of social networks
\cite{chen2007, Snijders1991},
and regulatory networks \cite{Dinwoodie2008}.
In 2005, Chen et al.\ introduced a sequential importance sampling (SIS) procedure
to analyze zero-one two-way tables with given fixed marginal sums (row and
column sums) via the conditional Poisson (CP)
distribution \cite{chen2005}.
It proceeds by simply sampling cell entries of the zero-one contingency
table sequentially for each row
such that the final distribution approximates the target distribution. This
method will terminate at the last column and sample independently and
identically distributed (iid) tables from the
proposal distribution. Thus the SIS procedure does not require expensive or
prohibitive pre-computations, as is the case of computing Markov
bases for the Monte Carlo Markov Chain (MCMC)
approach. Also, when attempting to sample a single table,
if there is no rejection, the SIS procedure is guaranteed to sample a
table from the distribution, where in an
MCMC approach the chain may require a long time to run in order to
satisfy the independent condition.
In 2007, Chen extended their SIS procedure to
sample zero-one two-way tables with given fixed row and
column sums with structural zeros, i.e., some cells are
constrained to be zero or one \cite{chen2007}.
In this paper we extend the results of
\cite{chen2005,chen2007} to zero-one
multi-way ($d$-way, $d \geq 2$) contingency tables under the no
$d$-way interaction model, i.e., with fixed $d - 1$ marginal sums.
This paper is organized as follows: In Section \ref{sec:sis} we
outline the basics of the SIS procedure. In Section \ref{3dim} we focus on
the SIS procedure with CP distribution on three-way tables under no
three-way interaction model. This model is particularly important
since if we are able to count or estimate the number of tables under this
model then this is equivalent to estimating the number of {\em lattice
points} in any {\em polytope} \cite{deloera06}. This means that if
we can estimate the number of three-way zero-one tables under this model,
then we can estimate the number of any zero-one tables by using De Loera
and Onn's bijection mapping.
Let $\ve X=(X_{i j k})$ of size
$(m, n, l)$, where $m, n, l \in {\mathbb N}$ and ${\mathbb N} = \{1, 2,
\ldots\}$, be a table of counts whose entries are
independent Poisson random variables with canonical parameters
$\{\theta_{ijk}\}$. Here $X_{ijk} \in \{0, 1\}$. Consider the generalized linear model,
\begin{eqnarray}
\label{eq:RCmodel}
\theta_{ijk} = \lambda + \lambda^M_i + \lambda^N_j + \lambda^L_k +
\lambda_{ij}^{MN} + \lambda_{ik}^{ML} +\lambda_{jk}^{NL}\,
\end{eqnarray}
for $i=1,\ldots,m$, $j=1,\ldots,n$, and $k = 1, \ldots , l$ where $M$,
$N$, and $L$ denote the
nominal-scale factors. This model is called the {\em no three-way
interaction model}.
Notice that the sufficient statistics under the
model in \eqref{eq:RCmodel} are the {\em two-way marginals}, that is:
\begin{equation}\label{tableEqu}
\begin{array}{ll}
X_{+jk} := \sum _{i=1}^m X_{i j k} \mbox{, } (j=1,2,\ldots,n\mbox{, }
k=1,2,\ldots,l),\\
X_{i+k} := \sum _{j=1}^n X_{i j k}\mbox{, } (i=1,2,\ldots,m\mbox{, }
k=1,2,\ldots,l),\\
X_{ij+} := \sum _{k=1}^l X_{i j k}\mbox{, } (i=1,2,\ldots,m\mbox{, }
j=1,2,\ldots,n).\\
\end{array}
\end{equation}
Hence, the
conditional distribution of the table counts given the margins is the
same regardless of the values of the parameters in the model.
In Section \ref{4dim} we generalize the SIS procedure on zero-one
two-way tables in \cite{chen2005,chen2007} to zero-one
multi-way ($d$-way, $d \geq 2$) contingency tables under the no
$d$-way interaction model, i.e., with fixed $d - 1$ marginal sums.
In Sections \ref{comp} and \ref{sam} we show some simulation results with our
software, which is available at \url{http://www.polytopes.net/code/CP}. Finally, we end with a discussion.
\section{Sequential importance sampling}\label{sec:sis}
Let $\Sigma$ be the set of all tables satisfying marginal
conditions. In this paper we assume that $\Sigma \not = \emptyset$.
Let $P({\bf X})$ for any ${\bf X}
\in \Sigma$ be the uniform distribution over $\Sigma$, so $p({\bf X}) =
1/|\Sigma|$. Let $q(\cdot)$ be a trial distribution such that $q({\bf X}) >
0$ for all ${\bf X} \in \Sigma$. Then we have
\[
{\mathbb E} \left[\frac{1}{q({\bf X})}\right] = \sum_{{\bf X} \in \Sigma} \frac{1}{q({\bf X})} q({\bf X}) = |\Sigma|.
\]
Thus we can estimate $|\Sigma |$ by
\[
\widehat{|\Sigma|} = \frac{1}{N} \sum_{i = 1}^N \frac{1}{q({\bf X_i})},
\]
where ${\bf X_1}, \ldots , {\bf X_N}$ are tables drawn iid from $q({\bf X})$.
Here the proposal distribution $q({\bf X})$ is the (approximate) distribution from which tables are sampled via the SIS procedure.
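In code, the estimator $\widehat{|\Sigma|}$ is simply the sample mean of the importance weights. A sketch (the callable `q` is assumed to return the proposal probability of a sampled table):

```python
def estimate_sigma(tables, q):
    # Importance-sampling estimate of |Sigma|:
    #   (1/N) * sum_i 1/q(X_i),
    # with X_1, ..., X_N drawn iid from the proposal distribution q.
    return sum(1.0 / q(X) for X in tables) / len(tables)
```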
We vectorize the table as ${\bf X} = (x_1, \ldots , x_t)$, and by the
multiplication rule we have
\[
q({\bf X} = (x_1, \cdots , x_t)) = q(x_1)q(x_2|x_1)q(x_3|x_2, x_1)\cdots
q(x_t|x_{t-1}, \ldots , x_1).
\]
Since we sample each cell count of a table from an interval we can
easily compute $q(x_i|x_{i-1}, \ldots , x_1)$ for $i = 2, 3, \ldots ,
t$.
When we have rejections, this means that we are sampling tables from a
bigger set $\Sigma^*$ such that $\Sigma \subset \Sigma^*$. In this
case, as long as the conditional probability $q(x_i|x_{i-1}, \ldots ,
x_1)$ for $i = 2, 3, \ldots$ and $q(x_1)$ are normalized, $q({\bf X})$
is normalized over $\Sigma^*$ since
\[
\begin{array}{rcl}
\sum_{{\bf X} \in \Sigma^*} q({\bf X}) &=& \sum_{x_1, \ldots, x_t} q(x_1) q(x_2|x_1)q(x_3|x_2, x_1)\cdots
q(x_t|x_{t-1}, \ldots , x_1)\\
&= & \sum_{x_1} q(x_1) \left[ \sum_{x_2}
q(x_2|x_1) \left[ \cdots
\left[ \sum_{x_t} q(x_t|x_{t-1}, \ldots , x_1)\right]
\right]\right]\\
& = & 1.\\
\end{array}
\]
Thus we have
\[
{\mathbb E} \left[\frac{\mathbb{I}_{{\bf X} \in \Sigma}}{q({\bf X})}\right] = \sum_{{\bf
X} \in \Sigma^*} \frac{\mathbb{I}_{{\bf X} \in \Sigma}}{q({\bf X})}
q({\bf X}) = |\Sigma|,
\]
where
$\mathbb{I}_{{\bf X} \in \Sigma}$ is an indicator function for the set
$\Sigma$. This estimator is unbiased, and by the law of large numbers it converges to $|\Sigma|$ as $N \to \infty$.
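With rejections the only change is the indicator: tables drawn from $\Sigma^*$ that violate a marginal constraint contribute zero weight. A sketch, with the same conventions as above and `in_sigma` a constraint check supplied by the user:

```python
def estimate_sigma_with_rejection(tables, q, in_sigma):
    # Tables are drawn from the larger set Sigma^*; only those passing
    # the constraint check carry their importance weight 1/q(X),
    # the rest contribute zero (the indicator in the estimator).
    weights = [1.0 / q(X) if in_sigma(X) else 0.0 for X in tables]
    return sum(weights) / len(weights)
```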
\section{Sampling from the conditional Poisson distribution}\label{3dim}
Let
\[
Z = (Z_1, \ldots ,Z_l)
\]
be independent Bernoulli trials with probability of successes
$p = ( p_1, \ldots , p_l)$. Then the random variable
\[
S_Z = Z_1 +\cdots +Z_l
\]
follows a Poisson--binomial distribution.
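A direct sketch of this setup, sampling the Bernoulli vector $Z$ and its sum $S_Z$:

```python
import random

def sample_bernoulli_sum(p, rng=random):
    # Independent Bernoulli trials Z_k with success probabilities p_k;
    # S_Z = Z_1 + ... + Z_l has a Poisson-binomial distribution.
    z = [1 if rng.random() < pk else 0 for pk in p]
    return z, sum(z)
```

The conditional Poisson distribution below arises by conditioning this vector on the event $S_Z = l_0$.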
We say that the column of entries contributing to the marginal $X_{i_0 j_0 +}$ of $\ve X$ is the
$(i_0, j_0)$th column of $\ve X$ (equivalently, the $(i_0, k_0)$th column for the
marginal $X_{i_0 + k_0}$ and the $(j_0, k_0)$th column for the marginal $X_{+j_0 k_0}$).
Consider the $(i_0, j_0)$th
column of the table $\ve X$ for some $i_0 \in \{1, \ldots, m\}$, $j_0
\in \{1, \ldots , n\}$ with the marginal $l_0 = X_{i_0j_0+}$. Also we
let $r_k = X_{i_0+k}$ and $c_k = X_{+j_0 k}$.
Now let $w_k = p_k/(1 - p_k)$ where $p_k \in (0, \, 1)$.
Then,
\begin{equation}\label{dist}
P(Z_1 = z_1, \ldots ,Z_l = z_l|S_Z = l_0) \propto \prod_{k = 1}^l w_k^{z_k}.
\end{equation}
Thus, to sample a zero-one table with fixed marginals $X_{+jk}$ and
$X_{i+k}$ for $i=1,2,\ldots ,m$,
$j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$, for each marginal
$X_{i_0j_0+}$ with $i_0 \in \{1, \ldots, m\}$ and $j_0 \in \{1,
\ldots , n\}$ (one can proceed analogously via each $X_{i_0+k_0}$ or $X_{+j_0k_0}$
instead), one decides which entries are ones
(there are ${l \choose l_0}$ possible choices) using
the conditional Poisson distribution above. We sample the $l_0$
cell entries equal to one in the $(i_0,
j_0)$th column along the $L$ factor with the
following probability:
Let $A_k$, for $k = 0, 1, \ldots , l_0$, be the set of entries selected
after $k$ draws.
Thus $A_0 = \emptyset$, and $A_{l_0}$ is the final sample that we obtain. At the
$k$th step of the drafting sampling $(k = 1, \ldots , l_0)$, a unit $j \in A^c_{k-1}$
is selected into the sample with probability
\[
P(j, A^c_{k-1}) = \frac{w_j R(l_0 - k, A^c_{k-1} - j)}{(l_0 - k +
1)R(l_0-k+1, A^c_{k-1})},
\]
where
\[
R(s, A) = \sum_{B \subset A, |B| = s} \left(\prod_{i \in B}w_i \right).
\]
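The drafting procedure is directly implementable when $l$ is small; in the illustrative Python sketch below (function names ours; $R(s, A)$ is computed by brute-force enumeration of subsets), summing the two possible selection orders recovers the conditional Poisson probability $w_0 w_1 / R(2, A)$ of the unordered sample $\{0, 1\}$.

```python
import random
from itertools import combinations
from math import prod

def R(s, ws):
    """R(s, A): sum over all size-s subsets B of A of the product of weights in B."""
    return sum(prod(c) for c in combinations(ws, s))

def draft_probs(w, remaining, k, l0):
    """Selection probabilities P(j, A^c_{k-1}) at step k of drafting sampling."""
    ws_all = [w[i] for i in remaining]
    denom = (l0 - k + 1) * R(l0 - k + 1, ws_all)
    return [w[j] * R(l0 - k, [w[i] for i in remaining if i != j]) / denom
            for j in remaining]

def cp_draft_sample(w, l0, rng):
    """Draw an unordered sample of size l0 from the conditional Poisson
    distribution with weights w, by drafting sampling."""
    remaining, sample = list(range(len(w))), set()
    for k in range(1, l0 + 1):
        j = rng.choices(remaining, weights=draft_probs(w, remaining, k, l0))[0]
        sample.add(j)
        remaining.remove(j)
    return sample

def order_prob(seq, w, l0):
    """Probability that drafting sampling selects the units in this exact order."""
    remaining, p = list(range(len(w))), 1.0
    for k, j in enumerate(seq, start=1):
        p *= draft_probs(w, remaining, k, l0)[remaining.index(j)]
        remaining.remove(j)
    return p

w = [1.0, 2.0, 3.0]
# summing over both orders recovers the CP probability w_0 * w_1 / R(2, w) = 2/11
print(order_prob([0, 1], w, 2) + order_prob([1, 0], w, 2), w[0] * w[1] / R(2, w))
```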
For sampling a zero-one three-way table $\ve X$ with given two-way marginals
$X_{ij+}$, $X_{i+k}$, and $X_{+jk}$ for $i=1,2,\ldots ,m\mbox{, }
j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$, we sample for the $(i_0, j_0)$th
column of the table $\ve X$ for each $i_0 \in \{1, \ldots , m\}$, $j_0
\in \{1, \ldots , n\}$. We set
\begin{equation}\label{pr}
p_k := \frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k)(m - c_k)}.
\end{equation}
Thus we have
\begin{equation}\label{wt}
w_k = \frac{r_k \cdot c_k}{(n - r_k)(m - c_k)}.
\end{equation}
\begin{rem}
We assume that we are not in the trivial cases; that is, $1 \leq r_k
\leq n-1$ and $1 \leq c_k \leq m - 1$.
\end{rem}
\begin{thm}\label{main}
For the uniform distribution over all $m\times n \times l$ zero-one tables with given marginals $r_k = X_{i_0+k}, \, c_k = X_{+j_0k}$ for
$k = 1, 2, \ldots, l$, and a fixed marginal for
the factor $L$, $l_0$, the marginal distribution of the $(i_0, j_0)$th
column is the
same as the conditional distribution of $Z$ defined by \eqref{dist} given
$S_Z = l_0$ with
\[
p_k := \frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k)(m - c_k)}.
\]
\end{thm}
\begin{proof}
We start by giving an algorithm for generating tables uniformly
from all $m \times n \times l$ zero-one tables with given
marginals $r_k, \, c_k$ for
$k = 1, 2, \ldots, l$, and a fixed marginal for
the factor $L$, $l_0$.
\begin{enumerate}
\item\label{step1} For $k = 1, \ldots , l$ consider the $k$th layer of $m \times n$
tables. We randomly choose $r_k$ positions in the $(i_0, k)$th column
and $c_k$ positions in the $(j_0, k)$th column,
and put $1$'s in those positions. The choices of positions are independent across
different layers.
\item\label{step2} Accept those tables with given column sum $l_0$.
\end{enumerate}
It is easy to see that tables generated by this algorithm are uniformly
distributed over all $m\times n \times l$ zero-one tables with given
marginals $r_k, \, c_k$ for
$k = 1, 2, \ldots, l$, and a fixed marginal for
the factor $L$, $l_0$ for the $(i_0, j_0)$th column of the table $\ve X$. We can derive the marginal distribution
of the $(i_0, j_0)$th column of $\ve X$ based on this algorithm. At Step \ref{step1}, we choose
the cell at position $(i_0, \, j_0, \, 1)$ to put $1$ in
with the probability:
\[
\frac{{n - 1\choose r_1 - 1}{m - 1 \choose c_1 - 1}}{{n - 1
\choose r_1 - 1}{m - 1 \choose c_1 - 1} + {n - 1 \choose r_1}{m -
1 \choose c_1}} = \frac{r_1 \cdot c_1}{r_1 \cdot c_1 + (n - r_1)(m - c_1)}.
\]
Because the choices of positions are independent across different layers,
after Step \ref{step1} the marginal distribution of the $(i_0, j_0)$th column is the same as
the distribution of $Z$ defined by \eqref{dist} with
\[
p_k = \frac{{n - 1\choose r_k - 1}{m - 1 \choose c_k - 1}}{{n - 1
\choose r_k - 1}{m - 1 \choose c_k - 1} + {n - 1 \choose r_k}{m -
1 \choose c_k}} = \frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k)(m - c_k)}.
\]
Step \ref{step2} rejects the
tables whose $(i_0, j_0)$th column sum is not $l_0$. This implies that after Step \ref{step2},
the marginal distribution of the $(i_0, j_0)$th column is the same as the conditional
distribution of $Z$ defined by \eqref{dist} with
\[
p_k =\frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k)(m - c_k)}.
\]
\end{proof}
\begin{rem}
The sequential importance sampling via CP for sampling a two-way zero-one
table defined in \cite{chen2005} is a special case of our SIS procedure:
$p_k$ defined in \eqref{pr} and the weights defined in
\eqref{wt} reduce to the $p_k$ and weights for two-way zero-one contingency tables
defined in \cite{chen2005}. Note that when we
consider two-way zero-one contingency tables we have
$c_k = 1 $ for all $k = 1, \ldots, l$ and for all $j_0 = 1, \ldots, n$ (or
$r_k = 1 $ for all $ k = 1, \ldots, l$ and for all $i_0 = 1, \ldots,
m$), and $m = 2$ (or $n = 2$, respectively).
Therefore when we consider the two-way zero-one tables we get
\[
p_k = \frac{r_k}{n}, \, w_k = \frac{r_k}{n - r_k},
\]
or respectively
\[
p_k = \frac{c_k}{m}, \, w_k = \frac{c_k}{m - c_k}.
\]
\end{rem}
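This special case can be exercised end to end on a tiny instance. The Python sketch below is illustrative (the helper names `sample_two_way` and `estimate` are ours, not the paper's {\tt R} code); it samples a zero-one matrix column by column from the CP distribution with weights $w_i = r_i/(n' - r_i)$, where $r_i$ is the remaining row sum and $n'$ the number of unfilled columns. For $3\times 3$ matrices with all row and column sums equal to $1$ the exact count is $3! = 6$, and every draw carries weight exactly $6$, so $cv^2 = 0$.

```python
import random
from itertools import combinations
from math import prod

def sample_two_way(row_sums, col_sums, rng):
    """One SIS draw of a zero-one matrix with the given margins; columns are
    sampled left to right from the CP distribution with weights
    w_i = r_i / (cols_left - r_i). Returns (matrix, q)."""
    m, n = len(row_sums), len(col_sums)
    r, q = list(row_sums), 1.0
    mat = [[0] * n for _ in range(m)]
    for j in range(n):
        cols_left = n - j
        if cols_left == 1:
            rows = [i for i in range(m) if r[i] == 1]   # last column is forced
            if len(rows) != col_sums[j]:
                return None, 0.0                        # infeasible draw
        else:
            forced = [i for i in range(m) if r[i] == cols_left]  # trivial ones
            free = [i for i in range(m) if 0 < r[i] < cols_left]
            need = col_sums[j] - len(forced)
            if need < 0 or need > len(free):
                return None, 0.0
            w = {i: r[i] / (cols_left - r[i]) for i in free}
            subsets = list(combinations(free, need))
            weights = [prod(w[i] for i in s) for s in subsets]
            s = rng.choices(subsets, weights=weights)[0]
            q *= prod(w[i] for i in s) / sum(weights)   # CP probability
            rows = forced + list(s)
        for i in rows:
            mat[i][j] = 1
            r[i] -= 1
    return mat, q

def estimate(row_sums, col_sums, n_samples, seed=0):
    """Average of the weights 1/q over n_samples SIS draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        _, q = sample_two_way(row_sums, col_sums, rng)
        if q > 0:
            total += 1.0 / q
    return total / n_samples

print(estimate([1, 1, 1], [1, 1, 1], 2000))   # exact count: 3! = 6
```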
During the intermediate steps of our SIS procedure via CP on a three-way zero-one table
there will be
some columns for the $L$ factor with trivial cases. We then
have to treat them as structural zeros in the $k$th slice for some $k
\in \{1, \ldots , l\}$ and use the
probabilities for the distribution in \eqref{dist} as follows:
\begin{equation}\label{pr3}
p_k := \frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k - g_k^{r_0})(m - c_k
- g_k^{c_0})},
\end{equation}
where $g_k^{r_0}$ is the number of structural zeros in the $(r_0, k)$th
column and $g_k^{c_0}$ is the number of structural zeros in the $(c_0,
k)$th
column.
Thus we have weights:
\begin{equation}\label{wt3}
w_k = \frac{r_k \cdot c_k}{(n - r_k - g_k^{r_0})(m - c_k - g_k^{c_0})}.
\end{equation}
\begin{thm}\label{main_str1}
For the uniform distribution over all $m\times n \times l$
zero-one tables with structural zeros with given marginals $r_k = X_{i_0+k}, \, c_k = X_{+j_0k}$ for
$k = 1, 2, \ldots, l$, and a fixed marginal for
the factor $L$, $l_0$, the marginal distribution of the $(i_0, j_0)$th
column is the
same as the conditional distribution of $Z$ defined by \eqref{dist} given
$S_Z = l_0$ with
\[
p_k := \frac{r_k \cdot c_k}{r_k \cdot c_k + (n - r_k - g_k^{r_0})(m - c_k
- g_k^{c_0})},
\]
where $g_k^{r_0}$ is the number of structural zeros in the $(r_0, k)$th
column and $g_k^{c_0}$ is the number of structural zeros in the $(c_0,
k)$th column.
\end{thm}
\begin{proof}
The proof is similar to the proof of Theorem \ref{main}; we just replace the
probability $p_k$ with
\[
p_k = \frac{{n - 1 - g_k^{r_0}\choose r_k - 1}{m - 1 - g_k^{c_0}\choose c_k - 1}}{{n - 1
- g_k^{r_0} \choose r_k - 1 }{m - 1 - g_k^{c_0} \choose c_k - 1 } + {n - 1 - g_k^{r_0}\choose r_k}{m -
1 - g_k^{c_0}\choose c_k}} = \frac{r_k \cdot c_k}{r_k \cdot c_k
+ (n - r_k - g_k^{r_0})(m - c_k - g_k^{c_0})}.
\]
\end{proof}
\begin{rem}
The sequential importance sampling via CP for sampling a two-way zero-one
table with structural zeros defined in Theorem 1 in \cite{chen2007} is a special case of our SIS:
$p_k$ defined in \eqref{pr3} and the weights defined in
\eqref{wt3} reduce to the $p_k$ and weights for two-way zero-one contingency tables
defined in \cite{chen2007}. Note that when we
consider two-way zero-one contingency tables we have
$c_k = 1 $ for all $k = 1, \ldots, l$ and for all $j_0 = 1, \ldots, n$ (or
$r_k = 1 $ for all $ k = 1, \ldots, l$ and for all $i_0 = 1, \ldots,
m$), $m = 2$ (or $n = 2$, respectively), and $g_k^{c_0} = 0$ (or
$g_k^{r_0} = 0$, respectively).
Therefore when we consider the two-way zero-one tables we get
\[
p_k = \frac{r_k}{n - g_k^{r_0}}, \, w_k = \frac{r_k}{n - r_k - g_k^{r_0}},
\]
or respectively
\[
p_k = \frac{c_k}{m - g_k^{c_0}}, \, w_k = \frac{c_k}{m - c_k - g_k^{c_0}}.
\]
\end{rem}
\begin{figure}[!htp]
\begin{center}
\scalebox{0.6}{
\includegraphics{cube.pdf}
}
\end{center}
\caption{An example of a $3\times 3\times 3$ table.}
\label{cube}
\end{figure}
\begin{alg}[Store structures in the zero-one table]\label{alg1}
This algorithm stores the structures, including zeros and ones, in
the observed table $\ve x_0$. The output will be used to avoid trivial
cases in sampling. The output matrices $A$ and $B$ both have the same
dimensions as $\ve x_0$: the cell value in $A$ is $1$ if the
position is structural (its value is forced by the marginals) and $0$ if not. The matrix $B$ records only the structural $1$'s. We
consider sampling a table without structure $1$'s, that is,
a table with new marginals: $X^*_{ij+}=X_{ij+}-\sum_{k=1}^l
B_{ijk}=X_{ij+}-B_{ij+}$, $X^*_{i+k}=X_{i+k}-\sum_{j=1}^n
B_{ijk}=X_{i+k}-B_{i+k}$, and $X^*_{+jk}=X_{+jk}-\sum_{i=1}^m
B_{ijk}=X_{+jk}-B_{+jk}$ for $i=1,2,\ldots ,m\mbox{, } j=1,2,\ldots
,n$, and $k = 1, 2, \ldots, l$.
\begin{itemize}
\item[{\bf Input}] The observed marginals $X_{ij+}$, $X_{i+k}$, and $X_{+jk}$ for $i=1,2,\ldots ,m\mbox{, } j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$.
\item[{\bf Output}] Matrix $A$ and $B$, new marginals $X^*_{ij+}$, $X^*_{i+k}$, and $X^*_{+jk}$ for $i=1,2,\ldots ,m\mbox{, } j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$.
\item[{\bf Algorithm}]
\begin{enumerate}
\item
Check all marginals in direction I. For $j = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, l$:\\
If $X_{+jk} - B_{+jk} = 0$, set $A_{i'jk} = 1$ for all $i' = 1, 2, \ldots, m$ with $A_{i'jk}=0$;\\
If $X_{+jk} - B_{+jk}$ equals the number of $i'$ with $A_{i'jk}=0$, set $A_{i'jk} = 1$ and $B_{i'jk} = 1$ for all such $i'$.\\
\item
Check all marginals in direction J. For $i = 1, 2, \ldots, m$ and $k = 1, 2, \ldots, l$:\\
If $X_{i+k} - B_{i+k} = 0$, set $A_{ij'k} = 1$ for all $j' = 1, 2, \ldots, n$ with $A_{ij'k}=0$;\\
If $X_{i+k} - B_{i+k}$ equals the number of $j'$ with $A_{ij'k}=0$, set $A_{ij'k} = 1$ and $B_{ij'k} = 1$ for all such $j'$.\\
\item
Check all marginals in direction K. For $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$:\\
If $X_{ij+} - B_{ij+} = 0$, set $A_{ijk'} = 1$ for all $k' = 1, 2, \ldots, l$ with $A_{ijk'}=0$;\\
If $X_{ij+} - B_{ij+}$ equals the number of $k'$ with $A_{ijk'}=0$, set $A_{ijk'} = 1$ and $B_{ijk'} = 1$ for all such $k'$.\\
\item
If any changes were made in step (1), (2), or (3), go back to (1); otherwise continue to (5).
\item
Compute new marginals: \\
$X^*_{ij+}=X_{ij+}-B_{ij+}$, $X^*_{i+k}=X_{i+k}-B_{i+k}$, and
$X^*_{+jk}=X_{+jk}-B_{+jk}$ for $i=1,2,\ldots ,m\mbox{, }
j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$.
\end{enumerate}
\end{itemize}
\end{alg}
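A minimal Python sketch of the propagation in Algorithm \ref{alg1} follows; it is ours, not the paper's implementation, and it assumes the forcing rule compares the remaining margin of each line (the margin minus its structural ones) with the line's number of free cells, and that the given margins are consistent. It returns the structural cells and structural ones as sets rather than $0$--$1$ arrays.

```python
def find_structures(si, sj, sk):
    """Mark the cells of an m x n x l zero-one table whose values are forced
    by the two-way margins si = X_{+jk} (n x l), sj = X_{i+k} (m x l),
    sk = X_{ij+} (m x n).  Returns (A, B): A = set of structural cells,
    B = set of structural ones; structural zeros are A - B."""
    n, l, m = len(si), len(si[0]), len(sj)
    A, B = set(), set()

    def lines():
        # yield (margin, cells of the line) for every line in each direction
        for j in range(n):
            for k in range(l):
                yield si[j][k], [(i, j, k) for i in range(m)]
        for i in range(m):
            for k in range(l):
                yield sj[i][k], [(i, j, k) for j in range(n)]
        for i in range(m):
            for j in range(n):
                yield sk[i][j], [(i, j, k) for k in range(l)]

    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for margin, cells in lines():
            free = [c for c in cells if c not in A]
            rem = margin - sum(c in B for c in cells)   # remaining margin
            if free and rem in (0, len(free)):
                A.update(free)
                if rem:                 # remaining margin fills every free cell
                    B.update(free)
                changed = True
    return A, B
```

For example, in a $2\times 2\times 2$ table, a direction-$I$ margin equal to $m = 2$ forces two structural ones and the zero margins force everything else, so all eight cells become structural.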
\begin{alg}[Generate a two-way table with given marginals]\label{alg2}
This algorithm is used to generate a layer (fixed $i$) of the three-way
table, with the probability of the sampled layer.
\begin{itemize}
\item[{\bf Input}] Row sums $r^*_j$ and column sums $c^*_k$, $ j=1,2,\ldots
,n$, and $k = 1, 2, \ldots, l$; structures $A$; marginals in direction
I: $X_{+jk}$ for $j=1,2,\ldots ,n$ and $k = 1, 2, \ldots, l$.
\item[{\bf Output}] A sampled table and its probability. Return $0$ if the process fails.
\item[{\bf Algorithm}]
\begin{enumerate}
\item
Order all columns with decreasing sums.
\item
Generate the column (along the direction $K$) with the largest sum;
the weights used in CP are shown in equation \eqref{wt3}. Notice that
$k$ indexes the cells of the column, and $r_k$ and $c_k$ are
the corresponding sums in the directions $J$ and $I$, respectively; $g^{r_0}_k$
and $g^{c_0}_k$ are the numbers of structural cells in the rows of the
directions $J$ and $I$, respectively. The probability of the generated
column is returned if the process succeeds, while $0$ may be
returned in this step if no valid column exists.
\item
Delete the generated column in (2), and for the remaining subtable, do
the following:
\begin{enumerate}
\item
If only one column is left, fill it with fixed marginals and go to (4).
\item
Otherwise, check all marginals to see if there are any new structures caused
by step (2); this is needed to avoid trivial cases. Go back to
(1) with the new marginals and structures.
\end{enumerate}
\item
Return generated matrix as the new layer and its CP probability. If failed, return $0$.
\end{enumerate}
\end{itemize}
\end{alg}
\begin{alg}[SIS with CP for sampling a three-way zero-one table]\label{alg3}
We describe an algorithm to sample a three-way zero-one table $\ve X$ with given
marginals $X_{ij+}$, $X_{i+k}$, and $X_{+jk}$ for $i=1,2,\ldots ,m\mbox{, }
j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$ via the SIS with CP.
\begin{itemize}
\item[{\bf Input}] The observed table $\ve x_0$.
\item[{\bf Output}] The sampled table $\ve x$.
\item[{\bf Algorithm}]
\begin{enumerate}
\item
Compute the marginals $X_{ij+}$, $X_{i+k}$, and $X_{+jk}$ for $i=1,2,\ldots ,m\mbox{, }
j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$.
\item
Use Algorithm \ref{alg1} to compute the structure tables $A$ and
$B$. Consider the new marginals in the output as the sampling
marginals.
\item
For the sampling marginals, do the SIS:
\begin{enumerate}
\item
Delete the layers filled by structures; consider the left-over subtable.
\item
Consider the layers in direction $I$ ($i$ varies). Sum within all
layers and order them from the largest to smallest.
\item
Consider the layer with the largest sum and use Algorithm \ref{alg2},
together with the structure table $A$, to generate a sample for this
layer. The algorithm may return $0$ if the sampling fails.
\item
Delete the generated layer in (c), and for the remaining subtable, do the following:
\begin{enumerate}
\item
If only one layer is left, fill it with the fixed marginals and go to (e).
\item
Else, go back to (b) with the new marginals.
\end{enumerate}
\item
Add the sampled table with table $B$ (the structure $1$'s table).
\end{enumerate}
\item
Return the table from (e) together with the probability of the sampled table; return $0$ if the process failed.
\end{enumerate}
\end{itemize}
\end{alg}
\section{Four or higher dimensional zero-one tables}\label{4dim}
In this section we consider a $d$-way zero-one table under the no
$d$-way interaction model for $d \in {\mathbb N} $ and $d > 3$.
Let $\ve X=(X_{i_1 \ldots i_d})$ be a zero-one contingency table of size
$(n_1 \times \cdots \times n_d)$, where $n_i \in {\mathbb N} $ for $i = 1,
\ldots, d$.
The sufficient statistics under the no
$d$-way interaction model are
\begin{equation}\label{margin}
\begin{array}{l}
X_{+i_2\ldots i_d}, \, X_{i_1+i_3\ldots i_d}, \, \ldots , X_{i_1\ldots
i_{d-1}+},\\
\text{for } i_1 = 1, \ldots, n_1, \, i_2 = 1, \ldots, n_2, \ldots , i_d
= 1, \ldots, n_d.\\
\end{array}
\end{equation}
For each $i_1 \in \{1, \ldots, n_1\}, \ldots , i_d \in \{1,
\ldots, n_d\}$, we call the column of the entries for a marginal
$X_{i_1 \ldots i_{j - 1}+ i_{j+1}\ldots i_d }$ the $(i_1, \ldots ,
i_{j-1}, i_{j+1}, \ldots , i_d)$th column of $\ve X$.
For each $i_1^0 \in \{1, \ldots, n_1\}, \ldots , i_{d-1}^0 \in \{1,
\ldots, n_{d-1}\}$, we consider the $(i_1^0, \ldots, i_{d-1}^0)$th
column for the $d$th factor. Let $l_0 = X_{i_1^0, \ldots, i_{d-1}^0+}$.
Let $r_k^j = X_{i_1^0 \ldots i_{j-1}^0+i_{j+1}^0\ldots i_{d-1}^0k}$ for fixed $k \in
\{1, \ldots , n_d\}$.
For sampling a zero-one $d$-way table $\ve X$, we set
\begin{equation}\label{pr2}
p_k := \frac{\prod_{j = 1}^{d-1} r_k^j}{\prod_{j = 1}^{d-1} r_k^j + \prod_{j
= 1}^{d-1}(n_j - r_k^j)}.
\end{equation}
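The probability \eqref{pr2} is a one-line computation; a small Python helper (illustrative, with a name of our choosing) makes the reduction to the three-way case explicit.

```python
from math import prod

def cp_success_prob(r, n):
    """Bernoulli success probability p_k built from the column sums
    r = (r_k^1, ..., r_k^{d-1}) and the sizes n = (n_1, ..., n_{d-1})."""
    top = prod(r)
    return top / (top + prod(nj - rj for nj, rj in zip(n, r)))

# for d = 3 this is r*c / (r*c + (n - r)(m - c)), e.g. 6 / (6 + 2*2)
print(cp_success_prob((2, 3), (4, 5)))
```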
\begin{rem}
We assume that we do not have trivial cases, namely, $1 \leq r_k^j
\leq n_j-1$ for $j = 1, \ldots , d-1$.
\end{rem}
\begin{thm}
For the uniform distribution over all $d$-way zero-one contingency
tables $\ve X=(X_{i_1 \ldots i_d})$ of size
$(n_1 \times \cdots \times n_d)$, where $n_i \in {\mathbb N} $ for $i = 1,
\ldots, d$ with marginals $l_0 = X_{i_1^0, \ldots, i_{d-1}^0+}$,
and $r_k^j = X_{i_1^0 \ldots i_{j-1}^0+i_{j+1}^0\ldots i_{d-1}^0k}$ for $k \in
\{1, \ldots , n_d\}$, the marginal distribution of the $(i_1^0, \ldots, i_{d-1}^0)$th
column is the
same as the conditional distribution of $Z$ defined by \eqref{dist} given
$S_Z = l_0$ with
\[
p_k := \frac{\prod_{j = 1}^{d-1} r_k^j}{\prod_{j = 1}^{d-1} r_k^j + \prod_{j
= 1}^{d-1}(n_j - r_k^j)}.
\]
\end{thm}
\begin{proof}
The proof is similar to the proof of Theorem \ref{main}; we just extend
the same argument to a $d$-way zero-one table under the no $d$-way
interaction model with the probability
\[
p_k = \frac{\prod_{j = 1}^{d-1}{n_j - 1\choose r_k^j - 1}}{\prod_{j = 1}^{d-1}{n_j - 1
\choose r_k^j - 1} + \prod_{j=1}^{d-1}{n_j - 1 \choose r_k^j}} =
\frac{\prod_{j = 1}^{d-1} r_k^j}{\prod_{j = 1}^{d-1} r_k^j + \prod_{j = 1}^{d-1}(n_j - r_k^j)}.
\]
\end{proof}
During the intermediate steps of our SIS procedure via CP on a $d$-way zero-one table
there will be
some columns for the $d$th factor with trivial cases. We then
have to treat them as structural zeros in the $k$th slice for some $k
\in \{1, \ldots , n_d\}$ and use the
probabilities for the distribution in \eqref{dist} as follows:
\begin{equation}\label{pr4}
p_k := \frac{\prod_{j = 1}^{d-1} r_k^j }{\prod_{j = 1}^{d-1} r_k^j + \prod_{j
= 1}^{d-1}(n_j - r_k^j - g_k^j)},
\end{equation}
where $g_k^{j}$ is the number of structural zeros in the $(i_1^0, \ldots ,
i_{j-1}^0, i_{j+1}^0, \ldots , i_{d-1}^0k)$th column of $\ve X$.
Thus we have weights:
\begin{equation}\label{wt4}
w_k = \frac{\prod_{j = 1}^{d-1}r_k^j}{\prod_{j
= 1}^{d-1}(n_j - r_k^j - g_k^j)}.
\end{equation}
\begin{thm}\label{main_str2}
For the uniform distribution over all $d$-way zero-one contingency
tables $\ve X=(X_{i_1 \ldots i_d})$ of size
$(n_1 \times \cdots \times n_d)$, where $n_i \in {\mathbb N} $ for $i = 1,
\ldots, d$ with marginals $l_0 = X_{i_1^0, \ldots, i_{d-1}^0+}$,
and $r_k^j = X_{i_1^0 \ldots i_{j-1}^0+i_{j+1}^0\ldots i_{d-1}^0k}$ for $k \in
\{1, \ldots , n_d\}$, the marginal distribution of the $(i_1^0, \ldots, i_{d-1}^0)$th
column is the
same as the conditional distribution of $Z$ defined by \eqref{dist} given
$S_Z = l_0$ with
\[
p_k := \frac{\prod_{j = 1}^{d-1} r_k^j}{\prod_{j = 1}^{d-1} r_k^j + \prod_{j
= 1}^{d-1}(n_j - r_k^j - g_k^j)},
\]
where $g_k^{j}$ is the number of structural zeros in the $(i_1^0, \ldots ,
i_{j-1}^0, i_{j+1}^0, \ldots , i_{d-1}^0k)$th column of $\ve X$.
\end{thm}
\begin{proof}
The proof is similar to the proof of Theorem \ref{main_str1}; we
just extend
the same argument to a $d$-way zero-one table under the no $d$-way
interaction model with the probability
\[
p_k = \frac{\prod_{j = 1}^{d-1}{n_j - 1 - g^j_k\choose r_k^j - 1}}{\prod_{j = 1}^{d-1}{n_j - 1 - g^j_k
\choose r_k^j - 1} + \prod_{j=1}^{d-1}{n_j - 1 - g^j_k\choose r_k^j}} =
\frac{\prod_{j = 1}^{d-1} r_k^j}{\prod_{j = 1}^{d-1} r_k^j + \prod_{j
= 1}^{d-1}(n_j - r_k^j - g^j_k)}.
\]
\end{proof}
\section{Computational examples}\label{comp}
For our simulation study we used the software package {\tt R} \cite{Rproj}.
We counted the {\em exact} numbers of tables
via the software {\tt LattE} \cite{latte-1.2} for the small examples in
this section (Examples \eqref{firstex} to \eqref{lastex}).
When the contingency tables are large and/or the models are complicated, it
is very difficult to obtain the exact number of tables. Thus we need a good
measurement of accuracy in the estimated number of tables.
In \cite{chen2005}, the authors used the coefficient of variation
($cv^2$):
\[
cv^2 = \frac{var_q\{p({\bf X})/q({\bf X})\}}{ {\mathbb E} ^2_q\{p({\bf X})/q({\bf X})\}}
\]
which is equal to $var_q\{1/q({\bf X})\}/ {\mathbb E} ^2_q\{1/q({\bf X})\}$ for
the problem of estimating the number of tables. The value of $cv^2$ is simply
the chi-square distance between the
two distributions $p'$ and $q$, which means the smaller it is, the closer the two
distributions are.
In \cite{chen2005}, $cv^2$ is estimated by:
\[
cv^2 \approx \frac{\sum_{i = 1}^N \{1/q({\bf X_i}) - \left[\sum_{j =1}^N 1/q({\bf X_j}) \right] / N \}^2 /(N - 1)}{\left\{\left[ \sum_{j = 1}^N 1/q({\bf X_j})\right] / N\right\}^2},
\]
where ${\bf X_1}, \ldots , {\bf X_N}$ are tables drawn i.i.d.\ from
$q({\bf X})$. When we have rejections, we compute the variance using
only the accepted tables. In this paper we also investigated the
relation between the
exact numbers of tables and $cv^2$ when we have rejections.
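Given the importance weights $1/q({\bf X}_i)$ of the accepted tables, this estimate is a one-liner; the Python sketch below is ours (the paper's computations use {\tt R}).

```python
def cv_squared(weights):
    """Sample version of cv^2 = var_q(1/q(X)) / E_q(1/q(X))^2, computed from
    a list of importance weights 1/q(X_i) (accepted tables only)."""
    N = len(weights)
    mean = sum(weights) / N
    var = sum((w - mean) ** 2 for w in weights) / (N - 1)   # sample variance
    return var / mean ** 2

print(cv_squared([6.0, 6.0, 6.0]))   # identical weights give cv^2 = 0
```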
In this section, we define the three two-way marginal matrices as follows.
Suppose we have an observed table $\ve x=(x_{ijk})_{m \times n \times l}$, $i=1,2,\ldots ,m\mbox{, }
j=1,2,\ldots ,n$, and $k = 1, 2, \ldots, l$.
Define
$si=(X_{+jk})_{n\times l}$, $sj=(X_{i+k})_{m\times l}$, and
$sk=(X_{ij+})_{m\times n}$.
\begin{ex}[The 3-dimension Semimagic Cube]\label{ex_1}
Suppose $si$, $sj$, and $sk$ are all $3\times 3$ matrices with all entries equal to $1$, that is:
\[
si=sj=sk=
\begin{array}{|c|c|c|}\hline
1 & 1 & 1\\\hline
1 & 1 & 1\\\hline
1 & 1 & 1\\\hline
\end{array}
\]
The real number of tables is $12$. It took $114.7$ seconds to run
$10,000$ samples in the SIS; the estimate is $12$, and the acceptance rate is $100$\%. In fact, we found that if the acceptance rate is $100$\%, then the sample size does not matter in the estimation.
\end{ex}
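The count $12$ can be checked by brute force: a $3\times 3\times 3$ zero-one table whose two-way marginals all equal $1$ is exactly the indicator $x_{ijk} = 1 \Leftrightarrow L(i,j) = k$ of a Latin square $L$ of order $3$. The following Python check is illustrative (ours, not part of the study).

```python
from itertools import product

def count_semimagic(m):
    """Count m x m x m zero-one tables whose two-way marginals all equal 1,
    by enumerating functions L(i, j) = k and checking the Latin condition."""
    count = 0
    for values in product(range(m), repeat=m * m):
        L = [values[i * m:(i + 1) * m] for i in range(m)]
        rows_ok = all(len(set(row)) == m for row in L)
        cols_ok = all(len(set(L[i][j] for i in range(m))) == m for j in range(m))
        if rows_ok and cols_ok:          # L is a Latin square of order m
            count += 1
    return count

print(count_semimagic(3))   # the 12 Latin squares of order 3
```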
We used {\tt R} to produce more examples. Examples \eqref{firstex} to \eqref{lastex}
are constructed by the same code with different parameter
values. We used the {\tt R} package ``Rlab'' for the following code.
\begin{verbatim}
seed=6; m=3; n=3; l=4; prob=0.8; N=1000; k=200
set.seed(seed)
A=array(rbern(m*n*l,prob),c(m,n,l))
outinfo=tabinfo(A)
numtable(N,outinfo,k)
\end{verbatim}
Here {\tt prob} is the probability of getting $1$ for every Bernoulli
variable, and {\tt N} is the sample size (the total number of tables
sampled, including both acceptances and rejections).
Notice that
$cv^2$ is defined as $\frac{Var}{Mean^2}$.
\begin{ex}[seed=6; m=3; n=3; l=4; prob=0.8]\label{firstex}
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&2&2&2\\
\hline
1&3&2&2\\
\hline
2&3&3&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&3&2&2\\
\hline
1&3&3&3\\
\hline
2&2&2&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|}
\hline
3&3&3\\
\hline
3&3&4\\
\hline
2&2&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $3$. The estimate is $3.00762$ with
$cv^2=0.0708$. The whole process took $13.216$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=60; m=3; n=4; l=4; prob=0.5]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&2&2&1\\
\hline
1&1&1&0\\
\hline
1&1&1&2 \\
\hline
1&1&2&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&3&2&1\\
\hline
1&0&2&2\\
\hline
1&2&2&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&2&2&2\\
\hline
1&0&2&2\\
\hline
3&1&1&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $5$. The estimate is $4.991026$ with
$cv^2=0.1335$. The whole process took $17.016$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=61; m=3; n=4; l=4; prob=0.5]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
1&2&2&1\\
\hline
0&1&1&2\\
\hline
1&0&2&1 \\
\hline
0&1&3&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
1&2&3&2\\
\hline
1&1&2&3\\
\hline
0&1&3&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&1&1&3\\
\hline
1&2&2&2\\
\hline
2&1&1&1 \\
\hline
\end{array}\, .
\]
The real number of tables is $8$. The estimate is $8.04964$ with
$cv^2=0.2389$. The whole process took $16.446$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=240; m=4; n=4; l=4; prob=0.5]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&3&3&2\\
\hline
1&3&2&1\\
\hline
1&2&3&0 \\
\hline
4&2&2&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&2&4&1\\
\hline
3&2&2&2\\
\hline
2&3&3&1 \\
\hline
1&3&1&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&2&3&2\\
\hline
3&2&1&3\\
\hline
3&2&2&2 \\
\hline
2&1&0&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $8$. The estimate is $8.039938$ with
$cv^2=0.2857$. The whole process took $23.612$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=1240; m=4; n=4; l=4; prob=0.5]\label{ex6}
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&3&2&3\\
\hline
1&2&3&2\\
\hline
2&2&3&2 \\
\hline
3&2&3&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
1&4&1&3\\
\hline
4&2&4&2\\
\hline
1&2&4&3 \\
\hline
2&1&2&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&2&2&3\\
\hline
3&3&3&3\\
\hline
3&2&2&3 \\
\hline
2&1&2&1 \\
\hline
\end{array}\, .
\]
The real number of tables is $28$. The estimate is $26.89940$ with
$cv^2=1.0306$. The whole process took $29.067$ seconds (in {\tt R})
with a $100$\% acceptance rate. The estimate is even closer to the true
value for sample size 5000: it becomes $28.0917$, with $cv^2=1.2070$.
\end{ex}
\begin{ex}[seed=2240; m=4; n=4; l=4; prob=0.5]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
1&2&3&1\\
\hline
2&3&2&3\\
\hline
2&4&2&1 \\
\hline
2&1&4&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&3&2&0\\
\hline
3&2&3&2\\
\hline
1&3&3&1 \\
\hline
1&2&3&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&1&2&2\\
\hline
3&2&3&2\\
\hline
1&4&2&1 \\
\hline
1&3&2&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $4$. The estimate is $3.98125$ with
$cv^2=0.0960$. The whole process took $26.96$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=3340; m=4; n=4; l=4; prob=0.5]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&4&1&3\\
\hline
1&2&1&2\\
\hline
1&1&0&3 \\
\hline
4&1&0&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&1&1&2\\
\hline
3&1&1&3\\
\hline
1&2&0&2 \\
\hline
2&4&0&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&1&1&1\\
\hline
3&1&2&2\\
\hline
1&2&1&1 \\
\hline
3&2&1&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $2$. The estimate is $2$ with
$cv^2=0$. The whole process took $15.214$ seconds (in {\tt R}) with a
$100\%$ acceptance rate.
\end{ex}
\begin{ex}[seed=3440; m=4; n=4; l=4; prob=0.5]\label{eg5_9}
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
1&3&1&3\\
\hline
1&1&2&2\\
\hline
2&3&1&0 \\
\hline
3&2&2&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&2&2&2\\
\hline
2&1&2&1\\
\hline
1&3&1&2 \\
\hline
2&3&1&3 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&1&1&3\\
\hline
1&2&1&2\\
\hline
2&0&3&2 \\
\hline
2&3&1&3 \\
\hline
\end{array}\, .
\]
The real number of tables is $12$. The estimate is $12.04838$ with
$cv^2=0.7819733$. The whole process took $27.074$ seconds (in {\tt R})
with a $85.9$\% acceptance rate.
\end{ex}
\begin{ex}[seed=5440; m=4; n=4; l=4; prob=0.5]\label{eg5_10}
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
2&1&0&1\\
\hline
2&3&1&2\\
\hline
3&1&2&1 \\
\hline
1&3&2&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&3&2&1\\
\hline
2&1&2&3\\
\hline
2&1&0&1 \\
\hline
2&3&1&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
1&2&2&3\\
\hline
1&1&3&3\\
\hline
1&3&0&0 \\
\hline
1&2&2&2 \\
\hline
\end{array}\, .
\]
The real number of tables is $9$. The estimate is $8.882672$ with
$cv^2=0.7701368$. The whole process took $30.171$ seconds (in {\tt R})
with a $100$\% acceptance rate. Another run with the same sample
size gives an estimate of $8.521734$ with $cv^2=0.6695902$; the latter has a slightly better $cv^2$ but a slightly worse
estimate. We discuss this further in Section \ref{dis}.
\end{ex}
\begin{ex}[seed=122; m=4; n=4; l=5; prob=0.2]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|c|}
\hline
2&0&3&3&2\\
\hline
0&0&1&0&0\\
\hline
1&0&1&1&1 \\
\hline
0&1&0&1&0 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|c|}
\hline
1&0&0&2&1\\
\hline
1&0&2&1&1\\
\hline
1&1&1&1&1 \\
\hline
0&0&2&1&0 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
3&0&0&1\\
\hline
4&1&0&0\\
\hline
1&0&3&1 \\
\hline
2&0&1&0 \\
\hline
\end{array}\, .
\]
The real number of tables is $5$. The estimate is $4.93625$ with
$cv^2=0.2035$. The whole process took $21.325$ seconds (in {\tt R})
with a $100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=222; m=4; n=4; l=5; prob=0.2]
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|c|}
\hline
1&0&1&1&1\\
\hline
2&1&0&1&2\\
\hline
0&1&1&1&0 \\
\hline
1&1&1&1&1 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|c|}
\hline
2&1&0&0&2\\
\hline
1&2&1&2&1\\
\hline
1&0&1&1&1 \\
\hline
0&0&1&1&0 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
2&3&0&0\\
\hline
1&3&2&1\\
\hline
0&0&1&3 \\
\hline
1&0&0&1 \\
\hline
\end{array}\, .
\]
The real number of tables is $2$. The estimate is $2$ with
$cv^2=0$. The whole process took $19.064$ seconds (in {\tt R}) with a
$100$\% acceptance rate.
\end{ex}
\begin{ex}[seed=322; m=4; n=4; l=5; prob=0.2]\label{lastex}\label{eg5_13}
Suppose $si$, $sj$, and $sk$ are, respectively, as follows:
\[
\begin{array}{|c|c|c|c|c|}
\hline
1&1&1&1&1\\
\hline
1&1&1&1&1\\
\hline
1&2&0&0&1 \\
\hline
2&0&1&1&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|c|}
\hline
0&0&1&1&0\\
\hline
1&0&1&0&1\\
\hline
2&2&0&1&2 \\
\hline
2&2&1&1&2 \\
\hline
\end{array}\, ,
\ \ \
\begin{array}{|c|c|c|c|}
\hline
0&2&0&0\\
\hline
1&0&0&2\\
\hline
1&3&1&2 \\
\hline
3&0&3&2 \\
\hline
\end{array}\, .
\]
The real number of tables is $5$. The estimate is $4.992$ with
$cv^2=0.2179682$. The whole process took $23.25$ seconds (in {\tt R})
with a $85.2$\% acceptance rate.
\end{ex}
\begin{summ}[Summary of the results from Example \eqref{firstex} to
Example \eqref{lastex}]
Table \ref{tab_res} summarizes the main results of these examples. For all results
here, we set the sample size to $1,000$. We will discuss these results in Section
\ref{dis}.
\begin{table}[!htp]
\begin{center}
\begin{tabular}{ccrrrr}
\toprule[1.2pt] %
Dimension & Example & \# tables & Estimation & $cv^2$ & Acceptance rate \\\midrule[1.2pt]
$3\times 3\times 4$ & 5.2 & 3 & 3.00762 & 0.0708 & 100\% \\\midrule
$3\times 4\times 4$ & 5.3 & 5 & 4.991026 & 0.1335 & 100\% \\
& 5.4 & 8 & 8.04964 &0.2389 & 100\% \\\midrule
$4\times 4\times 4$ & 5.5 & 8 & 8.039938 & 0.2857 & 100\% \\
& 5.6 & 28 & 26.89940 & 1.0306 & 100\% \\
& 5.7 & 4 & 3.98125 & 0.0960 & 100\% \\
& 5.8 & 2 & 2 & 0 & 100\% \\
& 5.9 & 12 & 12.04838 & 0.7820 & 85.9\% \\
& 5.10 & 9 & 8.882672 & 0.7701 & 100\% \\\midrule
$4\times 4\times 5$ & 5.11 & 5 & 4.93625 & 0.2035& 100\% \\
& 5.12 & 2 & 2 & 0 & 100\% \\
& 5.13 & 5 & 4.992 & 0.2180 & 85.2\% \\\bottomrule[1.2pt]
\end{tabular}
\vskip 0.1in
\caption{Summary of Examples \eqref{firstex}--\eqref{lastex}}\label{tab_res}
\end{center}
\end{table}
\end{summ}
\begin{ex}[High-dimension Semimagic Cubes]\label{lastex2}\label{eg5_15}
In this example, we consider $m \times n\times l$ tables for $m=n=l = 4,
\ldots, 10$ such that each marginal sum equals $1$. The results are
summarized in Table \ref{tab_res2}.
\begin{table}[!htp]
\begin{center}
\begin{tabular}{ccrrrr}
\toprule[1.2pt] %
Dimension $m$ & $N$ &CPU time (sec)& Estimation & $cv^2$ & Acceptance rate \\\hline
$4$ & $1000$ & $32.44$ & 568.944 & 0.26 & $100\%$\\
\ & $10000$ & $324.18$ & 571.1472 &0.27 & $100\%$\\\hline
$5$ & $1000$ & $60.39$ & 161603.5 & 0.18 & $99\%$\\
\ & $10000$ & $605.45$ & 161439.3 &0.18 & $99.2\%$\\\hline
$6$ & $1000$ & $102.66$ & 801634023 & 0.58 & $98.3\%$\\
\ & $10000$ & $1038.46$ & 819177227 &0.45 & $98.8\%$\\\hline
$7$ & $1000$ & $158.55$ & 6.08928e+13 & 0.60 & $97\%$\\
\ & $10000$ & $1590.84$ & 6.146227e+13 & 0.64 & $97.7\%$\\\hline
$8$ & $1000$ & $234.53$ & 1.080208e+20 & 1.07 & $95.6\%$\\
\ & $10000$ & $2300.91$ &1.099627e+20 & 1.00 & $96.5\%$\\\hline
$9$ & $1000$ & $329.17$ & 5.845308e+27 & 1.46 & $94\%$\\
\ & $10000$ & $3238.1$ & 5.684428e+27 &1.59& $95.3\%$ \\\hline
$10$ & $1000$ & $451.24$ & 9.648942e+36 & 1.44 & $93.3\%$\\
\ & $10000$ & $4425.12$& 9.73486e+36& 1.73& $93.3\%$\\\bottomrule[1.2pt]
\end{tabular}
\vskip 0.1in
\caption{Summary of computational results on $m \times n\times l$ tables for $m=n=l = 4,
\ldots, 10$.
All marginal sums are equal to one in this example.}\label{tab_res2}
\end{center}
\end{table}
\end{ex}
\begin{ex}[High-dimension Semimagic Cubes continues]\label{lastex3}\label{eg5_16}
In this example, we consider $m \times n\times l$ tables for $m=n=l = 4,
\ldots, 10$ such that each marginal sum equals $s$. The results are
summarized in Table \ref{tab_res3}. In this example, we set the sample size $N = 1000$.
\begin{table}[!htp]
\begin{center}
\begin{tabular}{ccrrrr}
\toprule[1.2pt] %
Dimension $m$ & $s$ &CPU time (sec)& Estimation & $cv^2$ & Acceptance rate \\\hline
$4$ & $2$ & $27.1$ & 51810.36 & 0.66 & $97.7\%$\\\hline
$5$ & $2$ & $58.1$ & 25196288574 & 1.69 & $97.5\%$\\\hline
$6$ & $2$ & $97.1$ & 6.339628e+18 & 2.56 & $94.8\%$\\
\ & $3$ & $99.3$ & 1.269398e+22&2.83 & $96.5\%$\\\hline
$7$ & $2$ & $150.85$ & 1.437412e+30 & 4.76 & $93.1\%$\\
\ & $3$ & $166.68$ & 2.365389e+38 & 25.33 & $96.7\%$\\\hline
$8$ & $2$ & $ 229.85$ & 5.369437e+44 & 6.68 & $89.8\%$\\
\ & $3$ & $ 256.70$ & 3.236556e+59 & 7.05 & $94.5\%$\\
\ & $4$ & $328.52$ &2.448923e+64 & 11.98 & $94.3\%$\\\hline
$9$ & $2$ & $319.32$ & 4.416787e+62 & 8.93 & $85.7\%$\\
\ & $3$ & $376.67$ & 7.871387e+85& 15.23 & $91.6\%$\\
\ & $4$ & $549.73$ & 2.422237e+97 &14.00 & $93.4\%$\\\hline
$10$ & $2$ & $429.19$ & 2.166449e+84 & 10.46 & $83.3\%$\\
\ & $3$ & $527.14$ & 6.861123e+117 &26.62& $90\%$ \\
\ & $4$ & $883.34$ & 3.652694e+137& 33.33 & $93.8\%$\\
\ & $5$ & $1439.50$& 1.315069e+144& 46.2& $91.3\%$\\\bottomrule[1.2pt]
\end{tabular}
\vskip 0.1in
\caption{Summary of computational results on $m \times n\times l$ tables for $m=n=l = 4,
\ldots, 10$.
All marginal sums are equal to $s$, and the sample size is
$N = 1000$.}\label{tab_res3}
\end{center}
\end{table}
\end{ex}
\begin{ex}[Bootstrap-t confidence intervals for Semimagic Cubes]\label{lastex4}\label{eg5_17}
As Table \ref{tab_res3} shows, $cv^2$ is generally larger when the
number of tables is larger, and in such cases the estimator obtained
via the SIS procedure may vary greatly between
iterations. We therefore compute a $(1-\alpha)100\%$ confidence interval
for each estimator via a non-parametric bootstrap method (see
Appendix \ref{nonpara} for pseudocode of a non-parametric
bootstrap method that yields the $(1-\alpha)100\%$ confidence interval for
$|\Sigma|$).
Table \ref{tab_res5} lists some bootstrap-t $95\%$
confidence intervals ($\alpha = 0.05$).
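The essential steps of the bootstrap-t construction can be sketched as follows (a minimal illustration rather than the pseudocode of Appendix \ref{nonpara}; we assume the importance weights $1/q(X_i)$ of a single SIS run are available, and the function names are ours):

```python
import numpy as np

def sis_estimate(w):
    """SIS estimate of |Sigma| and its standard error from the
    importance weights of one run (rejections carry weight zero)."""
    est = w.mean()
    se = w.std(ddof=1) / np.sqrt(len(w))
    return est, se

def bootstrap_t_ci(w, alpha=0.05, B=5000, seed=None):
    """Bootstrap-t (1 - alpha) confidence interval for the SIS estimator."""
    rng = np.random.default_rng(seed)
    est, se = sis_estimate(w)
    t_stats = np.empty(B)
    for i in range(B):
        # Resample the weights with replacement and studentize.
        wb = rng.choice(w, size=len(w), replace=True)
        eb, seb = sis_estimate(wb)
        t_stats[i] = (eb - est) / seb
    q_lo, q_hi = np.quantile(t_stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    # Note the characteristic quantile reversal of bootstrap-t intervals.
    return est - q_hi * se, est - q_lo * se
```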
\begin{table}[!htp]
\begin{center} {\footnotesize
\begin{tabular}{|c|c|rrr|rrr|r|}
\toprule[1.2pt]
& & \multicolumn{3}{c|}{Estimation} & \multicolumn{3}{c|}{$cv^2$} & \\\cline{3-8}
Dim & s & \multicolumn{1}{c|}{$\widehat{|\Sigma|}$} & \multicolumn{1}{c}{Lower $95\%$} &
\multicolumn{1}{c|}{Upper $95\%$} & \multicolumn{1}{c|}{$\widehat{cv^2}$} & \multicolumn{1}{c}{Lower $95\%$} & \multicolumn{1}{c|}{Upper $95\%$} & \multicolumn{1}{c|}{Acceptance Rate} \\\midrule[1.2pt]
7 & 2 & 1.306480e+30 & 1.156686e+30 & 1.468754e+30 & 3.442306 & 2.678507 & 4.199513 & $93.3\%$ \\
& 3 & 3.033551e+38 & 2.245910e+38 & 4.087225e+38 & 22.84399 & 8.651207 & 35.080408 & $96.2\%$ \\
\hline
8 & 2 & 5.010225e+44 & 4.200752e+44 & 5.902405e+44 & 6.712335 & 4.539368 & 8.590578 & $90.4\%$ \\
& 3 & 2.902294e+59 & 2.389625e+59 & 3.484405e+59 & 9.047914 & 5.680128 & 12.797488 & $93.1\%$ \\
& 4 & 2.474874e+64 & 1.847911e+64 & 3.295986e+64 & 21.53559 & 5.384647 & 32.166086 & $94.6\%$ \\
\hline
9 & 2 & 4.548401e+62 & 3.682882e+62 & 5.593370e+62 & 10.07973 & 4.886817 & 15.406899 & $87.1\%$ \\
& 3 & 9.702672e+85 & 7.189849e+85 & 1.250875e+86 & 18.65302 & 11.33462 & 23.77980 & $92.5\%$ \\
& 4 & 2.023034e+97 & 1.547951e+97 & 2.561084e+97 & 14.96126 & 10.20331 & 19.09515 & $92.2\%$ \\
\hline
10 & 2 & 2.570344e+84 & 1.908609e+84 & 3.339243e+84 & 17.83684 & 9.785778 & 24.231544 & $84.8\%$ \\
& 3 & 8.68783e+117 & 5.92233e+117 & 1.22271e+118 & 29.67200 & 18.64549 & 37.64892 & $90.2\%$ \\
& 4 & 4.12634e+137 & 2.94789e+137 & 5.52727e+137 & 23.36831 & 15.32719 & 31.02614 & $92\%$ \\
& 5 & 1.54956e+144 & 9.85557e+143 & 2.24043e+144 & 39.06521 & 20.23674 & 53.60838 & $91.8\%$
\\\bottomrule[1.2pt]
\end{tabular} }
\end{center}
\vskip 0.1in
\caption{Summary of confidence intervals. Dimensions and marginal sums $s$
are as in Table \ref{tab_res3}. $\widehat{|\Sigma|}$
denotes an estimator of ${|\Sigma|}$ and $\widehat{cv^2}$ an
estimator of $cv^2$. The sample size for the SIS procedure
is $N = 1000$ and the bootstrap sample size is $B=5000$. Only cases with
relatively large $cv^2$ are included.}
\label{tab_res5}
\end{table}
\end{ex}
\section{Experiment with Sampson's data set}\label{sam}
Sampson recorded the social interactions among a group of monks
while he was resident there as an experimenter on vision, and he
collected numerous
sociometric rankings \cite{Breiger,sampson}. The data are organized
as an $18 \times 18\times
10$ table; the full data set is available at
\url{http://vlado.fmf.uni-lj.si/pub/networks/data/ucinet/UciData.htm#sampson}.
Each $18 \times 18$ layer represents a social relation
between the 18 monks at some time point.
Most of the present data are retrospective, collected after the
breakup occurred. They concern a period during which a new cohort
entered the monastery near the end of the study but before the major
conflict began. The exceptions are ``liking'' data gathered at three
times: SAMPLK1 to SAMPLK3 - that reflect changes in group sentiment
over time (SAMPLK3 was collected in the same wave as the data
described below).
Four relations are coded in the data set, with separate matrices for
positive and negative ties on each relation, giving the 10 layers: esteem (SAMPES) and
disesteem (SAMPDES); liking (SAMPLK1 to SAMPLK3) and
disliking (SAMPDLK); positive
influence (SAMPIN) and negative influence (SAMPNIN); praise (SAMPPR)
and blame (SAMPNPR).
The original data set lists each monk's top three choices, recorded as
ranks. We convert these ranks to indicators (i.e., an entry is set to
one if it is among the top three choices and to zero otherwise).
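This recoding is a one-line operation; a minimal sketch (the function name is ours):

```python
import numpy as np

def ranks_to_indicator(table):
    """Recode a layer of the Sampson data: any nonzero rank (a top-three
    choice) becomes one, and all other entries become zero."""
    return (np.asarray(table) > 0).astype(int)
```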
We ran the SIS procedure with $N = 100000$ and a bootstrap sample size
$B = 50000$. The estimator was 1.704774e+117 with $95\%$
confidence interval [1.119321e+117, 2.681264e+119],
and $cv^2 = 621.4$ with $95\%$
confidence interval [324.29, 2959.65]. The CPU time was $70442$
seconds and the acceptance rate was $3\%$.
\section{Discussion}\label{dis}
In this paper we do not have a
necessary and sufficient condition for the existence of a three-way
zero-one table, so we cannot avoid rejections. However, since the SIS
procedure gives an unbiased estimator, a small sample size may suffice
as long as the estimator converges. For example, in Table
\ref{tab_res}, all estimators with
sample size $1000$ match the true numbers of tables exactly because they
all converge very quickly. Note also
that the acceptance rate does not depend on the sample size. It would therefore be
interesting to investigate the convergence rate of the SIS procedure with
CP for zero-one three-way tables.
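For concreteness, the quantities reported in our tables can be recovered from the importance weights of a single SIS run as sketched below (a minimal illustration; the function name is ours, and we adopt the convention that rejected samples carry weight zero):

```python
import numpy as np

def sis_summary(weights):
    """Summarize one SIS run from its importance weights 1/q(X_i),
    with rejected samples contributing weight zero.  Returns the
    unbiased estimate of |Sigma|, the squared coefficient of
    variation cv^2, and the acceptance rate."""
    w = np.asarray(weights, dtype=float)
    est = w.mean()                      # unbiased estimate of |Sigma|
    cv2 = w.var(ddof=1) / est ** 2      # convergence diagnostic
    acc = float(np.mean(w > 0))         # fraction of accepted samples
    return est, cv2, acc
```

With such a summary in hand, $cv^2$ serves as the rough convergence diagnostic discussed above.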
The convergence rate seems slower when we have a ``large''
table (here ``large'' means in terms of $|\Sigma|$ rather than
its dimension, i.e., the number of cells). A large estimate
$\widehat{|\Sigma|}$ usually corresponds to a larger $cv^2$, and this
often comes with large
variation in $\widehat{|\Sigma|}$ and $cv^2$; that is, when
$|\Sigma|$ is large, we are more likely to obtain extremely
large $\widehat{|\Sigma|}$ and $cv^2$, and different iterations can give very different
results. For example, we ran three iterations for the $8\times 8\times 8$
semimagic cube with all marginal sums
equal to $3$ and obtained the following results: estimator = 3.236556e+59 with
$cv^2=7.049114$; estimator = 2.902294e+59
with $cv^2=9.047914$; and estimator = 3.880133e+59 with
$cv^2=55.59179$. Fortunately, even when $|\Sigma|$ is
large, the acceptance rate remains high and the
computational time is still
attractive. Thus, when one finds a large estimate or a large $cv^2$,
we recommend
applying several iterations and picking the result with the smallest $cv^2$. Comparisons of
$cv^2$ are meaningful only on a coarse scale, however: a small improvement does
not necessarily indicate a
better estimator (see Example \ref{eg5_10}).
For the bootstrap-t confidence
intervals, a larger $cv^2$ generally yields a wider confidence interval,
which may be less informative and less
reliable. We therefore suggest using the result with the smallest
$cv^2$ for the bootstrapping
procedure. In Table \ref{tab_res5} we show confidence intervals
only for the
semimagic cubes with $m=n=l = 7, \ldots, 10$ of Example
\ref{eg5_17}, for the following reason: when
$cv^2$ is very small, computing a bootstrap-t confidence interval
does not make much sense, since the estimate has already converged.
In the experiment with Sampson's data set, we observed a very low
acceptance rate compared with the experiments on simulated data
sets. We are investigating why this happens and how to increase the
acceptance rate.
In \cite{chen2005}, the Gale--Ryser Theorem was used to obtain an SIS
procedure without rejection for two-way zero-one tables. However,
for three-way tables this seems very difficult, because structural
zeros and trivial cases arise naturally in the process of
sampling a table. In \cite{chen2007} Chen proved a version of the
Gale--Ryser Theorem with structural zeros for two-way zero-one tables, but
it assumes that there is at most one structural zero in each row and
column; in general there are usually more than one in each row and column.
In this paper the target distribution is the uniform distribution: via
the SIS procedure with CP, we sample a table from the set of all
zero-one tables satisfying the given marginals as close to uniformly
as possible.
For a goodness-of-fit test one might instead want to sample a table from the
set of all zero-one tables satisfying the given marginals under the
hypergeometric distribution. We are currently working on how to
sample a table via the SIS procedure with CP for the hypergeometric
distribution.
\section{Acknowledgement}
The authors would like to thank Drs.~Stephen Fienberg and Yuguo Chen for useful
conversations.
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
One of the first steps toward understanding the
formation and evolution of galaxies is a determination of the
radial distribution
of mass within a representative sample of systems. The rotational
balance of stars and gas in the plane of a disk galaxy offers a
powerful probe of its mass distribution, and has
been widely exploited \citep{sofue01}.
When the motions of these tracers are consistent with small departures
from circular orbits, the determination of the rotation curve (more precisely,
the circular orbital speed profile) is
straightforward. However, it has long been known \citep[e.g.][]{bosma78}
that large non-circular motions driven by bar-like or oval
distortions, warps, or lopsidedness are common features in galaxy
velocity maps, which complicate the determination of the radial mass
profile. Yet the observed flow pattern contains a great deal of
information about the mass distribution, which we wish to extract from
the data.
Since galaxies with closely flat, nearly axisymmetric disks are the
exception, it is desirable to be able
to estimate a mass profile in
the more general cases. A number of techniques, which we review in
\S\ref{theory}, already exist for this purpose. A procedure for
dealing with a warped disk has been successfully developed \citep[e.g.][]{begeman87} from the first simple tilted ring analyses
\citep[e.g.][]{rogstad74}, and is now widely used.
Non-axisymmetric distortions to the planar flow can always be
described by an harmonic analysis. But the approach pioneered by
\citet{franx94} for interpreting the resulting coefficients
embodies epicycle theory, which is valid only for
small departures from circular orbits and may give misleading results
if the observed non-circular motions are not small compared with the
circular orbital speed. A number of authors (see \S\ref{theory} for
references) appear to find significant radial flows with speeds that
rival the inferred mean orbital motion. Such flows violate the
assumption of small departures from circular motion, are not well
motivated physically, and make the results hard to interpret.
We therefore propose here a new technique for fitting a general
non-axisymmetric model to the velocity field of a galaxy that allows for
large non-circular motions. We develop and apply the method
specifically for the case of bar-like or oval distortions, but the
procedure is readily generalized for potentials having other azimuthal
periodicities.
Our simple kinematic model, which we describe in \S\ref{technique},
yields both the mean orbital speed and the amplitudes of the
non-circular streaming velocities. It is successful because (1) we
invoke a straight bar-like distortion to the potential, (2) we do not
need to assume small departures from circular motion, and (3) we fit
to the entire velocity field at once.
We apply our method (\S\ref{n2976}) to the high-quality velocity maps
of NGC~2976\ that were previously presented by \citet[][hereafter SBLB]{s03},
and find that it suggests a significantly different radial mass
profile from that deduced by those authors. We show (\S\ref{discuss})
the reason for this difference, and argue that a bisymmetric
distortion is both a more reasonable physical model, and that it is
supported by independent evidence of a bar in this galaxy.
\section{Modeling Non-Axisymmetric Flows}
\label{theory}
\subsection{Mathematical preliminaries}
\label{math}
The velocity of a star or an element of gas in the plane
of the disk of a galaxy
generally has two components at each point: tangential, $V_t$, and
radial, $V_r$, relative to any arbitrary center, most conveniently the
kinematic center. Without approximation, each component can be
expressed as a Fourier series around a circle of radius $r$ in the
disk plane:
\begin{equation}
V_t(r,\theta) = \ensuremath{\Vrot(r)} + \sum_{m=1}^\infty V_{m,t}(r)\cos \left[
m\theta + \theta_{m,t}(r) \right]
\end{equation}
and
\begin{equation}
V_r(r,\theta) = \ensuremath{\Vrad(r)} + \sum_{m=1}^\infty V_{m,r}(r)\cos \left[
m\theta + \theta_{m,r}(r) \right],
\end{equation}
where the coefficients, $V_{m,t}$ and $V_{m,r}$, and phases relative
to some convenient axis, $\theta_{m,t}$ and $\theta_{m,r}$, are all
functions of $r$. The quantity $\ensuremath{\Vrot(r)}$ is the mean streaming speed
of the stars or gas about the center;
throughout, we refer to this quantity as the
mean orbital speed. The axisymmetric term of the radial
motion, $\ensuremath{\Vrad(r)}$, represents a mean inflow or outflow in the
disk plane, which gives rise to a ``continuity problem'' if it is large \citep{s05}.
Galaxies are observed in projection, with inclination $i$, about a
major axis, which we choose to define as $\theta=0$ in the above
expansions. The line-of-sight velocity is the sum of the projected
azimuthal and radial velocities: $V_{\rm obs} = \ensuremath{V_{\rm sys}} + \sin i
(V_t\cos\theta + V_r\sin\theta)$, where $\ensuremath{V_{\rm sys}}$ is the systemic
velocity of the galaxy. In terms of our Fourier series,
\begin{eqnarray}
\nonumber
V_{\rm obs} & = & \ensuremath{V_{\rm sys}} \\
\nonumber
& &+ \sin i\left\{ \ensuremath{\bar V_t}\cos\theta + \sum_{m=1}^\infty
V_{m,t} \cos\theta \cos\left[ m\theta + \theta_{m,t} \right] \right. \\
& &\left. + \; \ensuremath{\bar V_r}\sin\theta + \sum_{m=1}^\infty
V_{m,r}\sin\theta \cos\left[ m\theta + \theta_{m,r} \right] \right\}.
\label{defVobs}
\end{eqnarray}
Using standard trigonometric relations, this expression can be
rewritten as
\begin{eqnarray}
\nonumber
{V_{\rm obs} - \ensuremath{V_{\rm sys}} \over \sin i}
& = & \ensuremath{\bar V_t}\cos\theta \\
\nonumber
& & + \sum_{m=1}^\infty {V_{m,t} \over 2}
\left\{ \cos\left[ (m+1)\theta + \theta_{m,t} \right] \right. \\
\nonumber
& & + \left. \cos\left[
(m-1)\theta + \theta_{m,t} \right]\right\} + \; \ensuremath{\bar V_r}\sin\theta \\
\nonumber
& &+ \sum_{m=1}^\infty {V_{m,r} \over 2} \left\{
\sin\left[ (m+1)\theta + \theta_{m,r} \right] \right. \\
& & - \left. \sin\left[ (m-1)\theta + \theta_{m,r} \right] \right\}.
\label{defVobs2}
\end{eqnarray}
As is well known \citep[e.g.][]{canzian93,schoen97,ca97,frid01}, projection
therefore causes velocity distortions with intrinsic sectoral harmonic
$m$ to give rise to azimuthal variations of orders
$\ensuremath{m^{\prime}}=m\pm1$ in the corresponding line-of-sight velocities. Thus
intrinsic distortions at two different sectoral harmonics give rise to
projected velocity features of the same angular periodicity in the
data, complicating the determination of all coefficients in the
expansion.
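This harmonic mixing is easy to verify numerically. The sketch below (with arbitrary parameter values of our own choosing) projects a planar flow containing only $m=2$ distortions and measures the angular harmonics of the line-of-sight velocities around one ring:

```python
import numpy as np

def los_velocity(theta, inc, vbar_t=100.0, v2t=30.0, v2r=20.0,
                 th2t=0.0, th2r=0.0, vsys=0.0):
    """Line-of-sight velocity on one ring, keeping only the m = 2
    non-axisymmetric terms of the Fourier expansion (arbitrary units)."""
    v_t = vbar_t + v2t * np.cos(2.0 * theta + th2t)
    v_r = v2r * np.cos(2.0 * theta + th2r)
    return vsys + np.sin(inc) * (v_t * np.cos(theta) + v_r * np.sin(theta))

# Sample the projected velocity around a ring and measure its harmonics.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
v_obs = los_velocity(theta, inc=np.radians(60.0))
power = np.abs(np.fft.rfft(v_obs)) / len(theta)
# An intrinsic m = 2 distortion appears at m' = 1 and m' = 3, not m' = 2.
```

Power appears at $\ensuremath{m^{\prime}}=1$ and $\ensuremath{m^{\prime}}=3$ but not at $\ensuremath{m^{\prime}}=2$, as eq.~\ref{defVobs2} predicts.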
\subsection{Previous approaches}
\label{previous}
The principal scientific objective of most spectroscopic observations
of disk galaxies is to extract
the function $\ensuremath{\Vrot(r)}$, which should be a good approximation to the
circular orbital speed if all other coefficients on the right-hand side of
eq.~\ref{defVobs} are small.
With a single slit spectrum along the major axis of the galaxy, one
generally sets $V_{\rm obs} = \ensuremath{V_{\rm sys}} + \ensuremath{\Vrot(r)}\sin i$, implicitly
assuming all other terms to be negligible. In this case, the
inclination must be determined from other data (e.g.\ photometry).
Differences larger than measurement errors between the approaching and
receding sides flag the existence of non-circular motions, but
measurements along a single axis do not yield enough information to
determine any other coefficient. This and other uncertainties
inherent in such deductions are well-rehearsed
\citep{vbd01,dbb03,swaterscuspcore,rhee04,spekkens05,hayashi06}.
A two-dimensional velocity map, on the other hand, provides much more
information.
Software packages, such as {\it rotcur} \citep{begeman87}, allow one to
fit the velocity field with a single velocity function $\ensuremath{\Vrot(r)}$ in
a set of annuli whose centers, position angles (PAs) and inclinations are
allowed, if desired, to vary with radius. This package is ideal for
the purpose for which it was designed: to determine the mean orbital
speed even when the plane of the disk may be warped. It works well when
non-circular motions are small, but yields spurious variations of the
parameters when the underlying flow contains non-axisymmetric,
especially bisymmetric, distortions.
\citet{bs03} adopted a different approach. They assumed the plane of the disk to
be flat, and determined the rotation center, inclination, and PA by fitting a
non-parametric circular flow pattern to the entire velocity map. Their method
averages over velocity distortions caused by spiral arms, for example, but again
may yield spurious projection angles and mean orbital speeds if there is a
bar-like or oval distortion to the velocity field over a wide radial range.
The {\it rotcurshape} program, recently added to the NEMO \citep{teuben95} package,
suffers from the same drawback because it also assumes a flat, axisymmetric disk.
Furthermore, it fits multiple parametric components to a velocity field and thus has
less flexibility than the \citet{bs03} technique.
\citet{franx94} and \citet{schoen97} pioneered efforts to measure and
interpret the non-axisymmetric coefficients that describe an observed
velocity field, and expansions up to order $\ensuremath{m^{\prime}}\sim 3$ are now
routinely carried out \citep[e.g.][]{wong04,chemin06,s05,gentile06}.
Their
approach assumes departures from circular motion to be small so that
the radial and tangential perturbations for any sectoral harmonic can
be related through epicycle theory. The technique is therefore
appropriate only when all fitted coefficients are small and the mean
orbital speed is close to the circular orbital speed that balances the
azimuthally averaged central attraction. \citet{wong04}
present an extensive discussion of this technique and conclude
that it is difficult to work backwards from the
derived Fourier coefficients to distinguish
between different physical models.
\citet{swaters03}, \citetalias{s03} and \citet{gentile06} report velocity
fields for nearby galaxies that show non-circular motions whose
amplitude rivals the mean orbital speed at small $r$. Swaters et al.\ (2003b) note that their \ensuremath{\Vrot(r)}\ model fails to reproduce the inner disk kinematics of their target; they correct \ensuremath{\Vrot(r)}\ for an isotropic velocity dispersion of 8~km~s$^{-1}$, but do not attempt to model the isovelocity twists in their H$\alpha$ velocity field.
\citetalias{s03}
fit the simplest acceptable model to their data: an axisymmetric flow
with just two non-zero coefficients $\ensuremath{\Vrot(r)}$ and $\ensuremath{\Vrad(r)}$. They favor this model over a bar-like distortion partly because the galaxy is not
obviously barred, and partly because they find that the $\ensuremath{m^{\prime}} =3$
components are scarcely larger than the noise (see also \S\ref{n2976}). The addition of the
radial velocity term \ensuremath{\bar V_r}\ allows a more complicated flow pattern to be
fitted with an axisymmetric model, which significantly improves the
fit to their data. \citet{gentile06} do detect a radial $\ensuremath{m^{\prime}} = 3$
component in addition to a strong radial $\ensuremath{m^{\prime}} = 1$
term in the kinematics that they report, which they conclude ``are
consistent with an inner bar of several hundreds of pc and accretion of
material in the outer regions''. Despite finding large non-circular
motions, the authors
of all three studies nonetheless adopted their derived mean
orbital speed as the ``rotation curve'' of the galaxy, which they
assume results from centrifugal balance with the azimuthally averaged
mass distribution.
These deductions are suspect, however. As we show below
(\S\ref{discuss}.1), a bisymmetric distortion to the flow pattern may
not give rise to a large $\ensuremath{m^{\prime}}=3$ term in the velocity field, and the
smallness of these terms does not establish the
absence of a strong bisymmetric distortion. Further, associating the
$\ensuremath{\bar V_t}$ term with the rotation curve is valid only if all departures
from circular motion are small, yet they had found non-circular
velocity components almost as large as the mean orbital speed over a
significant radial range.
Early work on modeling gas flows in barred galaxies is reviewed by \citet[see their section 6.7]{sw93}.
\citet{weiner01}, \citet{kranz03}, and
\citet{perez04} attempt to build a self-consistent fluid-dynamical model
of the non-axisymmetric flow pattern. They
estimate the non-axisymmetric part of the mass distribution from
photometry and try to match the observed flow to hydrodynamic
simulations to determine the amplitude of the non-axisymmetric
components of the potential. The objective of this, altogether more
ambitious, approach is to determine the separate contributions of the
luminous and dark matter to the potential. Here, our objective
is more modest: to estimate the mean orbital speed from a velocity map
that may possibly be strongly non-axisymmetric. Thus their attempt to
separate the baryonic from dark matter contributions seems needlessly
laborious for our more limited purpose.
\section{A New Approach}
\label{technique}
\begin{figure}
\epsscale{1.1}
\plotone{f1_color.eps}
\caption{Parameter definitions and flow pattern in the disk plane for
the bisymmetric\ model (eq.~\ref{bieq}). The solid circle represents the
largest $r$ included in the model, and the horizontal dash-dotted line is the
major axis of the disk defined by the sky plane. The long-dashed line
is the major axis of the bisymmetric distortion, at angle \ensuremath{\phi_b}\ from
the major axis. The diamond denotes the location \ensuremath{(\xe,\ye)}\ of a datapoint \ensuremath{D_n},
a distance \ensuremath{r_n}\ from the kinematic center \ensuremath{(\xc,\yc)}\ and at {\rm PA} s \ensuremath{\theta_b}\
from the bisymmetric
distortion axis and \ensuremath{\theta}\ from the major axis. The dotted circle shows
the circular orbit of radius \ensuremath{r_n}\ in the disk, and the axisymmetric
model component $\ensuremath{\bar V_t}(\ensuremath{r_n})$ is in the counter-clockwise direction.
The extrema of components $\ensuremath{V_{2,t}}(\ensuremath{r_n})$ and $\ensuremath{V_{2,r}}(\ensuremath{r_n})$ are indicated
by solid and dashed arrows, respectively, and large dots at the same distance from \ensuremath{(\xc,\yc)}\ as each set of arrows denote {\rm PA} s where the corresponding component passes through zero.}
\label{setup}
\end{figure}
\subsection{A bar-like distortion}
\label{bisymm}
Our objective is to model non-circular motions in a 2-D velocity map.
Since we do not wish to assume that non-circular motions are small, we
refrain from adopting the epicycle approximation. However, we do make
the following assumptions:
\begin{itemize}
\item The non-circular motions in the flow stem from a bar-like or
oval distortion to an axisymmetric potential. We suppose these
motions to be caused by either a non-axially symmetric halo
in the dark matter or
by a bar in the mass distribution of the baryons.
\item A strong bisymmetric
distortion to the potential, even one that is exactly described by a
$\cos(2\theta)$ angular dependence, can give rise to more complicated
motions of the stars and gas \citep[e.g.][]{sw93}. In particular, the flow may contain
higher even harmonics. Nevertheless, the $m=2$ terms will always be
the largest, and we therefore begin by neglecting higher harmonics.
\item We assume the bar-like distortion drives non-circular motions
about a fixed axis in the disk plane.
In a steady bar-like flow, the perturbed parts of
the azimuthal and radial velocities must be exactly out of phase with
each other. That is, the azimuthal streaming speed is smallest on
the bar major axis and greatest on its minor axis, while radial motions
are zero in these directions and peak, with alternating signs, at
intermediate angles \citep{sw93}.
\item We must assume the disk to be flat, because we require the
predicted $V_{\rm obs}$ from eq.~\ref{defVobs} to have the same
inclination at all $r$. This assumption is appropriate for spiral
galaxy velocity fields measured within the optical radius, where warps are
rare \citep[e.g.][]{briggs90}, and is therefore well-suited to
interpreting kinematics derived from {H}{$\alpha$}, CO or stellar spectroscopy.
The technique presented here should therefore not be applied to the
outer parts of \ion{H}{1}\ velocity fields, which typically extend well
into the warp region \citep[e.g.][]{broeils97}.
\end{itemize}
A model based on these assumptions predicts the observed velocity at
some general point in the map to be given by eq.~\ref{defVobs}, with
the $m=2$ terms as the only non-axisymmetric terms:
\begin{eqnarray}
\nonumber
\ensuremath{V_{\rm model}} & = & \ensuremath{V_{\rm sys}}\ + \ensuremath{\sin{i}} \,\,\left[\, \ensuremath{\bar V_t}\cos{\ensuremath{\theta}}
- \; \ensuremath{V_{2,t}}\cos(2 \ensuremath{\theta_b}) \, \cos{\ensuremath{\theta}} \right. \\
& & \left . - \,\ensuremath{V_{2,r}}\sin( 2 \ensuremath{\theta_b}) \sin{\ensuremath{\theta}} \,\right] \; .
\label{bieq}
\end{eqnarray}
The geometry in the disk plane is sketched in Fig.~\ref{setup}. As
above, \ensuremath{\theta}\ is the angle in the disk plane relative to the projected
major axis, which is marked by the horizontal dash-dotted line. The
major axis of the bisymmetric distortion (or bar for short) lies at
angle \ensuremath{\phi_b}\ to the projected major axis; thus angles relative to the
bar axis are $\ensuremath{\theta_b} = \ensuremath{\theta} - \ensuremath{\phi_b}$. We have chosen the phases of the
\ensuremath{V_{2,t}}\ and \ensuremath{V_{2,r}}\ terms such that both are negative at $\ensuremath{\theta_b}=0$ and
they vary with angle to the bar as the cosine and sine of $2\ensuremath{\theta_b}$
respectively. Comparing eqs.~\ref{defVobs} and \ref{bieq}, we see that
$\theta_{2,t} = \pi - 2\ensuremath{\phi_b}$ and $\theta_{2,r}=\pi/2 - 2\ensuremath{\phi_b}$.
The amplitudes of the tangential and radial components of the
non-circular flow, $\ensuremath{\Vbit(r)}$ and $\ensuremath{\Vbir(r)}$ respectively, are both
functions of radius. While it is possible to relate the separate
amplitudes for a given potential, we do not attempt to model the mass
distribution that creates the flow and therefore allow them both to
vary independently.
Circles in the disk plane project to ellipses on the sky, with
ellipticity \ensuremath{\epsilon_d}\ given by $1 - \ensuremath{\epsilon_d} = \cos i$, with a common
kinematic center, \ensuremath{(\xc,\yc)}. We use primes to denote projected angles onto
the sky plane. Thus the projected {\rm PA}\footnote{All {\rm PA} s
are measured North $\rightarrow$ East.} of the disk major axis
is \ensuremath{\phi_d^{\prime}}, while \ensuremath{\phi_b^{\prime}}\ is the {\rm PA}\ of the bar major axis in the sky
plane. These angles are related by
\begin{equation}
\ensuremath{\phi_b^{\prime}} = \ensuremath{\phi_d^{\prime}} + {\rm arctan}(\tan \ensuremath{\phi_b} \cos i) \;.
\label{barmajeq}
\end{equation}
In addition to the three velocity functions \ensuremath{\Vrot(r)}, \ensuremath{\Vbit(r)}\ and \ensuremath{\Vbir(r)},
the model is therefore
described by the parameters $(\ensuremath{x_c},\, \ensuremath{y_c},\, \ensuremath{V_{\rm sys}},\, \ensuremath{\epsilon_d},\, \ensuremath{\phi_d^{\prime}},\,
\ensuremath{\phi_b^{\prime}})$. We refer to the model described by eq.~\ref{bieq} as the
bisymmetric\ model.
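As an illustration, eqs.~\ref{bieq} and \ref{barmajeq} may be evaluated as in the sketch below (angles in radians, measured in the disk plane from the projected major axis; the function names are ours, not part of any released code):

```python
import numpy as np

def v_model_bisym(theta, phi_b, inc, vbar_t, v2t, v2r, vsys=0.0):
    """Observed velocity predicted by the bisymmetric model.

    theta and phi_b are disk-plane angles (radians) measured from the
    projected major axis; inc is the inclination."""
    theta_b = theta - phi_b  # angle relative to the bar major axis
    return vsys + np.sin(inc) * (
        vbar_t * np.cos(theta)
        - v2t * np.cos(2.0 * theta_b) * np.cos(theta)
        - v2r * np.sin(2.0 * theta_b) * np.sin(theta))

def bar_pa_sky(phi_d_sky, phi_b, inc):
    """Projected position angle of the bar major axis on the sky."""
    return phi_d_sky + np.arctan(np.tan(phi_b) * np.cos(inc))
```

On the bar major axis ($\ensuremath{\theta_b}=0$) the radial term vanishes and the tangential streaming is reduced by the full amplitude \ensuremath{V_{2,t}}, as described above.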
\subsection{Other possible models}
\label{other}
Other models for the flow pattern could readily be derived from
eq.~\ref{defVobs}.
In particular, and solely to facilitate comparison with other work, we also fit a purely axisymmetric model with the coefficients of all
$m>0$ terms set to zero, but retain the $\ensuremath{\bar V_r}$ term. There is no
undetermined phase angle for this intrinsically axisymmetric model and
the predicted velocity is simply
\begin{equation}
\ensuremath{V_{\rm model}} = \ensuremath{V_{\rm sys}} + \ensuremath{\sin{i}} \; \left[ \, \ensuremath{\bar V_t} \cos{\ensuremath{\theta}}
+ \ensuremath{\bar V_r}\sin{\ensuremath{\theta}} \, \right] \;.
\label{radeq}
\end{equation}
The coefficient $\ensuremath{\bar V_r}$ corresponds to pure radial inflow or
outflow.\footnote{It is not possible to distinguish between inflow and
outflow in this model unless the side of the disk along the minor axis
that is nearest to the observer can be determined independently.} We
will refer to this as the radial\ model.
Other, more complicated, models could also be fitted to data by
retaining more terms as required, provided that an assumption is
made about the radial dependence of the phases of the non-axisymmetric
perturbations. The extension of these formulae to include other
velocity field harmonics is straightforward, and we have tried doing
so in some of our analyses (see \S\ref{n2976}).
\subsection{Discussion}
\label{tech_discuss}
If the non-circular motions measured in some spirals
do stem from bar-like or oval distortions,
then the bisymmetric\ model has several advantages over both the radial\
model and also over epicyclic approaches for characterizing these
asymmetries. Since $\ensuremath{m^{\prime}}=1$ velocity field components can arise
from either radial flows or a bisymmetric perturbation to the
potential (eq.~\ref{defVobs2}),
both the bisymmetric\ and radial\ models could produce
tolerable fits to the same data. However, the bisymmetric\ model offers a
more direct, unambiguous approach for identifying $m=2$ distortions
than does the radial\ model.
Moreover, interpretations of velocity field harmonics that rely on
epicycle theory \citep{franx94,schoen97,ca97} are applicable only in
the limit of a weak perturbation to the potential, whereas the
components of our bisymmetric\ model are not restricted to mild
distortions. We also note that since the bisymmetric\ model imposes a
fixed \ensuremath{\phi_b^{\prime}}\ on the non-circular flow pattern, it is not sensitive to
$m=2$ perturbations to the potential that are not in phase (such as
spiral patterns).
Finally, the bisymmetric\ technique is much simpler than fluid-dynamical
modeling of the velocity field (see \S\ref{previous}), since it does not
require (or yield) a model for the mass distribution.
\subsection{Fitting technique}
\label{minimization}
We attempt to fit the above kinematic models to observational data by
an extension of the minimization procedure devised by \citet{bs03}.
In general, we need to determine the systemic velocity \ensuremath{V_{\rm sys}},
kinematic center \ensuremath{(\xc,\yc)}, ellipticity \ensuremath{\epsilon_d}, and
disk {\rm PA}\ \ensuremath{\phi_d^{\prime}}, as well as $M$ unknown radial functions $V_{m,t}$ and $V_{m,r}$
($m=0$ and $m>0$ if desired) and the (fixed) {\rm PA}(s),
$\theta_m$, of any non-axisymmetric distortions to the flow.
We tabulate each of the $M$ independent velocity profiles at a set of
concentric circular rings in the disk plane that project to ellipses on the sky
with a common center, \ensuremath{(\xc,\yc)}. Once these tabulated values are
determined, we can construct a predicted \ensuremath{V_{\rm model}}\ at any general point
by interpolation. We difference our model from the data, which
consist of $N$ line-of-sight velocity measurements $\{\ensuremath{D_n}\}$ with
uncertainties $\{\ensuremath{\sigma_n}\}$, and adjust the model parameters to determine
the minimum \ensuremath{\chi_{\rm r,min}^2}\ of the standard goodness-of-fit function \ensuremath{\chi_{\rm r}^2}\
with \ensuremath{\nu}\ degrees of freedom:
\begin{equation}
\ensuremath{\chi_{\rm r}^2}=\frac{1}{\nu}\sum_{n=1}^N \left( \frac{\ensuremath{D_n} - \ensuremath{\sum_{k=1}^K \wk \vk}}{\ensuremath{\sigma_n}
} \right)^2 \; .
\label{chieq}
\end{equation}
Here, the $K$ elements of $\{\ensuremath{V_k}\}$ are the values of the tabulated
velocity profiles in the model, and the weights, $w_{k,n}$, describe the
interpolation from the tabulated $\ensuremath{V_k}$ to $\ensuremath{V_{\rm model}}$ (eq.~\ref{bieq} or
\ref{radeq}) at the position of the observed value \ensuremath{D_n}.
When $\ensuremath{\chi_{\rm r}^2}=\ensuremath{\chi_{\rm r,min}^2}$, the partial gradient of $\ensuremath{\chi_{\rm r}^2}$ with respect
to each $V_j$, where $j$ labels each of the \ensuremath{V_k}\ in turn, must satisfy
\begin{equation}
{\partial \ensuremath{\chi_{\rm r}^2} \over \partial V_j} = -\frac{2}{\nu}\sum_{n=1}^N
\left( \frac{\ensuremath{D_n} - \ensuremath{\sum_{k=1}^K \wk \vk}}{\ensuremath{\sigma_n} } \right) \frac{w_{j,n}}{\ensuremath{\sigma_n}} = 0 \;.
\end{equation}
Rearranging, we find
\begin{equation}
\sum_{k=1}^K \left( \sum_{n=1}^N \frac{\ensuremath{w_{k,n}}}{\ensuremath{\sigma_n} }\,\frac{\ensuremath{w_{j,n}}}{\ensuremath{\sigma_n} }
\right) \ensuremath{V_k} = \sum_{n=1}^N \frac{\ensuremath{w_{j,n}}}{\ensuremath{\sigma_n} ^2}{\ensuremath{D_n} } \;,
\label{minchieq}
\end{equation}
resulting in a linear system of $K$ equations for the $K$ unknowns, $\{\ensuremath{V_k}\}$.
For a given set of attributes $(\ensuremath{x_c},\, \ensuremath{y_c},\, \ensuremath{V_{\rm sys}},\, \ensuremath{\epsilon_d},\,
\ensuremath{\phi_d^{\prime}},\, \theta_m )$ of the projected disk, we compute $\{\ensuremath{V_k}\}$ by
solving the linear system of eq.~\ref{minchieq}, and use the
resulting $\{\ensuremath{V_k}\}$ values in eq.~\ref{chieq} to evaluate \ensuremath{\chi_{\rm r}^2}.
The best fitting model is found by minimizing
eq.~\ref{chieq} over the parameters mentioned above, which
necessitates recomputing $\{\ensuremath{V_k}\}$ via eq.~\ref{minchieq} at each
iteration. Any convenient method may be used to search for the
minimum; we use Powell's direction set method \citep{p92}.
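Because \ensuremath{V_{\rm model}}\ is linear in the tabulated velocities, the inner step of the minimization reduces to one weighted linear solve per trial set of disk parameters. A minimal Python sketch of this step follows; the function and variable names are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def solve_profiles(W, D, sigma, nu):
    """Solve the K x K linear system of eq. (minchieq) for the tabulated
    velocities {V_k}, given an N x K interpolation-weight matrix W, data
    D, and uncertainties sigma; returns {V_k} and the reduced chi^2.
    Names and structure are illustrative only."""
    Ws = W / sigma[:, None]           # rows are w_{k,n} / sigma_n
    A = Ws.T @ Ws                     # sum_n w_{k,n} w_{j,n} / sigma_n^2
    b = W.T @ (D / sigma**2)          # sum_n w_{j,n} D_n / sigma_n^2
    V = np.linalg.solve(A, b)         # tabulated velocities {V_k}
    chi2_r = np.sum(((D - W @ V) / sigma) ** 2) / nu
    return V, chi2_r
```

The outer search (e.g. Powell's direction set method) then varies the disk parameters and repeats this solve at every iteration.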
\citet{bs03} use this minimization strategy to extract only the mean
orbital speed $\ensuremath{\Vrot(r)}$ from {H}{$\alpha$}\ velocity fields of spirals in the
\citet{pw00} sample. In our more general case, the
$M>1$ model profiles are defined by distinct sets of $K'_M$ rings in
the disk plane, and \ensuremath{ \{ \vk \}}\ contains all of the velocities from these
profiles: $K = \sum_{M} K'_M$. In other words, adding a velocity
profile (defined in $K'$ rings) to a model increases the rank of the
matrix in eq.~\ref{minchieq} by $K'$. The radial\
model has $M=2$, while $M=3$ for the bisymmetric\ model. Further
discussion of \ensuremath{ \{ \vk \}}\ and derivations of \ensuremath{ \{ \wk \}}\ are given in the
Appendix.
\section{Velocity Field Models of NGC~2976}
\label{n2976}
\begin{figure*}
\epsscale{0.9}
\plotone{f2.eps}
\caption{Kinematic models of the NGC~2976\ velocity field. The panels in
the top row show ({\it a}) the observed velocity field $\{\ensuremath{D_n}\}$ from
\citetalias{s03}, ({\it b}) the optimal radial\ and ($c$) the optimal
bisymmetric\ models. The velocity fields are plotted on the same
colorscale, shown in \ensuremath{\rm{km\,s^{-1}}}\ to the right of the top row. The panels in the
bottom row show residual maps $\{\ensuremath{\Delta V_n}\} = \{\ensuremath{D_n}\ - \ensuremath{\sum_{k=1}^K \wk \vk}\}$ for ($d$)
the radial\ and ($e$) the bisymmetric\ models, with the colorscale in
\ensuremath{\rm{km\,s^{-1}}}\ for both shown to the right of that row. The model
velocity fields and residuals have been rotated by $-(\ensuremath{\phi_d^{\prime}}\ + \pi/2)$
about \ensuremath{(\xc,\yc)}\ from Table~\ref{fits}. The data in \ref{models}{\it a}
are rotated by
the photometric value $-(-37\degr +\pi/2)$ (intermediate to the two
model values) about the photometric center $09^\mathrm{h}\,
47^\mathrm{m}\, 15\fs3$, $67\degr\, 55\arcmin\,00\farcs4$
\citepalias{s03}. The orientation of $\{\ensuremath{D_n}\}$ (roughly correct for
the models as well) and the map scale are at the bottom left. The
unusual shape of contoured regions in the maps reflects the locations
of the individual pointings used to construct the {H}{$\alpha$}\ velocity field
of NGC~2976\ \citepalias{s03}.}
\label{models}
\end{figure*}
To illustrate the technique, we fit our bisymmetric\ model to the observed
high-quality velocity field of NGC~2976\ reported by \citetalias{s03}. NGC~2976\ is a nearby, low-mass Sc galaxy with $i \sim
60\degr$. We adopt a distance $\ensuremath{D}=3.56\;$Mpc, estimated from the tip
of the red giant branch \citep{kar02}, and convert angular scales to
linear scales using $1\arcsec = 17.3\;$pc.
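The angular-to-linear conversion follows directly from the adopted distance; as a purely illustrative check:

```python
import math

# 1 arcsec at D = 3.56 Mpc subtends D * (1 arcsec in radians) parsecs.
D_pc = 3.56e6                                    # adopted distance in pc
pc_per_arcsec = D_pc * math.pi / (180.0 * 3600.0)
# pc_per_arcsec comes out near 17.3 pc, the scale quoted in the text
```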
\citetalias{s03} present {H}{$\alpha$}\ and CO velocity fields of NGC~2976, with a
spatial resolution\footnote{Throughout, we recompute the linear scales
presented by \citetalias{s03} for consistency with our choice of $D$.}
of $\sim5\arcsec$ ($86\,$pc) and spectral resolutions of $13\,\ensuremath{\rm{km\,s^{-1}}}$
and $2\,\ensuremath{\rm{km\,s^{-1}}}$, respectively. They find that the velocity field is not
well-modeled by disk rotation alone.
They report a detailed analysis of these kinematic data in
which the projection geometry of their model rings is determined from
optical and near-IR photometry. They conclude that there is no strong
evidence for a bisymmetric distortion in this galaxy, since all
$\ensuremath{m^{\prime}} >1$ components of the velocity field are consistent with
noise. They find that a combination of rotation and pure radial flows
provides an adequate fit. The amplitude of the inferred radial
velocity profile rivals that of the rotational component for $r
\lesssim 500\,$pc: NGC~2976\ thus exhibits some of the largest
non-circular motions ever detected in a low-mass,
rotationally-supported system. In their later paper, \citet{s05} noted
that finding large values of the \ensuremath{\bar V_r}\ term is a
strong indication that a model with an axisymmetric radial flow is
incorrect. They suggest that the non-circular motions in NGC~2976\ stem
from a triaxial
halo, but their use of epicycle theory relations \citep{schoen97} is inappropriate because
\ensuremath{\Vrad(r)}\ is not always small (see their fig.~9).
We also suspect that a bisymmetric distortion is
responsible for the observed departures from a circular flow pattern.
The {H}{$\alpha$}\ and CO velocity fields of NGC~2976\ presented in fig.~4 of
\citetalias{s03} were kindly made available to us by J. D. Simon. Following
these authors, we analyse the kinematics of the two tracers together,
since the data agree within their uncertainties.
We fit the combined velocity field with our bisymmetric\ model to examine
whether the departures from circular motion detected by
\citetalias{s03} stem from an $m=2$ distortion to the potential. In
order to demonstrate that our new technique (\S\ref{technique}) yields
a similar kinematic model to the one obtained by
\citetalias{s03}, we apply our radial\ model to the same dataset, and
compare values of the parameters we obtain as a consistency check on
our method. For completeness, we also attempt to fit the data with
a suite of other models including $m=0$, $m=1$ and $m=2$ distortions
(see below).
The observations presented by \citetalias{s03} sample the velocity
field of NGC~2976\ out to $r \sim 130\arcsec$ ($2.2\,$kpc) from its
photometric center. We evaluate the velocity profiles in a maximum of
$K' = 26$ rings, separated by 4\arcsec\ for $r<95\arcsec$ and by up to
10\arcsec\ farther out. Neither the bisymmetric\ model nor the radial\
model yielded reliable constraints on the non-circular components of
the velocity field for $r>100\arcsec$. We therefore conclude that the
outer part of NGC~2976\ is adequately described by a simple circular flow, and fix
the amplitudes of all coefficients but \ensuremath{\bar V_t}\ to zero beyond that
radius. This reduces the rank of the matrix (eq.~\ref{minchieq}) by a
few.
To check the validity of our planar disk assumption at the largest
radii probed by the measurements, we compare the disk parameters
derived from fits including different numbers of outer rings.
Specifically, each minimization uses the same ring radii, except that
the outermost ring included is varied in the range $80\arcsec < \ensuremath{r_{\rm max}}\
< 135\arcsec$ and velocity measurements at radii beyond $\ensuremath{r_{\rm max}}$ are
ignored in the fit. Models with $\ensuremath{r_{\rm max}} \lesssim 112\arcsec$ return
identical disk parameters within the uncertainties, but the optimal
values of \ensuremath{x_c}, \ensuremath{y_c}\ and \ensuremath{V_{\rm sys}}\ change substantially when rings at
larger $r$ are added. We therefore restrict our fits to include only
\ensuremath{D_n}\ with $\ensuremath{r_n} < 112\arcsec$ in our final models, as the disk may be
warped farther out.\footnote{The disk geometry and kinematics of NGC~2976\
at $r \gtrsim 1.5\,$kpc will be explored in detail using extant
aperture synthesis \ion{H}{1}\ maps of the system.}
We make an allowance for ISM turbulence by redefining $\{\ensuremath{\sigma_n}\}$ to be
the sum in quadrature of the uncertainties in the emission line
centroids and a contribution $\ensuremath{\Delta_{\rm ISM}}=5\,\ensuremath{\rm{km\,s^{-1}}}$. We find that choosing
values of \ensuremath{\Delta_{\rm ISM}}\ in the range $3\,\ensuremath{\rm{km\,s^{-1}}} \lesssim \ensuremath{\Delta_{\rm ISM}} \lesssim
7\,\ensuremath{\rm{km\,s^{-1}}}$ and varying the ring locations and sizes by 2--4\arcsec\ have
little impact on our results.
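The redefined uncertainties are a simple quadrature sum of the centroiding errors and the assumed turbulence term; an illustrative sketch (names ours):

```python
import numpy as np

def effective_sigma(sigma_centroid, delta_ism=5.0):
    """Quadrature sum of emission-line centroid uncertainties and an
    assumed ISM turbulence contribution Delta_ISM, both in km/s."""
    return np.sqrt(np.asarray(sigma_centroid, dtype=float) ** 2
                   + delta_ism ** 2)
```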
In addition to the bisymmetric\ and radial\ models of NGC~2976,
we also fitted models including a lopsided ($m=1$)
distortion.
The optimal $m=1$ model (including velocity profiles \ensuremath{\Vrot(r)}, $V_{1,t}(r)$ and $V_{1,r}(r)$; see eq.~\ref{defVobs}) produced a much less satisfactory fit to the
data than either the bisymmetric\ or the radial\ model. Adding a radial flow
term \ensuremath{\Vrad(r)}\ yielded optimal parameters identical to those of
the radial\ model, with the $m=1$ components consistent with zero.
The insignificance of a lopsided component, which we
conclude from these fits, is consistent with our result below that
the kinematic and photometric centers of NGC~2976\ are coincident within the
errors (see also \citetalias{s03}). We also attempted to fit
$m=0$ and $m=2$ distortions to the data simultaneously. Since both cause
$\ensuremath{m^{\prime}} = 1$ periodicities in the line-of-sight velocities, however,
the resulting model
had too much freedom and produced unphysically large variations in
all the velocity profiles.
\subsection{Uncertainties}
\label{uncert}
The curvature of the \ensuremath{\chi_{\rm r}^2}\ surface at the minimum implies small
formal statistical errors on the best fitting model parameters because
of the large number of data values. The $\ensuremath{\chi_{\rm r,min}^2} + 1.0/\ensuremath{\nu}$ contour
on the \ensuremath{\chi_{\rm r}^2}\ surface corresponds to variations $\ensuremath{\delta V} < 1\,\ensuremath{\rm{km\,s^{-1}}}$ on
the velocity profile points, which we regard as unrealistically small.
We therefore use a bootstrap technique to derive more reasonable
estimates of the scatter in the model parameters about their optimal
values.
For each model, we generate a bootstrap sample of the data
by adding randomly drawn
residuals $\ensuremath{\Delta V_n}\ = \ensuremath{D_n}\ - \ensuremath{\sum_{k=1}^K \wk \vk}$ from the distribution at
$\ensuremath{\chi_{\rm r}^2}=\ensuremath{\chi_{\rm r,min}^2}$ to the optimal model velocity field. Since $\{ \ensuremath{\Delta V_n}\
\}$ is correlated over a characteristic scale corresponding to $J$
adjacent datapoints, fully random selections do not reproduce the
quasi-coherent residuals we observe in the data. We therefore select
$P=N/J$ values of \ensuremath{\Delta V_n}\ and add them to the model at $P$ random
locations drawn from $\{\ensuremath{r_n},\ensuremath{\theta_n}\}$; residuals at the remaining
$(1-1/J)N$ locations in $\{\ensuremath{r_n},\ensuremath{\theta_n}\}$ are fixed to the value of the nearest
randomly drawn \ensuremath{\Delta V_n}. We find that $J=4$ produces bootstrap residual
maps with features on scales similar to those in $\{ \ensuremath{\Delta V_n}\ \}$ for the
models of NGC~2976\ in Fig.~\ref{models} (see below), but that there is
little change in the derived uncertainties for $2 \leq J \leq 5$.
For both the bisymmetric\ and radial\ models, we therefore construct
bootstrap samples of the observed velocity field using $J=4$. We
repeat the minimization for each sample, substituting the bootstrap
velocities for $\{\ensuremath{D_n}\}$ in eqs.~\ref{chieq} and \ref{minchieq}. We
carry out this procedure 1000 times, and adopt the standard deviation
of each parameter about its mean value from all the realizations as
its $1\sigma$ uncertainty in the model of the measured velocities
$\{\ensuremath{D_n}\}$.
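The block-correlated resampling described above can be sketched as follows; this is our reading of the procedure, with illustrative names, and the actual implementation may differ:

```python
import numpy as np

def correlated_bootstrap(model, resid, x, y, J=4, rng=None):
    """Generate one bootstrap realization of the velocity field: add
    randomly drawn residuals to the model at P = N/J random positions,
    and fix every other point to the residual of its nearest drawn
    position, preserving the ~J-point coherence of the residual map."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(resid)
    P = max(1, N // J)
    seeds = rng.choice(N, size=P, replace=False)   # drawn positions
    vals = rng.choice(resid, size=P, replace=True) # drawn residuals
    # assign each point the residual of its nearest seed position
    d2 = (x[:, None] - x[seeds]) ** 2 + (y[:, None] - y[seeds]) ** 2
    nearest = np.argmin(d2, axis=1)
    return model + vals[nearest]
```

Repeating such realizations 1000 times and refitting each one yields the parameter scatter adopted as the $1\sigma$ uncertainties.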
\begin{figure*}
\epsscale{0.9}
\plotone{f3.eps}
\caption{Fitted velocity components for NGC~2976. ({\it a}) The
components of the optimal radial\ model: \ensuremath{\Vrot(r)}\ is shown by the red
circles and \ensuremath{\Vrad(r)}\ is shown by the blue squares (eq.~\ref{radeq}).
({\it b}) Velocity components from the optimal bisymmetric\ model: here
\ensuremath{\Vrot(r)}\ is shown by the red circles, \ensuremath{\Vbit(r)}\ by the green squares, and
\ensuremath{\Vbir(r)}\ by the blue triangles (eq.~\ref{bieq}).}
\label{rcs}
\end{figure*}
\subsection{Results}
\label{results}
Our final models of the \citetalias{s03} {H}{$\alpha$}\ and CO velocity fields
for NGC~2976\ are shown in Fig.~\ref{models}. The minimization results
are given in Table~\ref{fits}, and the corresponding velocity profiles
are shown in Fig.~\ref{rcs}.
The observed velocity field from \citetalias{s03} is reproduced in
Fig.~\ref{models}{\it a}, the best fitting radial\ and bisymmetric\ models
are in Figs.~\ref{models}{\it b} and \ref{models}{\it c}, and the
residuals $\{\ensuremath{\Delta V_n}\}$ are in Figs.~\ref{models}{\it d} and
\ref{models}{\it e}. Both models reproduce the gross features of the
observed velocity field, although the bisymmetric\ model exhibits a somewhat
larger isovelocity contour ``twist'' along the kinematic minor axis
(oriented vertically in Fig.~\ref{models}) than the radial\
model. The residual patterns in Figs.~\ref{models}{\it d} and
\ref{models}{\it e} are very similar: \ensuremath{\Delta V_n}\ is correlated on scales of
$15-20\arcsec$ ($250-350\,$pc) in the maps, which may reflect
large-scale turbulence. The mean values \ensuremath{\langle|\res|\rangle}\ in Table~\ref{fits}
are slightly lower for the bisymmetric\ model than for the radial\ one, as
is also suggested by the colors in Figs.~\ref{models}{\it d} and
\ref{models}{\it e}.
The values of \ensuremath{(\xc,\yc)}\ and \ensuremath{V_{\rm sys}}\ in the two models (Table~\ref{fits})
are identical within their uncertainties, while the radial\ model
favors a larger \ensuremath{\epsilon_d}\ and \ensuremath{\phi_d^{\prime}}\ than the bisymmetric\ model at
the 2$\sigma$ level. Both sets of kinematic parameters $\left(
\ensuremath{x_c},\,\ensuremath{y_c},\,\ensuremath{\epsilon_d},\,\ensuremath{\phi_d^{\prime}} \right)$ are consistent with the photometric
values derived by \citetalias{s03}, corroborating their conclusion
that there is little evidence for an offset between them (see also \S\ref{n2976}). The values
of \ensuremath{\chi_{\rm r,min}^2}\ indicate that both models adequately describe $\{\ensuremath{D_n}\}$,
with $\chi^2 \sim 1$ per degree of freedom for the adopted
\ensuremath{\Delta_{\rm ISM}}.\footnote{The optimal model parameters remain unchanged with
choices of \ensuremath{\Delta_{\rm ISM}}\ in the range $3\,\ensuremath{\rm{km\,s^{-1}}} \lesssim \ensuremath{\Delta_{\rm ISM}} \lesssim
7\,\ensuremath{\rm{km\,s^{-1}}}$, but \ensuremath{\chi_{\rm r,min}^2}\ of the corresponding fits varies from $\sim 2.7$
down to $\sim 0.9$ over that range.} Even though the bisymmetric\ model has
fewer \ensuremath{\nu}\ than the radial\ model, the difference between their
\ensuremath{\chi_{\rm r,min}^2}\ is formally significant at the $12\sigma$ level. As with the
unrealistically small model uncertainties implied by the $\ensuremath{\chi_{\rm r,min}^2} +
1.0/\ensuremath{\nu}$ contour on the \ensuremath{\chi_{\rm r}^2}\ surface, however, a literal
interpretation of this difference in goodness-of-fit is unwise. We thus
conclude conservatively that the smaller \ensuremath{\chi_{\rm r,min}^2}\ and lower \ensuremath{\langle|\res|\rangle}\
of the bisymmetric\ over the radial\ model imply only a marginally
superior statistical fit to the data.
\begin{figure}
\epsscale{1.1}
\plotone{f4.eps}
\caption{Difference $\ensuremath{\Vbit(r)}\ - \ensuremath{\Vbir(r)}$ in the optimal bisymmetric\
model. The components are plotted separately in Fig.~\ref{rcs}{\it
b}.}
\label{bicomp}
\end{figure}
\begin{figure}
\epsscale{1.1}
\plotone{f5.eps}
\caption{Difference \ensuremath{\Delta \Vrot(r)}\ between the optimal \ensuremath{\Vrot(r)}\ from the
bisymmetric\ model and the optimal \ensuremath{\Vrot(r)}\ from the radial\ model. The
components are plotted separately in Fig.~\ref{rcs}.}
\label{rccomp}
\end{figure}
The best fitting velocity field components from our radial\ model are
shown in Fig.~\ref{rcs}{\it a}. Despite significant differences
between our minimization technique and that of \citetalias{s03}, our
measurements of \ensuremath{\Vrot(r)}\ and \ensuremath{\Vrad(r)}\ agree well with their
results (the large \ensuremath{\Vrot(r)}\ for $r \lesssim 10\arcsec$ in our
radial\ model was also found by \citetalias{s03} in their {\it ringfit}
velocity field decompositions; see their fig.~7a). We find that \ensuremath{\Vrad(r)}\ is $\sim 7\,\ensuremath{\rm{km\,s^{-1}}}$ smaller at $r<
30\arcsec$ than the radial velocity amplitudes presented by
\citetalias{s03}. This $\sim2\sigma$ discrepancy results from our
inclusion of a \ensuremath{\Delta_{\rm ISM}}\ term in $\{\ensuremath{\sigma_n}\}$ (eqs.~\ref{chieq} --
\ref{minchieq}), and disappears if we set $\ensuremath{\Delta_{\rm ISM}} = 0$ in our model.
Thus our analysis confirms the non-circular motions in NGC~2976\ found by
these authors.
The bisymmetric\ model favors a strongly non-axisymmetric flow about an
axis inclined $17\degr$ to the projected major axis in the disk plane. The radial
variations and uncertainties of all three fitted velocity components
are shown in Fig.~\ref{rcs}{\it b}. The estimated uncertainties on
the $\{\ensuremath{V_k}\}$ are larger than those in the radial\ model
(Fig.~\ref{rcs}{\it a}), consistent with the larger scatter in the
profile values from ring to ring. This is likely due to the extra
velocity profile relative to the radial\ model, which gives the
bisymmetric\ model increased flexibility to fit small-scale features. As
in the radial\ model, we find significant non-circular motions, this
time in the form of a bisymmetric flow pattern in the disk plane. The
overall shape of the non-circular contributions \ensuremath{\Vbit(r)}\ and \ensuremath{\Vbir(r)}\
resembles that of \ensuremath{\Vrad(r)}\ in the radial\ model: this is reasonable
because both models must fit the $\ensuremath{m^{\prime}}=1$ variations of the velocity
field. The difference $\ensuremath{\Vbit(r)}\ - \ensuremath{\Vbir(r)}$ between the bisymmetric
components in Fig.~\ref{rcs} is plotted in Fig.~\ref{bicomp}. There is
marginal evidence that $\ensuremath{\Vbit(r)} > \ensuremath{\Vbir(r)}$ for $45\arcsec
\lesssim r \lesssim 65\arcsec$, but elsewhere the two components have
very similar amplitudes. Linear theory applied to a weak, stationary
bar-like distortion produces $\ensuremath{\Vbit(r)} = \ensuremath{\Vbir(r)}$ for a solid-body
rotation velocity profile and $\ensuremath{\Vbit(r)} = 1.5\ensuremath{\Vbir(r)}$ for a flat one
\citep{sw93}. Although linear theory cannot be trusted for strong
perturbations, it is somewhat reassuring that it predicts similar
\ensuremath{\Vbit(r)}\ and \ensuremath{\Vbir(r)}\ for a rising \ensuremath{\Vrot(r)}.
The most significant difference between the optimal radial\ and
bisymmetric\ models is in the shape of \ensuremath{\Vrot(r)}. Beyond the region affected
by non-circular motions, $r \gtrsim 80\arcsec$, \ensuremath{\Vrot(r)}\ is identical
in the two models, as it must be, but large differences arise where
non-circular motions are large. Fig.~\ref{rccomp} shows the
difference between \ensuremath{\Vrot(r)}\ from the bisymmetric\ model and that from the
radial\ model: the former profile rises more steeply than the latter,
and its amplitude is larger by $\sim 15\,\ensuremath{\rm{km\,s^{-1}}}$ for $15\arcsec \lesssim
r \lesssim 50\arcsec$. We discuss the reason for these differences
in the next section.
\begin{deluxetable*}{lccccccccc}
\tablewidth{0pt}
\tablecaption{Minimization Results \label{fits}}
\tablehead{ \colhead{Model} & \colhead{\ensuremath{\epsilon_d}} & \colhead{\ensuremath{\phi_d^{\prime}}} & \colhead{\ensuremath{x_c}} & \colhead{\ensuremath{y_c}} & \colhead{\ensuremath{V_{\rm sys}}} & \colhead{\ensuremath{\phi_b^{\prime}}} & \colhead{\ensuremath{\chi_{\rm r,min}^2}} & \colhead{\ensuremath{\nu}} & \colhead{\ensuremath{\langle|\res|\rangle}} \\
& &\colhead{(\degr)}& \colhead{(\arcsec)}& \colhead{(\arcsec)}& \colhead{(\ensuremath{\rm{km\,s^{-1}}})} & \colhead{(\degr)} & & & \colhead{(\ensuremath{\rm{km\,s^{-1}}})} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} }
\startdata
radial\ & $0.568 \pm 0.007$ & $-36.0 \pm 0.6$ & $-1.7 \pm 0.3$ & $0.8 \pm 0.4$ & $0.3 \pm 0.4$ & \nodata & 1.35 & 1034 & 3.4 \\
bisymmetric & $0.556 \pm 0.007$ & $-37.6 \pm 0.6$ & $-1.9 \pm 0.3$ & $1.2 \pm 0.3$ & $0.7 \pm 0.4$ & $-45 \pm 4$ & 1.20 & 1009 & 3.1 \\
\enddata
\tablecomments{Col. (1): Model. Col. (2): Disk ellipticity. Col. (3): Disk PA, measured North $\rightarrow$ East to the receding side of the disk. Col. (4): Right ascension of disk center, relative to the photometric center $09^\mathrm{h}\,47^\mathrm{m}\,15\fs3$ \citepalias{s03}. Col. (5): Declination of disk center, relative to photometric center $67\degr\,55\arcmin\,00\farcs4$ \citepalias{s03}. Col. (6): Disk systemic velocity, heliocentric optical definition. Col. (7): Bisymmetric distortion PA, measured North $\rightarrow$ East. Col. (8): Minimum value of \ensuremath{\chi_{\rm r}^2}\ (eq.~\ref{chieq}) obtained. Col. (9): Number of degrees of freedom in the minimization. Col. (10): Amplitude of the average (data - model) residual.}
\end{deluxetable*}
\section{Discussion}
\label{discuss}
\subsection{Mean streaming speed}
\label{streaming}
The large differences in the fitted \ensuremath{\Vrot(r)}\ for the bisymmetric and
radial models over the
inner part of NGC~2976\ (Fig.~\ref{rccomp}) demand explanation. Fig.~\ref{comps} shows
sky-plane projections of the angular variations of the separate fitted
velocity components at $r=20\arcsec$ for both models.
Fig.~\ref{comps}{\it a} shows the projected \ensuremath{\Vrad(r)}\ of the radial\
model (dash-dotted line), which shifts the peak in the
projected model velocity (solid line) away from the kinematic
major axis ($\ensuremath{\theta} = 0$) and reproduces the iso-velocity ``twist'' in the
observed velocity field (Fig.~\ref{models}). Since the projected
\ensuremath{\bar V_r}\ must be zero at $\ensuremath{\theta}=0$ in this model (see eq.~\ref{radeq}),
the projected \ensuremath{\Vrot(r)}\ (dashed line) must equal \ensuremath{V_{\rm model}}\ along the
kinematic major axis.
Fig.~\ref{comps}{\it b} shows the corresponding case for the bisymmetric\
model where the non-axisymmetric terms allow \ensuremath{V_{\rm model}}\ to differ from
\ensuremath{\bar V_t}\ along the major axis (eq.~\ref{bieq}). The abscissae are
marked as \ensuremath{\theta}\ along the bottom and as \ensuremath{\theta_b}\ along the top, which
differ only slightly because the best fitting major axis of the
bisymmetric distortion in NGC~2976\ projects to a PA similar to the
kinematic major axis (i.e. $\ensuremath{\phi_b^{\prime}} \sim \ensuremath{\phi_d^{\prime}}$;
Table~\ref{fits}). The significant negative contribution of the
projected \ensuremath{\Vbit(r)}\ (dash-dotted line) to \ensuremath{V_{\rm model}}\ (solid line)
at $\ensuremath{\theta} = 0$ offsets the positive contribution from \ensuremath{\Vrot(r)}\ (dashed
line). The greater amplitude of \ensuremath{\Vrot(r)}\ in the bisymmetric\ model of
NGC~2976\ is therefore due to the large non-circular motions in the inner
parts that happen to be negative near the kinematic major axis because
the $m=2$ distortion is oriented close to this axis.
Notice also that the \ensuremath{V_{2,t}}\ and \ensuremath{V_{2,r}}\ components in Fig.~\ref{comps} show,
as they must
(\S\ref{theory}), both $\ensuremath{m^{\prime}} = 1$ and $\ensuremath{m^{\prime}} =3$ periodicities, and
that both are of similar amplitude (see also Fig.~\ref{bicomp}).
Yet their relative phases ensure
that the net effect of the $\ensuremath{m^{\prime}}=3$ terms on \ensuremath{V_{\rm model}}\ cancels almost
exactly. A larger $\ensuremath{m^{\prime}} = 3$ signal could arise if the \ensuremath{V_{2,t}}\ and
\ensuremath{V_{2,r}}\ terms have different amplitudes, but their relative phases
always ensure at least partial cancellation regardless of the
orientation of the projected bar. Thus one should not conclude that a
very weak $\ensuremath{m^{\prime}}=3$ signal in the velocity map implies no significant
bisymmetric distortion.
\subsection{Centrifugal balance?}
\label{balance}
It is clear from Table~\ref{fits} and Fig.~\ref{models} that both the
bisymmetric\ and radial\ models are adequate parameterizations of the
observed geometry and kinematics of NGC~2976. But does either model
provide insight into its physical structure?
The mean orbital speed, \ensuremath{\Vrot(r)}, in the radial\ model can balance the
central attraction of the system only if either the non-circular
motions are small, or \ensuremath{\Vrad(r)}\ actually implies a real radial flow that
somehow does not affect orbital balance. The first possibility is not
true, as we have confirmed (Figs.~\ref{rcs} and \ref{comps}) the large
non-circular motions found for $r \lesssim 500\,$pc in NGC~2976\ by
\citetalias{s03}. If \ensuremath{\Vrad(r)}\ is attributed to radial flows that do not
affect orbital balance, then all of the detected gas in this quiescent
system would be displaced on kpc scales in $1-3\,$Gyr; we agree with
\citet{s05} that this explanation is not viable. We thus conclude that
although the optimal radial\ model is a reasonable statistical fit to
the data and provides strong evidence for non-circular motions, the
fitted \ensuremath{\Vrot(r)}\ cannot be used to determine the mass distribution
within NGC~2976.
If the non-circular motions in NGC~2976\ are dominated by an $m=2$
perturbation to its potential, then the velocity profiles of the
optimal bisymmetric\ model should better reflect the galaxy's structure
than those of the radial\ model. While the fitted \ensuremath{\Vrot(r)}\ rises more
steeply in the bisymmetric\ model, it is merely the average azimuthal speed
around a circle, not a precise indicator of circular orbital
balance. It should be stressed that circles in the disk plane
approximate streamlines only when non-circular motions are small. In
a bar-like potential, the gas on the bar major axis will be moving
more slowly than average, since it is about to plunge in towards the
center, whereas gas at the same galactocentric radius on the bar minor
axis will be moving faster than average, since it has arrived there
from a larger radius. Under these circumstances, it is not possible
to assert that the azimuthal average, \ensuremath{\bar V_t}, is exactly equal to the
circular orbit speed in an equivalent azimuthally averaged mass
distribution. The only reliable way to extract the azimuthally
averaged central attraction in this case is to find the
non-axisymmetric model that yields a fluid dynamical flow pattern to
match that observed, and to average afterwards.
Despite these cautionary statements, we suspect that the \ensuremath{\bar V_t}\ curve from the bisymmetric\ model provides a better estimate of the azimuthally averaged central attraction in NGC~2976\ than does that from the radial\ model.
\subsection{Evidence for a bar}
\label{bar}
As discussed in \S\S\ref{intro} \& \ref{technique}, the elliptical
streams of fixed direction and phase in the bisymmetric\ model could be
driven by either a triaxial halo or by a bar in the mass distribution.
In either case, the distortion is significant only at $r \lesssim 80\arcsec$
($1.4\,$kpc; Fig.~\ref{rcs}{\it b}) in NGC~2976, beyond
which the flow appears to be near circular.
The aspherical halo interpretation therefore requires the halo that
hosts NGC~2976\ to have an asphericity that increases for decreasing $r$.
Such an idea was proposed by \citet{hayashi06new}, although other work
\citep{dub94,gnedin04,kazant04,beren06,gustaf06} has indicated a tendency for disk
assembly to circularize the potential.
We therefore favor the interpretation that NGC~2976\ hosts a bar.
\citet{kmd07} have examined the {\it Two Micron All Sky Survey\/}
\citep[2MASS;][]{skrutskie06} $J$, $H$ and $K_s$ images of NGC~2976\ to
search for a bar. Their fits to this photometry reveal a radial variation in
ellipticity of amplitude $\Delta\epsilon > 0.1$ \citep[see
also][]{s05}, and their visual inspection of the images reveals a ``candidate'' bar
with ${\rm PA}_{\rm bar}=-43 \degr$ and semi-major axis $a = 72 \pm
5\arcsec$ (see their table~2). Their estimated ${\rm PA}_{\rm bar}$ is
fully consistent with our kinematic estimate \ensuremath{\phi_b^{\prime}}\ (Table~\ref{fits}),
and $a$ compares well with the range of $r$ where
\ensuremath{\Vbit(r)}\ and \ensuremath{\Vbir(r)}\ are non-zero (Fig.~\ref{rcs}{\it b}). Furthermore,
${\rm PA}_{\rm bar}$ and \ensuremath{\phi_b^{\prime}}\ are roughly coincident with the apparent
major axis of the CO distribution
in NGC~2976\ (see fig.~4 of \citetalias{s03}), which
suggests that the molecular gas density is larger along this PA than
elsewhere in the disk. Thus the 2MASS photometry and CO morphology
provide strong supporting evidence that NGC~2976\ contains a bar with the
properties implied by our bisymmetric\ model.
\subsection{Mass components}
\label{mass}
Our fits have revealed strong non-circular motions in NGC~2976\ that appear to
result from forcing by a bar. While \ensuremath{\Vrot(r)}\ in the bisymmetric\ model
better reflects the azimuthally averaged mass distribution than its
counterpart in the radial\ model, precise statements about the mass budget
in NGC~2976\ are hampered by our lack of a reliable
estimate of the circular orbital speed curve (see \S\ref{balance}).
The amplitude of the non-circular motions in NGC~2976\ implies a
relatively large bar mass, which in turn suggests that the disk itself
contributes significantly to the central attraction. It is therefore
likely that the baryons in NGC~2976\ dominate its kinematics well beyond
the $r \sim 500\;$pc suggested by the fits of \citetalias{s03}.
Indeed, the steeper rise of \ensuremath{\Vrot(r)}\ in the bisymmetric\ model relative to
that deduced by \citetalias{s03} would allow a larger disk
mass-to-light ratio (${\cal M}/L$) to be tolerated by the kinematics.
This conclusion eases the tension between their dynamical upper bound
on the stellar ${\cal M}/L$ and that expected from stellar population
synthesis for the observed broadband colors \citepalias[see \S3.1.1
of][]{s03}.
We defer the detailed mass modeling of NGC~2976\ required for quantitative
estimates of its mass budget to a future paper. Such a study would be
assisted by additional kinematic data from extant \ion{H}{1}\ aperture
synthesis observations, as well as by decompositions of publicly available infrared photometry from the {\it Spitzer Infrared Nearby Galaxies Survey} \citep{sings}.
\subsection{Other galaxies}
\label{othergals}
We suggest that our approach could be useful for characterizing the
non-circular motions detected in other galaxies, particularly in low-mass
systems where the reported non-circular motions are large
\citep{swaters03,s05,gentile06}. It is more direct than
interpretations of velocity field Fourier coefficients in the weak
perturbation limit, yields physically meaningful kinematic components
for systems with bar-like or oval distortions to the potential, and
its application is much simpler than that of a full fluid-dynamical
model.
We have shown that the velocity field of NGC~2976, when fitted by our
bisymmetric model, reveals a steeper inner rise in \ensuremath{\Vrot(r)}\ than in
previous analyses by other methods. Similar findings have been
reported by \citet{hayashi06new} and Valenzuela et al. (2007)
for other systems. In NGC~2976, the reason for this difference
(Fig.~\ref{comps}) is that the \ensuremath{V_{2,t}}\ terms happen to partly cancel the
\ensuremath{\bar V_t}\ terms, because they have opposite signs on the projected major
axis when the bar is oriented close to this direction. It should be
clear, however, that the \ensuremath{V_{2,t}}\ terms will have the opposite effect if
the bar is more nearly aligned with the projected minor axis. Thus
even if the non-circular motions detected in other systems result from
bars in the potential, it is unlikely that our bisymmetric\ model will
always cause \ensuremath{\Vrot(r)}\ to rise more steeply than found previously. In
any event, it should be clear that when large non-circular flows are
present, the mean orbital speed derived from models that use epicycle
theory can yield a very misleading estimate of the interior mass
needed for centrifugal balance.
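As a rough illustration of the last point, for an approximately spherical mass distribution centrifugal balance gives
$$\mathcal{M}(<r) \simeq \frac{V_c^2(r)\,r}{G},$$
so any bias in the adopted orbital speed enters the interior mass quadratically: a 30\% overestimate of $V_c$, for example, inflates the inferred $\mathcal{M}(<r)$ by nearly 70\%.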
\section{Conclusions}
\label{conclusions}
We have presented a new method for fitting 2-D velocity maps of spiral
galaxies that are characterized by non-circular motions. We suppose
the potential to contain a bar-like or oval distortion that drives the
gas in the disk plane on an elliptical flow pattern of fixed
orientation, such as could arise from a triaxial halo or a bar in the
mass distribution of the baryons. Our model has important advantages
over previous approaches since it is not restricted to small
non-circular motions, as is required when epicycle theory is employed,
and we do not invoke large radial flows that have no clear physical
origin or interpretation.
Our bisymmetric\ flow model can be fitted to data by a generalization of
the technique developed by \citet{bs03}. The fit extracts multiple
non-parametric velocity profiles from an observed velocity field,
and we employ a bootstrap method to estimate uncertainties.
\begin{figure*}
\epsscale{0.9}
\plotone{f6_color.eps}
\caption{Projected contributions from different kinematic components
at $r=20\arcsec$ ($345\,$pc) in the optimal ({\it
a}) radial\ and ({\it b}) bisymmetric\ models. In \ref{comps}{\it a},
the dashed line shows the angular dependence of the projected rotational
velocity term in the radial\ model relative to the kinematic major axis
($2^\mathrm{nd}$
on the right-hand side (RHS) of eq.~\ref{radeq}), and the
dash-dotted line shows that of the radial velocity term
($3^\mathrm{rd}$ on the RHS of eq.~\ref{radeq}). In \ref{comps}{\it
b}, the angular dependence of the components in the bisymmetric\ model are
plotted relative to \ensuremath{\phi_d^{\prime}}\ along the bottom horizontal axis and \ensuremath{\phi_b^{\prime}}\
along the top horizontal axis. The dashed line shows the
rotational velocity term ($2^\mathrm{nd}$ on the RHS of
eq.~\ref{bieq}), and the dash-dotted and dash-dot-dotted
lines show the tangential and radial bisymmetric terms, respectively
($3^\mathrm{rd}$ and $4^\mathrm{th}$ on the RHS of
eq.~\ref{bieq}). The solid lines in both panels show the net
projected model velocity relative to \ensuremath{V_{\rm sys}}. }
\label{comps}
\end{figure*}
As an example, we have applied our technique to the H$\alpha$ and CO
kinematics of NGC~2976\ presented by \citetalias{s03}. We show that the
bisymmetric\ model fits the data at least as well as the {\it ringfit}
procedure implemented by these authors that invokes large radial
velocities, which we are also able to reproduce by our methods. Both
the bisymmetric\ and radial\ models reveal large non-circular motions in
NGC~2976, but the derived mean orbital speed profiles \ensuremath{\Vrot(r)}\ differ
markedly between the two cases. We explain the reason for this large
difference in \S\ref{discuss}.
When disks are observed in projection, kinematic distortions with intrinsic sectoral harmonic $m$ cause azimuthal variations of orders $\ensuremath{m^{\prime}} = m \pm 1$ in line-of-sight velocity maps.
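The reason is the elementary product-to-sum identity: an intrinsic harmonic of order $m$ in the disk plane acquires a factor $\cos\theta$ (tangential part) or $\sin\theta$ (radial part) upon projection onto the line of sight, and
$$\cos(m\theta)\cos\theta = \tfrac{1}{2}\left[\cos\bigl((m+1)\theta\bigr) + \cos\bigl((m-1)\theta\bigr)\right],$$
with the analogous splitting for the sine terms, so an intrinsic $m=2$ distortion appears at $\ensuremath{m^{\prime}}=1$ and $\ensuremath{m^{\prime}}=3$ in the map.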
Our analysis of NGC~2976\ clearly demonstrates that $\ensuremath{m^{\prime}} = 1$
distortions to its velocity field can be fitted by a bisymmetric
distortion to the potential, which we regard as more physically
reasonable than radial flows. We show in Fig.~\ref{comps} that
$\ensuremath{m^{\prime}} = 3$ distortions should be small in the bisymmetric\ model; this is
because the $\ensuremath{m^{\prime}} = 3$ variations in the radial and tangential
components project out of phase. They will cancel exactly only when
of equal amplitude, which should be approximately true in the rising
part of \ensuremath{\Vrot(r)}.
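The out-of-phase projection is elementary trigonometry: writing $\theta_b$ for the disk-plane azimuth measured from the bar axis (so that $2\theta_b+\theta$ varies three times per circuit at fixed bar angle),
$$\cos(2\theta_b)\cos\theta = \tfrac{1}{2}\left[\cos(2\theta_b+\theta) + \cos(2\theta_b-\theta)\right], \qquad \sin(2\theta_b)\sin\theta = \tfrac{1}{2}\left[\cos(2\theta_b-\theta) - \cos(2\theta_b+\theta)\right],$$
so the $\ensuremath{m^{\prime}}=3$ pieces enter the projected tangential and radial terms with opposite signs, while the $\ensuremath{m^{\prime}}=1$ pieces add.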
We suggest that NGC~2976\ hosts a strong bar oriented at $\sim
17^\circ$ to the projected major axis. Our interpretation is
supported by its CO morphology \citepalias{s03} and more strongly by
the results of \citet{kmd07}, who analyzed the 2MASS photometry of
NGC~2976\ and found a bar whose size and orientation are similar to those
required by our bisymmetric\ model.
We find that the mean orbital speed in NGC~2976\ rises more
steeply than indicated by previous studies (\citetalias{s03};
\citealt{s05}). While \ensuremath{\Vrot(r)}\ in our bisymmetric\ model
is not an exact measure of the
circular orbital speed in the equivalent axially symmetrized galaxy, we regard
it as a better approximation to this quantity. Since the strongly
non-circular flow pattern implies a massive bar, which in turn
suggests a massive disk, we expect a larger baryonic mass than was
estimated by \citetalias{s03}. It is likely, therefore, that most of
the increased central attraction required by our more steeply rising
\ensuremath{\Vrot(r)}\ will not reflect a corresponding increase in the density of the
inner dark matter halo, but will rather ease the tension between
maximum disk fits to its kinematics and ${\cal M}/L$ predictions from
broadband photometry \citepalias{s03}. Indeed, since non-circular
motions are detected throughout the region $r \lesssim 80\arcsec$
($1.4\,$kpc), it seems likely that the luminous matter in NGC~2976\ is an
important contributor to the central attraction at least as far out as
this radius. Detailed mass models of this system are forthcoming.
Application of our method to other galaxies will not always result in
a steeper inner rise in the mean orbital speed. We find this
behavior in NGC~2976\ only because the bar is oriented near to the
projected major axis. Neglect of non-circular motions, or application
of a radial flow model, when the bar is oriented close to the
projected minor axis will lead to an erroneously steep rise in the
inferred mean orbital speed, which will rise less steeply when our
model is applied.
\acknowledgments
We thank Josh Simon for providing the data for
NGC~2976, and Alberto Bolatto for help in interpreting the
measurement uncertainties. We also thank Alberto Bolatto and Josh Simon for helpful comments on the manuscript. KS is a Jansky
Fellow of the National Radio Astronomy Observatory. JAS is partially
supported by grants from the NSF (AST-0507323) and from NASA
(NNG05GC29G).
\clearpage
In the fundamental paper \cite{Gr69} of 1968, Grothendieck states a series of conjectures concerning the existence of certain algebraic cycles on smooth projective algebraic varieties over an algebraically closed ground field. Those are known as the standard conjectures. In particular, given such a variety $X$ of dimension $n$, the Lefschetz standard conjecture predicts the existence of self-correspondences on $X$ that give an inverse to the operations
$$H^{k}(X) \rightarrow H^{2n-k}(X)$$
given by the cup-product $n-k$ times with a hyperplane section for all $k\leq n$. Here $H^*(X)$ stands for any Weil cohomology theory on $X$, e.g. singular cohomology if $X$ is defined over $\mathbb{C}$, or $l$-adic \'etale cohomology in characteristic different from $l$. If we can invert the morphism $H^{k}(X) \rightarrow H^{2n-k}(X)$ using self-correspondences on $X$, we say that the Lefschetz conjecture holds in degree $k$.
Let us now, and for the rest of the paper, work over $\mathbb{C}$. The Lefschetz standard conjecture then implies the other ones and has strong theoretical consequences. For instance, it implies that numerical and homological equivalence coincide, and that the category of pure motives for homological equivalence is semisimple. We refer to \cite{Kl68} and \cite{Kl91} for more detailed discussions. The Lefschetz standard conjecture for varieties which are fibered in abelian varieties over a smooth curve also implies the Hodge conjecture for abelian varieties as shown by Yves Andr\'e in \cite{An96}. Grothendieck actually writes in the aforementioned paper that ``alongside the resolution of singularities, the proof of the standard conjectures seems to [him] to be the most urgent task in algebraic geometry''.
Though the motivic picture has tremendously developed since Grothendieck's statement of the standard conjectures, very little progress has been made in their direction. The Lefschetz standard conjecture is known for abelian varieties (see \cite{Kl68}) and in degree $1$, where it reduces to the Hodge conjecture for divisors. Aside from examples obtained by taking products and hyperplane sections, those seem to be the only two cases where a proof is known.
\bigskip
In this paper, we want to investigate further the geometrical content of the Lefschetz standard conjecture, and try to give insight into the specific case of hyperk\"ahler varieties. The original form of the Lefschetz standard conjecture for a variety $X$ predicts the existence of specific algebraic cycles in the product $X\times X$. Those cycles can be considered as a family of cycles on $X$ parametrized by $X$ itself. Our first remark is that the conjecture actually reduces to a general statement about the existence of large families of algebraic cycles in $X$ parametrized by any smooth quasi-projective base. For this, we use Hodge theory on $X$.
It turns out that for those families to give a positive answer to the conjecture, it is enough to control the local variation of the family of cycles considered. Let us give a precise statement. Let $X$ be a smooth projective variety, $S$ a smooth quasi-projective variety, and let $Z\in CH^k(X\times S)$ be a family of codimension $k$ cycles in $X$ parametrized by $S$. Let $\mathcal T_S$ be the tangent sheaf of $S$. Using the Leray spectral sequence for the projection onto $S$ and constructions from Griffiths and Voisin in \cite{IVHS3}, \cite{Vo88}, we construct a map
$$\phi_Z : \bigwedge^k \mathcal T_S \rightarrow H^k(X, \mathcal O_X)\otimes \mathcal O_S.$$
We then get the following result, which we state here only in degree $2$ for simplicity, but see section 2.
\begin{thm}
Let $X$ be a smooth projective variety.
Then the Lefschetz conjecture is true in degree $2$ for $X$ if and only if there exists a smooth quasi-projective variety $S$, a codimension $2$ cycle $Z$ in $CH^2(X\times S)$ and a point $s\in S$ such that the morphism
$$\phi_{Z,s} : \bigwedge^2 \mathcal T_{S,s} \rightarrow H^2(X, \mathcal O_X)$$
considered above for $k=2$, is surjective.
\end{thm}
This variational approach to the existence of algebraic cycles can be compared to the study of semi-regularity maps as in \cite{Bl72}.
In the following section, we give an explicit formula for $\phi_Z$ in case the cycle $Z$ is given by the Chern classes of a family of vector bundles $\mathcal E$ on $X\times S$. In this situation, we show that $\phi_Z$ is expressed very simply in terms of the Kodaira-Spencer map. Indeed, $\mathcal T_{S,s}$ maps to the space $\mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s)$. We then have a Yoneda product
$$\mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s)\times \mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s)\rightarrow \mathrm{Ext}^2(\mathcal{E}_s, \mathcal{E}_s)$$
and a trace map
$$\mathrm{Ext}^2(\mathcal{E}_s, \mathcal{E}_s)\rightarrow H^2(X,\mathcal O_X).$$
We show that we can express $\phi_{Z,s}$ in terms of the composition
$$\phi_2(\mathcal E) : \bigwedge^2 \mathcal T_{S,s} \rightarrow H^2(X, \mathcal O_X)$$ of those two maps, and we get the following theorem.
\begin{thm}
Let $X$ be a smooth projective variety. Then the Lefschetz conjecture is true in degree $2$ for $X$ if there exists a smooth quasi-projective variety $S$, a vector bundle $\mathcal E$ over $X\times S$, and a point $s\in S$ such that the morphism
\begin{equation}\label{deuxf}
\phi_2(\mathcal E)_s : \bigwedge^2 \mathcal T_{S,s} \rightarrow H^2(X, \mathcal O_X)
\end{equation}
induced by the composition above is surjective.
\end{thm}
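In concrete terms, for tangent vectors $u, v \in \mathcal T_{S,s}$ with Kodaira-Spencer images $\kappa(u), \kappa(v) \in \mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s)$, the map (\ref{deuxf}) is given, up to a nonzero scalar which does not affect the surjectivity criterion, by
$$\phi_2(\mathcal E)_s(u\wedge v) = \mathrm{tr}\bigl(\kappa(u)\cup\kappa(v)\bigr) \in H^2(X, \mathcal O_X).$$
This expression is well defined on $\bigwedge^2 \mathcal T_{S,s}$ because the trace on the Ext-algebra is graded-commutative, so that $\mathrm{tr}(ab) = -\mathrm{tr}(ba)$ for classes $a, b$ of degree $1$.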
The main interest of this theorem is that it makes it possible to only use first-order computations to check the Lefschetz standard conjecture, which is global in nature, thus replacing it by a local statement on deformations of $\mathcal E$. Of course, when one wants to ensure that there exists a vector bundle over $X$ that has a positive-dimensional family of deformations, the computation of obstructions is needed, which involves higher-order computations. However, once a family of vector bundles is given, checking the surjectivity condition of the theorem involves only first-order deformations.
\bigskip
The last part of the paper is devoted to applications of the previous results to hyperk\"ahler varieties. We will recall general properties of those and their hyperholomorphic bundles in section 4. Those varieties have $h^{2,0}=1$, which makes the last criterion easier to check. In the case of $2$-dimensional hyperk\"ahler varieties, that is, in the case of K3 surfaces, Mukai has investigated in \cite{Mu84} the $2$-form on the moduli space of some stable sheaves given by (\ref{deuxf}) and showed that it is nondegenerate. In particular, it is nonzero. Of course, the case of surfaces is irrelevant in our work, but we will use Verbitsky's theory of hyperholomorphic bundles on hyperk\"ahler varieties as presented in \cite{Ver96}. In his work, Verbitsky extends the work of Mukai to higher dimensions and shows results implying the nondegeneracy of the form (\ref{deuxf}) in some cases. Using those, we are able to show that the existence of nonrigid hyperholomorphic bundles on a hyperk\"ahler variety is enough to prove the Lefschetz standard conjecture in degree $2$. Indeed, we get the following.
\begin{thm}\label{nd}
Let $X$ be a projective irreducible hyperk\"ahler variety, and let $\mathcal E$ be a stable hyperholomorphic bundle on $X$. Assume that $\mathcal E$ has a nontrivial positive-dimensional family of deformations. Then the Lefschetz conjecture is true in degree $2$ for $X$.
\end{thm}
In a slightly different direction, recall that the only known hyperk\"ahler varieties, except in dimension $6$ and $10$, are the two families constructed by Beauville in \cite{Be83} which are the small deformations of Hilbert schemes of points on a K3 surface or of generalized Kummer varieties. For those, the Lefschetz standard conjecture is easy -- see \cite{Ar06} for a general discussion -- as their cohomology comes from that of a surface. We get the following.
\begin{thm}\label{stdef}
Let $n$ be a positive integer. Assume that for every K3 surface $S$, there exists a stable hyperholomorphic sheaf $\mathcal E$ with a nontrivial positive-dimensional family of deformations on the Hilbert scheme $S^{[n]}$ parametrizing subschemes of $S$ of length $n$. Then the Lefschetz conjecture is true in degree $2$ for any projective deformation of $S^{[n]}$. The same result holds for generalized Kummer varieties.
\end{thm}
Both of these results could be applied by taking $\mathcal E$ to be the tangent sheaf of the variety considered, in case it has nontrivial deformations.
Those results fit well in the -- mostly conjectural -- work of Verbitsky as exposed in \cite{Ver99} predicting the existence of large moduli spaces of hyperholomorphic bundles. Unfortunately, we were not able to exhibit bundles satisfying the hypotheses of the theorems.
\bigskip
Varieties are defined to be reduced and irreducible. All varieties and schemes are over the field of complex numbers.
\paragraph{Acknowledgements.}It is a great pleasure to thank Claire Voisin for her help and support, as well as many enlightening discussions during the writing of this paper. I am grateful to Eyal Markman for kindly explaining to me the results of \cite{Ma10}. I would also like to thank Daniel Huybrechts for pointing out the relevance of Verbitsky's results and for the interesting discussions we had around the manuscript during his kind invitation to the University of Bonn, as well as Burt Totaro and the referee for many useful comments.
\section{General remarks on the Lefschetz standard conjecture}
This section is devoted to some general remarks on the Lefschetz standard conjecture. Although some are well-known to specialists, we include them here for ease of reference. Let us first recall the statement of the conjecture.
Let $X$ be a smooth projective variety of dimension $n$ over $\mathbb{C}$. Let $\xi \in H^2(X, \mathbb{Q})$ be the cohomology class of a hyperplane section of $X$. According to the hard Lefschetz theorem, see for instance \cite{Vo02}, Chapter 13, for all $k\in\{0, \ldots, n\}$, cup-product with $\xi^{n-k}$ induces an isomorphism
$$\cup \xi^{n-k} : H^{k}(X, \mathbb{Q})\rightarrow H^{2n-k}(X, \mathbb{Q}).$$
The Lefschetz standard conjecture was first stated in \cite{Gr69}, conjecture $B(X)$. It is the following.
\begin{conj}
Let $X$ and $\xi$ be as above. Then for all $k\in\{0, \ldots, n\}$, there exists an algebraic cycle $Z$ of codimension $k$ in the product $X\times X$ such that the correspondence
$$[Z]_* : H^{2n-k}(X, \mathbb{Q})\rightarrow H^{k}(X, \mathbb{Q})$$
is the inverse of $\cup \xi^{n-k}$.
\end{conj}
If this conjecture holds for some specific $k$ on $X$, we will say the Lefschetz conjecture holds in degree $k$ for the variety $X$.
Let us recall the following easy lemma, see \cite{Kl91}, Theorem 4.1, which shows in particular that the Lefschetz conjecture does not depend on the choice of a polarization.
\begin{lem}
Let $X$ and $\xi$ be as above. Then the Lefschetz conjecture holds in degree $k$ for $X$ if and only if there exists an algebraic cycle $Z$ of codimension $k$ in the product $X\times X$ such that the correspondence
$$[Z]_* : H^{2n-k}(X, \mathbb{Q})\rightarrow H^{k}(X, \mathbb{Q})$$
is bijective.
\end{lem}
\begin{proof}
Let $Z$ be as in the lemma. The morphism
$$[Z]_* \circ (\cup \xi^{n-k} \circ [Z]_*)^{-1}: H^{2n-k}(X, \mathbb{Q})\rightarrow H^{k}(X, \mathbb{Q})$$
is the inverse of $\cup \xi^{n-k} : H^{k}(X, \mathbb{Q})\rightarrow H^{2n-k}(X, \mathbb{Q}).$ Now by the Cayley-Hamilton theorem, the automorphism $(\cup \xi^{n-k} \circ [Z]_*)^{-1}$ of $H^{2n-k}(X, \mathbb{Q})$ is a polynomial in $(\cup \xi^{n-k} \circ [Z]_*)$. As such, it is defined by an algebraic correspondence. By composition, the morphism $[Z]_* \circ (\cup \xi^{n-k} \circ [Z]_*)^{-1}$ is defined by an algebraic correspondence, which concludes the proof.
\end{proof}
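For concreteness, the Cayley-Hamilton step can be spelled out. If $\chi(t) = t^d + a_{d-1}t^{d-1} + \dots + a_0$ is the characteristic polynomial of the automorphism $f = \cup \xi^{n-k} \circ [Z]_*$ of $H^{2n-k}(X, \mathbb{Q})$, then $a_0 = (-1)^d \det f \neq 0$ and
$$f^{-1} = -a_0^{-1}\left(f^{d-1} + a_{d-1}f^{d-2} + \dots + a_1\right).$$
Since sums, rational multiples and compositions of algebraic correspondences are again algebraic, this polynomial in $f$ is defined by an algebraic correspondence.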
For the next results, we will need to work with primitive cohomology classes. Let us recall some notation. Let $S$ be a smooth polarized projective variety of dimension $l$. Let $L$ denote cup-product with the cohomology class of a hyperplane section. For any integer $k$ in $\{0, \ldots, l\}$, let $H^k(S, \mathbb{Q})_{prim}$ denote the primitive part of $H^k(S, \mathbb{Q})$, that is, the kernel of
$$L^{l-k+1} : H^k(S, \mathbb{Q})\rightarrow H^{2l-k+2}(S, \mathbb{Q}).$$
The cohomology groups of $S$ in degrees less than $l$ then admit a Lefschetz decomposition
$$H^k(S, \mathbb{Q}) = \bigoplus_{i\geq 0} L^i H^{k-2i}(S, \mathbb{Q})_{prim}.$$
The following lemma is well-known, but we include it here for ease of reference as well as to keep track of the degrees for which we have to use the Lefschetz standard conjecture.
\begin{lem}\label{proj}
Let $k$ be an integer, and let $S$ be a smooth projective scheme of dimension $l\geq k$. Consider the Lefschetz decomposition
$$H^k(S, \mathbb{Q})=\bigoplus_{i\geq 0} L^i H^{k-2i}(S, \mathbb{Q})_{prim},$$
where $L$ is the cup-product by the class of a hyperplane section. Assume that the Lefschetz conjecture holds for $S$ in degrees up to $k-2$. Then the projections $H^k(S, \mathbb{Q})\rightarrow L^i H^{k-2i}(S, \mathbb{Q})_{prim}$ are induced by algebraic correspondences.
\end{lem}
\begin{proof}
By induction, it is enough to prove that the projection $H^k(S, \mathbb{Q})\rightarrow L H^{k-2}(S, \mathbb{Q})$ is induced by an algebraic correspondence. Let $Z\subset S\times S$ be an algebraic cycle such that
$$[Z]_* : H^{2l-k+2}(S, \mathbb{Q})\rightarrow H^{k-2}(S, \mathbb{Q})$$
is the inverse of $L^{l-k+2}$. Then the composition $L\circ [Z]_*\circ L^{l-k+1}$ is the desired projection, since $H^k(S, \mathbb{Q})_{prim}$ is the kernel of $L^{l-k+1}$ in $H^k(S, \mathbb{Q})$.
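To check this, write $\alpha = \alpha_{prim} + L\beta$ with $\alpha_{prim}\in H^k(S, \mathbb{Q})_{prim}$ and $\beta\in H^{k-2}(S, \mathbb{Q})$. The primitive part is killed by $L^{l-k+1}$, so
$$L^{l-k+1}\alpha = L^{l-k+2}\beta, \qquad [Z]_*\left(L^{l-k+2}\beta\right) = \beta, \qquad \left(L\circ [Z]_*\circ L^{l-k+1}\right)(\alpha) = L\beta,$$
which is exactly the component of $\alpha$ in $L H^{k-2}(S, \mathbb{Q})$.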
\end{proof}
\bigskip
The next result is the starting point of our paper. It shows that the Lefschetz standard conjecture in degree $k$ on $X$ is equivalent to the existence of a sufficiently big family of codimension $k$ algebraic cycles in $X$, and allows us to work on the product of $X$ with any variety.
\begin{prop}\label{surj}
Let $X$ be a smooth projective variety of dimension $n$, and let $k\leq n$ be an integer. Then the Lefschetz conjecture is true in degree $k$ for $X$ if and only if there exists a smooth projective scheme $S$ of dimension $l\geq k$ satisfying the Lefschetz conjecture in degrees up to $k-2$ and a codimension $k$ cycle $Z$ in $X\times S$ such that the morphism
$$[Z]_* : H^{2l-k}(S, \mathbb{Q})\rightarrow H^k(X, \mathbb{Q})$$
induced by the correspondence $Z$ is surjective.
\end{prop}
\begin{proof}
Taking $S=X$, the ``only if'' part is obvious. For the other statement, fix a polarization on $S$, and let $L$ be the cup-product with the class of a hyperplane section of $S$. Consider the morphism $s : H^k(S, \mathbb{Q}) \rightarrow H^k(S, \mathbb{Q})$ which is given by multiplication by $(-1)^{i}$ on $L^i H^{k-2i}(S, \mathbb{Q})_{prim}$. By the Hodge index theorem, the pairing
$$H^k(S, \mathbb{C})\otimes H^k(S, \mathbb{C})\rightarrow \mathbb{C}, \,\alpha\otimes\beta \mapsto \int_S \alpha \cup L^{l-k}(s(\beta))$$
turns $H^k(S, \mathbb{Q})$ into a polarized Hodge structure. Furthermore, Lemma \ref{proj} shows that $s$ is induced by an algebraic correspondence.
We have a morphism $[Z]_* : H^{2l-k}(S, \mathbb{Q})\rightarrow H^k(X, \mathbb{Q})$ which is surjective. Its dual $[Z]^* : H^{2n-k}(X,\mathbb{Q})\rightarrow H^k(S, \mathbb{Q})$ is injective, where $n$ is the dimension of $X$. Let us consider the composition
$$[Z]_*\circ L^{l-k} \circ s \circ [Z]^* : H^{2n-k}(X, \mathbb{Q})\rightarrow H^k(X, \mathbb{Q}).$$
It is defined by an algebraic correspondence, and it is enough to show that it is a bijection. Since $H^{2n-k}(X, \mathbb{Q})$ and $H^k(X, \mathbb{Q})$ have the same dimension, we only have to prove it is injective.
Let $\alpha\in H^{2n-k}(X,\mathbb{Q})$ lie in the kernel of the composition. For any $\beta\in H^{2n-k}(X, \mathbb{Q})$, we get
$$([Z]^*\beta)\cup ((L^{l-k} \circ s)([Z]^*\alpha))=0.$$
Since $[Z]^*(H^{2n-k}(X, \mathbb{Q}))$ is a sub-Hodge structure of the polarized Hodge structure $H^k(S, \mathbb{Q})$, the restriction of the polarization $$<u, v>=\int_S u \cup (L^{l-k}\circ s)(v)$$
on $H^k(S, \mathbb{Q})$ to this subspace is nondegenerate, which shows that $\alpha$ is zero.
\end{proof}
\begin{rk}
Using the weak Lefschetz theorem, one can always reduce to the case where $S$ is of dimension $k$.
\end{rk}
\begin{cor}\label{tr}
Let $X$ be a smooth projective variety of dimension $n$, and let $k\leq n$ be an integer. Assume the Lefschetz conjecture for all varieties in degrees up to $k-2$ and that the generalized Hodge conjecture is true for $H^k(X,\mathbb{Q})$.
Then the Lefschetz conjecture is true in degree $k$ for $X$ if and only if there exists a smooth projective scheme $S$, of dimension $l$, and a codimension $k$ cycle $Z$ in $CH^k(X\times S)$ such that the morphism
\begin{equation}\label{Lef}
H^{l}(S, \Omega^{l-k}_S)\rightarrow H^k(X, \mathcal{O}_X)
\end{equation}
induced by the morphism of Hodge structures
$$[Z]_* : H^{2l-k}(S, \mathbb{C})\rightarrow H^k(X, \mathbb{C})$$
is surjective.
\end{cor}
\begin{rk}\label{incond}
Note that this corollary is unconditional for $k=2$ since the generalized Hodge conjecture is just the Hodge conjecture for divisors, and the Lefschetz standard conjecture is obvious in degree $0$.
\end{rk}
\begin{proof}
Let $X$, $S$ and $Z$ be as in the statement of the corollary. Let $H$ be the image of $H^{2l-k}(S, \mathbb{Q})$ by $[Z]_*$. By (\ref{Lef}), we have $H^{k, 0}=H^k(X, \mathcal{O}_X)$. Let $H'$ be a sub-Hodge structure of $H^k(X,\mathbb{Q})$ such that $H^k(X,\mathbb{Q})=H\oplus H'$. Then $H'^{k,0}=0$. As $H'$ has no part of type $(k,0)$, the generalized Hodge conjecture then predicts that there exists a smooth projective variety $X'$ of dimension $n-1$, together with a proper morphism $f: X'\rightarrow X$ such that $H'$ is contained in $f_* H^{k-2}(X',\mathbb{Q})$.
If the Lefschetz conjecture is true in degree $k-2$, then it is true for $H^{k-2}(X',\mathbb{Q})$. As a consequence, we get a cycle $Z'$ of codimension $k-2$ in $X'\times X'$ such that $[Z']_* : H^{2(n-1)-k+2}(X', \mathbb{Q})\rightarrow H^{k-2}(X', \mathbb{Q})$ is surjective. Consider the composition
$$H^{2(n-1)+2-k}(X'\times \mathbb P^1, \mathbb{Q}) \twoheadrightarrow H^{2(n-1)-k+2}(X', \mathbb{Q})\twoheadrightarrow H^{k-2}(X', \mathbb{Q}) \rightarrow H^k(X, \mathbb{Q}),$$
the first map being the pullback by any of the immersions $X'\rightarrow X'\times \mathbb P^1, x'\mapsto (x', x)$, the second one being $[Z']_*$ and the last one $f_*$. This composition is induced by an algebraic correspondence $Z''\hookrightarrow X'\times \mathbb P^1\times X$, and is surjective onto $f_* H^{k-2}(X',\mathbb{Q})$. After taking products with projective spaces, we may assume that $S$ and $X'\times \mathbb P^1$ have the same dimension. Now since the subspaces $H$ and
$f_* H^{k-2}(X',\mathbb{Q})$ generate $H^k(X,\mathbb{Q})$, the correspondence induced by the cycle $Z+Z''$ in $(S\coprod (X'\times \mathbb P^1))\times X$ satisfies the hypotheses of Proposition \ref{surj}.
\end{proof}
\bigskip
With the notations of the previous corollary, in case $Z$ is flat over $S$, we have a family of codimension $k$ algebraic cycles in $X$ parametrized by $S$. The next theorem shows that the map (\ref{Lef}), which is the one we have to study in order to prove the Lefschetz conjecture in degree $k$ for $X$, does not depend on the global geometry of $S$, and can be computed locally on $S$. This will allow us to give an explicit description of the map (\ref{Lef}) in terms of the deformation theory of the family $Z$ in the next section.
Let us first recall a general cohomological invariant for families of algebraic cycles. We follow \cite{Vo02}, 19.2.2, see also \cite{IVHS3}, \cite{Vo88} for related discussions. In the previous setting, $Z$, $X$ and $S$ being as before, the algebraic cycle $Z$ has a class
$$[Z]\in H^k(X\times S, \Omega^k_{X\times S}).$$
Using the K\"unneth formula, this last group maps to
$$H^0(S, \Omega^k_S)\otimes H^k(X, \mathcal{O}_X), $$
which means that the cohomology class $[Z]$ gives rise to a morphism of sheaves on $S$
\begin{equation}\label{def}
\phi_Z : \bigwedge^k \mathcal T_S \rightarrow H^k(X, \mathcal O_X)\otimes \mathcal O_S,
\end{equation}
where $\mathcal{T}_S$ is the tangent sheaf of $S$.
If $s$ is a complex point of $S$, let $\phi_{Z,s}$ be the morphism $\bigwedge^k \mathcal T_{S,s} \rightarrow H^k(X, \mathcal O_X)$ coming from $\phi_Z$.
\bigskip
Note that the definition of $\phi_{Z,s}$ is local on $S$. Indeed, the map $H^k(X\times S, \Omega^k_{X\times S})\rightarrow H^0(S, \Omega^k_S)\otimes H^k(X, \mathcal{O}_X)$ factors through the restriction map
$$H^k(X\times S, \Omega^k_{X\times S})\rightarrow H^0(S, R^k p_* \Omega^k_{X\times S}),$$
where $p$ is the projection from $X\times S$ to $S$, corresponding to the restriction of a cohomology class to the fibers of $p$. Actually, it can be shown that, under rather weak assumptions, $\phi_{Z,s}$ only depends on the first-order deformation $Z_s^{\epsilon}$ of $Z_s$ in $X$; see \cite{Vo02}, Remarque 19.12. We will recover this result in the next section by giving an explicit formula for $\phi_{Z,s}$. This fact is the one that allows us to reduce the Lefschetz standard conjecture to a variational statement.
The next theorem shows, using the map $\phi_{Z,s}$, that the Lefschetz conjecture can be reduced to the existence of local deformations of algebraic cycles in $X$.
\begin{thm}\label{vari}
Let $X$ be a smooth projective variety. Assume as in Corollary \ref{tr} that the generalized Hodge conjecture is true for $H^k(X,\mathbb{Q})$ and the Lefschetz conjecture holds for smooth projective varieties in degree $k-2$.
Then the Lefschetz conjecture is true in degree $k$ for $X$ if and only if there exist a smooth quasi-projective scheme $S$, a codimension $k$ cycle $Z$ in $CH^k(X\times S)$ and a point $s\in S$ such that the morphism
\begin{equation}
\phi_{Z,s} : \bigwedge^k \mathcal T_{S,s} \rightarrow H^k(X, \mathcal O_X)
\end{equation}
is surjective.
\end{thm}
\begin{proof}
Assume the hypothesis of the theorem holds. Up to taking a smooth projective compactification of $S$ and replacing $Z$ by its closure, we can assume that $S$ is smooth projective. The morphism of sheaves
$$\phi_Z : \bigwedge^k \mathcal T_S \rightarrow H^k(X, \mathcal O_X)\otimes \mathcal O_S$$
that we constructed earlier corresponds to an element of the group
$$\mathrm{Hom}_{\mathcal O_S}(\bigwedge^k \mathcal T_S, H^k(X, \mathcal O_X)\otimes \mathcal O_S)=H^0(S, \Omega^k_S\otimes H^k(X, \mathcal{O}_X)),$$
which in turn using Serre duality corresponds to a morphism
$$H^l(S, \Omega^{l-k}_S)\rightarrow H^k(X, \mathcal{O}_X),$$
where $l$ is the dimension of $S$.
By the definition of $\phi_Z$, this morphism is actually the morphism (\ref{Lef}) of Corollary \ref{tr}. Indeed, this last morphism was constructed using the K\"unneth formula for $X\times S$, Poincar\'e duality and taking components of the Hodge decomposition, which is the way $\phi_Z$ is defined, since Serre duality is compatible with Poincar\'e duality.
Moreover, by construction, if $\phi_{Z,s}$ is surjective, then $H^l(S, \Omega^{l-k}_S)\rightarrow H^k(X, \mathcal{O}_X)$ is. As for the converse, if $H^l(S, \Omega^{l-k}_S)\rightarrow H^k(X, \mathcal{O}_X)$ is surjective, then we can find points $s_1, \ldots, s_r$ of $S$ such that the images of the $\phi_{Z,s_i}$ generate $H^k(X, \mathcal{O}_X)$. Replacing $S$ by $S^r$, the cycle $Z$ by the disjoint union of the $Z_i=p_i^* Z$, where $p_i : S^r\times X \rightarrow S\times X$ is the projection onto the product of the $i$-th factor and $X$, and $s$ by $(s_1, \ldots, s_r)$, this concludes the proof by Corollary \ref{tr}.
\end{proof}
The important part of this theorem is that it does not depend on the global geometry of $S$, but only on the local variation of the family $Z$. As such, it makes it possible to use deformation theory and moduli spaces to study the Lefschetz conjecture, especially in degree $2$ where Theorem \ref{vari} is unconditional by Remark \ref{incond}.
\section{A local computation}
Let $X$ be a smooth variety and $S$ a smooth scheme, $X$ being projective and $S$ quasi-projective. Let $Z$ be a cycle of codimension $k$ in the product $X\times S$. As we saw earlier, for any point $s\in S$, the correspondence defined by $Z$ induces a map
$$\phi_{Z,s} : \bigwedge^k \mathcal T_{S,s} \rightarrow H^k(X, \mathcal O_X).$$
The goal of this section is to compute this map in terms of the deformation theory of the family $Z$ of cycles on $X$ parametrized by $S$. We will formulate this result when the class of $Z$ in the Chow group of $X\times S$ is given by the codimension $k$ part $ch_k(\mathcal E)$ of the Chern character of a vector bundle $\mathcal E$ over $X\times S$. It is well-known that we obtain all the rational equivalence classes of algebraic cycles as linear combinations of those.
\bigskip
Let us now recall general facts about the deformation theory of vector bundles and their Atiyah class.
Given a vector bundle $\mathcal{E}$ over $X\times S$, with $p$ the projection from $X\times S$ to $S$, let $\mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E})$ be the sheafification of the presheaf $U\mapsto \mathrm{Ext}^1_{\mathcal{O}_{X\times U}}(\mathcal{E}_{|X\times U}, \mathcal{E}_{|X\times U})$ on $S$. The deformation theory of the family of vector bundles determined by $\mathcal E$ is described by the Kodaira-Spencer map. This is a map of sheaves
$$\rho : \mathcal{T}_S\rightarrow \mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E}),$$
where $\mathcal{T}_S$ is the tangent sheaf to $S$. Let $s$ be a complex point of $S$. The Kodaira-Spencer map at $s$ is given by the composition
$$\rho_s : T_{S, s} \rightarrow \mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E})_s\rightarrow \mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s), $$
the last one being the canonical one.
In the next section, we will use results of Verbitsky which allow us to produce unobstructed elements of $\mathrm{Ext}^1(\mathcal{E}_s, \mathcal{E}_s)$ in the hyperholomorphic setting.
\bigskip
Associated to $\mathcal E$ as well are the images in $H^k(X\times S, \Omega^k_{X\times S})$ of the Chern classes of $\mathcal{E}$, which we will denote by $c_k(\mathcal{E})$ with a slight abuse of notation. We also have the images $ch_k(\mathcal{E})\in H^k(X\times S, \Omega^k_{X\times S})$ of the Chern character.
\bigskip
The link between Chern classes and the Kodaira-Spencer map is given by the Atiyah class. It is well-known that the Chern classes of a vector bundle $\mathcal F$ on a smooth variety $Y$ can be computed from its Atiyah class $A(\mathcal{F})\in \mathrm{Ext}^1(\mathcal{F}, \mathcal F\otimes \Omega^1_Y)$, see \cite{At57}, \cite{HL97}, Chapter 10:
\begin{prop}\label{chern}
For $k$ a positive integer, let $\alpha_k\in H^k(Y, \Omega_Y^k)$ be the image of the element $A(\mathcal F)^k\in \mathrm{Ext}^k(\mathcal{F}, \mathcal F\otimes \Omega^k_Y)$ under the trace map. Then
$$\alpha_k=k!\,ch_k(\mathcal F).$$
\end{prop}
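For instance, in low degrees this recovers the familiar expressions for the Chern classes; the following identities are immediate instantiations of the proposition, using $ch_1=c_1$ and $ch_2=(c_1^2-2c_2)/2$:

```latex
% Cases k = 1 and k = 2 of the identity \alpha_k = k!\, ch_k(\mathcal F),
% where tr denotes the trace map:
\begin{align*}
\operatorname{tr} A(\mathcal F) &= ch_1(\mathcal F) = c_1(\mathcal F),\\
\operatorname{tr} A(\mathcal F)^2 &= 2\, ch_2(\mathcal F) = c_1(\mathcal F)^2 - 2\, c_2(\mathcal F).
\end{align*}
```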
Now in the relative situation with our previous notation, the vector bundle $\mathcal E$ has an Atiyah class $A(\mathcal E)$ with values in $\mathrm{Ext}^1(\mathcal{E}, \mathcal E\otimes \Omega^1_{X\times S})$. The latter group maps to the group $H^0(S, \mathcal{E}xt^1_p(\mathcal E, \mathcal E\otimes \Omega^1_{X\times S}))$, which contains
$$H^0(S, \mathcal{E}xt^1_p(\mathcal E, \mathcal E)\otimes \Omega^1_{S})=\mathrm{Hom}(\mathcal{T}_S, \mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E}))$$
as a direct factor. We thus get a morphism of sheaves
$$\tau : \mathcal{T}_S\rightarrow \mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E}).$$
For the following well-known computation, see \cite{HL97} or \cite{Il71}, Chapter IV.
\begin{prop}\label{KS}
The map $\tau$ induced by the Atiyah class of $\mathcal E$ is equal to the Kodaira-Spencer map $\rho$.
\end{prop}
Those two results make it possible to give an explicit description of the map $\phi_{Z}$ of the last section in case the image of $Z$ in the Chow group of $X\times S$ is given by the codimension $k$ part $ch_k(\mathcal E)$ of the Chern character of a vector bundle $\mathcal E$ over $X\times S$. We first introduce a map of sheaves coming from the Kodaira-Spencer map.
For $k$ a positive integer, let
$$\phi_k(\mathcal E) : \bigwedge^k \mathcal T_S \rightarrow H^k(X, \mathcal O_X)\otimes \mathcal O_S$$
be the composition of the $k$-th alternate product of the Kodaira-Spencer map with the map
$$\bigwedge^k \mathcal{E}xt^1_p(\mathcal{E}, \mathcal{E})\rightarrow \mathcal{E}xt^k_p(\mathcal E, \mathcal E)\rightarrow H^k(X, \mathcal O_X)\otimes \mathcal O_S,$$
the first arrow being the Yoneda product and the second being the trace map.
\begin{lem}
We have
$$\phi_k(\mathcal E)=k!\,\phi_{ch_k(\mathcal E)},$$
where $\phi_{ch_k(\mathcal E)}$ is the map appearing in (\ref{def}).
\end{lem}
\begin{proof}
We have the following commutative diagram
\begin{center}
$$\xymatrix{\mathrm{Ext}^1(\mathcal E, \mathcal{E}\otimes \Omega^1_{X\times S})^{\otimes k} \ar[r]\ar[d] & \mathrm{Ext}^k(\mathcal E, \mathcal{E}\otimes \Omega^k_{X\times S})\ar[r]\ar[d] & H^k(X\times S, \Omega^k_{X\times S})\ar[d]\\
H^0(S, \mathcal{E}xt^1_p(\mathcal E, \mathcal{E}\otimes \Omega^1_{X\times S}))^{\otimes k} \ar[r]\ar[d] & H^0(S, \mathcal{E}xt^k_p(\mathcal E, \mathcal{E}\otimes \Omega^k_{X\times S}))\ar[r]\ar[d] & H^0(S, R^k p_*\Omega^k_{X\times S})\ar[d]\\
H^0(S, \Omega^1_S\otimes \mathcal{E}xt^1_p(\mathcal E, \mathcal{E}))^{\otimes k} \ar[r] & H^0(S, \Omega^k_S\otimes \mathcal{E}xt^k_p(\mathcal E, \mathcal{E}))\ar[r] & H^0(S, \Omega^k_S\otimes H^k(X, \mathcal{O}_X)),}$$
\end{center}
where the horizontal maps on the left are given by Yoneda product, the horizontal maps on the right side are the trace maps, the upper vertical maps come from the Leray exact sequence associated to $p$, and the lower vertical maps come from the projection $\Omega^1_{X\times S} \rightarrow p^* \Omega^1_S$.
By definition, and using Proposition \ref{KS}, the element $A(\mathcal E)^{\otimes k}\in \mathrm{Ext}^1(\mathcal E, \mathcal{E}\otimes \Omega^1_{X\times S})^{\otimes k}$ maps to $$\phi_k(\mathcal E)\in \mathrm{Hom}(\bigwedge^k \mathcal T_S, H^k(X, \mathcal O_X)\otimes \mathcal O_S)=H^0(S, \Omega^k_S\otimes H^k(X, \mathcal{O}_X)),$$
following the left side, then the lower side of the diagram.
On the other hand, Proposition \ref{chern} shows that it also maps to $k!\,\phi_{ch_k(\mathcal E)}$, following the upper side, then the right side of the diagram. This concludes the proof.
\end{proof}
As an immediate consequence, we get the following criterion.
\begin{thm}\label{crit}
Let $X$ be a smooth projective variety, and assume the same hypotheses as in Theorem \ref{vari}. Then the Lefschetz conjecture is true in degree $k$ for $X$ if there exists a smooth quasi-projective scheme $S$, a vector bundle $\mathcal E$ over $X\times S$, and a point $s\in S$ such that the morphism
\begin{equation}\label{critere}
\phi_k(\mathcal E)_s : \bigwedge^k \mathcal T_{S,s} \rightarrow H^k(X, \mathcal O_X)
\end{equation}
induced by $\phi_k(\mathcal E)$ is surjective.
\end{thm}
\begin{rk}
Since Chern classes of vector bundles generate the Chow groups of smooth varieties, we can get a converse to the preceding statement by stating the theorem for complexes of vector bundles -- or of coherent sheaves. The statement would be entirely similar. As we will not use it in that form, we keep the preceding formulation.
\end{rk}
\paragraph{Example.} Let $A$ be a polarized complex abelian variety of dimension $g$. The tangent bundle of $A$ is canonically isomorphic to $H^1(A, \mathcal{O}_A)\otimes \mathcal{O}_A$. The trivial line bundle $\mathcal{O}_A$ on $A$ admits a family of deformations parametrized by $A$ itself such that the Kodaira-Spencer map $T_{A,O}\rightarrow H^1(A, \mathcal{O}_A)$ is the identity under the above identification. Now the induced deformation of $\mathcal{O}_A\oplus \mathcal{O}_A$ parametrized by $A\times A$ satisfies the criterion of Theorem \ref{crit}, since the map $\bigwedge^2 H^1(A, \mathcal{O}_A) \rightarrow H^2(A, \mathcal{O}_A)$ given by cup-product is surjective and identifies with the map (\ref{critere}). Of course, the Lefschetz conjecture for abelian varieties is well-known, see \cite{Li68}, Theorem 3.
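The surjectivity claim in this example follows from the standard description of the cohomology of the structure sheaf of an abelian variety as an exterior algebra:

```latex
% For an abelian variety A of dimension g, cup-product induces a canonical isomorphism
\begin{equation*}
\bigwedge^{k} H^1(A, \mathcal{O}_A) \xrightarrow{\ \sim\ } H^k(A, \mathcal{O}_A),
\qquad 0 \leq k \leq g,
\end{equation*}
% so for k = 2 and g >= 2 the cup-product map considered in the example is even
% an isomorphism, in particular surjective.
```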
\section{The case of hyperk\"{a}hler varieties}
In this section, we describe how Verbitsky's theory of hyperholomorphic bundles on hyperk\"{a}hler varieties as developed in \cite{Ver96} and \cite{Ver99} makes those a promising source of examples for Theorem \ref{crit}. Unfortunately, we were not able to provide examples, as it appears some computations of dimensions of moduli spaces in \cite{Ver99} were incorrect, but we will show how the existence of nontrivial examples of moduli spaces of hyperholomorphic bundles on hyperk\"{a}hler varieties as conjectured in \cite{Ver99} implies the Lefschetz standard conjecture in degree $2$.
\subsection{Hyperholomorphic bundles on hyperk\"ahler varieties}
See \cite{Be83} for general definitions and results. An irreducible hyperk\"ahler variety is a simply connected compact K\"ahler manifold which admits a closed holomorphic everywhere non-degenerate two-form which is unique up to a scalar. As such, an irreducible hyperk\"ahler variety $X$ has $H^2(X, \mathcal{O}_X)=\mathbb{C}$, and Theorem \ref{crit} takes the following simpler form in degree $2$.
\begin{thm}
Let $X$ be an irreducible projective hyperk\"ahler variety. The Lefschetz conjecture is true in degree $2$ for $X$ if there exists a smooth quasi-projective variety $S$, a vector bundle $\mathcal E$ over $X\times S$, and a point $s\in S$ such that the morphism
\begin{equation}
\phi_2(\mathcal E)_s : \bigwedge^2 \mathcal T_{S,s} \rightarrow H^2(X, \mathcal O_X),
\end{equation}
induced by the Kodaira-Spencer map and the trace map, is nonzero.
\end{thm}
\bigskip
In the paper \cite{Be83}, Beauville constructs two families of projective irreducible hyperk\"ahler varieties in dimension $2n$ for every integer $n$. Those are the $n$-th punctual Hilbert scheme $S^{[n]}$ of a projective $K3$ surface $S$ and the generalized Kummer variety $K_n$ which is the fiber at the origin of the Albanese map from $A^{[n+1]}$ to $A$, where $A$ is an abelian surface and $A^{[n+1]}$ is the $(n+1)$-st punctual Hilbert scheme of $A$.
The Bogomolov-Tian-Todorov theorem, see \cite{Bo78}, \cite{Ti87}, \cite{To89}, states that the local moduli space of deformations of an irreducible hyperk\"ahler variety is unobstructed. Small deformations of a hyperk\"ahler variety remain hyperk\"ahler, and in the local moduli space of $S^{[n]}$ and $K_n$, the projective hyperk\"ahler varieties form a dense countable union of hypersurfaces. The varieties $S^{[n]}$ and $K_n$ have Picard number at least $2$, whereas a very general projective irreducible hyperk\"ahler variety has Picard number $1$, hence is not of this form. Except in dimension $6$ and $10$, where O'Grady constructs in \cite{OG99} and \cite{OG03} new examples, all the known hyperk\"ahler varieties are deformations of $S^{[n]}$ or $K_n$.
\bigskip
The Lefschetz standard conjecture is easy to prove in degree $2$ for $S^{[n]}$ (resp. $K_n$), using the tautological correspondence with the $K3$ surface (resp. the abelian surface), see \cite{Ar06}, Corollary 7.5. In terms of Theorem \ref{crit}, one can show that the tautological sheaf on $S^{[n]}$ (resp. $K_n$) associated to the tangent sheaf of $S$ (resp. of $A$) has enough deformations to prove the Lefschetz conjecture in degree $2$. Since the tautological correspondence between $S$ and $S^{[n]}$ gives an isomorphism between $H^{2,0}(S)$ and $H^{2,0}(S^{[n]})$, checking that the criterion is satisfied amounts to the following.
\begin{prop}
Let $S$ be a projective $K3$ surface. Then there exists a smooth quasi-projective variety $M$ with a distinguished point $O$ parametrizing deformations of $\mathcal{T}_S$, and a vector bundle $\mathcal E$ over $M\times S$ with $\mathcal{E}_{|\{O\}\times S} \simeq \mathcal{T}_S$, such that the map
$$\phi_2(\mathcal E)_O : \bigwedge^2 \mathcal T_{M,O} \rightarrow H^2(S, \mathcal O_S)$$
induced by the Kodaira-Spencer map and the trace map, is nonzero.
\end{prop}
\begin{proof}
This is proved by Mukai in \cite{Mu84}. A Riemann-Roch computation proves that the moduli space of deformations of the tangent bundle of a $K3$ surface is smooth of dimension $90$.
\end{proof}
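For the reader's convenience, here is a sketch of the dimension count, which is a standard application of Riemann-Roch on a $K3$ surface; we use $c_1(\mathcal{T}_S)=0$, $\int_S c_2(\mathcal{T}_S)=24$ and $td(S)=1+2[pt]$:

```latex
% Euler characteristic of End(T_S) = T_S tensor its dual:
\begin{equation*}
\chi(\mathcal{T}_S \otimes \mathcal{T}_S^{\vee})
= \int_S ch(\mathcal{T}_S)\, ch(\mathcal{T}_S^{\vee})\, td(S)
= \int_S (2 - c_2)^2 (1 + 2[pt]) = 8 - 96 = -88.
\end{equation*}
% Since T_S is stable, hence simple, hom(T_S, T_S) = 1, and Serre duality
% (the canonical bundle of S being trivial) gives ext^2(T_S, T_S) = 1, so
\begin{equation*}
\dim \mathrm{Ext}^1(\mathcal{T}_S, \mathcal{T}_S)
= 1 + 1 - \chi(\mathcal{T}_S \otimes \mathcal{T}_S^{\vee}) = 90.
\end{equation*}
```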
This last proof is of course very specific to Hilbert schemes and does not apply as such to other hyperk\"ahler varieties. We feel nonetheless that it exhibits general facts about hyperk\"ahler varieties which seem to give strong evidence to the Lefschetz conjecture in degree $2$.
\subsection{Consequences of the existence of a hyperk\"ahler structure on the moduli space of stable hyperholomorphic bundles}
In his paper \cite{Mu84}, Mukai studies the moduli spaces of some stable vector bundles on K3 surfaces and endows them with a symplectic structure by showing that the holomorphic two-form induced by (\ref{critere}) on the moduli space is nondegenerate. Of course, this result is not directly useful when dealing with the Lefschetz standard conjecture in degree 2 as it is trivial for surfaces. Nevertheless, Verbitsky shows in \cite{Ver96} that it is possible to extend Mukai's result to the case of higher-dimensional hyperk\"ahler varieties.
\bigskip
Before describing Verbitsky's results, let us recall some general facts from linear algebra around quaternionic actions and symplectic forms. This is all well-known, and described for instance in \cite{Be83}, Example 3, and \cite{Ver96}, section 6. Let $\mathbb H$ denote the quaternions, and let $V$ be a real vector space endowed with an action of $\mathbb H$ and a Euclidean metric $(\cdot,\cdot)$.
Let $I\in \mathbb H$ be a quaternion such that $I^2=-1$. The action of $I$ on $V$ gives a complex structure on $V$. We say that $V$ is quaternionic hermitian if the metric on $V$ is hermitian for all such complex structures $I$. Fix such an $I$, and choose $J$ and $K$ in $\mathbb H$ satisfying the quaternionic relations $I^2=J^2=K^2=-Id, IJ=-JI=K$. We can define on $V$ a complex-valued symplectic form $\eta$ by $\eta(x,y)=(x, Jy)+i(x, Ky)$. This symplectic form does not depend on the choice of $J$ and $K$. Furthermore, $\eta$ is $\mathbb{C}$-bilinear for the complex structure induced by $I$. Now given such $I$ and $\eta$ on $V$, it is straightforward to reconstruct a quaternionic action on $V$ by taking the real and imaginary parts of $\eta$.
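As a consistency check, and assuming the metric is preserved by $I$, $J$ and $K$, the complex linearity of $\eta$ in the first variable follows from a direct computation:

```latex
% Using (Lx, Ly) = (x, y) for L = I, J, K, together with IJ = K and IK = -J:
\begin{align*}
\eta(Ix, y) &= (Ix, Jy) + i\,(Ix, Ky)\\
&= (I(Ix), I(Jy)) + i\,(I(Ix), I(Ky))\\
&= -(x, Ky) + i\,(x, Jy)\\
&= i\left[(x, Jy) + i\,(x, Ky)\right] = i\,\eta(x, y).
\end{align*}
```

The computation for the second variable is identical, using $JI=-K$ and $KI=J$.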
\smallskip
Taking $V$ to be the tangent space to a complex variety, we can globalize the previous computations to get the following. Let $X$ be an irreducible hyperk\"ahler variety with given K\"ahler class $\omega$. Then the manifold $X$ is endowed with a canonical hypercomplex structure, that is, three complex structures $I, J, K$ which satisfy the quaternionic relations $I^2=J^2=K^2=-Id, IJ=-JI=K$. It is indeed possible to check that $J$ and $K$ obtained as before are actually integrable. Conversely, the holomorphic symplectic form on $X$ can be recovered from $I, J, K$ and a K\"ahler form on $X$ with class $\omega$.
If $\mathcal E$ is a complex hermitian vector bundle on $X$ with a hermitian connection $\theta$, we say that $\mathcal E$ is hyperholomorphic if $\theta$ is compatible with the three complex structures $I, J$ and $K$. In case $\mathcal E$ is stable, this is equivalent to the first two Chern classes of $\mathcal E$ being Hodge classes for the Hodge structures induced by $I, J$ and $K$, see \cite{Ver96}, Theorem 2.5. This implies that any stable deformation of a stable hyperholomorphic bundle is hyperholomorphic. It is a consequence of Yau's theorem, see \cite{Ya78}, that the tangent bundle of $X$ is a stable hyperholomorphic bundle.
\bigskip
Let $\mathcal E$ be a stable hyperholomorphic vector bundle on $X$, and let $S=Spl(\mathcal E, X)$ be the reduction of the coarse moduli space of stable deformations of $\mathcal E$ on $X$. For $s$ a complex point of $S$, let $\mathcal E_s$ be the corresponding hyperholomorphic bundle. The Zariski tangent space to $S$ at $s$ maps to $Ext^1(\mathcal{E}_s, \mathcal{E}_s)$ using the map from $S$ to the coarse moduli space of stable deformations of $\mathcal E$. We can now define a global section $\eta_S$ of $\mathcal{H}om(\mathcal T_S\otimes \mathcal T_S, \mathcal O_S)$, where $\mathcal T_S$ is the tangent sheaf to $S$, by the composition
$$\mathcal T_{S,s}\otimes \mathcal T_{S,s} \rightarrow \bigwedge^2 Ext^1(\mathcal{E}_s, \mathcal{E}_s)\rightarrow Ext^2(\mathcal E_s, \mathcal E_s)\rightarrow H^2(X, \mathcal O_X)=\mathbb{C}$$
as in the preceding section. The following is due to Verbitsky, see part (iv) of the proof in section 9 of \cite{Ver96} for the second statement.
\begin{thm}(\cite{Ver96}, Theorem 6.3)\label{mod}
Let $Spl(\mathcal E, X)$ be the reduction of the coarse moduli space of stable deformations of $\mathcal E$ on $X$. Then $S=Spl(\mathcal E, X)$ is endowed with a canonical hyperk\"ahler structure. The holomorphic section of $\mathcal{H}om(\mathcal T_S\otimes \mathcal T_S, \mathcal O_S)$ induced by this hyperk\"ahler structure is $\eta_S$.
\end{thm}
In this theorem, $S$ does not have to be smooth. We use Verbitsky's definition of a singular hyperk\"ahler variety as in \cite{Ver96}, Definition 6.4.
\bigskip
We can now prove Theorem \ref{nd}.
\begin{proof}[\textbf{Proof of Theorem \ref{nd}}]
Let $X$ be a smooth projective irreducible hyperk\"ahler variety, and let $\mathcal E$ be a stable hyperholomorphic bundle on $X$. Assume that $\mathcal E$ has a nontrivial positive-dimensional family of deformations, and let $s$ be a smooth point of $S=Spl(\mathcal E, X)$ such that $\mathcal T_{S,s}$ is positive dimensional. We can choose a smooth quasi-projective variety $S'$ with a complex point $s'$ and a family $\mathcal E_{S'}$ of stable hyperholomorphic deformations of $\mathcal E$ on $X$ parametrized by $S'$ such that the moduli map $S'\rightarrow S$ maps $s'$ to $s$ and is \'etale at $s'$. Since $\eta_S$ induces a symplectic form on $\mathcal T_{S,s}$, the map
$$\phi_2(\mathcal E_{S'})_{s'} : \bigwedge^2 \mathcal T_{S',s'} \rightarrow H^2(X, \mathcal O_X)=\mathbb{C}$$
is surjective as it identifies with $\eta_{S,s}$ under the isomorphism $\mathcal T_{S',s'}\xrightarrow{\sim}\mathcal T_{S,s}$. The result now follows from Theorem \ref{crit}.
\end{proof}
\bigskip
In order to prove Theorem \ref{stdef}, we need to recall some well-known results on deformations of hyperk\"ahler varieties. Everything is contained in \cite{Be83}, Section 8 and \cite{Ver96}, Section 1. See also \cite{Hu99}, Section 1 for a similar discussion. Let $X$ be an irreducible hyperk\"ahler variety with given K\"ahler class $\omega$. Let $\eta$ be a holomorphic everywhere non-degenerate $2$-form on $X$. Let $q$ be the Beauville-Bogomolov quadratic form on $H^2(X, \mathbb{Z})$, and consider the complex projective plane $P$ in $\mathbb{P}(H^2(X, \mathbb{C}))$ generated by $\eta, \overline{\eta}$ and $\omega$. There exists a quadric $Q$ of deformations of $X$ given by the elements $\alpha\in P$ such that $q(\alpha)=0$ and $q(\alpha+\overline{\alpha})>0$.
Recalling that the tangent bundle of $X$ comes with an action of the group of quaternions of norm $1$ given by the three complex structures $I, J, K$, which satisfy the quaternionic relations $I^2=J^2=K^2=-Id, IJ=-JI=K$, this quadric $Q$ of deformations of $X$ corresponds to the complex structures on $X$ of the form $aI+bJ+cK$ with $a,b,c$ being three real numbers such that $a^2+b^2+c^2=1$ -- those complex structures are always integrable. The quadric $Q$ is called a twistor line.
In this setting, let $d$ be the cohomology class of a divisor in $H^2(X, \mathbb{C})$, and let $\alpha$ be in $Q$. This corresponds to a deformation $X_{\alpha}$ of $X$. The cohomology class $d$ corresponds to a rational cohomology class in $H^2(X_{\alpha}, \mathbb{C})$, and it is the cohomology class of a divisor if and only if it is of type $(1,1)$, that is, if and only if $q(\alpha, d)=0$, where we also denote by $q$ the associated bilinear form. Indeed, $d$ is a real cohomology class, so if $q(\alpha, d)=0$, then $q(\overline{\alpha}, d)=0$ and $d$ is of type $(1,1)$. It follows from this computation that $d$ remains the class of a divisor for all the deformations of $X$ parametrized by $Q$ if and only if $q(\eta,d)=q(\omega, d)=0$.
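The last equivalence can be checked directly: writing a point of the plane $P$ as $\alpha = a\eta + b\overline{\eta} + c\omega$ and using that $d$ is real, we get

```latex
\begin{equation*}
q(\alpha, d) = a\, q(\eta, d) + b\, \overline{q(\eta, d)} + c\, q(\omega, d),
\end{equation*}
% and since the quadric Q spans the plane P, q(\alpha, d) vanishes for every
% \alpha in Q if and only if q(\eta, d) = q(\omega, d) = 0.
```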
\bigskip
We will work with the varieties $S^{[n]}$, the case of generalized Kummer varieties being completely similar. Let us start with a K3 surface $S$, projective or not, and let us consider the irreducible hyperk\"ahler variety $X=S^{[n]}$ given by the Douady space of $n$ points in $S$ -- this is K\"ahler by \cite{Va89}. In the moduli space $M$ of deformations of $X$, the varieties of type $S'^{[n]}$ form a countable union of smooth hypersurfaces $H_i$. On the other hand, any such hyperk\"ahler variety admits deformations parametrized by a twistor line, and such a twistor line cannot be included in any of the $H_i$. Indeed, if that were the case, the class $e$ of the exceptional divisor of $X=S^{[n]}$ would remain algebraic in all the deformations parametrized by the twistor line. But this is impossible, as $e$ is a class of an effective divisor, which implies that $q(\omega, e)>0$, $\omega$ being a K\"ahler class, see \cite{Hu99}, 1.11 and 1.17.
This computation actually shows that the twistor lines are transverse to the hypersurfaces $H_i$. Now the preceding definition of the twistor line parametrizing deformations of an irreducible hyperk\"ahler variety $X$ shows that it moves continuously with deformations of $X$. Counting dimensions, this implies that the union of the twistor lines parametrizing deformations of Douady spaces of $n$ points on K3 surfaces covers a neighborhood of the $H_i$ in $M$. We thus get the following.
\begin{lem}\label{twist}
Let $n$ be a positive integer, and let $X$ be a small projective deformation of the Douady space of $n$ points on a K3 surface. Then there exists a K3 surface $S$ and a twistor line $Q$ parametrizing deformations of $S^{[n]}$ such that $X$ is a deformation of $S^{[n]}$ along $Q$.
\end{lem}
\bigskip
The next result of Verbitsky is the main remaining ingredient we need to prove Theorem \ref{stdef}. Recall first that if $\mathcal E$ is a hyperholomorphic vector bundle on an irreducible hyperk\"ahler variety $X$, then by definition the bundle $\mathcal E$ deforms as $X$ deforms along the twistor line.
\begin{thm}\label{defmod}(\cite{Ver96}, Corollary 10.1)
Let $X$ be an irreducible hyperk\"ahler variety, let $\mathcal E$ be a stable hyperholomorphic vector bundle on $X$, and let $Spl(\mathcal E, X)$ be the reduction of the coarse moduli space of stable deformations of $\mathcal E$ on $X$.
Then the canonical hyperk\"ahler structure on $Spl(\mathcal E, X)$ is such that the twistor line $Q$ parametrizing deformations of $X$ is also a twistor line parametrizing deformations of $Spl(\mathcal E, X)$, and for $\alpha\in Q$ we have $Spl(\mathcal E, X)_{\alpha}=Spl(\mathcal E_{\alpha}, X_{\alpha})$.
\end{thm}
This implies that the deformations of a hyperholomorphic bundle on $X$ actually deform as the complex structure of $X$ moves along a twistor line. We can now prove our last result.
\begin{proof}[\textbf{Proof of Theorem \ref{stdef}}]
Let $X$ be an irreducible projective hyperk\"ahler variety that is a deformation of the Douady space of $n$ points on some K3 surface. By a standard Hilbert scheme argument, in order to prove the Lefschetz conjecture for $X$, it is enough to prove it for an open set of the moduli space of projective deformations of $X$. By Lemma \ref{twist}, we can thus assume that $X$ is a deformation of some $S^{[n]}$ along a twistor line $Q$, where $S$ is a K3 surface. Let $\mathcal E$ on $S^{[n]}$ be a sheaf as in the statement of the theorem. By Theorems \ref{defmod} and \ref{nd}, we get a bundle $\mathcal E'$ which still satisfies the hypothesis of Theorem \ref{crit}. This concludes the proof.
\end{proof}
It is particularly tempting to use this theorem with the tangent bundle of $S^{[n]}$, which is stable by Yau's theorem and hyperholomorphic since its first two Chern classes are Hodge classes for all the complex structures induced by the hyperk\"ahler structure of $S^{[n]}$. Unfortunately, while Verbitsky announces in \cite{Ver99}, after the proof of Corollary 10.24, that those have some unobstructed deformations for $n=2$ and $n=3$, it seems that if $n=2$, the tangent bundle might actually be rigid. However, we get the following result by applying the last theorem to the tangent bundle.
\begin{cor}
Let $n$ be a positive integer. Assume that for every K3 surface $S$, the tangent bundle $\mathcal T_{S^{[n]}}$ of $ S^{[n]}$ has a nontrivial positive-dimensional family of deformations. Then the Lefschetz conjecture is true in degree $2$ for any projective deformation of the Douady space of $n$ points on a K3 surface.
\end{cor}
\begin{rk}
The conditions of the corollary might actually not be so difficult to check. Indeed, Verbitsky's Theorem 6.2 of \cite{Ver96}, which computes the obstruction to extending first-order deformations, easily implies that the obstruction to deforming $\mathcal T_{S^{[n]}}$ actually lies in $H^2(S^{[n]}, \Omega^2_{S^{[n]}})$, where we see this group as a subgroup of
$$\mathrm{Ext}^2(\mathcal T_{S^{[n]}}, \mathcal T_{S^{[n]}})=H^2(S^{[n]}, \Omega^{\otimes 2}_{S^{[n]}})$$
under the isomorphism $\mathcal T_{S^{[n]}}\simeq \Omega^1_{S^{[n]}}$.
Now the dimension of $H^2(S^{[n]}, \Omega^{2}_{S^{[n]}})$ does not depend on $n$ for large $n$, see for instance \cite{GS93}, Theorem 2. As a consequence, the hypothesis of the Corollary would be satisfied for large $n$ as soon as the dimension of $\mathrm{Ext}^1(\mathcal T_{S^{[n]}}, \mathcal T_{S^{[n]}})$ goes to infinity with $n$.
\end{rk}
\begin{rk}
Of course, our results might apply to different sheaves. In the recent preprint \cite{Ma10}, Markman announces the construction of -- possibly twisted -- sheaves that, combined with our results, prove the Lefschetz standard conjecture in degree 2 for deformations of Hilbert schemes of K3 surfaces.
\end{rk}
\begin{rk}
It is quite surprising that we make use of nonprojective K\"ahler varieties in these results dealing with the standard conjectures. Indeed, results like those of Voisin in \cite{Voi02} show that there can be very few algebraic cycles, whether coming from subvarieties or even from Chern classes of coherent sheaves, on general nonprojective K\"ahler varieties.
\end{rk}
\section{Introduction}
Services are being moved from the cloud to the edge of the network.
This migration is due to several reasons: lack of trust in the cloud provider~\cite{baumann2015shielding}, energy savings~\cite{lyu2018selective,ning2019green} or the desire to reclaim control over data and code.
Edge devices are used to accumulate, process and stream data~\cite{makinen2015streaming,varghese2016challenges}.
The nature of such data can be potentially very sensitive: edge devices can be used to process health-based data emitted by body sensors (\eg cardiac data~\cite{segarra2019}), data originated by smart home sensors indicating the presence or absence of humans inside a household, or even financial transactions~\cite{lind2016teechan,shepherd2017establishing}.
In this context, applications using this information must be protected against powerful attackers, potentially even with physical access to the devices.
Additionally, communication channels for inter-edge device applications must also be secured to prevent threats such as man-in-the-middle attacks.
Edge devices are typically low-energy units with limited processing and storage capacity.
As such, it is impractical to rely on sophisticated software-based protection mechanisms (\eg homomorphic encryption~\cite{naehrig2011can}), due to their currently high processing requirements and low performance~\cite{gottel2018security}.
Alternatively, new hardware-based protection mechanisms can be easily leveraged by programmers to provide the mentioned protection guarantees.
Specifically, \emph{trusted execution environments} (TEEs) are increasingly made available by hardware vendors in edge-devices~\cite{shepherd2016secure}.
Several \ARM-based devices, such as the popular Raspberry Pi\footnote{\url{https://www.raspberrypi.org}, accessed on 30.07.2019}, embed native support for \textsc{TrustZone}\xspace~\cite{ngabonziza2016trustzone,arm:tz}, \ARM's specific design for TEEs.
\textsc{TrustZone}\xspace can be leveraged to deploy \emph{trusted applications} (TAs) with additional security guarantees.
There exist several programming frameworks and runtime systems to
develop TAs for \textsc{TrustZone}\xspace with varying capabilities and different degrees of
stability and support (\eg SierraTEE\footnote{\url{https://www.sierraware.com/open-source-ARM-TrustZone.html}, accessed on 30.07.2019}, \textsc{Op-Tee}\xspace\footnote{\url{https://www.op-tee.org}, accessed on 30.07.2019}, and~\cite{McGillion2015OpenTEEA}).
While few studies look at the interaction between TEEs and the corresponding untrusted execution environments~\cite{jang2015secret,amacher2019}, little is known about the network performance bottlenecks experienced by TAs on \textsc{Arm}\xspace processors. %
We fill this gap by contributing \textsc{iperfTZ}\xspace, a tool to measure accurately the network performance (\eg latency, throughput) of TAs for \textsc{TrustZone}\xspace.
\textsc{iperfTZ}\xspace consists of three components, namely \textit{(1)} a client application, \textit{(2)} a TA, and \textit{(3)} a server.
Our tool can be used to guide the calibration of TAs for demanding workloads, for instance understanding the exchanges with untrusted applications or for secure inter-TEE applications~\cite{shepherd2017establishing}.
In addition, \textsc{iperfTZ}\xspace can be used to study the impact of network and memory performance on the energy consumption of running TAs.
By adjusting \textsc{iperfTZ}\xspace's parameters, users can evaluate the network throughput of their TAs and quickly uncover potential bottlenecks early in the development cycle.
For instance, internal buffer sizes affect the achievable network throughput rates by a factor of \SI{1.8}{\times}, almost halving throughput rates.
The rest of the paper is organized as follows.
\autoref{sec:usecase} motivates the need for tools analyzing TAs.
We provide an in-depth background on \textsc{TrustZone}\xspace in \autoref{sec:background}, as well as covering details on the \textsc{TrustZone}\xspace runtime system \textsc{Op-Tee}\xspace.
In \autoref{sec:archicture} we present the architecture of \textsc{iperfTZ}\xspace and some implementation details in \autoref{sec:implementation}. %
We report our evaluation results in \autoref{sec:evaluation}.
We cover related work in \autoref{sec:rw} before concluding in \autoref{sec:conclusion}.
\section{Motivating Scenario}
\label{sec:usecase}
We consider scenarios with simple yet practical services deployed as TAs.
For instance, in~\cite{dais19} authors deploy key-value stores inside a \textsc{TrustZone}\xspace runtime system.
Benchmarks show a $12\times$-$17\times$ slowdown when compared to plain (yet insecure) deployments, due to shared memory mechanisms between the trusted and untrusted environments.
As further detailed in \autoref{sec:archicture}, networking in \textsc{Op-Tee}\xspace is supported by similar shared memory mechanisms.
Yet, we observe the lack of tools to clearly highlight the root causes of such bottlenecks.
Further, in the \textsc{TrustZone}\xspace ecosystem proper tools to evaluate network bottlenecks are missing, in contrast to untrusted environments (\eg, \texttt{iperf3}\footnote{\url{https://software.es.net/iperf/}, accessed on 30.07.2019}, \texttt{netperf}\footnote{\label{fn:netperf}\url{https://hewlettpackard.github.io/netperf/}, accessed on 30.07.2019}, \texttt{nuttcp}\footnote{\label{fn:nuttcp}\url{https://www.nuttcp.net/Welcome\%20Page.html}, accessed on 30.07.2019}).
The overhead originating from the shared memory mechanism can be identified by comparing the measured network throughput inside and outside the TEE.
Measuring such overheads is of particular relevance in embedded, mobile and IoT environments.
In those scenarios, devices are often battery powered, limited both in time and capacity. %
Hence, network performance tools should further highlight energy costs, pointing users to specific bottlenecks.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.495\textwidth}
\centering
\includegraphics[width=\linewidth]{trustzone}
\caption{{\textsc{Arm}\xspace}v8.4-A architecture~\cite{arm:sel2}}
\label{fig:blocktz}
\end{subfigure}
\quad%
\begin{subfigure}[t]{0.465\textwidth}
\centering
\includegraphics[width=\linewidth]{globalplatform}
\caption{\textsc{GlobalPlatform}\xspace architecture~\cite{gp:sysarch}}%
\label{fig:blockgp}
\end{subfigure}%
\caption{Block diagrams highlighting relevant software components.}%
\label{fig:blockdia}
\end{figure*}
\section{Background}
\label{sec:background}
This section provides a background on \textsc{Arm}\xspace \mbox{\textsc{TrustZone}\xspace} (\autoref{subsec:tz}), the \textsc{GlobalPlatform}\xspace specifications (\autoref{subsec:gpspec}) and \textsc{Op-Tee}\xspace, the \textsc{TrustZone}\xspace runtime system used for \textsc{iperfTZ}\xspace (\autoref{subsec:optee}).
This background helps understanding technical challenges in our context and how \textsc{iperfTZ}\xspace addresses them.
\subsection{ARM TrustZone in a Nutshell}
\label{subsec:tz}
\textsc{TrustZone}\xspace is a security architecture designed for \textsc{Arm}\xspace processors and was introduced in 2003~\cite{arm:sel2}.
It partitions hardware and software resources into two worlds, \ie \emph{secure} and \emph{normal} world, as shown in \autoref{fig:blocktz}.
A dedicated \texttt{NS} bit~\cite{arm:tz} drives this world separation and allows to execute secure (\texttt{NS} bit set low) or non-secure (\texttt{NS} bit set high) transactions on the system bus. %
In general, non-secure transactions cannot access system resources secured by a low \texttt{NS} bit.
The \textsc{TrustZone}\xspace architecture spans beyond the system bus, including peripherals (\eg GPUs~\cite{volos2018graviton} and I/O). %
Every \textsc{TrustZone}\xspace-enabled processor is logically split into a secure and a non-secure (virtual) core, executing in a time-shared manner.
Hence, accessible system resources are determined by the executing core: secure cores can access all system resources, while non-secure cores can only access non-secure ones.
\textsc{Arm}\xspace processors embed one \emph{memory management unit} (MMU) per virtual core in charge of mapping virtual addresses to physical addresses.
The \emph{translation lookaside buffer} (TLB) in the MMU is used to maintain the mapping translations from virtual to physical memory addresses.
Tagging TLB entries with the identity of the world %
allows secure and non-secure address translation entries to co-exist.
With tags, the TLB no longer has to be flushed, making fast world switches possible.
The implementation of \textsc{TrustZone}\xspace is organized into four \emph{exception levels} (EL) with increasing privileges~\cite{arm:a53} (\autoref{fig:blocktz}).
EL0, the lowest one, executes unprivileged software.
EL1 executes operating systems, while EL2 provides support for virtualization. %
Finally, \textsc{Arm}\xspace Trusted Firmware runs at EL3, dispatching boot stages at boot time and monitoring secure states.
Switches between the two worlds are supervised by a secure monitor~\cite{arm:v8}. %
It is invoked in two ways: \emph{(1)} by executing a \emph{secure monitor call} (SMC), or \emph{(2)} by a subset of \emph{hardware exception mechanisms}~\cite{arm:tz}.
When invoked, the secure monitor saves the state of the currently executing world, before restoring the state of the world being switched to.
After dealing with the worlds' state, the secure monitor returns from exception to the restored world.
\subsection{The GlobalPlatform Standard}
\label{subsec:gpspec}
\textsc{GlobalPlatform}\xspace\footnote{\url{https://globalplatform.org}, accessed on 30.07.2019} publishes specifications for several TEEs (\eg \textsc{Op-Tee}\xspace and~\cite{McGillion2015OpenTEEA}).
We provide more details on \textsc{Op-Tee}\xspace in \autoref{subsec:optee} (an implementation of such specifications), while briefly explaining the terminology in the remainder to understand~\autoref{fig:blockgp}.
An \emph{execution environment} (EE) provides all components to execute applications, including hardware and software components. %
A \emph{rich execution environment} (REE) runs a rich OS, generally designed for performance.
However, it lacks access to any secure component.
In contrast, TEEs are designed for security, but programmers have to rely on a reduced set of features.
A trusted OS manages the TEE under constrained memory and storage bounds.
TEE and REE run alongside each other.
In recent \textsc{Arm}\xspace releases (since v8.4), multiple TEEs can execute in parallel~\cite{arm:sel2}, each with their own trusted OS.
TAs rely on system calls implemented by the trusted OS, typically implemented as specific APIs~\cite{gp:internal}.
\emph{Client applications} (CA) running in the rich OS can communicate with TAs using the \emph{TEE Client API}.
Similarly, TAs can access resources such as \emph{secure elements} (\ie tamper-resistant devices), \emph{trusted storage}, and \emph{peripherals}, or send messages outside the TEE.
\emph{Communication agents} in the TEE and REE mediate exchanges between TAs and CAs.
Finally, the \emph{TEE Socket API} can be used by TAs to setup network connections with remote CAs and TAs.
\subsection{\textsc{Op-Tee}\xspace: Open Portable Trusted Execution Environment}
\label{subsec:optee}
\textsc{Op-Tee}\xspace is an open-source implementation of several \mbox{\textsc{GlobalPlatform}\xspace} specifications~\cite{gp:sysarch,gp:client,gp:internal,gp:socket} with native support for \textsc{TrustZone}\xspace.
The \textsc{Op-Tee}\xspace OS manages the TEE resources, while any Linux-based distribution can be used as rich OS alongside it.
\textsc{Op-Tee}\xspace supports two types of TAs: \emph{(1)} regular TAs~\cite{gp:sysarch} running at EL0, and \emph{(2)} \emph{pseudo TAs} (PTAs), statically linked against the \textsc{Op-Tee}\xspace OS kernel.
PTAs run at EL1 as secure privileged-level services inside \textsc{Op-Tee}\xspace OS's kernel.
Finally, \textsc{Op-Tee}\xspace provides a set of client libraries to interact with TAs and to access secure system resources from within the TEE.
\section{Networking for Trusted Applications}
\label{sec:archicture}
The client application in the REE acts as a proxy interface to the TA, forwarding arguments to it.
First, the application creates a context (\texttt{TEEC\_InitializeContext}).
Then, it allocates dynamic shared-memory areas (\texttt{TEEC\_AllocateSharedMemory}), used as buffers between the secure and normal worlds, and forwards function arguments, piggybacked upon the session creation (\texttt{TEEC\_OpenSession}).
Functions in the TA can be used through \texttt{TEEC\_InvokeCommand} calls.
Once the session is closed (\texttt{TEEC\_CloseSession}), the shared-memory areas can be released from the context and freed before finalizing the context.
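This call sequence can be sketched in C against the \textsc{GlobalPlatform}\xspace TEE Client API; the UUID, command identifier and buffer size below are placeholders, error handling is omitted, and the fragment only builds and runs on a \textsc{TrustZone}\xspace-enabled device with the client library (\texttt{tee\_client\_api.h}) installed:

```c
/* Illustrative CA-side sketch of the lifecycle described above.
 * TA_EXAMPLE_UUID and TA_CMD_RUN are placeholders, not iperfTZ's. */
#include <tee_client_api.h>

#define TA_EXAMPLE_UUID { 0x00000000, 0x0000, 0x0000, { 0 } }
#define TA_CMD_RUN      0

int run_ta(void) {
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_SharedMemory shm = { .size  = 128 * 1024,
                              .flags = TEEC_MEM_INPUT | TEEC_MEM_OUTPUT };
    TEEC_Operation op = { 0 };
    TEEC_UUID uuid = TA_EXAMPLE_UUID;
    uint32_t origin;

    TEEC_InitializeContext(NULL, &ctx);       /* open the TEE device */
    TEEC_AllocateSharedMemory(&ctx, &shm);    /* buffer between worlds */

    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_WHOLE, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].memref.parent = &shm;

    TEEC_OpenSession(&ctx, &sess, &uuid, TEEC_LOGIN_PUBLIC,
                     NULL, &op, &origin);     /* arguments piggybacked */
    TEEC_InvokeCommand(&sess, TA_CMD_RUN, &op, &origin); /* blocks */

    TEEC_CloseSession(&sess);
    TEEC_ReleaseSharedMemory(&shm);           /* release before context */
    TEEC_FinalizeContext(&ctx);
    return 0;
}
```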
For networked TAs, \ie TAs that generate or receive network traffic, runtime systems must provide support for sockets and corresponding APIs. %
To do so, either \emph{(1)} the TEE borrows the network stack from the REE, or \emph{(2)} the TEE relies on \emph{trusted device drivers}.
The former solution implies leveraging \emph{remote procedure calls} (RPC) to a \texttt{tee-supplicant} (an agent which responds to requests from the TEE), and achieves a much smaller \emph{trusted computing base}.
The latter allows for direct access to the network device drivers for much lower network latencies.
Furthermore, it simplifies confidential data handling as the data does not have to leave the TEE.
The former requires developers to provide data confidentiality before network packets leave the TEE, for instance by relying on encryption.
\textsc{iperfTZ}\xspace leverages \texttt{libutee}\footnote{\url{https://optee.readthedocs.io/architecture/libraries.html\#libutee}, accessed on 30.07.2019} and its socket API, supporting streams or datagrams.
The socket interface exposes common functions: \texttt{open}, \texttt{send}, \texttt{recv}, \texttt{close}, \texttt{ioctl} and \texttt{error}.
The \textsc{GlobalPlatform}\xspace specification allows TEE implementations to extend protocol-specific functionalities via command codes and \texttt{ioctl} functions.
For example, it is possible to adjust the receiving and sending buffer sizes of TCP sockets or to change the address and port of UDP sockets.
The \texttt{libutee} library manages the lifecycle of sockets via a TA session to the socket's PTA.
The socket PTA handles the RPC to the \texttt{tee-supplicant}, in particular allocating the RPC parameters and assigning their values.
Afterwards, an SMC instruction is executed to switch back to the normal world.
The \texttt{tee-supplicant} constantly checks for new service requests from the TEE.
Once a new request arrives, its arguments are read by the \texttt{tee-supplicant} and the specified command is executed.
Finally, when the data is received by the \texttt{tee-supplicant}, it is relayed over \textsc{Posix}\xspace sockets to the rich OS.
In essence, when data is sent or received over a socket, it traverses
all exception levels, both secure (from EL0 up to EL3) and non-secure (from EL2 to EL0 and back up).
\autoref{fig:optee} summarizes the previous paragraphs and shows the interaction between the secure and normal worlds in \textsc{Op-Tee}\xspace.
The secure world hosts the TA, which interacts directly with \texttt{libutee} (\autoref{fig:optee}-\ding{202}).
When using \textsc{GlobalPlatform}\xspace's Socket API, \texttt{libutee} does a system call (\autoref{fig:optee}-\ding{203}) to \textsc{Op-Tee}\xspace.
\textsc{Op-Tee}\xspace then delegates the request to the socket PTA (\autoref{fig:optee}-\ding{204}).
The secure monitor is invoked through an SMC (\autoref{fig:optee}-\ding{205}), which maps the data from the TEE to the REE's address space.
From there execution switches into the normal world and the \textsc{Op-Tee}\xspace driver (\autoref{fig:optee}-\ding{206}) resumes operation.
Requests are then handled by the \texttt{tee-supplicant} (\autoref{fig:optee}-\ding{207}) over \texttt{ioctl} system calls.
The agent executes system calls using \texttt{libc} (\autoref{fig:optee}-\ding{208}) to directly reach the underlying network driver (\autoref{fig:optee}-\ding{209}) over the \textsc{Posix}\xspace interface.
Once data reaches the network driver, it can be sent over the wire (\autoref{fig:optee}-\ding{210}).
\begin{figure*}[t]
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\linewidth]{optee.pdf}
\caption{Execution flow inside \textsc{Op-Tee}\xspace.}%
\label{fig:optee}
\end{minipage}%
\quad%
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=0.68\linewidth]{interaction}
\caption{Interaction of \textsc{iperfTZ}\xspace's components in the client-server model.}%
\label{fig:sys}
\end{minipage}
\end{figure*}
\subsection{Threat Model}
\label{subsec:threatmodel}
For our threat model we consider a malicious user who has physical access to the devices used to deploy \textsc{iperfTZ}\xspace, as depicted in~\autoref{fig:sys}, or who is capable of obtaining remote access to them.
By gaining access to the devices or the network the devices are connected to, the malicious user has either the intention to compromise these devices or to exploit \textsc{iperfTZ}\xspace for denial-of-service attacks.
We assume that the REE, which includes the rich OS and the user space, cannot be trusted. However, we assume that the devices and the TEE, which includes bootloader, \textsc{Op-Tee}\xspace, and secure monitor, can be trusted.
As also stated in~\cite{arm:tz}, side-channel attacks are out of scope of our threat model.
Assuming our \textsc{TrustZone}\xspace-enabled device is equipped with an \emph{embedded MultiMediaCard} (eMMC), TAs can be securely stored on the eMMC and the malicious user cannot tamper with a TA's binary.
Otherwise, the malicious user who has gained control over the REE, has access to the TAs and can manipulate a TA's binary and compromise \textsc{iperfTZ}\xspace.
Manipulation of the CA's parameters by the malicious user to trigger a buffer overflow can be ruled out.
In \textsc{iperfTZ}\xspace, buffer allocation and initialization use the same variable as size indicator.
Hence, the TA will return an out-of-memory error code if it tries to allocate more memory than it is allowed to use.
During a network bandwidth measurement, the malicious user can run a (distributed) denial-of-service attack to reduce the network bandwidth, such that a lower network throughput is measured and reported by \textsc{iperfTZ}\xspace.
Although irrelevant to \textsc{iperfTZ}\xspace, the malicious user could run a man-in-the-middle attack, either directly within the REE or on the network, and intercept the traffic exchanged between the two devices.
At the time of writing, \textsc{Op-Tee}\xspace does not provide support for the TLS protocol, which renders secure connections unusable.
\section{Implementation}
\label{sec:implementation}
We describe the implementation challenges of the three components included in \textsc{iperfTZ}\xspace,\footnote{The source code of \textsc{iperfTZ}\xspace will be made available on GitHub.} namely \textit{(1)} a CA acting as proxy for \textsc{iperfTZ}\xspace's \textit{(2)} TA, and \textit{(3)} the server component which the TA is interfacing.
All components are implemented in the C language and consist of 927 lines of code: 243 for the client, 314 for \textsc{iperfTZ}\xspace's TA, and 430 for the server.\footnote{Numbers for individual components include local header lines of code.}
\subsection{\textsc{iperfTZ}\xspace: Client Application}
When the CA starts, the TEE context is initialized using the file descriptor fetched from the \textsc{Op-Tee}\xspace driver.
Two distinct dynamic shared-memory areas are allocated at this time, to \emph{(1)} exchange arguments passed over the \emph{command line interface} with the TA (see \autoref{subsec:ta}) and \emph{(2)} to retrieve metrics gathered by the TA during the network measurement. %
Several arguments (\eg, IP of the target server node, dummy data size, socket buffer size) are written in the shared memory area.
The dummy data size is used by the TA to read/write data to the interface socket.
Both shared memory areas get registered with the operation data structure before calling the \texttt{TEEC\_InvokeCommand} function.
The executing thread in the CA is blocked until the TA completes.
The execution inside the TEE is resumed at the TA's main entry point upon world switch.
Once the TA completes, an \texttt{SMC} instruction drives the CPU core to switch back into the normal world, where execution is resumed.
The metrics gathered from the TA are available to the user as persistent files.
\subsection{\textsc{iperfTZ}\xspace: Trusted Application}
\label{subsec:ta}
The \textsc{iperfTZ}\xspace TA is the primary executing unit.
It takes the role of the client in the client-server model. %
The TA allocates a buffer for the dummy data on the heap, filled with random data generated by \textsc{Op-Tee}\xspace's Cryptographic Operations API~\cite{gp:internal}.
With the information from the arguments, the TA finally sets up a TCP interface socket and opens a client connection before assigning the socket buffer sizes.
Our implementation relies on the Time API~\cite{gp:internal} to measure the elapsed time during the network throughput measurement inside the TEE.
\textsc{Op-Tee}\xspace computes the time value from the physical count register and the frequency register.
The count register is a single instance register shared between normal and secure world EL1.
The network throughput measurement is then started while either maintaining a constant bit rate, transmitting a specific number of bytes or running for $10$ seconds.
During the measurement, the TA gathers metrics on the number of transmit calls, \ie \texttt{recv} and \texttt{send}, bytes sent, time spent in the transmit calls and the total runtime.
Upon completion, results are written to the shared memory area and the execution switches back to the normal world.
\subsection{\textsc{iperfTZ}\xspace: Server}
\label{subsec:server}
The server component is deployed and executed inside the normal world.
It waits for incoming TCP connections (or inbound UDP datagrams) from \textsc{iperfTZ}\xspace's TA.
While executing, it gathers similar network metrics as the other components.
Additionally, this component collects TCP-specific metrics, such as the smoothed \emph{round trip time} or the \emph{maximum segment size}.
This TCP-specific data is not accessible to TAs and can only be retrieved on the server side using a \texttt{getsockopt} system call.
\section{Evaluation}
\label{sec:evaluation}
In this section we demonstrate how \textsc{iperfTZ}\xspace measures network throughput.
We further draw conclusions regarding hardware and software implementation designs.
We report that it is particularly challenging to assess network throughput, given the remarkable diversity one can find on embedded and mobile \textsc{Arm}\xspace systems.
\begin{table}[t]
\footnotesize
\centering
\caption{Comparison of evaluation platforms.}%
\label{tab:platform}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\rowcolors{1}{gray!10}{gray!0}
\begin{tabular}{>{\kern-\tabcolsep}lll<{\kern-\tabcolsep}}
\toprule\rowcolor{gray!25}
\multicolumn{1}{c}{\textbf{Device}} & \multicolumn{1}{c}{\textbf{QEMU}} & \multicolumn{1}{c}{\textbf{Raspberry}} \\
\midrule%
CPU Model & Intel Xeon E3-1270 v6 & Broadcom BCM2837 \\
\rowcolor{gray!10}
CPU Frequency & \SI{3.8}{\GHz} & \SI{1.2}{\GHz} \\
\rowcolor{gray!0}
Memory Size & \SI{63}{\gibi\byte} DDR4 &
\SI{944}{\mebi\byte} LPDDR2 \\
\rowcolor{gray!10}
Memory data rate & \SI{2400}{\mega T\per\second} & \SI{800}{\mega T\per\second} \\
\rowcolor{gray!0}
& Samsung & Transcend micro SDHC \\
\rowcolor{gray!0}
\multirow{-2}{*}{Disk Model} & MZ7KM480HMHQ0D3 & UHI-I Premium \\
\rowcolor{gray!10}
Disk Size & \SI{480}{\giga\byte} & \SI{16}{\giga\byte} \\
\rowcolor{gray!0}
Disk Read Speed & \SI[per-mode=symbol]{528.33}{\mega\byte\per\second} &
\SI[per-mode=symbol]{90}{\mega\byte\per\second} \\
Network Bandwidth & \SI[per-mode=symbol]{1}{\giga\bit\per\second} &
\SI[per-mode=symbol]{100}{\mega\bit\per\second} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Evaluation settings.}
We deploy \textsc{iperfTZ}\xspace on the Raspberry Pi platform.
Due to the limited network bandwidth of Raspberry Pi devices supported by \textsc{Op-Tee}\xspace, we also include results under emulation using QEMU.\footnote{\url{https://www.qemu.org}, accessed on 30.07.2019}
With QEMU we can run the same evaluation as on the Raspberry Pi while also benefiting from a higher network bandwidth.
\autoref{tab:platform} compares in detail the two setups.
For both setups we use the same machine as server, on which we collect power consumptions and run the \textsc{iperfTZ}\xspace server component.
\textbf{Server.} The server is connected to a Gigabit switched network, with access to power meter measurements.
The nodes being measured are at a single-hop from the server.
During the micro-benchmarks, server components are deployed on the server with fixed dummy buffer and socket buffer sizes of \SI{128}{\kibi\byte}.
This allows creating an accurate time series of the recorded throughput, latency and power metrics by concentrating the data acquisition on a single node.
\textbf{QEMU.} We deploy \textsc{Op-Tee}\xspace with QEMU v3.1.0-rc3 running on a Dell PowerEdge R330 server.
The \textsc{Op-Tee}\xspace project has built-in support for QEMU and uses it in system emulation mode.
In system emulation mode QEMU emulates an entire machine, dynamically translating different hardware instruction sets when running a virtual machine with a different architecture.
In order to provide full network capability, we replace the default SLiRP network\footnote{\url{https://wiki.qemu.org/Documentation/Networking\#User_Networking_.28SLIRP.29}, accessed on 30.07.2019} deployed with \textsc{Op-Tee}\xspace by a bridged network with a tap device.
\textbf{Raspberry Pi.} %
\textsc{Op-Tee}\xspace only supports the Raspberry Pi 3B.
We deploy \textsc{Op-Tee}\xspace on a Raspberry Pi 3B v1.2 equipped with a Broadcom BCM2837 SoC.
The SoC implements an ARM Cortex-A53 with ARMv8-A architecture. %
The BCM2837 chip lacks support for cryptographic acceleration instructions and %
is not equipped with \emph{\textsc{TrustZone}\xspace Protection Controller} (TZPC), \emph{\textsc{TrustZone}\xspace Address Space Controller} (TZASC), \emph{Generic Interrupt Controller} (GIC) or any other proprietary security control interfaces on the bus~\cite{sequitur}.
The Raspberry Pi 3B lacks an on-chip memory or eMMC to provide a securable memory.
We take these limitations into account in our evaluation, and leave further considerations for when more mature support for the Raspberry Pi platform is released.
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[trim={5px 30px 35px 205px},clip,width=\linewidth]{kv_s}%
\captionsetup{skip=0pt}
\caption{Partially shared memory}%
\label{fig:psm}
\end{subfigure}%
\quad%
\begin{subfigure}[t]{0.46\textwidth}
\includegraphics[trim={100px 30px 0 205px},clip,width=\linewidth]{kv_t}%
\captionsetup{skip=0pt}
\caption{Temporarily shared memory}%
\label{fig:tsm}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[trim={5px 0 35px 175px},clip,width=\linewidth]{kv_w}%
\caption{Whole shared memory}%
\label{fig:wsh}
\end{subfigure}%
\quad%
\begin{subfigure}[t]{0.46\textwidth}
\includegraphics[trim={100px 0 0 175px},clip,width=\linewidth]{kv-ree}%
\caption{CA in the REE}%
\label{fig:ca}
\end{subfigure}
\caption{Throughput-latency plots for different kinds of shared memory.}%
\label{fig:evalmemory}
\end{figure*}
\textbf{Power measurement.} To measure the power consumption of the two platforms, we connect the Dell PowerEdge server to a LINDY iPower Control 2x6M \emph{power distribution unit} (PDU)~\cite{lindy:pdu} and the Raspberry Pi 3B to an Alciom PowerSpy2~\cite{alciom:powerspy}.
The LINDY PDU provides an HTTP interface that can be queried up to every second, with a resolution of \SI{1}{\watt} and a precision of \SI{1.5}{\percent}.
Alciom PowerSpy2 devices rely on Bluetooth channels to transfer the collected metrics. %
Both measuring devices collect voltage, current and power consumption in real time.
\textbf{Memory Bandwidth.}
We use an existing key-value store TA~\cite{dais19} to evaluate the overhead of the different types of shared memory.
The hash-table at the core of the key-value store uses separate chaining for collision resolution and implements modular hashing.
The \textsc{GlobalPlatform}\xspace specification defines three different types of shared memory: \emph{whole} (an entire memory area), \emph{partial} (a subset of an entire memory area with a specified offset), and \emph{temporarily} (a memory area within the REE with an optional offset).
The temporarily shared memory area is only shared with the TA for the duration of the TEE method invocation; the two others get registered and unregistered with the TEE session.
The key-value store supports common operations such as \texttt{DEL}, \texttt{GET} and \texttt{PUT} on key-value pairs.
We benchmark each operation in isolation as well as combining \texttt{GET} and \texttt{PUT} operations (\texttt{MIX}ed benchmark).
The benchmarks operate as follows: for whole and partially shared memory, the CA will request a shared memory region of \SI{512}{\kibi\byte} from the TEE and fills it with random data from \texttt{/dev/urandom}.
With temporarily shared memory, the CA will allocate a \SI{512}{\kibi\byte} buffer and initialize it similarly with random data.
Before invoking a key-value operation, a chunk of \SI{1}{\kibi\byte} is selected as data object at a random offset in the shared memory area or buffer, respectively.
The random offset is then used as key
and every operation is timed using \texttt{CLOCK\_MONOTONIC}.\footnote{Manual page: \texttt{man time.h}}
During the benchmark 256 operations are issued at a fixed rate between $1$ and $32768$ operations per second. %
\autoref{fig:evalmemory} shows the throughput-latency plots for each type of shared memory as well as for running the key-value store as a CA in the REE.
Compared to the Raspberry Pi, the results on QEMU are largely superimposed and only achieve about half the throughput.
We believe this is due to an I/O bound from the \textsc{Arm}\xspace instruction and \textsc{TrustZone}\xspace emulation using QEMU.
We further observe with QEMU that the \texttt{DEL} benchmark for temporarily shared memory (\autoref{fig:tsm}) and as CA (\autoref{fig:ca}) is clearly distinguishable from the other benchmarks.
On the Raspberry Pi platform the graphs are well separated and ranked according to our expectations (lowest to highest throughput): \texttt{PUT}, \texttt{MIX50}, \texttt{MIX20}, \texttt{GET}, and \texttt{DEL}.
The \texttt{PUT} operation has the lowest throughput because of memory allocation, memory copy and object insertion in the TA.
The \texttt{GET} operation looks up the data object and copies it to the shared memory resulting in a higher throughput than the \texttt{PUT} operation.
The mixed benchmarks show a similar behavior: the higher the \texttt{PUT} ratio, the lower the throughput.
Hence, the \texttt{MIX50} ($50\%$ \texttt{PUT} operations) has a lower average throughput than \texttt{MIX20}.
The \texttt{DEL} operation avoids any time intensive memory operation and only has to free a data object after looking it up in the store.
An interesting observation is made when comparing the memory throughput of the benchmarks executed in the REE against the benchmarks executed in the TEE.
Key-value store operations executed inside TAs experience a 12$\times$-14$\times$ overhead with QEMU and a 12$\times$-17$\times$ overhead on the Raspberry Pi.
This overhead is due to the world and context switches associated to TA method invocations.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[trim={0px 0px 0px 205px},clip,width=\linewidth]{rpi_perf}%
\caption{Raspberry Pi}
\end{subfigure}%
\quad%
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[trim={0px 0 0 205px},clip,width=\linewidth]{qemu_perf}%
\caption{QEMU}
\end{subfigure}
\caption{TCP network throughput measurements for \SI{128}{\kibi\byte} buffer sizes.}%
\label{fig:evalthroughput}
\end{figure*}
\textbf{Network Bandwidth.}
This micro-benchmark compares the network throughput measured with \textsc{iperfTZ}\xspace in \textsc{Op-Tee}\xspace to the network throughput measured with \texttt{iperf3} in Linux.
We deploy both programs with the same set of parameters, \ie \SI{128}{\kibi\byte} socket and dummy buffer sizes.
Upon each iteration the bit rate is doubled, ranging from \SIrange[per-mode=symbol]{1}{512}{\mega\bit\per\second}. %
Note that TAs are by default limited to \SI{1}{\mebi\byte} of memory at runtime.
For this reason we do not allocate more than \SI{512}{\kibi\byte} for the dummy data on the TA's heap.
Linux has two kernel parameters which limit the maximum size of read and write socket buffers: \texttt{/proc/sys/net/core/rmem\_max} and \texttt{/proc/sys/net/core/wmem\_max}.
These kernel parameters controlling the socket buffer size limit can be changed at runtime using \texttt{sysctl}, in order to allocate larger socket buffers.
On both setups, \textsc{iperfTZ}\xspace generally exceeds the network throughput of \texttt{iperf3}.
On the Raspberry Pi 3B we cannot observe any degradation of the network throughput due to an overhead from frequent world switches.
This result does not come as a surprise.
The memory bandwidth benchmark operates at a throughput of several hundred \si{\mega\byte\per\second}, while the network bandwidth benchmark operates at about \SI{10}{\mega\byte\per\second}.
There is a gap of one order of magnitude in throughput between the two benchmarks, which we assume to be sufficient for the overhead not to arise.
However, on QEMU we observe a serious degradation of the network throughput, when trying to achieve \si{\giga\bit\per\second} bit rate with \textsc{Op-Tee}\xspace.
Remarkably, high throughput rates are strongly affected by the world switching overhead, even dropping below unaffected throughput rates.
Our measurements indicate that network throughput beyond \SI{500}{\mega\bit\per\second} is affected by a \SI{1.8}{\times} world switching overhead, almost halving the network throughput.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[trim={65px 15px 35px 305px},clip,width=\linewidth]{rpi3b_energy}%
\caption{Raspberry Pi}%
\label{fig:rpie}
\end{subfigure}%
\quad%
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[trim={65px 15px 35px 305px},clip,width=\linewidth]{qemu_energy}%
\caption{QEMU}%
\label{fig:qemue}
\end{subfigure}
\caption{Energy consumption during TCP network throughput measurements. Bit rates on the x-axis are given in logarithm to base 2.}%
\label{fig:evalenergy}
\end{figure*}
\textbf{Energy.}
During the network bandwidth benchmark, we recorded the power consumed by both setups.
The LINDY iPower Control and the Alciom PowerSpy2 both record the timestamp as Unix time in seconds and the instantaneous power in watts.
We use those units to execute a numerical integration over time using the trapezoidal method to obtain the total energy consumed by both setups during a benchmark run.
\autoref{fig:evalenergy} shows these results.
The total energy on the y-axis (in joule) is consumed by the device while executing a benchmark run for a specific bit rate on the x-axis (as binary logarithmic scale in \si{\mega\bit\per\second}).
On the Raspberry Pi (\autoref{fig:rpie}) we observe that before reaching saturation, \textsc{iperfTZ}\xspace is consuming about \SI{2}{\joule} (\SI{11}{\percent}) more than \texttt{iperf3}.
In the highly saturated range, the energy doubles with the throughput.
However, with QEMU (\autoref{fig:qemue}), the energy difference between the execution in the REE and the TEE is significant.
Given that QEMU is running on an energy-demanding and powerful server, \textsc{iperfTZ}\xspace consumes about \SI{173}{\joule} (\SI{36}{\percent}) more than \texttt{iperf3} in the REE before the overhead arises.
We can clearly attribute this additional energy consumption observed on both setups to the execution of \textsc{iperfTZ}\xspace in the TEE.
Certainly, the world switching overhead also contributes to an increase of the energy consumption with QEMU.
By assuming a similar behavior for the energy consumption on QEMU as in the saturated range on the Raspberry Pi, we obtain a \SI{1.6}{\times} energy overhead due to world switching.
\section{Related Work}
\label{sec:rw}
There exists a plethora of network benchmarking and tuning tools.
We note that the implementation of \textsc{iperfTZ}\xspace is heavily inspired by the well-known \texttt{iperf} tool.
In this sense, \textsc{iperfTZ}\xspace supports a subset of its command-line parameters, for instance to facilitate the execution of existing benchmarking suites.\footnote{Full compatibility with \texttt{iperf} would require substantial engineering efforts that we leave out of the scope of this work.}
The \texttt{ttcp} (Test TCP) tool was one of the first programs implemented to measure the network performance over the TCP and UDP protocols.
It has since been superseded by \texttt{nuttcp}.\footnote{See~\autoref{fn:nuttcp}}
A tool with similar features is \texttt{netperf}.\footnote{See~\autoref{fn:netperf}}
Unlike the aforementioned tools,
\texttt{tcpdump}\footnote{\url{https://www.tcpdump.org}, accessed on 30.07.2019} is a packet analyzer that captures TCP packets being sent or received over a network.
\textsc{iperfTZ}\xspace does not provide packet analysis tools.
Instead, it offers client- and server-side measurements for both TCP and UDP data flows.
More recently, \texttt{iperf} integrated most of the functionalities of \texttt{ttcp}, extending it with multi-threading capabilities (since \texttt{iperf} v2.0) and allowing bandwidth measurements of parallel streams.
While it would be possible to provide similar support in \textsc{iperfTZ}\xspace, the execution of code inside the TAs is currently single-threaded, hence limiting the achievable outbound throughput.
The most recent version of \texttt{iperf} (v3.0) ships a simplified (yet single-threaded) implementation specifically targeting non-parallel streams.
Flowgrind\footnote{\url{www.flowgrind.net}, accessed on 30.07.2019} is a distributed TCP traffic generator.
In contrast, \textsc{iperfTZ}\xspace follows a client-server model, with traffic generated between a server and a TA.
StreamBox-TZ~\cite{park2019streambox} is a stream analytics engine, which processes large IoT streams on the edge of the cloud.
The engine is shielded from untrusted software using \textsc{TrustZone}\xspace.
Similar to \textsc{iperfTZ}\xspace, StreamBox-TZ runs on top of \textsc{Op-Tee}\xspace in a TA.
Yet, \textsc{iperfTZ}\xspace does not process data streams but can generate and measure network performance of those streams.
To summarize and to the best of our knowledge, \textsc{iperfTZ}\xspace is the first tool specifically designed to run as a TA for \textsc{TrustZone}\xspace that can measure the achievable network throughput for such applications.
\section{Conclusion and Future Work}
\label{sec:conclusion}
The deployment of TAs is becoming increasingly pervasive for the management and processing of data over the network.
However, due to constraints imposed by the underlying hardware and runtime system, network performance of TAs can be affected negatively.
\textsc{iperfTZ}\xspace is a tool to measure and evaluate network performance of TAs for \textsc{Arm}\xspace \textsc{TrustZone}\xspace, a widely available TEE on embedded, IoT and mobile platforms.
We implemented the \textsc{iperfTZ}\xspace prototype on top of \textsc{Op-Tee}\xspace and we evaluated it on the Raspberry Pi platform.
Our experimental results highlight performance and energy trade-offs deployers and programmers are confronted with both on hardware and emulated environments.
We believe the insights given by our work can be exploited to improve design and configuration of TEEs for edge devices handling real-world workloads for TAs.
We intend to extend our work to support different types of sockets (\eg, datagram sockets) and to leverage on-chip cryptographic accelerators.
This would allow us to provide TLS-like channels for TAs, a feature that has not yet been implemented in \textsc{Op-Tee}\xspace.
Finally, we aim for supporting various kinds of TEEs, especially in the context of embedded platforms and SoC, such as Keystone\footnote{\url{https://keystone-enclave.org}, accessed on 30.07.2019} for RISC-V processors.
\section*{Acknowledgments}
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under the LEGaTO Project (\href{https://legato-project.eu/}{legato-project.eu}), grant agreement No~780681.
{
\footnotesize
\bibliographystyle{splncs04}
\section{Introduction}
B-physics has been assigned an important role in the exciting programme to be
developed at the Large Hadron Collider (LHC) throughout
the 21st century. As an example, the ATLAS TDR \cite{tdr} collects a
large number of topics related to charm and beauty flavours, allowing precise
tests of the Standard Model that benefit from
the huge statistics foreseen even with the machine running at
$\lq\lq$low'' luminosity (${\simeq}\ 10^{33}$ cm$^{-2}\ s^{-1}$).
In a series of previous papers \cite{mas0,mas1,mas2,mas3}
we examined charmonium hadroproduction in a Monte Carlo framework, using
PYTHIA 5.7 \cite{pythia} event generator with the colour-octet model
(COM) \cite{braaten} implemented in it.
Basically, such a production mechanism is based on the formation of
an intermediate coloured state during the hard partonic interaction,
evolving non-perturbatively into
physical heavy resonances in the final state with certain
probabilities governed by NRQCD \cite{bodwin}. This mechanism
constitutes a (relativistic) generalization of the so-called colour-singlet
model (CSM) \cite{csm} which requires the formation of a colour-singlet state
in the hard interaction itself. Although the discrepancies
between the CSM and experimental cross sections on bottomonia hadroproduction
are smaller than those found for charmonia \cite{fermi}, still some
extra contribution should be invoked to account for the surplus observed
at the Fermilab Tevatron.
In this paper we extend our analysis on $J/\psi$
and ${\psi}'$ hadroproduction \cite{mas2} to the bottomonium family
lowest vector resonance, i.e. the $\Upsilon(1S)$ state.
Once again, those matrix elements (MEs) determined from Tevatron
data in other analysis \cite{cho} have to be lowered once
initial-state radiation of gluons is taken into account.
This is because of the rise of the ({\em effective}) intrinsic
momentum ($k_T$) of the interacting partons enhancing the high-$p_T$ tail
of the differential cross section for heavy quarkonia production
(for more details the reader is referred to Ref. \cite{mas2}).
The study of bottomonia production
at hadron colliders should permit a stringent test
of the colour-octet production mechanism, especially regarding
the expected (mainly transverse) polarization of the resonance created
through this process at high-$p_T$. Moreover, LHC experiments
will cover a wider range of transverse momentum than at the Tevatron.
Therefore, it is worthwhile to estimate, as a
first step, the foreseen
production rate of bottomonium resonances at the LHC and this
constitutes one of the goals of this work. Thereby any experimental
strategy to be conducted in the future (for example a specific
high-level trigger within the dedicated B-physics data-taking) can be
foreseen in advance.
We have based our study on recent results from Run 1b
at the Tevatron \cite{fermi1}. This means significantly
more statistics than the data sample from Run 1a, employed
in a former analysis \cite{cho}. However, the different sources
of prompt $\Upsilon(1S)$ production were not yet
separated along the full accessible
$p_T$-range, in contrast to charmonium production.
Hence we give in this work numerical values for some relevant
combinations of long-distance MEs (including {\em direct} and {\em indirect}
$\Upsilon(1S)$ production \footnote{Prompt resonance production includes
both direct and indirect channels, the latter exclusively referred to
feeddown from $\Upsilon(nS)$ and $\chi_{bJ}(nP)$ states - i.e. excluding
long-lived particle decays})
extracted from the fit to the CDF experimental points. Nevertheless, we
still are able to estimate some colour-octet MEs for {\em direct} production
from the measurements on different production sources at $p_T>8$ GeV
\cite{fermi2}.
\section{Implementation of the COM in PYTHIA}
Originally the event generator PYTHIA 5.7 produces direct $J/\psi$ and
higher ${\chi}_{cJ}$ resonances via the CSM only \cite{pythia}. It is
not difficult to extend this generation to the bottomonium family
by redefining the resonance mass and wave function parameter accordingly.
In our analysis we have, in addition, implemented code
in the event generator to account for
the colour-octet production mechanism via the following
${\alpha}_s^3$ partonic processes \footnote{We find from our simulation
that gluon-gluon scattering actually stands
for the dominant process as expected, gluon-quark scattering
contributes appreciably however (${\simeq}\ 28\%$ of the colour-octet
production cross section), whereas
quark-antiquark scattering represents ${\simeq}\ 4\%$. These fractions
are slightly larger
than those found in charmonium production as could be expected
from a heavier quark mass} for heavy quarkonium production:
\begin{equation}
g\ +\ g\ {\rightarrow}\ (Q\overline{Q})[^{2S+1}X_J]\ +\ g
\end{equation}
\begin{equation}
g\ +\ q\ {\rightarrow}\ (Q\overline{Q})[^{2S+1}X_J]\ +\ q
\end{equation}
\begin{equation}
q\ +\ \overline{q}\ {\rightarrow}\ (Q\overline{Q})[^{2S+1}X_J]\ +\ g
\end{equation}
where $(Q\overline{Q})[^{2S+1}X_J]$ stands for a certain heavy
quarkonium state denoted by its spectroscopic notation. In particular
we have considered the
$^3S_1^{(8)}$, $^1S_0^{(8)}$ and $^3P_J^{(8)}$ contributions
as leading-order intermediate coloured states. In addition we generated
$\Upsilon(1S)$ and $\chi_{bJ}(nP)$ ($n=1,2$) resonances
decaying into $\Upsilon(1S)$ according to the CSM as mentioned above.
A lower $p_T$ cut-off was set equal to 1 GeV (by default in PYTHIA)
throughout the generation
since some of the contributing channels are singular at vanishing
transverse momentum \cite{montp}.
\subsection{Set of $\lq\lq$fixed'' and free parameters used in the generation}
Below we list the main parameters, including
masses and branching fractions, used in our generation with PYTHIA 5.7.
We employed the CTEQ2L parton distribution function (PDF) in all our
\vspace{0.2in} analysis.
\newline
{\em Masses and branching fractions:}
\begin{itemize}
\item $m_b=4.88$ GeV
\item $m_{resonance}=2m_b$
\item $BR[\Upsilon(1S){\rightarrow}\mu^+\mu^-]=2.48\ \%$ (\cite{pdg})
\end{itemize}
\vskip 0.5 cm
{\em Colour-singlet parameters} (from \cite{schuler}):
\begin{itemize}
\item $<O_1^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}=11.1$ GeV$^3$, defined as
\[ <O_1^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}\ =\ \sum_{n=1}^3
<O_1^{\Upsilon(nS)}(^3S_1)>Br[\Upsilon(nS){\rightarrow}\Upsilon(1S)X]
\]
\item $<O_1^{\chi_{b1(1P)}}(^3P_1)>=6.09$ GeV$^5$
\item $<O_1^{\chi_{b1(2P)}}(^3P_1)>=7.10$ GeV$^5$
\end{itemize}
\vskip 0.5cm
The radial wave functions at the origin (and their derivatives)
used in the generation can
be related to the above matrix elements as
\begin{equation}
<O_1^{\Upsilon(1S)}(^3S_1)>\ =\ \frac{9}{2\pi}{\mid}R(0){\mid}^2
\end{equation}
\begin{equation}
<O_1^{\chi_{bJ(nP)}}(^3P_J)>\ =\ \frac{9}{2\pi}(2J+1) {\mid}R'(0){\mid}^2
\end{equation}
whose numerical values were obtained from
a Buchm\"{u}ller-Tye potential model tabulated in Ref.
\vspace{0.2in} \cite{eichten}. \newline
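Equation (4) can be inverted to recover the radial wave function at the origin from a quoted matrix element. The following Python check is ours, not part of the paper; it uses the direct-production value $<O_1^{\Upsilon(1S)}(^3S_1)>=9.28$ GeV$^3$ quoted in Section 4:

```python
import math

def wavefunction_sq_from_me(o1):
    """Invert Eq. (4): |R(0)|^2 = (2*pi/9) * <O_1(3S1)>, in GeV^3."""
    return 2.0 * math.pi / 9.0 * o1

# Direct-production matrix element for Upsilon(1S), in GeV^3.
print(round(wavefunction_sq_from_me(9.28), 2))  # 6.48
```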
{\em Colour-octet long-distance parameters to be extracted from the fit:}
\newline
\begin{itemize}
\item $<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}$, defined as
\begin{eqnarray}
<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot} & & =\ \sum_{n=1}^3\biggl\{
<O_8^{\Upsilon(nS)}(^3S_1)>Br[\Upsilon(nS){\rightarrow}\Upsilon(1S)X]
\nonumber \\
& & +\ \sum_{J=0}^2
<O_8^{\chi_{bJ}(nP)}(^3S_1)>Br[\chi_{bJ}(nP){\rightarrow}\Upsilon(1S)X]
\biggr\}
\end{eqnarray}
\item $<O_8^{\Upsilon(1S)}(^1S_0)>$
\item $<O_8^{\Upsilon(1S)}(^3P_0)>$
\end{itemize}
\vskip 0.5cm
\par
On the other hand, the differences in shape between the
$^1S_0^{(8)}$ and $^3P_J^{(8)}$ contributions were not sufficiently great
to justify independent generations for them. In fact,
temporarily setting $<O_8^{\Upsilon(1S)}(^3P_0)>
=m_b^2<O_8^{\Upsilon(1S)}(^1S_0)>$ and
defining the ratio
\begin{equation}
r(p_T)\ =\ \frac{{\sum}_{J=0}^{2}\frac{d{\sigma}}{dp_T}[^3P_J^{(8)}]}
{\frac{d{\sigma}}{dp_T}[^1S_0^{(8)}]}
\end{equation}
it is found $r\ {\simeq}\ 5$ as a mean value over the $[0,20]$ GeV $p_T$-range.
Actually the above ratio is not steady as a function of
the $\Upsilon(1S)$ transverse momentum. Therefore in the generation we
split the $p_T$ region into two domains:
for $p_T\ {\leq}\ 6$ GeV we set $r= 6$ whereas for $p_T>6$ GeV we set
$r=4$.
In summary, only the $^1S_0^{(8)}$ channel was generated but
rescaled by the factor $r$ to incorporate the $^3P_J^{(8)}$
contribution as we did in \cite{mas2} for charmonium hadroproduction.
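The piecewise rescaling just described amounts to a trivial helper (a sketch of ours, not code from the actual generator):

```python
def p_wave_rescaling(pt):
    """Factor r applied to the 1S0(8) channel to absorb the 3PJ(8)
    contribution, split at pT = 6 GeV as in the text."""
    return 6.0 if pt <= 6.0 else 4.0

print(p_wave_rescaling(3.0), p_wave_rescaling(12.0))  # 6.0 4.0
```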
Consequently, in analogy to \cite{cho}
we shall derive a numerical estimate for the
combination of the colour-octet matrix elements:
\[
\frac{<O_8^{\Upsilon(1S)}(^1S_0)>}{5}+
\frac{<O_8^{\Upsilon(1S)}(^3P_0)>}{m_b^2}
\]
\subsection{Altarelli-Parisi evolution}
According to the colour-octet model, gluon fragmentation becomes the
dominant source of heavy quarkonium direct production at high
transverse momentum. On the other hand,
Altarelli-Parisi (AP) evolution of the splitting gluon
into ($Q\overline{Q}$)
produces a depletion of its momentum and has to be properly taken
into account. If not so, the resulting long-distance parameter
for the $^3S_1^{(8)}$ channel would be underestimated from the fit
\cite{montp}.
The key idea is that AP evolution of the
fragmenting gluon is performed from the evolution of
the {\em gluonic partner} of quarkonium in the final state
of the production channel
\begin{equation}
g\ +\ g\ {\rightarrow}\ g^{\ast}({\rightarrow}
(Q\overline{Q})[^3S_1^{(8)}])\ +\ g
\end{equation}
\vskip 0.2cm
Let us remark that, in fact, $g^{\ast}$ is not
generated in our code \cite{mas2}. Final hadronization into a
($Q\overline{Q}$) bound state is taken into
account by means of the colour-octet matrix
elements multiplying the respective
short-distance cross sections \cite{cho,mas2}.
Nevertheless, it is reasonable to assume that, on the average,
the virtual $g^{\ast}$ should evolve at high $p_T$
similarly to the other final-state gluon - which actually is
evolved by the PYTHIA machinery.
We used this fact to simulate the (expected) evolution
of the (ungenerated) $g^{\ast}$ whose momentum was assumed
to coincide with that of the resonance (neglecting the effect
of emission/absorption of soft gluons by the intermediate coloured
state bleeding off colour \cite{mas3}).
\par
Therefore, event by event we get a correcting factor
to be applied to the transverse mass of the
$(Q\overline{Q})$ state (for the $^3S_1^{(8)}$ channel only):
\begin{equation}
x_p\ =\ \frac{\sqrt{P_T^{{\ast}2}+m_{(Q\overline{Q})}^2}}
{\sqrt{P_T^{2}+m_{(Q\overline{Q})}^2}}
\end{equation}
where $P_T$ ($P_T^{\ast}$) denotes the transverse momentum of
the final-state gluon without (with) AP evolution and
$m_{(Q\overline{Q})}$ denotes the mass of the
resonance. At high $p_T$,
\begin{equation}
p_T^{AP}\ =\ x_p\ {\times}\ p_T
\end{equation}
where $p_T$ is the transverse momentum of the resonance
as generated by PYTHIA (i.e. without AP evolution), whereas
for $p_T\ {\leq}\ m_{(Q\overline{Q})}$ the effect becomes
much less significant as it should be. Thus the interpolation between
low and high $p_T$ is smooth with the right asymptotic
limits at both regimes.\par
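Equations (9) and (10) translate into a one-line correction per event; the following sketch (ours, not the generator code) makes the two limits explicit:

```python
import math

def ap_corrected_pt(pt_res, m_res, pt_gluon, pt_gluon_ap):
    """Eqs. (9)-(10): rescale the resonance pT by the ratio of transverse
    masses of the final-state gluon with and without AP evolution.
    All momenta and masses in GeV."""
    x_p = math.hypot(pt_gluon_ap, m_res) / math.hypot(pt_gluon, m_res)
    return x_p * pt_res

# No evolution (PT* == PT) leaves pT untouched; a softened gluon depletes it.
print(ap_corrected_pt(15.0, 9.76, 15.0, 15.0))  # 15.0
```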
The above way to implement AP evolution may appear
somewhat simple but it remains in the spirit of our whole
analysis, i.e. using PYTHIA machinery whenever possible. In fact,
it provides an energy depletion of the fragmenting gluon
in accordance with Cho and Leibovich's work for
charmonium hadroproduction \cite{cho,montp}.
It is worth noting, moreover, that the effect of the AP evolution
on the generation over the [0,20] GeV $p_T$-range, though sizeable,
is considerably less pronounced for bottomonium than
for charmonium because of the larger mass of the former.\par
Notice finally that, although we can switch on/off AP evolution and
initial-state radiation {\em at will} in the generation, both next-to-leading
order effects have to be incorporated for a realistic
description of the hadronic dynamics of the process.
\vskip 1. cm
\begin{figure}[htb]
\centerline{\hbox{
\psfig{figure=upsi18_merge.eps,height=6.5cm,width=8.cm}
\psfig{figure=upsi18_merge_ap.eps,height=6.5cm,width=8.cm}
}}
\caption{Theoretical curves obtained from a fit using PYTHIA including
the colour-octet mechanism for prompt $\Upsilon(1S)$ production against CDF
data at the Tevatron
{\it a)} without AP evolution of the fragmenting gluon, {\it b)}
with AP evolution of the fragmenting gluon. The CTEQ2L parton distribution
function and $m_b=4.88$ GeV were employed in the fits; dotted line: CSM,
dashed line: $^1S_0^{(8)}+^3P_J^{(8)}$ contribution,
dot-dashed line: $^3S_1^{(8)}$ contribution, solid line: all contributions.}
\end{figure}
\section{Fit to Tevatron data}
As already mentioned, the theoretical differential cross section
on inclusive production of prompt $\Upsilon(1S)$'s stands
above Tevatron experimental points for relatively high $p_T$
if the set of long-distance parameters from \cite{cho} are
blindly employed in the PYTHIA generation with initial-state
radiation on. Therefore we performed a new fit to recent
CDF data, incorporating both
direct and indirect production through the
CSM (as a $\lq\lq$fixed'' contribution) which, in fact, is dominant
at low and even moderate $p_T$.
\par
\subsection{Extraction of the colour-octet MEs}
In order to assess the effect of AP evolution on the fit parameters
we show in table 1 two sets of numerical values for the relevant
colour-octet MEs obtained from a best ${\chi}^2$
fit to Tevatron data \cite{fermi1} using the CTEQ2L PDF: (i) the first
row corresponds to a generation {\em without} AP evolution; (ii)
the second set does take it into account. Notice the
increase of $<O_8^{\Upsilon(1S)}(^3S_1)>$ in the latter case w.r.t.
AP off (but to a lesser extent than for charmonium \cite{montp})
whereas $M_5^{\Upsilon(1S)}$ decreases consequently to keep
the fit at low and moderate $p_T$ values.
Let us stress that our numerical ME estimates have to be viewed with
some caution because of the theoretical and $\lq\lq$technical''
(due to the Monte Carlo assumptions) uncertainties. For example our
algorithm for AP evolution should be
regarded as a way to reasonably steepening the high-$p_T$ tail of the
(leading-order) differential cross section which otherwise
would fall off too slowly as a function of $p_T$.
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.5pc}
\caption{Colour-octet matrix elements (in units of $10^{-3}$ GeV$^3$) from
the best fit to CDF data at the Tevatron on prompt $\Upsilon(1S)$ production.
The CTEQ2L PDF was used with initial-state radiation on, and AP
evolution off and on respectively. For
comparison we quote the values given in \cite{schuler,cho}:
$480$ and $40{\pm}60$
respectively.}
\label{FACTORES}
\begin{center}
\begin{tabular}{lcc} \hline
ME: & $<O_8^{\Upsilon(1S)}(^3S_1)>$ &
$M_5^{\Upsilon(1S)}=5{\times}\biggl(\frac{<O_8^{\Upsilon(1S)}(^3P_0)>}{m_b^2}+
\frac{<O_8^{\Upsilon(1S)}(^1S_0)>}{5}\biggr)$ \\
\hline
AP off & $93{\pm}18$ & $17{\pm}$20 \\
\hline
AP on & $139{\pm}31$ & $6{\pm}$18 \\
\hline
\end{tabular}
\end{center}
\end{table*}
In figure 1 we show the theoretical curves obtained from our fit to
CDF data (independently with AP evolution off and on in the generation)
for both colour-singlet and colour-octet contributions. Let
us remark that due to the $p_T$ cut-off parameter set in the generation,
only those experimental points with $p_T>1$ GeV were used in the fit.
Very good fits, with ${\chi}^2/N_{DF}$ values close to unity, were found.
\subsubsection{Separated production sources for $p_T>8$ GeV}
Current statistics does not permit
the subtraction of indirect production sources
to obtain the direct $\Upsilon(1S)$ production cross section
along the full accessible $p_T$-range. Nevertheless,
feeddown from higher states ($\Upsilon(nS)$, ${\chi}_{bJ}(nP)$)
was experimentally separated out for $p_T>8$ GeV \cite{fermi2}.
We use this information to check our analysis {\em a posteriori}
(rather than using it as a constraint in the generation)
and to draw some important conclusions. To this end
the relative fractions of the contributing channels
for $p_T>8$ GeV are reproduced in table 2 from Ref. \cite{fermi2}.
On the other hand we show in table 3
the fractions found in this work corresponding to the
different generated channels for $p_T>8$ GeV, following the notation
introduced in section 2.1.
\vskip 0.5cm
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.5pc}
\caption{Relative fractions (in $\%$) of the different contributions to
$\Upsilon(1S)$ production from
CDF data at $p_T>8$ GeV \cite{fermi2}. Statistical and
systematic errors have been summed quadratically.}
\label{FACTORES}
\begin{center}
\begin{tabular}{lcc} \hline
contribution & Tevatron results \\
\hline
direct $\Upsilon(1S)$ & $51.8{\pm}11.4$ \\
\hline
$\Upsilon(2S)$+$\Upsilon(3S)$ & $10.7{\pm}6.4$ \\
\hline
${\chi}_b(1P)$ & $26.7{\pm}8.1$ \\
\hline
${\chi}_b(2P)$ & $10.8{\pm}4.6$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.5pc}
\caption{Relative fractions (in $\%$) of the different contributions
to $\Upsilon(1S)$ production at the Tevatron for $p_T>8$ GeV from our
generation. Possible contributions from
$\chi_{bJ}(3P)$ states were not generated.}
\label{FACTORES}
\begin{center}
\begin{tabular}{lcc} \hline
contribution & our generation \\
\hline
$\Upsilon(1S){\mid}_{^3S_1^{(8)}}$ & $42.3$ \\
\hline
$\Upsilon(1S){\mid}_{^1S_0^{(8)}+^3P_J^{(8)}}$ & $3.7$ \\
\hline
$\Upsilon(1S){\mid}_{CSM}$ & $14.9$ \\
\hline
$\Upsilon(2S)$+$\Upsilon(3S){\mid}_{CSM}$ & $3.0$ \\
\hline
${\chi}_b(1P){\mid}_{CSM}$ & $21.4$ \\
\hline
${\chi}_b(2P){\mid}_{CSM}$ & $14.7$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
By comparison between tables 2 and 3 we can conclude that
the $\Upsilon(1S)$ indirect production from $\chi_{bJ}$'s decays
is almost completely accounted for by the CSM according to the
assumptions and values of the parameters presented in
Section 2. Indeed, experimentally $37.5{\pm}9.3\%$ of
$\Upsilon(1S)$ production is due to $\chi_{bJ}(1P)$ and
$\chi_{bJ}(2P)$ decays \cite{fermi2}
while from our generation we find a close value, namely $36.1\%$,
coming exclusively from colour-singlet production!
Moreover, assuming that a $7.7\%$ from the $42.3\%$ fraction
corresponding to the colour-octet $^3S_1^{(8)}$ contribution
(as expressed in Eq. (6)) can be attributed to the
$\Upsilon(2S)+\Upsilon(3S)$ channel in addition to the colour-singlet
contribution ($3\%)$, we obviously get the fraction $10.7\%$ for the latter,
bringing our theoretical result into agreement with the
experimental value. Furthermore, this single assignment also reproduces
very well the experimental fraction
(${\approx}\ 52\%$) of direct $\Upsilon(1S)$ production by
adding the remaining $^3S_1^{(8)}$ contribution to the
$\Upsilon(1S){\mid}_{^1S_0^{(8)}+^3P_J^{(8)}}$ and ${\Upsilon(1S)}_{CSM}$
channels (${\approx}\ 53\%$).
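The bookkeeping of the two preceding paragraphs reduces to simple arithmetic on the mean fractions of tables 2 and 3 (uncertainties ignored in this sketch):

```python
# Fractions (in %) from table 3 (our generation).
singlet = {"Y1S": 14.9, "Y2S+Y3S": 3.0, "chib1P": 21.4, "chib2P": 14.7}
octet_3s1 = 42.3        # colour-octet 3S1(8) channel
octet_1s0_3pj = 3.7     # 1S0(8) + 3PJ(8) channels
reassigned = 7.7        # share of 3S1(8) moved to Y(2S)+Y(3S) feeddown

# chi_b feeddown is saturated by colour-singlet production alone:
print(round(singlet["chib1P"] + singlet["chib2P"], 1))  # 36.1 vs 37.5 +- 9.3

# Reassignment reproduces the measured Y(2S)+Y(3S) fraction:
print(round(singlet["Y2S+Y3S"] + reassigned, 1))        # 10.7 vs 10.7 +- 6.4

# The remainder matches the measured direct fraction:
direct = (octet_3s1 - reassigned) + octet_1s0_3pj + singlet["Y1S"]
print(round(direct, 1))                                 # 53.2 vs 51.8 +- 11.4
```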
Of course all the above counting was based on mean values from
table 2 and is subject to uncertainties. Nevertheless, apart from
the consistency of our generation w.r.t. experimental results
under minimal assumptions, we can conclude, as an important
consequence, that
there is almost {\em no need for} $\Upsilon(1S)$ indirect production
from feeddown of $\chi_{bJ}$ states produced through
{\em the colour-octet mechanism}. In other words,
the relative contribution from $P$-wave states to
$<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}$ in Eq. (6) should be
quite smaller than
na\"{\i}vely expected from NRQCD scaling rules compared to the
charmonium sector, in agreement with
some remarks made in \cite{schuler}.
The underlying reason for this discrepancy w.r.t. other
analyses \cite{cho} can be traced back to
the dominant colour-singlet contribution to the cross section
at $p_T$ values as much large as $14$ GeV (see figure 1)
caused by the effective $k_T$ smearing - already applied to
charmonium hadroproduction by one of us \cite{mas2}.
On the other hand the corresponding velocity scaling rule
in the bottomonium sector is nicely verified. Indeed
from the value $<O_1^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}=11.1$ GeV$^3$ and
the result found for $<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}=0.139$ GeV$^3$
shown in table 1, the ratio
\begin{equation}
\frac{<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}}
{< O_1^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}}\ {\approx}\ 0.012
\end{equation}
is in accordance with the expected order of magnitude
${\approx}\ v^4$, where $v$ is the relative velocity
of the bottom quark inside bottomonium ($ v^2\ {\approx}\ 0.1$).
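The numerical check behind Eq. (11) is immediate:

```python
o8_tot = 0.139   # <O8(3S1)>_tot from table 1 (AP on), GeV^3
o1_tot = 11.1    # <O1(3S1)>_tot, GeV^3
v2 = 0.1         # squared relative velocity of the b quark

ratio = o8_tot / o1_tot
print(round(ratio, 4))  # 0.0125, same order as v^4 = v2**2 ~ 0.01
```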
\begin{figure}[htb]
\centerline{\hbox{
\psfig{figure=upsi14_merge_ap.eps,height=6.5cm,width=8.cm}
\psfig{figure=xsect_merge_upsi14.eps,height=6.5cm,width=8.cm}
}}
\caption{{\em left :} Predicted prompt
$\Upsilon(1S)$ differential cross section at the LHC
using the CTEQ2L PDF and AP evolution incorporated in the generation.
A rapidity cut ${\mid}y{\mid}<2.5$ was required for bottomonium;
dot-dashed line: $^3S_1^{(8)}$ contribution. Solid line: all
contributions. {\em right :} Integrated cross section.}
\end{figure}
\section{$\Upsilon(1S)$ Production at the LHC}
Bottomonium hadroproduction is especially
interesting to check the validity of the colour-octet model as often
emphasized in the literature \cite{beneke2,tkabladze}. This becomes
particularly clear at the LHC since experimental data will spread over
a wider $p_T$-range than at the Tevatron.\par
Keeping this interest in mind, we generated prompt
$\Upsilon(1S)$ resonances in proton-proton collisions at
a center-of-mass energy of 14 TeV by means of our code implemented
in PYTHIA employing the same colour-octet MEs
of table 1 with AP evolution on. We present in figure 2
our theoretical curves for the $\Upsilon(1S)$ differential and
integrated cross sections as a function of $p_T$, including both
direct production and feeddown from higher resonance states.
\begin{figure}[htb]
\centerline{\hbox{
\psfig{figure=upsi14_merge_ap_d.eps,height=6.5cm,width=8.cm}
\psfig{figure=xsect_merge_upsi14_d.eps,height=6.5cm,width=8.cm}
}}
\caption{The same as in figure 2 for {\em direct} $\Upsilon(1S)$
production at the LHC.}
\end{figure}
In figure 3 we show our prediction for {\em direct} $\Upsilon(1S)$
production. This
is especially interesting if LHC detectors will be able to
discriminate among such different sources of resonance
production.
To this end we generated $\Upsilon(1S)$ events
through both the CSM and COM making use of the following parameters
\begin{itemize}
\item $<O_1^{\Upsilon(1S)}(^3S_1)>{\mid}_{direct}=9.28$ GeV$^3$
(from \cite{schuler})
\item $<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{direct}=0.114$ GeV$^3$
\item $M_5^{\Upsilon(1S)}=6.0$ GeV$^3$
\end{itemize}
The first value corresponds to the CSM ME for direct production.
The $<O_8^{\Upsilon(1S)}(^3S_1)>$ ME was obtained after removing
the $\Upsilon(2S)+\Upsilon(3S)$ contribution according to
the discussion made in section 3.1.1, i.e. under the assumption that
a fraction $7.7\%$ from the $42.3\%$ in table 3 should be assigned to
indirect production. Finally the $M_5^{\Upsilon(1S)}$ value is based on
the assumption that this channel mainly contributes to direct production.
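The direct-production matrix element quoted above follows from stripping the reassigned indirect fraction off the fitted total; a quick cross-check (our arithmetic, using the numbers of tables 1 and 3):

```python
o8_tot = 0.139       # fitted total 3S1(8) ME, GeV^3 (table 1, AP on)
frac_tot = 42.3      # 3S1(8) share of the yield, % (table 3)
frac_indirect = 7.7  # part reassigned to Y(2S)+Y(3S) feeddown, %

o8_direct = o8_tot * (frac_tot - frac_indirect) / frac_tot
print(round(o8_direct, 3))  # 0.114
```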
\vskip 1.cm
\section{Summary}
In this paper we have analyzed CDF measurements on $\Upsilon(1S)$
production cross sections at the Tevatron in a Monte Carlo framework.
Higher-order QCD effects such as initial-state radiation of gluons
and AP evolution
of splitting gluons into ($b\overline{b}$) states were taken into
account. On the other hand, since different sources
of $\Upsilon(1S)$ production were not
experimentally separated along the full accessible $p_T$-range
we have included all of them in the generation and later fit. Only
for $p_T>8$ GeV feeddown from $\chi_{bJ}$ states was
experimentally separated out from
direct production. We used such results as a consistency check
of our analysis and to draw some conclusions summarized below.
The numerical value of the
$<O_8^{\Upsilon(1S)}(^3S_1)>{\mid}_{tot}$ matrix element
should be ascribed almost totally to ${\Upsilon(nS)}$ states.
This finding may be surprising when confronted with other
results obtained from previous analyses \cite{cho,schuler},
where the contribution to the ${\Upsilon(1S)}$ yield
through the colour-octet
$\chi_{bJ}$ channels was thought as dominant
\cite{schuler,beneke,tkabladze}.
On the contrary, we concluded from tables 2
and 3 that the {\em colour-singlet production} by itself
can account for the feeddown of $\Upsilon(1S)$ from ${\chi}_{bJ}$
states. (Notice however that experimental uncertainties still
leave some room for a possible COM contribution but to a much lesser
extent than previously foreseen \cite{cho,schuler}.)
On the other hand the different production channels
are consistent (or can be made consistent)
with the experimental relative fractions shown in table 2,
after some reasonable assumptions.
We have extended our study to LHC collider
experiments ($\sqrt{s}=14$ TeV center-of-mass energy).
In figure 2 we present our predictions for prompt
production rates (i.e. including direct and indirect
production) while in figure 3 we show our prediction
for direct production alone.
Lastly we conclude that the foreseen yield of $\Upsilon(1S)$'s
at LHC energy will be large enough even at
high-$p_T$ to perform a detailed analysis of the colour-octet
production mechanism and should be
included in the B-physics programme of the LHC experiments,
probably deserving (together with charmonia) a dedicated
data-taking \vspace{0.1in} trigger.\par
\subsection*{Acknowledgments}
We acknowledge the working subgroup on $b$-production of
the Workshop on the Standard Model (and more) at the LHC, especially
M. Kraemer and M. Mangano, for
comments and valuable discussions. We also thank R. Cropp and
G. Feild for their assistance
on some experimental issues concerning bottomonia production
at the \vspace{0.3in} Tevatron.
\begin{thebibliography}{99}
\bibitem{tdr} ATLAS detector and physics performance Technical
Design Report, CERN/LHCC/99-15.
\bibitem{mas0} M.A. Sanchis-Lozano and B. Cano, Nucl. Phys.
B (Proc. Suppl.) {\bf 55A} (1997) 277.
\bibitem{mas1} B. Cano-Coloma and M.A. Sanchis-Lozano, Phys. Lett.
{\bf B406} (1997) 232.
\bibitem{mas2} B. Cano-Coloma and M.A. Sanchis-Lozano, Nucl. Phys.
{\bf B508} (1997) 753.
\bibitem{mas3} M.A. Sanchis-Lozano, Nucl. Phys. B (Proc. Suppl.) {\bf 75B} (1999)
191.
\bibitem{pythia} T. Sj\"{o}strand, Comp. Phys. Comm. {\bf 82} (1994) 74.
\bibitem{braaten} E. Braaten and S. Fleming, Phys. Rev. Lett. {\bf 74}
(1995) 3327.
\bibitem{bodwin} G.T. Bodwin, E. Braaten and G.P. Lepage, Phys. Rev. {\bf D51}
(1995) 1125.
\bibitem{csm} G. Sch\"{u}ler, CERN-TH-7170-94, hep-ph/9403387.
\bibitem{fermi} CDF Collaboration, Phys. Rev. Lett. {\bf 69} (1992) 3704.
\bibitem{cho} P. Cho and A.K. Leibovich, Phys. Rev. {\bf D53} (1996) 6203.
\bibitem{fermi1} G. Feild {\em et al.}, CDF note 5027.
\bibitem{fermi2} CDF Collaboration, CDF note 4392.
\bibitem{montp} M.A. Sanchis-Lozano, Montpellier QCD Conference,
hep-ph/9907497.
\bibitem{pdg} C. Caso {\em et al.}, Particle Data Group, Eur. Phys. J. {\bf C3} (1998) 1.
\bibitem{schuler} G. Sch\"{u}ler, Int. J. Mod. Phys. {\bf A12} (1997) 3951.
\bibitem{eichten} E.J. Eichten and C. Quigg, Phys. Rev. {\bf D52} (1995) 1726.
\bibitem{beneke2} M. Beneke and M. Kr\"{a}mer, Phys. Rev. {\bf D55} (1997)
5269.
\bibitem{tkabladze} A. Tkabladze, DESY preprint 99-082, hep-ph/9907210.
\bibitem{beneke} M. Beneke, CERN-TH/97-55, hep-ph/9703429.
\end{thebibliography}
\end{document}
\section{Introduction}
\section{$\psi(4040)/\psi(4160) \to \eta J/\psi$ }
The experimentally well-established
$\psi(4040)$, $\psi(4160)$, and
$\psi(4415)$ resonances above
the $D\bar{D}$ production threshold are of great interest but remain poorly
understood, even decades after their first
observation.
BESIII accumulated a $478$~pb$^{-1}$ data sample at a center-of-mass (CMS)
energy of $\sqrt{s}=4.009$~GeV. Using this data sample, the cross sections
of the processes $e^+e^-\to\eta J/\psi$ and $e^+e^-\to\pi^0 J/\psi$ are measured~\cite{liu}.
In this analysis, the $J/\psi$ is reconstructed through its
decays into lepton pairs, while the $\eta/\pi^0$ is
reconstructed in the $\gamma\gamma$ final state. After imposing all selection criteria,
a clear $J/\psi$ signal is observed in the $\mu^+\mu^-$ mode while
indications of a peak
around 3.1~GeV/c$^2$ also exist in the $e^+e^-$ mode.
A significant $\eta$ signal is observed
in $M(\gamma\gamma)$ in both $J/\psi\to \mu^+\mu^-$ and
$J/\psi\to e^+e^-$, as shown in Fig.~\ref{fit-mgg}. No
significant $\pi^0$ signal is observed. The $M(\gamma\gamma)$ invariant mass distributions are fitted using an unbinned
maximum likelihood method. For the $\eta$
signal, the statistical significance is larger than $10\sigma$ while
that for the $\pi^0$ signal is only $1.1\sigma$.
The Born cross section for $e^+e^-\to \eta J/\psi$ is measured to be
$(32.1\pm 2.8 \pm 1.3)$~pb, while that for $e^+e^-\to \pi^0 J/\psi$
is found to be less than 1.6~pb at the 90\% confidence level (C.L.).
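Such Born cross sections follow from the fitted signal yield via the standard relation $\sigma = N_{\rm sig}/(\mathcal{L}\,\varepsilon\,\mathcal{B}\,(1+\delta))$, with integrated luminosity $\mathcal{L}$, efficiency $\varepsilon$, branching fractions $\mathcal{B}$, and radiative correction $(1+\delta)$. A minimal sketch with purely illustrative numbers (the yield, efficiency, branching fraction, and radiative factor below are assumptions, not the actual analysis inputs):

```python
def born_cross_section(n_sig, lumi_pb, eff, br, rad_corr):
    """sigma_Born in pb: N_sig / (L * efficiency * BR * (1 + delta))."""
    return n_sig / (lumi_pb * eff * br * rad_corr)

# Illustrative: 100 signal events in 478 pb^-1, 30% efficiency,
# combined branching fraction 0.047, radiative factor 0.95.
sigma = born_cross_section(100, 478.0, 0.30, 0.047, 0.95)
```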
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=2.95cm]{Fig3.eps}
\includegraphics[height=2.95cm]{Fig4.eps}
\includegraphics[height=2.95cm]{Fig5.eps}
\caption{ Distributions of $M(\gamma \gamma)$ between 0.2~GeV/c$^2$ and 0.9~GeV/c$^2$
for $J/\psi\to \mu^+\mu^-$ (left panel) and for
$J/\psi\to e^+e^-$ (middle panel) and distribution of $M(\gamma \gamma)$ below 0.3~GeV/c$^2$
for $J/\psi\to \mu^+\mu^-$ (right panel). Dots with error
bars are data in $J/\psi$ mass signal region, and the green shaded
histograms are from normalized $J/\psi$ mass sidebands. The curves
show the total fit and the background term.} \label{fit-mgg}
\end{center}
\end{figure*}
Belle used a 980~fb$^{-1}$ data sample to study the process
$e^+e^- \to \eta J/\psi$ via ISR~\cite{wangxl}. The $\eta$ is reconstructed in the $\gamma \gamma$
and $\pi^+ \pi^- \pi^0$ final states. Due to the high background level from Bhabha
scattering, the $J/\psi\to e^+e^-$ mode is not used in conjunction
with the decay mode $\eta\to \gamma \gamma$.
Clear $\eta$ and $J/\psi$ signals are observed.
A dilepton pair is considered as a $J/\psi$ candidate
if $M_{\ell^+\ell^-}$ is within $\pm 45~{\rm MeV}/c^2$ of the $J/\psi$ nominal mass.
The $\eta$ signal region is defined as $M_{\pi^+\pi^-\pi^0} \in [0.5343,
0.5613]~{\rm GeV}/c^2$ and $M_{\gamma\gamma}\in [0.5,0.6]~{\rm GeV}/c^2$.
$-1~({\rm GeV}/c^2)^2 < M_{\rm rec}^2 < 2.0~({\rm GeV}/c^2)^2$ is required to select ISR candidates, where $M_{\rm rec}^2$ is
the square of the mass recoiling against the $\eta J/\psi$ system.
After event selection, an unbinned maximum likelihood fit is performed
simultaneously to the mass spectra $M_{\eta J/\psi}\in [3.8,4.8]~{\rm GeV}/c^2$
from the signal candidate events and from the $\eta$ and $J/\psi$ sideband
events, as shown in Fig.~\ref{fit}. The fit to the signal
events includes two coherent $P$-wave Breit-Wigner functions,
$BW_1$ for the $\psi(4040)$ and $BW_2$ for the $\psi(4160)$, and an
incoherent second-order polynomial background.
The statistical significance is $6.5\sigma$ for the $\psi(4040)$ and $7.6\sigma$ for the
$\psi(4160)$. There are two solutions with equally good fit quality:
${\cal B}(\psi(4040)\to\eta J/\psi)\cdot\Gamma_{e^+e^-}^{\psi(4040)} =
(4.8\pm0.9\pm1.4)~\rm eV$ and
${\cal B}(\psi(4160)\to\eta J/\psi)\cdot\Gamma_{e^+e^-}^{\psi(4160)} =
(4.0\pm0.8\pm1.4)~\rm eV$ for one solution, and
${\cal B}(\psi(4040)\to\eta J/\psi)\cdot\Gamma_{e^+e^-}^{\psi(4040)} =
(11.2\pm1.3\pm1.9)~\rm eV$ and
${\cal B}(\psi(4160)\to\eta J/\psi)\cdot\Gamma_{e^+e^-}^{\psi(4160)} =
(13.8\pm1.3\pm2.0)~\rm eV$ for the other, where the first
errors are statistical and the second systematic. The partial widths to $\eta J/\psi$ are found to be
about $1~\rm MeV$.
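The two-solution ambiguity is generic when a rate is fitted as the modulus squared of a coherent sum of resonance amplitudes. Below is a minimal numerical sketch of such a coherent two-Breit-Wigner lineshape; the constant-width amplitude form, the coefficients, and the mass/width values are illustrative assumptions, not the actual Belle fit model:

```python
import numpy as np

def breit_wigner(m, m0, gamma):
    """Constant-width Breit-Wigner amplitude (illustrative form)."""
    return 1.0 / (m**2 - m0**2 + 1j * m0 * gamma)

def lineshape(m, c1, c2, phi, m1=4.039, g1=0.080, m2=4.153, g2=0.103):
    """|c1*BW_1 + c2*e^{i*phi}*BW_2|^2 for psi(4040) and psi(4160)
    (masses and widths in GeV; PDG-like but illustrative values)."""
    amp = (c1 * breit_wigner(m, m1, g1)
           + c2 * np.exp(1j * phi) * breit_wigner(m, m2, g2))
    return np.abs(amp) ** 2

m = np.linspace(3.8, 4.8, 500)
constructive = lineshape(m, 1.0, 1.0, 0.0)   # relative phase 0
destructive = lineshape(m, 1.0, 1.0, np.pi)  # relative phase pi
```

Varying the relative phase moves yield between the peaks through the interference term, which is why two distinct (magnitude, phase) pairs can describe the same spectrum equally well.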
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.0cm, angle=-90]{two-psi.epsi}
\caption{The $\eta J/\psi$ invariant mass distribution and the fit
results. The points with error bars show the data, while the shaded
histogram is the normalized $\eta$ and $J/\psi$ background from the
sidebands. The curves show the best simultaneous fit to the signal
candidate events and sideband events and the contribution from each
Breit-Wigner component. The dashed curves at each peak show the
two solutions.} \label{fit}
\end{center}
\end{figure}
\section{Some results on $\eta_c$ and $\eta_c(2S)$}
The $\eta_c$ mass and width still have large uncertainties.
The measurements of the $\eta_c$ mass and width from $J/\psi$ radiative transitions,
two-photon fusion, and $B$ decays are largely inconsistent.
The most recent study by the CLEO-c experiment, using both $\psi(2S) \to
\gamma\eta_c$ and $J/\psi\to \gamma\eta_c$, pointed out a
distortion of the $\eta_c$ line shape in $\psi(2S)$ decays.
With a $\psi(2S)$ data sample of $1.06\times 10^8$ events, BESIII
reported measurements of the $\eta_c$ mass and
width using the radiative transition $\psi(2S) \to \gamma \eta_c$~\cite{bes3-etac}.
Six modes are used to
reconstruct the $\eta_c$: $K_SK^+\pi^-$, $\kk\piz$, $\eta\pp$, $K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$,
$\kk\pp\piz$, and $3(\pp)$, where the $K_S^0$ is reconstructed in
$\pi^+\pi^-$, and the $\eta$ and $\pi^0$ in $\gamma\gamma$ decays.
Figure~\ref{fig:metac} shows the $\eta_c$ invariant mass
distributions for selected $\eta_c$ candidates, together with the
estimated backgrounds. A clear $\eta_c$
signal is evident in every decay mode.
Assuming fully coherent (100\%) interference between the
$\eta_c$ and the non-resonant amplitude, an unbinned
simultaneous maximum likelihood fit was performed.
In the fit, the $\eta_c$ mass, width, and relative phases
are free parameters, and the mass and width are constrained to be the
same for all decay modes. Two solutions for the relative phase are found for every decay mode,
one representing constructive interference and the other
destructive interference. The measured mass is $M = 2984.3\pm 0.6 (stat.)\pm 0.6(syst.)~\mathrm{MeV}/c^2$ and the width is
$\Gamma = 32.0 \pm 1.2 (stat.)\pm 1.0(syst.)~\rm MeV$. The interference is significant,
which indicates that previous measurements of the $\eta_c$ mass and width via radiative
transitions may need to be rechecked. The results are consistent with
those from photon-photon fusion and $B$ decays; this may partly clarify the discrepancy puzzle.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=4.1cm]{kskpi_fit_withbkg.eps}
\includegraphics[width=4.1cm]{kkpi0_fit_withbkg.eps}
\includegraphics[width=4.1cm]{pipieta_fit_withbkg.eps}
\includegraphics[width=4.1cm]{ksk3pi_fit_withbkg.eps} \hspace{0.07cm}
\includegraphics[width=4.1cm]{2k2pipi0_fit_withbkg.eps}
\includegraphics[width=4.1cm]{sixpi_fit_withbkg.eps}
\end{center}
\caption{The $M(X_i)$ invariant mass distributions for the decays
$K_SK^+\pi^-$, $\kk\piz$, $\eta\pp$, $K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$, $\kk\pp\piz$ and $3(\pp)$,
respectively, with the fit results (for the constructive solution)
superimposed. Points are data and the various curves are the total fit
results. Signals are shown as short-dashed lines; the non-resonant
components as long-dashed lines; and the interference between them
as dotted lines.
Shaded histograms are (in red/yellow/green) for (continuum/$\pi^0
X_i$/other $\psi(2S)$ decays) backgrounds.
The continuum backgrounds for $K_SK^+\pi^-$ and $\eta\pp$ decays are
negligible.}
\label{fig:metac}
\end{figure*}
Similarly, the properties of the $\eta_{c}(2S)$ are not well established either.
The $\eta_{c}(2S)$ was first observed by the Belle collaboration in the
process $B^\pm\to K^\pm \eta_{c}(2S)$, $\eta_{c}(2S)\to K_S^0K^\pm
\pi^\mp$. It was confirmed in the
two-photon production of $K_S^0K^\pm
\pi^\mp$, and in the double-charmonium production process
$e^+e^-\to J/\psi c\bar{c}$. Combining the world-average
values with the most recent results from Belle and
BaBar on two-photon fusion into hadronic final states other than
$K_S^0K^\pm
\pi^\mp$, one
obtains updated averages of the $\eta_{c}(2S)$ mass and width of
$3637.7\pm 1.3~{\rm MeV}/c^2$ and $10.4\pm 4.2~{\rm MeV}$,
respectively. The $\eta_{c}(2S)$ was also observed by the Belle
collaboration in six-prong final states in two-photon
processes, including $3(\pi^+\pi^-)$, $K^+K^-2(\pi^+\pi^-)$, $2(K^+K^-)\pi^+\pi^-$, and
$K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$.
The measured average mass and width of the $\eta_{c}(2S)$ are
$3636.9\pm1.1\pm2.5\pm5.0$ MeV/$c^2$ and $9.9\pm3.2\pm2.6\pm2.0$ MeV, respectively.
These results were reported at the ICHEP2010 meeting but remain
preliminary to date.
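World averages like those quoted above are, in essence, inverse-variance weighted means of the individual measurements. A minimal sketch of the standard combination formula, with made-up illustrative inputs rather than the actual measurements entering the averages:

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty,
    assuming uncorrelated Gaussian errors."""
    weights = [1.0 / e ** 2 for e in errors]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, 1.0 / math.sqrt(total)

# Hypothetical eta_c(2S) mass measurements in MeV/c^2:
avg, err = weighted_average([3637.6, 3638.5, 3636.1], [1.5, 2.4, 3.0])
# The combined uncertainty is smaller than any individual input error.
```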
Recently the BESIII collaboration searched for the M1 radiative transition
$\psi(2S) \rightarrow \gamma \eta_{c}(2S)$ by reconstructing the exclusive
decay $\eta_{c}(2S) \rightarrow K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$ using
1.06 $\times$ $10^{8}$ $\psi(2S)$ events~\cite{bes3-etac2s}.
The final mass spectrum of $K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$
and the fitting results are shown in Fig.~\ref{fig:fitting_total}.
The fitting function consists of the following components:
$\eta_{c}(2S)$, $\chi_{cJ}(J= 0, 1, {\rm and}~2)$ signals and
$\psi(2S) \rightarrow K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$,
$\psi(2S) \rightarrow \pi^{0} K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$, ISR,
and phase space backgrounds.
The fitted yield of $\eta_{c}(2S)$ events is $57\pm17$, with a significance of 4.2$\sigma$.
The measured mass of the $\eta_{c}(2S)$ is 3646.9 $\pm 1.6(stat.) \pm 3.6(syst.)$ $\mathrm{MeV}/c^2$, and the width
is 9.9 $\pm 4.8(stat.) \pm 2.9(syst.)$ $\mathrm{MeV}$. The product branching fraction is measured to be
${\cal B}(\psi(2S) \rightarrow \gamma \eta_{c}(2S)) \times {\cal B}(\eta_{c}(2S) \rightarrow K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-})$ =
(7.03 $\pm 2.10(stat.) \pm 0.70(syst.)$) $\times$ $10^{-6}$.
This measurement complements a previous BESIII measurement of $\psi(2S) \rightarrow \gamma\eta_{c}(2S)$ with $\eta_{c}(2S) \rightarrow K_SK^+\pi^-$ and $K\bar{K}\pi$.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=3.5in,height=2.5in]{bes-fig3.eps}
\end{center}
\caption{
The results of fitting the mass spectrum for $\chi_{cJ}$ and $\eta_{c}(2S)$. The black dots are the data,
the blue long-dashed line shows the $\chi_{cJ}$ and $\eta_{c}(2S)$ signal shapes, the cyan dotted line represents
the phase space contribution, the violet dash-dotted line shows the continuum data contribution, the green
dash-double-dotted line shows the contribution of $\psi(2S) \to K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$, and the red dashed line is the contribution
of $\psi(2S) \to \pi^{0} K^{0}_{S} K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}$.
}
\label{fig:fitting_total}
\end{figure*}
\section{Evidence of the $1^3D_2~c\bar{c}$ state (X(3823))}
During the last decade, a number of new charmonium
($c\bar{c}$)-like states were observed, many of which are candidates for
exotic states. The observation of a
$D$-wave $c\bar{c}$ meson and its decay modes would test phenomenological
models. The undiscovered $1^3D_2~c\bar{c}$ ($\psi_2$) and $1^3D_3~c\bar{c}$ ($\psi_3$) states are
expected to have significant branching fractions to $\chi_{c1}\gamma$ and
$\chi_{c2}\gamma$, respectively. Belle therefore used $772\times 10^{6}$ $B\overline{B}$ events
to search for possible structures in the $\chi_{c1} \gamma$ and $\chi_{c2} \gamma$ mass spectra
in the decays $B \to \chi_{c1} \gamma K$ and
$B \to \chi_{c2} \gamma K$, where the $\chi_{c1}$ and $\chi_{c2}$
decay to $J/\psi \gamma$~\cite{x3823}. The $J/\psi$ meson is reconstructed via its decays to $\ell^+\ell^-$
($\ell =$ $e$ or $\mu$).
The $M_{\chi_{c1}\gamma}$ distribution from $B^{\pm} \to (\chi_{c1} \gamma) K^{\pm}$
and $B^{0} \to (\chi_{c1} \gamma) K_S^{0}$ decays is shown in Fig.~\ref{fig:sim},
where there is a significant narrow peak at
3823 MeV/$c^2$, denoted hereinafter as $X(3823)$. No signal
of $X(3872) \to \chi_{c1}\gamma$ is seen.
To extract the mass of the $X(3823)$,
a simultaneous fit to $B^{\pm} \to (\chi_{c1}\gamma) K^{\pm}$
and $B^0 \to (\chi_{c1}\gamma) K_S^0$ is performed, assuming that
$\mathcal{B}(B^\pm \to X(3823) K^\pm)/\mathcal{B}(B^0 \to X(3823)K^0)$ =
$\mathcal{B}(B^\pm \to \psi' K^\pm)/\mathcal{B}(B^0 \to \psi' K^0)$.
The mass of the $X(3823)$ is measured to be
$3823.1\pm1.8({stat.})\pm 0.7{(syst.)}$ MeV$/c^2$, and the
signal significance is estimated to be 3.8$\sigma$ with systematic uncertainties included.
The measured branching fraction product
$\mathcal{B}(B^{\pm} \to X(3823) K^{\pm}) \mathcal{B}(X(3823) \to \chi_{c1}\gamma)$
is $(9.7 \pm 2.8 \pm 1.1)\times 10^{-6}$. No
evidence is found for $X(3823)\to \chi_{c2}\gamma$.
The properties of the $X(3823)$ are consistent with those
expected for the $\psi_2~(1 ^3D_2~ c\bar{c})$ state.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=55mm,width=80mm]{use_new_paper_sim.eps}
\caption{\label{fig:sim} Two-dimensional unbinned extended
maximum likelihood fit projection of $M_{\chi_{c1}\gamma}$
distribution for the simultaneous fit of $B^{\pm} \to (\chi_{c1} \gamma) K^{\pm}$
and $B^{0} \to (\chi_{c1} \gamma) K_S^{0}$ decays
for $M_{\rm bc} > 5.27 $ GeV$/c^2$. }
\end{center}
\end{figure}
\section{Search for $X(1835)$}
In the radiative decay $J/\psi\to\gamma \pi^+\pi^-\eta'$, the BESII
Collaboration observed a resonance, the $X(1835)$,
with a statistical significance of 7.7$\sigma$.
Recently the structure has been confirmed by BESIII
in the same process with
$2.25\times 10^8$ $J/\psi$ events.
Many theoretical models have been proposed to interpret its underlying structure.
Some interpret the $X(1835)$ as a radial excitation of the $\eta^{'}$,
a $p\bar{p}$ bound state, a glueball
candidate, or an $\eta_{c}$-glueball mixture.
Belle first tried to search for the $X(1835)$
in the two-photon process $\gamma \gamma \to \eta^{\prime}\pi^+\pi^-$
using a 673 fb$^{-1}$ data sample with $\eta^{\prime}\to\eta\pi^+\pi^-$,
and $\eta\to \gamma\gamma$~\cite{zhangcc}.
Significant background reduction is achieved
by applying the requirement $|\sum{\vec{p}_{t}^{\,*}}| < 0.09~$GeV$/c$,
where $|\sum{\vec{p}_{t}^{\,*}}|$ is the absolute value of the vector sum of the
transverse momenta of the $\eta^{\prime}$ and the $\pi^+\pi^-$ tracks
in the $e^+e^-$ center-of-mass system.
The $|\sum{\vec{p}_{t}^{\,*}}|$ distribution for the signal peaks
at small values, while that for both backgrounds
decreases toward $|\sum{\vec{p}_{t}^{\,*}}| = 0$
due to vanishing phase space.
The resulting $\eta^{\prime}\pi^+\pi^-$ invariant mass distribution
is shown in Fig.~\ref{Fig fit result for x1835}. According to existing observations, two resonances,
$X(1835)$ and $\eta(1760)$, have been reported
in the lower mass region above the $\eta^\prime\pi^+\pi^-$ threshold.
A fit with the $X(1835)$ and $\eta(1760)$ signals plus their interference
is performed to the lower-mass events. Here, the $X(1835)$ mass and width are fixed at the BES values.
There are two solutions with equally good fit quality;
the results are shown in Fig.~\ref{Fig fit result for x1835}.
In either solution,
the statistical significance is
$2.9\sigma$ for the $X(1835)$ and $4.1\sigma$ for the $\eta(1760)$.
Upper limits on the product $\Gamma_{\gamma\gamma} {\cal B}(\eta^{\prime}\pi^+\pi^-)$ for the $X(1835)$
at the $90\%$ C.L. are determined to be
$35.6$ eV$/c^2$ and $83$ eV$/c^2$
for the constructive- and destructive-interference solutions,
respectively.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{x1835_s1s2int_3f_phi1.eps}
\includegraphics[width=6cm]{x1835_s1s2int_3f_phi2.eps}
\end{center}
\caption{Results of a combined fit for the $X(1835)$ and $\eta(1760)$
with interference between them.
The points with error bars are data.
The thick solid line is the fit; the thin solid line is the total
background.
The thick dashed (dot-dashed, dotted) line is the fitted signal for the
$\eta(1760)$ ($X(1835)$, the interference term between them).
The left (right) panel represents the solution with
constructive (destructive) interference.
}\label{Fig fit result for x1835}
\end{figure}
C-even glueballs can be studied in the process
$e^+e^- \to\gamma^{*} \to H+\mathcal{G}_J$,
where $H$ denotes a $c\bar{c}$ quark pair or charmonium state
and $\mathcal{G}_J$ is a glueball.
Thus, if the $X(1835)$ is a glueball candidate, it can also be searched for in the process
$e^+e^-\to J/\psi X(1835)$ at $\sqrt{s}\approx10.6$~GeV at Belle
using a data sample of 672 fb$^{-1}$.
After all the event selections, the $M_{\rm recoil}$ distributions of the $J/\psi$
are shown in Fig.~\ref{fitx}. An unbinned simultaneous maximum likelihood
fit to the $M_{\rm recoil}$ distributions was performed for the $\mu^+\mu^-$ and $e^+e^-$ channels
in the region of 0.8 GeV/c$^2<M_{\rm recoil}<2.8$ GeV/$c^2$, which constrains the expected signal from $J/\psi \to \mu^+\mu^-$
and $J/\psi \to e^+e^-$ to be consistent with the ratio of
$\varepsilon_i$ and ${\cal B}_i$,
where $\varepsilon_i$ and ${\cal B}_i$ are the efficiency and branching
fraction for the two channels, respectively.
No significant evidence of the $X(1835)$ is found, and an
upper limit is set on its cross section times the branching fraction:
$\sigma_{\rm Born}(e^+e^- \to J/\psi X(1835)) \cdot
{\cal B}(X(1835)\to \,{>}2\ \mbox{charged tracks}) < 1.3 \ {\rm fb}$ at 90\%
C.L. This upper limit is three orders of magnitude smaller than the cross section of
prompt production of $J/\psi$.
Thus, no evidence was found in the Belle experiment to support
the hypothesis that the $X(1835)$ is a glueball produced
in association with the $J/\psi$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.14]{aa.eps}
\includegraphics[scale=0.14]{bb.eps}
\put(-300,90){(a)}
\put(-125,90){(b)}
\end{center}
\caption{\label{fitx} The data points are for the distributions of
the recoil mass against $J/\psi$ reconstructed from (a) $\mu^+\mu^-$ and (b) $e^+e^-$.
The histograms represent the backgrounds
from the $J/\psi$ sideband; the hatched histograms represent
charmed- plus $uds$-quark backgrounds.
The solid lines are results of the fits and the dashed lines are background shapes.}
\end{figure}
\section{$\eta\eta$ mass spectra}
According to lattice QCD predictions, the
lowest mass glueball with $J^{PC}=0^{++}$ is in the mass region from $1.5$
to $1.7$~GeV/$c^2$. However, the mixing of the pure glueball with
nearby $q \bar q$ nonet mesons makes the identification of the
glueballs difficult in both experiment and theory.
Radiative $J/\psi$ decay is a gluon-rich process and
has long been regarded as one of the most promising hunting grounds for
glueballs. In particular, $J/\psi$ radiative decay to two
pseudoscalar mesons offers a very clean laboratory to search for
scalar and tensor glueballs, because only intermediate states with
$J^{PC}=even^{++}$ are possible.
Recently, a study of $J/\psi \to
\gamma \eta \eta$ was performed by BESIII using
$2.25\times 10^{8}$ $J/\psi$ events~\cite{bes3-2eta},
where the $\eta$ meson is detected in its $\gamma\gamma$
decay. Six resonances, the $f_{0}(1500)$, $f_{0}(1710)$,
$f_{0}(2100)$, $f_{2}^{'}(1525)$, $f_{2}(1810)$, and $f_{2}(2340)$, as
well as the $0^{++}$ phase space and $J/\psi\to\phi\eta$, are included in the
basic solution. The masses and widths of
the resonances, the branching ratios of radiative $J/\psi$ decays to X,
and the statistical significances are summarized in Table~\ref{mwb}.
The comparisons of the $\eta\eta$ invariant mass spectrum,
$\cos\theta_{\eta}$, $\cos\theta_{\gamma}$ and $\phi_{\eta}$
distributions between the data and the partial wave analysis (PWA) fit projections
are displayed in Fig.~\ref{fig:pwafitresult}.
The results show that the
dominant $0^{++}$ and $2^{++}$ components are from
the $f_0(1710)$, $f_0(2100)$, $f_0(1500)$, $f_2'(1525)$, $f_2(1810)$ and $f_2(2340)$.
\begin{table}[ph]
\tbl{Summary of the PWA results, including the masses and widths of the resonances, the branching ratios of
$J/\psi\to\gamma$X, and the significances. The first errors are statistical and the second ones are systematic.
The statistical significances are obtained from the changes of the log likelihood.}
{\begin{tabular}{ccccc}
\hline\hline Resonance &Mass(MeV/$c^{2}$) &Width(MeV/$c^{2}$) &${\cal B}{(J/\psi\to\gamma X\to\gamma \eta\eta)}$ &Significance\\ \hline
$f_{0}(1500)$ &1468$^{+14+23}_{-15-74}$ &136$^{+41+28}_{-26-100}$ &$(1.65^{+0.26+0.51}_{-0.31-1.40})\times10^{-5}$ &8.2~$\sigma$ \\%\hline
$f_{0}(1710)$ &1759$\pm6^{+14}_{-25}$ &172$\pm10^{+32}_{-16}$ &$(2.35^{+0.13+1.24}_{-0.11-0.74})\times10^{-4}$ &25.0~$\sigma$ \\%\hline
$f_{0}(2100)$ &2081$\pm13^{+24}_{-36}$ &273$^{+27+70}_{-24-23}$ &$(1.13^{+0.09+0.64}_{-0.10-0.28})\times10^{-4}$ &13.9~$\sigma$ \\%\hline
$f_{2}^{'}(1525)$ &1513$\pm5^{+4}_{-10}$ &75$^{+12+16}_{-10-8}$ &$(3.42^{+0.43+1.37}_{-0.51-1.30})\times10^{-5}$ &11.0~$\sigma$ \\%\hline
$f_{2}(1810)$ &1822$^{+29+66}_{-24-57}$ &229$^{+52+88}_{-42-155}$ &$(5.40^{+0.60+3.42}_{-0.67-2.35})\times10^{-5}$ &6.4~$\sigma$ \\%\hline
$f_{2}(2340)$ &2362$^{+31+140}_{-30-63}$ &334$^{+62+165}_{-54-100}$ &$(5.60^{+0.62+2.37}_{-0.65-2.07})\times10^{-5}$ &7.6~$\sigma$ \\\hline \hline
\end{tabular} \label{mwb}}
\end{table}
\begin{figure*}[htbp]
\vskip -0.1cm
\centering
{\includegraphics[height=4.5cm]{metaeta.eps}
\put(-130,7){(a)}\put(-75,100){$\chi^{2}/N_{bin}$$=$$1.72$}}
{\includegraphics[height=4.5cm]{cosgamma.eps}
\put(-130,7){(b)}\put(-100,70){$\chi^{2}/N_{bin}$$=$$1.19$}}
{\includegraphics[height=4.5cm]{costhe.eps}
\put(-130,7){(c)}\put(-100,110){$\chi^{2}/N_{bin}$$=$$0.69$}}
{\includegraphics[height=4.5cm]{phi.eps}
\put(-130,7){(d)}\put(30,60){$\chi^{2}/N_{bin}$$=$$0.68$}
}
\caption{Comparisons between data and PWA fit projections: (a) the
invariant mass spectrum of $\eta\eta$, (b)-(c) the polar angle of
the radiative photon in the $J/\psi$ rest frame and $\eta$ in the
$\eta\eta$ helicity frame, and (d) the azimuthal angle of $\eta$ in
the $\eta\eta$ helicity frame. The black dots with error bars are
data with background subtracted, and the solid histograms show the
PWA projections.}
\vskip -0.5cm
\label{fig:pwafitresult}
\end{figure*}
The $\eta \eta$ mass spectrum was also studied by Belle in the two-photon
process $\gamma \gamma \to \eta \eta$ using 393 fb$^{-1}$ of data~\cite{belle-2eta}.
These purely neutral final states are
selected with energy-sum and cluster-counting triggers, whose information is provided by
the CsI(Tl) electromagnetic calorimeter. The background is subtracted by studying sideband events in
the two-dimensional $M_1(\gamma \gamma)$ versus $M_2(\gamma \gamma)$ distribution. Further background
effects are studied using the $|\sum{\vec{p}_{t}^{\,*}}|$
distribution. Figure~\ref{cs-gg} shows the total cross sections.
For the lower energy region $1.16~\hbox{GeV} < W < 2.0$ GeV, a PWA of
the differential cross section was performed, as shown in Fig.~\ref{total_cs}.
In addition to the known $f_2(1270)$ and $f_2'(1525)$,
a tensor meson $f_2(X)$ is needed to describe the $D_2$ wave; it may correspond to the $f_2(1810)$ state.
Its mass, width, and product of the two-photon decay width and branching fraction $\Gamma_{\gamma\gamma}B(\eta\eta)$
are obtained to be $1737\pm9$ MeV/$c^2$, $228^{+21}_{-20}$ MeV and $5.2^{+0.9}_{-0.8}$ eV, respectively.
\begin{figure}[h]
\centering
\begin{minipage}{14pc}
\includegraphics[width=12pc]{fig07.eps}
\caption{\label{cs-gg} (a) The cross section integrated
over $|\cos \theta^{\ast}|< 0.9$ and (b) over
$|\cos \theta^\ast|< 1.0$ for $W < 2.0$~GeV. Here $\theta^\ast$
is the angle of the $\eta$ in the two-photon system. The dotted curve
shows the size of the systematic uncertainty.}
\end{minipage}\hspace{1.5pc}%
\begin{minipage}{14pc}
\includegraphics[width=12pc]{fig14.eps}
\caption{\label{total_cs} Total cross sections and fitted curves for
the nominal fit in the high mass region (solid curve).
Dotted (dot-dashed) curves are $|S|^2$ ($|D_2|^2$) from the fit.}
\end{minipage}
\end{figure}
\section{$\omega \omega$, $\omega \phi$ and $\phi \phi$ mass spectra}
An anomalous near-threshold enhancement, denoted as the $X(1810)$, in the $\omega \phi$ invariant-mass spectrum
in the process $J/\psi \to \gamma \omega \phi$ was reported by the BESII experiment via PWA.
The analysis indicated that the $X(1810)$ quantum number assignment
favored $J^{PC}=0^{++}$ over $J^{PC}=0^{-+}$ or $2^{++}$ with
a significance of more than 10$\sigma$. The mass and width are
$M = 1812^{+19}_{-26}(stat.)\pm18$(syst.) MeV/$c^2$ and
$\Gamma = 105\pm20(stat.)\pm28(syst.)$ MeV/$c^2$, respectively, and the product branching
fraction ${\cal B}$($J/\psi\to\gamma$ $X(1810)$) ${\cal B}$($X(1810)$$\to\omega \phi$)
=$[2.61\pm0.27(stat.)\pm0.65(syst.)]\times10^{-4}$ was measured.
Possible interpretations of the $X(1810)$ include a tetraquark state, a hybrid,
a glueball state, a dynamical effect arising from intermediate meson rescattering,
or a threshold cusp of an attractive resonance.
\begin{figure}
\begin{center}
\includegraphics[scale=0.32]{pwa_mwf.eps}
\end{center}
\caption{\label{pwa_mwf} Comparison of the $K^+K^-\pi^+\pi^-\pi^0$ invariant-mass distribution between data and the PWA fit projections.}
\end{figure}
In order to confirm the $X(1810)$, a PWA using a covariant tensor amplitude
for the $J/\psi \to \gamma \omega \phi$ process was performed again
on candidate events selected from $(225.3\pm2.8) \times
10^{6}$ $J/\psi$ events~\cite{bes3-x1810}, to study the properties of
the $\omega \phi$ mass-threshold enhancement.
In the PWA, the enhancement is denoted as $X$, and the
decay processes are described with sequential 2-body or 3-body decays:
$J/\psi\to\gamma X$, $X\to\omega \phi$, $\omega\to\pi^+\pi^-\pi^0$, and $\phi\to K^+K^-$. The amplitudes
of the 2-body and 3-body decays are constructed with the covariant tensor
amplitude method. Finally, together with the contributions of the $X(1810)$ and phase space, additional
needed components are listed in Table~\ref{optimalres} for the best solution of the PWA fit.
The $J^{PC}=0^{++}$ assignment for the $X(1810)$ has by far
the highest log likelihood value among the different $J^{PC}$ hypotheses,
and the statistical significance of the $X(1810)$ is more than 30$\sigma$.
The mass and width of the $X(1810)$
are determined to be $M=1795\pm7(stat.)^{+13}_{-5}(syst.)\pm19(mod.)$ MeV/$c^2$ and
$\Gamma=95\pm10(stat.)^{+21}_{-34}(syst.)\pm75(mod.)$ MeV/$c^2$ and the product branching fraction is measured to be
${\cal B}(J/\psi\to\gamma X(1810))\times{\cal B}(X(1810)\to\omega \phi)=(2.00\pm0.08(stat.)^{+0.45}_{-1.00}(syst.)\pm1.30(mod.))\times10^{-4}$.
The contributions of each component of the best solution of the PWA fit
are shown in Fig.~\ref{pwa_mwf}. The enhancement is not compatible with being due either
to the $X(1835)$ or to the $X(p\bar{p})$, owing to the different masses and spin-parities.
A search for other possible states
decaying to $\omega \phi$ would be interesting.
\begin{table}[ph]
\tbl{Results from the best PWA fit solution.}
{\begin{tabular}{cccccc}
\hline\hline
Resonance&J$^{PC}$&M(MeV$/c^2$)&$\Gamma$(MeV$/c^2$)&Events&Significance\\\hline
$X(1810)$&0$^{++}$&$1795\pm7$&$95\pm10$&$1319\pm52$&$>30\sigma$\\\hline
f$_{2}$(1950)&2$^{++}$&1944&472&$665\pm40$&20.4$\sigma$\\\hline
f$_{0}$(2020)&0$^{++}$&1992&442&$715\pm45$&13.9$\sigma$\\\hline
$\eta(2225)$&0$^{-+}$&2226&185&$70\pm30$&$6.4\sigma$\\\hline
phase space&0$^{-+}$&--- &--- &$319\pm24$&9.1$\sigma$\\\hline\hline
\end{tabular} \label{optimalres}}
\end{table}
In the two-photon processes
$\gamma\gamma\to \omega J/\psi$ and $\gamma\gamma\to \phi
J/\psi$, the state $X(3915)$ and evidence for the
$X(4350)$ were observed.
It is very natural to extend the above theoretical picture to similar
states coupling to $\omega\phi$, since the only difference between such
states and the $X(3915)$ or $X(4350)$ is
the replacement of the $c\bar{c}$ pair with a pair of light quarks.
States coupling to $\omega\omega$ or $\phi\phi$ could also provide information on the
classification of the low-lying states coupled to pairs of light vector
mesons.
The $\gamma \gamma \to VV$ cross sections are shown in
Fig.~\ref{cross-section}~\cite{gg2vv}.
The fraction of cross sections for
different $J^P$ values as a function of $M(VV)$ is also shown in
Fig.~\ref{cross-section}. We conclude that there are at least two
different $J^P$ components ($J=0$ and $J=2$) in each of the three
final states. The inset also shows the cross
section on a semi-logarithmic scale, where, in the high-energy
region, we fit the cross section with a $W^{-n}_{\gamma \gamma}$
dependence.
We observe
clear structures at $M(\omega\phi)\sim 2.2$~GeV/$c^2$, $M(\phi \phi)\sim 2.35$~GeV/$c^2$,
and $M(\omega\omega)\sim 2.0$~GeV/$c^2$. While there are substantial spin-zero components in
all three modes, there are also spin-two components near threshold.
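The high-energy $W^{-n}_{\gamma\gamma}$ fit mentioned above is, in its simplest form, a linear least-squares fit in log-log space. A minimal sketch on synthetic data (the normalization and exponent below are illustrative, not the measured values):

```python
import numpy as np

def fit_power_law(w, sigma):
    """Fit sigma = A * w**(-n) by a straight-line fit in log-log
    space; returns (A, n)."""
    slope, intercept = np.polyfit(np.log(w), np.log(sigma), 1)
    return np.exp(intercept), -slope

# Synthetic cross-section data following an exact w^-7.8 power law:
w = np.linspace(2.4, 4.0, 20)      # GeV
sigma = 5.0 * w ** (-7.8)          # arbitrary units
amp, n = fit_power_law(w, sigma)   # recovers A = 5.0, n = 7.8
```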
\begin{figure}
\begin{center}
\includegraphics[width=1.1in, angle=-90]{fig2a.epsi}
\includegraphics[width=1.1in, angle=-90]{fig2b.epsi}
\includegraphics[width=1.1in, angle=-90]{fig2c.epsi}
\put(-257,-12){ \bf (a)}
\put(-140,-11){ \bf (b)}
\put(-25,-11){ \bf (c)}
\end{center}
\caption{The cross sections of $\gamma \gamma \to \omega \phi$
(a), $\phi \phi$ (b), and $\omega \omega$ (c)
are shown as points with error
bars. The fractional contributions for different $J^P$ values as a
function of $M(VV)$ are shown as the points and squares with error bars.} \label{cross-section}
\end{figure}
\section{Conclusion}
I have reviewed some results on charmonium and light-hadron spectroscopy,
mainly from the BESIII and Belle experiments,
including the observation of $\psi(4040)/\psi(4160) \to \eta J/\psi$,
measurements of the $\eta_c/\eta_c(2S)$ resonance parameters and their decays,
the evidence for the $\psi_2(1^3D_2)$ state in the $\chi_{c1}\gamma$ mass spectrum,
searches for the X(1835) in additional processes, and the analyses of the $\eta \eta$,
$\omega \phi$, $\phi\phi$ and $\omega \omega$ mass spectra.
\section*{Acknowledgments}
This work is supported partly by the Fundamental Research Funds for the Central Universities of China (303236).
\section{Introduction and preliminaries}
The study of disjointness in hypercyclicity was initiated in 2007 by Bernal-Gonz\'{a}lez \cite{B1} and by B\`{e}s and Peris \cite{BP}. Since then, disjoint hypercyclicity has been investigated by many authors; we recommend \cite{BMP}, \cite{BMS}, \cite{BMPS}, \cite{Sh} and \cite{SA} for recent works on this subject.
The notions of disjoint hypercyclic and disjoint supercyclic operators derive from the much older notions of hypercyclic and supercyclic operators in linear dynamics. Let $X$ be a separable infinite-dimensional complex Banach space, and denote by $L(X)$ the set of all continuous linear operators on $X.$ An operator $T\in L(X)$ is said to be \emph{hypercyclic} if there is some vector $x\in X$ such that the \emph{orbit} $\mathrm{Orb}(T,x)=\{T^nx : n\in \mathbb{N}\}$ (where $\mathbb{N} = \{0, 1, 2, 3, \ldots\}$) is dense in $X$. In such a case, $x$ is called a \emph{hypercyclic vector} for $T.$ Similarly, $T$ is said to be \emph{supercyclic} if there exists an $x\in X$ such that $\mathbb{C}\cdot\mathrm{Orb}(T,x)=\{\lambda T^nx : n\in \mathbb{N}, \lambda\in \mathbb{C}\}$ is dense in $X.$ For background on hypercyclicity and supercyclicity we refer to the excellent monographs by Bayart and Matheron \cite{BM} and by Grosse-Erdmann and Peris Manguillot \cite{GM}.
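A classical example may help fix these notions (Rolewicz's theorem, stated here as standard background rather than a result of this paper): on $\ell^p(\mathbb{N})$, $1\le p<\infty$, consider the backward shift

```latex
\[
  B(x_1, x_2, x_3, \ldots) = (x_2, x_3, x_4, \ldots).
\]
% Rolewicz (1969): \lambda B is hypercyclic for every scalar with
% |\lambda| > 1. The operator B itself is never hypercyclic, since
% \|B^n x\| \le \|x\| keeps every orbit norm-bounded, but B is
% supercyclic: the scalar multiples \lambda B^n x already form a
% dense set for suitable x.
```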
For $N \geq 2,$ hypercyclic (respectively, supercyclic) operators $T_1, T_2, \ldots, T_N$ acting on the same space $X$ are said to be \emph{disjoint} or \emph{d-hypercyclic} (respectively, \emph{d-supercyclic}) if their direct sum $\oplus_{m=1}^{N}T_m$ has a hypercyclic (respectively, supercyclic) vector of the form $(x, x, \ldots, x)$ in $X^N.$ Such a vector $x$ is called a \emph{d-hypercyclic} (respectively, \emph{d-supercyclic}) vector for $T_1, T_2, \ldots, T_N.$ If the set of d-hypercyclic (respectively, d-supercyclic) vectors is dense in $X,$ we say that $T_1, T_2, \ldots, T_N$ are
\emph{densely d-hypercyclic} (respectively, \emph{densely d-supercyclic}).
In the study of linear dynamics, one large source of examples is the class of weighted shifts. In \cite{S1} and \cite{SH}, Salas characterized the hypercyclic and supercyclic weighted shifts on $\ell^p(\mathbb{Z}) \; (1\leq p < \infty)$ respectively.
The characterizations for weighted shifts on $\ell^p(\mathbb{Z})\; (1\leq p < \infty)$ to be disjoint hypercyclic and disjoint supercyclic were provided in \cite{BP}, \cite{MO} and \cite{LZ}. As generalizations of weighted shifts on $\ell^p(\mathbb{Z})\; (1\leq p < \infty)$, Grosse-Erdmann \cite{GE} studied the hypercyclicity of weighted pseudo-shifts on F-sequence spaces, and Hazarika and Arora \cite{HA} considered hypercyclic operator weighted shifts on $\ell^2(\mathbb{Z},\mathcal{K})$. The equivalent conditions for weighted pseudo-shifts and operator weighted shifts to be supercyclic were obtained in \cite{LZ2}. Inspired by these results, in \cite{WZ} we characterized the disjoint hypercyclic powers of weighted pseudo-shifts as below.
\begin{theorem}\label{d-hyper1}
$(\cite{WZ}, \mbox{ Theorem } 2.1)$
Let $I$ be a countably infinite index set and let $X$ be a Banach sequence space over $I$ in which $(e_i)_{i\in I}$ is an OP-basis. Let $\varphi : I \rightarrow I$ be an injective map and $N\geq 2.$ For each $1 \leq l \leq N$, let $T_{l} = T_{b^{(l)}, \varphi} : X \rightarrow X$ be a weighted pseudo-shift generated by $\varphi$ with weight sequence $b^{(l)} = (b_i^{(l)})_{i\in I}.$ Then for any integers $1\leq r_1 < r_2 < \cdots < r_{N},$ the following are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-hypercyclic.
\item $(\alpha)$ The mapping $\varphi : I\rightarrow I$ has no periodic points.
\quad \; $(\beta)$ There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that for every $i\in I$ we have:
(H1) $\mbox{ If } 1 \leq l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}(i)} \rightarrow 0,\\
\\
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v(i)} ^{(l)}\right) e_{\psi^{r_l n_k}(i)} \rightarrow 0
\end{array}\right.
\mbox{ in } X, \mbox{ as } k\rightarrow \infty.
\end{eqnarray*}
(H2) If $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v (\varphi^{r_s n_k}(i))}^{(l)} \right) e_{\psi^{(r_l-r_s) n_k}(i)} \rightarrow 0,\\
\\
\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v (\varphi^{r_l n_k}(i))} ^{(s)}\right) e_{\varphi^{(r_l-r_s) n_k}(i)} \rightarrow 0\\
\end{array}\right.
\mbox{ in } X, \mbox{ as } k\rightarrow \infty.
\end{eqnarray*}
\item $T_1^{r_1 }, T_{2} ^{r_2}, \ldots,T_{N }^{r_N }$ satisfy the d-Hypercyclicity Criterion.
\end{enumerate}
\end{theorem}
\begin{remark}
As shown in \cite{WZ}, Theorem \ref{d-hyper1} also holds for weighted pseudo-shifts on an F-sequence space.
\end{remark}
Note that the weighted pseudo-shifts $T_{b^{(1)}, \varphi}, T_{b^{(2)}, \varphi},\ldots, T_{b^{(N)}, \varphi}$ in the above theorem are determined by different weights $b^{(l)}\;(1 \leq l \leq N)$ and the same injective mapping $\varphi$. In this article, we continue our research by considering the disjoint hypercyclicity and disjoint supercyclicity of finite weighted pseudo-shifts that are generated by different weights and different injective mappings.
The following criteria play an important role in our main results. The first criterion is due to B\`{e}s and Peris \cite{BP} and the second one is due to Martin \cite{MO}.
\begin{definition} \label{d-hypercriterion}
Let $(n_k)_k$ be a strictly increasing sequence of positive integers. We say that $T_{1}, T_{2}, \ldots, T_{N} \in L(X)$ satisfy the d-Hypercyclicity Criterion with respect to $(n_k)_k$ provided there exist dense subsets $X_0, X_1, \ldots, X_N$ of $X$ and mappings $S_{l,k} : X_l \rightarrow X (1 \leq l \leq N, k\in \mathbb{N})$ satisfying
\begin{eqnarray*}
T_l^{n_k} \;\xrightarrow[k\rightarrow \infty]{}\;\; 0 \;\;\; \mbox{pointwise on } X_0,
\end{eqnarray*}
\begin{eqnarray*}
S_{l,k}\;\;\;\xrightarrow[k\rightarrow \infty]{} 0 \;\;\; \mbox{pointwise on } X_l, \mbox{ and }
\end{eqnarray*}
\begin{eqnarray}\label{1.1}
(T_l^{n_k}S_{i,k} - \delta _{i,l} Id_{X_i})\;\;\;\xrightarrow[k\rightarrow \infty]{} 0 \;\;\; \mbox{pointwise on } X_i\; (1\leq i\leq N).
\end{eqnarray}
In general, we say that $T_{1}, T_{2}, \ldots, T_{N}$ satisfy the d-Hypercyclicity Criterion if there exists some sequence $(n_k)_k$ for which the above conditions are satisfied.
\end{definition}
If $T_{1}, T_{2}, \ldots, T_{N}$ satisfy the d-Hypercyclicity Criterion with respect to a sequence $(n_k)_k$, then $T_{1}, T_{2}, \ldots, T_{N}$ are densely d-hypercyclic.
\begin{definition} \label{d-supercriterion}
Let $X$ be a Banach space, $(n_k)_k$ be a strictly increasing sequence of positive integers and $N \geq 2$. We say that $T_{1}, T_{2}, \ldots, T_{N} \in L(X)$ satisfy the d-Supercyclicity Criterion with respect to $(n_k)_k$ provided there exist dense subsets $X_0, X_1, \ldots, X_N$ of $X$ and mappings $S_{l,k} : X_l \rightarrow X (1 \leq l \leq N, k\in \mathbb{N})$ such that for $1\leq l\leq N,$
\begin{description}
\item[(i)]$(T_l^{n_k}S_{i,k} - \delta _{i,l} Id_{X_i})\;\;\;\xrightarrow[k\rightarrow \infty]{} 0 \;\;\; \mbox{pointwise on } X_i\; (1\leq i\leq N);$
\item[(ii)]$\lim\limits_{k\rightarrow \infty}\left\|T_l^{n_k}x\right\| \cdot \left\|\sum\limits_{j=1}^{N} S_{j,k} y_j \right\|=0 \;\;\; \mbox{for } x\in X_0, y_j\in X_j.$
\end{description}
\end{definition}
Let $N \geq 2$ and $T_{1}, T_{2}, \ldots, T_{N}\in L(X)$ satisfy the d-Supercyclicity Criterion. Then $T_{1}, T_{2}, \ldots, T_{N} $ have a dense set of d-supercyclic vectors.
To proceed further we recall some terminology about the sequence spaces and the weighted pseudo-shifts. For a comprehensive survey we recommend Grosse-Erdmann's paper \cite{GE}.
\begin{definition}
$\textbf{(Sequence\;Space)}$\;\; If we allow an arbitrary countably infinite set $I$ as an index set, then a \emph{sequence space over} $I$ is a subspace of the space $\omega(I)=\mathbb{C}^{I}$ of all scalar families $(x_i)_{i\in I}.$ The space $\omega(I)$ is endowed with its natural product topology.
A \emph{topological sequence space $X$ over $I$} is a sequence space over $I$ that is endowed with a linear topology in such a way that the inclusion mapping $X\hookrightarrow \omega(I)$ is continuous or, equivalently, that every \emph{coordinate functional} $f_i: X\rightarrow \mathbb{C}, (x_k)_{k\in I}\mapsto x_i(i \in I)$ is continuous. A \emph{Banach $($Hilbert, F-$)$ sequence space over $I$} is a topological sequence space over $I$ that is a Banach $(\mbox{Hilbert, F-})$ space.
\end{definition}
\begin{definition}
$\textbf{(OP-basis)}$\;\;
By $(e_i)_{i\in I}$ we denote the canonical unit vectors $e_i=(\delta_{ik})_{k\in I}$ in a topological sequence space $X$ over $I.$ We say that $(e_i)_{i\in I}$ is an \emph{OP-basis} (\emph{Ovsepian--Pe{\l}czy\'{n}ski basis}) if $\mbox{span}\{e_i: i\in I\}$ is a dense subspace of $X$ and
the family of \emph{coordinate projections} $x\mapsto x_i e_i\;(i \in I)$ on $X$ is equicontinuous.
Note that in a Banach sequence space over $I,$ the family of coordinate projections is equicontinuous if and only if $\sup_{i\in I}\|e_i\|\,\|f_i\|< \infty.$
\end{definition}
\begin{definition}
$\textbf{(Pseudo-shift Operator)}$\;\;
Let $X$ be a Banach sequence
space over $I$. Then a continuous linear operator $T : X \rightarrow X$ is called a \textit{ weighted pseudo-shift} if there is a sequence $(b_i)_{i\in I}$ of non-zero scalars and an injective mapping $\varphi : I \rightarrow I$ such that
$$T(x_i)_{i\in I}=(b_i x_{\varphi(i)})_{i\in I}$$
for $(x_i) \in X.$ We then write $T = T_{b,\varphi},$ and $(b_i)_{i\in I}$ is called the \textit{ weight sequence}.
\end{definition}
\begin{remark}
\begin{enumerate}
\item If $T = T_{b,\varphi} : X \rightarrow X$ is a weighted pseudo-shift, then
each $T^n (n\geq 1)$ is also a weighted pseudo-shift as follows
\[
T^{n}(x_i)_{i\in I} = (b_{n,i}x_{\varphi^n(i)})_{i\in I}
\]
where
$$\varphi^n(i) =(\varphi \circ \varphi \circ \cdots \circ \varphi)(i)\;\;\; (\textit{n}- \mbox{fold})$$
\begin{eqnarray*}
b_{n,i}&=&b_ib_{\varphi(i)} \cdots b_{\varphi^{n-1}(i)} = \prod\limits_{v=0}^{n-1}b_{\varphi^{v}(i)}.
\end{eqnarray*}
\item We consider the inverse $\psi = \varphi^{-1} :\varphi(I) \rightarrow I$ of the mapping $\varphi.$ We also set
\[
b_{\psi(i)} = 0 \;\;\; \mbox{ and }\;\;\; e_{\psi(i)} = 0 \;\;\; \mbox{if } i\in I\setminus \varphi(I),
\]
i.e.\ when $\psi(i)$ is ``undefined''. Then
for all $i \in I,$
$$T_{b,\varphi} e_i = b_{\psi(i)} e_{\psi(i)}.$$
\item We denote $\psi^n = \psi \circ \psi \circ \cdots \circ \psi$ (\textit{n}-fold), and we set $b_{\psi^{n}(i)} = 0 \mbox{ and } e_{\psi^{n}(i)} = 0$ when $\psi^{n}(i)$ is ``undefined''.
\end{enumerate}
\end{remark}
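The weight products $b_{n,i}$ in the remark above can be checked numerically. The following is a minimal sketch (not part of the paper; the concrete choices $I=\mathbb{Z}$, $\varphi(i)=i+1$ and $b_i=2$ are illustrative assumptions): it applies a weighted pseudo-shift to a finitely supported sequence and compares the result with the closed form for $T^n$.

```python
from math import prod

def apply_T(x, b, phi, window):
    # One application of T_{b,phi}: (T x)_i = b_i * x_{phi(i)},
    # evaluated on a finite window of indices (x is a dict with finite support).
    return {i: b(i) * x.get(phi(i), 0.0) for i in window}

def phi_pow(phi, n, i):
    # n-fold iterate phi^n(i).
    for _ in range(n):
        i = phi(i)
    return i

# Illustrative choices (assumptions, not from the paper).
phi = lambda i: i + 1
b = lambda i: 2.0
window = range(-30, 30)

x = {0: 1.0, 3: -0.5}          # a finitely supported sequence over Z
n = 4
y = dict(x)
for _ in range(n):
    y = apply_T(y, b, phi, window)

# Closed form from the remark: (T^n x)_i = b_{n,i} * x_{phi^n(i)},
# where b_{n,i} = prod_{v=0}^{n-1} b_{phi^v(i)}.
for i in range(-20, 20):
    b_ni = prod(b(phi_pow(phi, v, i)) for v in range(n))
    assert abs(y.get(i, 0.0) - b_ni * x.get(phi_pow(phi, n, i), 0.0)) < 1e-12
```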
\begin{definition}
Let $\varphi: I \rightarrow I$ be a map on $I$ and let $(\varphi^n)_n$ be the sequence of iterates of the mapping $\varphi$ (that is, $\varphi^n : = \varphi \circ \varphi \circ \cdots \circ \varphi$ ($n$-fold)). We say that
$(\varphi^n)_n$ is a \textit{run-away sequence} if for each finite subset $I_0 \subset I$ there exists an $n_0 \in \mathbb{N}$ such that $\varphi^n(I_0)\cap I_0 = \emptyset$ for every $n \geq n_0.$
Let $\varphi_1: I \rightarrow I$ and $\varphi_2 :I \rightarrow I$ be two maps on $I.$ We say that the sequence $(\varphi_1^n)_n$ is \textit{run-away with} $(\varphi_2^n)_n$ if for any finite subset $I_0 \subset I$ there exists an $n_0 \in \mathbb{N}$ such that $\varphi_1^n(I_0)\cap \varphi_2^n(I_0) = \emptyset$ for every $n \geq n_0.$
\end{definition}
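For a concrete feel for the run-away property, the following small sketch (illustrative assumptions: $I=\mathbb{Z}$ and the translation $\varphi(i)=i+1$, which has no periodic points) checks that $\varphi^n(I_0)$ meets a finite set $I_0$ only for small $n$ and leaves it permanently once $n$ exceeds the diameter of $I_0$.

```python
def phi_image(phi, S, n):
    # n-fold image phi^n(S) of a finite set S.
    for _ in range(n):
        S = {phi(i) for i in S}
    return S

phi = lambda i: i + 1          # translation on I = Z (assumption)
I0 = {-3, 0, 5}
diam = max(I0) - min(I0)       # diameter of I_0, here 8

# phi^3(I_0) = {0, 3, 8} still meets I_0 ...
assert not phi_image(phi, I0, 3).isdisjoint(I0)
# ... but phi^n(I_0) is disjoint from I_0 for every n > diam:
# the run-away property.
assert all(phi_image(phi, I0, n).isdisjoint(I0)
           for n in range(diam + 1, diam + 50))
```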
\section{Disjoint hypercyclic powers of weighted pseudo-shifts}
In this section, let $X$ be a Banach sequence space over $I.$ We will characterize disjoint hypercyclic weighted pseudo-shifts on $X$ generated by different weights and different injective maps, which generalizes Theorem \ref{d-hyper1}.
The following is the main result in this section.
\begin{theorem}\label{main 1}
Let $X$ be a Banach sequence space over $I$, in which $(e_i)_{i\in I}$ is an OP-basis, let $N\geq 2,$ and let integers $1\leq r_1 < r_2 < \cdots < r_{N}$ be given. For each $1 \leq l \leq N$, let $T_{l} = T_{b^{(l)}, \varphi_{l}} : X \rightarrow X$ be a weighted pseudo-shift with weight sequence $b^{(l)} = (b_i^{(l)})_{i\in I}$ and injective mapping $\varphi_{l}.$ Assume that for any $1 \leq s < l \leq N$ the sequence $((\varphi_{s}^{r_s})^n)_n$ is run-away with $((\varphi_{l}^{r_l})^n)_n$. Then the following are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-hypercyclic.
\item\quad $(\alpha)$ For each $1 \leq l \leq N$, the mapping $\varphi_{l} : I\rightarrow I$ has no periodic points.
\quad\;\;\;\;\;$(\beta)$ There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that for every $i\in I$ we have:
(H1) $\mbox{ If } 1 \leq l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
\lim\limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_{l}(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}_{l}(i)}\right\| = 0,\\
\\
\lim\limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_{l}(i)} ^{(l)}\right) e_{\psi^{r_l n_k}_{l}(i)}\right\| = 0
\end{array}\right.
\end{eqnarray*}
(H2) If $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
\lim\limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_{s}(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_{l} (\varphi^{r_s n_k}_{s}(i))}^{(l)} \right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))} \right\| = 0,\\
\\
\lim\limits_{k\rightarrow \infty}\left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_{l}(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_{s} (\varphi^{r_l n_k}_{l}(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))} \right\| = 0
\end{array}\right.
\end{eqnarray*}
\item $T_1^{r_1 }, T_{2} ^{r_2}, \ldots,T_{N }^{r_N }$ satisfy the d-Hypercyclicity Criterion.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)\Rightarrow (2).$
Since $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are d-hypercyclic, $T_{l}$ is hypercyclic for each $1 \leq l \leq N.$ In \cite{GE}, Grosse-Erdmann proved that if the weighted pseudo-shift $T_{b^{(l)}, \varphi_{l}}$ is hypercyclic, then the mapping $\varphi_{l} : I\rightarrow I$ has no periodic points, which gives that $(\varphi_{l}^n)_n$ is a run-away sequence.
Since $I$ is a countably infinite set, we may fix $$I : =\{i_1, i_2, \cdots, i_n,\cdots \}$$ and set $I_k: = \{i_1, i_2, \cdots, i_k\}$ for each integer $k\geq 1.$
To complete the proof of $(\beta)$, we first show that for any integers $k, N_0$ with $k\geq 1$ and $N_0 \in \mathbb{N},$ there is an integer $n_k > N_0$ such that for every $i\in I_k,$
$\mbox{ if } 1 \leq l \leq N,$
\begin{eqnarray} \label{a1}
\left\{\begin{array}{ll}
(i)\; \left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}_l(i)}\right\| < \frac{1}{k},\\
\\
(ii)\; \;\; \left\|\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l(i)} ^{(l)}\right) e_{\psi^{r_l n_k}_l(i)}\right\| < \frac{1}{k},
\end{array}\right.
\end{eqnarray}
if $1 \leq s < l \leq N,$
\begin{eqnarray}\label{a2}
\left\{\begin{array}{ll}
(i)\;\left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}\right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\| < \frac{1}{k}, \\
\\
(ii)\; \; \left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_{l}(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_{s} (\varphi^{r_l n_k}_{l}(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\| < \frac{1}{k}. \\
\end{array}\right.
\end{eqnarray}
Let integers $k\geq 1$ and $N_0 \in \mathbb{N}$ be given. Notice that $(e_i)_{i\in I}$ is an OP-basis, by the equicontinuity of the coordinate projections in $X,$ there is some $\delta_k > 0$ so that for $x = (x_i)_{i\in I} \in X,$
\begin{eqnarray}\label{a3}
||x_i e_i|| < \frac{1}{2k} \;\; \mbox{ for all } i\in I, \mbox{ if } ||x|| < \delta_k.
\end{eqnarray}
Since for each $1\leq l \leq N$ the sequence $(\varphi_{l}^n)_n$ is run-away and for $1\leq s < l \leq N,$ $((\varphi_{s}^{r_s})^n)_n$ is run-away with $((\varphi_{l}^{r_l})^n)_n$, there exists an integer $\widetilde{N_0}\in \mathbb{N}$ such that
\begin{eqnarray}\label{a4}
\left\{\begin{array}{ll}
(i)\;\varphi^{r_ln}_l(I_k)\cap I_k = \emptyset \; (1\leq l \leq N),\\
\\
(ii)\;\psi_{l}^{r_ln}(\varphi_{s}^{r_s n}(I_k)\cap \varphi^{r_ln}_l(I))\cap I_k = \emptyset \; (1\leq s < l \leq N)
\end{array}\right.
\end{eqnarray}
for all $n\geq \widetilde{N_0}.$
By the density of d-hypercyclic vectors in $X$ and the continuous inclusion of $X$ into $\mathbb{C}^I,$ we can find a d-hypercyclic vector $x = (x_i)_{i\in I}\in X$ and an integer $n_k > \max\{N_0, \widetilde{N_0}\}$ such that
\begin{eqnarray}\label{a5}
\left\{\begin{array}{ll}
(i) \; \|x-\sum_{i\in I_k}e_i\| < \delta_k,\\
\;\\
(ii) \; \sup\limits_{i\in I_k}|x_i-1|\leq \frac{1}{2}\\
\end{array}\right.
\end{eqnarray}
and for each $1\leq l \leq N,$
\begin{eqnarray}\label{a6}
\left\{\begin{array}{ll}
(i) \; \left\|y^{(l)} -\sum_{i\in I_k}e_i\right\| < \delta_k,\\
\;\\
(ii) \; \sup\limits_{i\in I_k}| y_i^{(l)}-1|\leq \frac{1}{2}, \\
\end{array}\right.
\end{eqnarray}
where $y^{(l)} := T_l^{r_ln_k}x = (\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)})_{i\in I} = (y^{(l)}_i)_{i\in I}.$ \\
By \eqref{a3}, the first inequality in \eqref{a5} implies that
\begin{eqnarray}\label{a7}
\|x_i e_i\| < \frac{1}{2k} \;\;\;\mbox{ if } i\notin I_k,
\end{eqnarray}
thus by the first inequality in \eqref{a4}, for each $1\leq l \leq N$ we have that
\begin{eqnarray}\label{a8}
\left\|x_{\varphi^{r_ln_k}_l(i)}e_{\varphi^{r_ln_k}_l(i)}\right\| < \frac{1}{2k}\;\; \mbox{ for } i\in I_k.
\end{eqnarray}
By the second inequality in \eqref{a6}, for each $i\in I_k$ and $1\leq l \leq N,$
\begin{eqnarray*}
\left|\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)}-1\right|\leq \frac{1}{2},
\end{eqnarray*}
which means that
\begin{eqnarray}\label{a9}
\left\{\begin{array}{ll}
(i)\;x_{\varphi^{r_ln_k}_l(i)} \neq 0, \\
\\
(ii)\;\left|\left(\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)}\right)^{-1}\right| \leq 2.\\
\end{array}\right.
\end{eqnarray}
For each $i\in I_k$ and $1\leq l \leq N,$ by \eqref{a8} and \eqref{a9}, it follows that
\begin{eqnarray*}
\left\| \left(\prod \limits_{v=0}^{r_ln_k-1} b_{\varphi^{v}_l(i)}^{(l)}\right)^{-1}e_{\varphi^{r_ln_k}_l(i)}\right\|& = & \left\|\frac{1}{\left(\prod \limits_{v=0}^{r_ln_k-1} b_{\varphi^{v}_l(i)}^{(l)}\right)x_{\varphi^{r_ln_k}_l(i)}} x_{\varphi^{r_ln_k}_l(i)}e_{\varphi^{r_ln_k}_l(i)}\right\|\\
& \leq & 2\left\|x_{\varphi^{r_ln_k}_l(i)}e_{\varphi^{r_ln_k}_l(i)}\right\| \\
& < & \frac{1}{k}.
\end{eqnarray*}
We deduce from \eqref{a4} and the definition of $\psi$ that
\begin{eqnarray}\label{a10}
\left\{\begin{array}{ll}
\psi^{r_ln_k}_l(I_k\cap \varphi^{r_ln_k}_l(I))\cap I_k = \emptyset \;\; \mbox{ for }1\leq l \leq N,\\
\\
\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(I_k)\cap \varphi^{r_sn_k}_s(I))\cap I_k = \emptyset\;\; \mbox{ for }1\leq s < l \leq N.
\end{array}\right.
\end{eqnarray}
By \eqref{a3}, the first inequality in \eqref{a6} implies
\begin{eqnarray}\label{a11}
\left\|\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)}e_i\right\| < \frac{1}{2k}\;\;\mbox{ for } i\notin I_k \mbox{ and } 1\leq l \leq N.
\end{eqnarray}
Note that for each $1\leq l \leq N,$
$$e_{\psi^{r_ln_k}_l(i)} = 0 \;\;\;\mbox{ if }\; i\in I_k\backslash \varphi^{r_ln_k}_l(I)$$
and for $1\leq s < l \leq N,$
$$e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))} = 0 \;\;\;\mbox{ if }\; i\in \varphi^{r_sn_k}_s(I_k)\backslash \varphi^{r_ln_k}_l(I),$$
$$e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))} = 0 \;\;\;\mbox{ if }\; i\in \varphi^{r_ln_k}_l(I_k)\backslash \varphi^{r_sn_k}_s(I).$$
So by \eqref{a11}, \eqref{a10} and $(ii)$ of \eqref{a4} we obtain that:
For each $i\in I_k$ and $1\leq l \leq N,$
\begin{eqnarray}\label{a12}
&\;&\left\|\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(\psi^{r_ln_k}_l(i))}^{(l)}x_{\varphi^{r_ln_k}_l(\psi^{r_ln_k}_l(i))}e_{\psi^{r_ln_k}_l(i)}\right\|\nonumber\\
&\;&=\left\|\prod\limits_{v=1}^{r_ln_k} b_{\psi^v_l(i)}^{(l)}x_ie_{\psi^{r_ln_k}_l(i)}\right\|< \frac{1}{2k}.
\end{eqnarray}
For each $i\in I_k$ and $1\leq s < l \leq N,$
\begin{eqnarray}\label{a13}
&\;&\left\|\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i)))}^{(l)}x_{\varphi^{r_ln_k}_l(\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i)))}e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|\nonumber\\
&\;&=\left\|\prod\limits_{v=1}^{r_ln_k}b_{\psi^v_l(\varphi_s^{r_sn_k}(i))}^{(l)}x_{\varphi^{r_sn_k}_s(i)}e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|
< \frac{1}{2k}
\end{eqnarray}
and
\begin{eqnarray}\label{a14}
&\;&\left\|\prod\limits_{v=0}^{r_sn_k-1}b_{\varphi^v_s(\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i)))}^{(s)}x_{\varphi_s^{r_sn_k}(\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i)))}e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|\nonumber\\
&\;&=\left\|\prod\limits_{v=1}^{r_sn_k}b_{\psi^v_s(\varphi_l^{r_ln_k}(i))}^{(s)}x_{\varphi^{r_ln_k}_l(i)}e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|
< \frac{1}{2k}.
\end{eqnarray}
By the second inequality in \eqref{a5},
\begin{eqnarray}\label{a15}
0 < \frac{1}{|x_i|} \leq 2 \;\; \mbox{ for } i\in I_k.
\end{eqnarray}
Now by \eqref{a12} and \eqref{a15} we get that for each $i\in I_k$ and $1\leq l \leq N,$
\begin{eqnarray*}
\left\| \left(\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)e_{\psi^{r_ln_k}_l(i)}\right\|
&=&\left\|\frac{1}{x_i} \left(\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)x_{i}e_{\psi^{r_ln_k}_l(i)}\right\|\\
&\leq& 2 \left\| \left(\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)x_{i}e_{\psi^{r_ln_k}_l(i)}\right\|\\
&<& \frac{1}{k}.
\end{eqnarray*}
Similarly, \eqref{a9}, \eqref{a13} and \eqref{a14} give that for each $i\in I_k$ and $1\leq s < l \leq N,$
\begin{eqnarray*}
&\;& \left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}\right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|\nonumber\\
&\;& =\left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}x_{\varphi^{r_sn_k}_s(i)}\right)^{-1} \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}x_{\varphi^{r_sn_k}_s(i)} e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|\nonumber\\
&\;&\leq 2 \left\|\prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}x_{\varphi^{r_sn_k}_s(i)} e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|< \frac{1}{k}
\end{eqnarray*}
and
\begin{eqnarray*}
&\;&\left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l (i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|\\
&\;& = \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l (i)} ^{(l)}x_{\varphi^{r_ln_k}_l(i)}\right)^{-1}\left(\prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) x_{\varphi^{r_ln_k}_l(i)} e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|\\
&\;&<\frac{1}{k}.
\end{eqnarray*}
Taking $k=1, 2, 3, \ldots$ in the above argument, we can define inductively an increasing sequence $(n_k)_{k\geq 1}$ of positive integers by letting $n_k$ be a positive integer satisfying \eqref{a1} and \eqref{a2} for $N_0 = n_{k-1}$ (where we set $N_0 = 0$ when $k = 1$). Since for any fixed $i\in I,$ there exists an integer $m_0 \in \mathbb{N}$ such that $i\in I_k$ for all $k \geq m_0,$ the sequence
$(n_k)_{k\geq 1}$ satisfies \textit{(H1)} and \textit{(H2)}.
$(2)\Rightarrow (3).$ Suppose (2) holds and let $(n_k)_{k \geq 1}$ be an increasing sequence of positive integers satisfying \textit{(H1)} and \textit{(H2)}.
Set $X_0 = X_1 = \cdots = X_N = \mbox{span} \{ e_i : i\in I\},$ which are dense in $X.$ For each $1 \leq l \leq N$ and integer $n\geq 1,$ we consider the linear mapping $S_{l,n} : X_l \rightarrow X$ given by
$$ S_{l, n}(e_i) = \left(\prod \limits_{v=0}^{r_ln-1} b_{\varphi^{v}_l(i)}^{(l)}\right)^{-1}e_{\varphi^{r_ln}_l(i)}\;\;\;\;\;(i\in I).$$
Since
$T_l^{r_ln} e_i = \left(\prod \limits_{v=1}^{r_ln} b_{\psi^{v}_l(i)}^{(l)}\right)e_{\psi^{r_ln}_l(i)}\;(n\geq 1),$
we have $T_l^{r_ln} S_{l,n} (e_i) = e_i$ for $i\in I$ and $n\geq 1.$ \\
By \textit{(H1)}, for any $i \in I$ and $1 \leq l \leq N,$
$$\lim \limits_{k\rightarrow \infty}S_{l, n_k}(e_i) = \lim \limits_{k\rightarrow \infty}\left(\prod \limits_{v=0}^{r_ln_k-1} b_{\varphi^{v}_l(i)}^{(l)}\right)^{-1}e_{\varphi^{r_ln_k}_l(i)} = 0$$
and
$$\lim \limits_{k\rightarrow \infty}T_l^{r_ln_k} e_i = \lim \limits_{k\rightarrow \infty}\left(\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)e_{\psi^{r_ln_k}_l(i)} = 0.$$
An easy calculation gives that for any $i \in I$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
T_l^{r_ln_k}S_{s, n_k}(e_i)=\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))}^{(l)} \right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))},\\
T_s^{r_sn_k}S_{l, n_k}(e_i)= \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))},\\
\end{array}\right.
\end{eqnarray*}
therefore by \textit{(H2)}, $T_l^{r_ln_k}S_{s, n_k}(e_i)$ and $T_s^{r_sn_k}S_{l, n_k}(e_i)$ both tend to $0$ for any $i\in I.$ Then we conclude by using linearity.
$(3)\Rightarrow (1).$ This implication is obvious.
\end{proof}
\begin{remark}
For each $1 \leq l \leq N$, the fact that the mapping $\varphi_{l} : I\rightarrow I$ has no periodic points does not imply that $(\varphi_{s}^{r_sn})_n$ is run-away with $(\varphi_{l}^{r_ln})_n$ for $1 \leq s < l \leq N.$ For example, let $I = \mathbb{Z},$ and define $\varphi_1(i) = i+2$ and $\varphi_2(i) = i+1$ for $i\in \mathbb{Z}.$ Let $r_1 = 1, r_2 = 2.$ Clearly, for $l =1,2,$ $\varphi_{l}$ has no periodic points, but for any integer $n\geq 1,$ $\varphi_1^{r_1 n} = \varphi_2^{r_2 n}.$
\end{remark}
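The coincidence $\varphi_1^{r_1 n} = \varphi_2^{r_2 n}$ in the remark above can be confirmed numerically; the sketch below (illustrative only) simply compares the iterates on a window of integers.

```python
def power(phi, n, i):
    # n-fold iterate phi^n(i).
    for _ in range(n):
        i = phi(i)
    return i

# The maps and exponents from the remark.
phi1 = lambda i: i + 2
phi2 = lambda i: i + 1
r1, r2 = 1, 2

# phi1^{r1 n}(i) = i + 2n = phi2^{r2 n}(i), so the iterated images can
# never be disjoint and the run-away-with condition fails.
for n in range(1, 20):
    for i in range(-10, 11):
        assert power(phi1, r1 * n, i) == power(phi2, r2 * n, i)
```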
Now we illustrate Theorem \ref{main 1} with the following example. The definitions about shifts on weighted $L^p$ spaces of directed trees are borrowed from Mart\'{\i}nez-Avenda\~{n}o \cite{MRA}.
\begin{example}
Let $T = (V,E)$ be an infinite unrooted directed tree such that $T$ has no vertices of outdegree larger than $1.$ Let $1\leq p < \infty,$ and let $\lambda = \{\lambda_v\}_{v\in V}$ be a sequence of positive numbers such that $\sup\limits_{u\in V,v=chi(u)}\frac{\lambda_{v}}{\lambda_u} < \infty$ and $\sup\limits_{u\in V,v=chi^2(u)}\frac{\lambda_{v}}{\lambda_u} < \infty$. We denote by $L^p(T, \lambda)$ the space of complex-valued functions $f: V\rightarrow \mathbb{C}$ such that
$$\sum\limits_{v\in V}|f(v)|^p\lambda_{v} < \infty.$$
$L^p(T, \lambda)$ is a Banach space with the norm $$\|f\|_p = \left(\sum\limits_{v\in V}|f(v)|^p\lambda_{v} \right)^{\frac{1}{p}}.$$
Now we define the shifts $S_1$ and $S_2$ on $L^p(T, \lambda)$ by
$$(S_1f)(v) = f(par(v))\;\; \mbox { for } f\in L^p(T, \lambda)$$
and
$$(S_2f)(v) = f(par^2(v))\;\; \mbox { for } f\in L^p(T, \lambda).$$
The weight assumptions ensure that $S_1$ and $S_2$ are bounded on $L^p(T, \lambda).$
Let $f$ be any vector in $L^p(T, \lambda).$ If we identify $f$ with $(f(v))_{v\in V},$ then $L^p(T, \lambda)$ can be seen as a Banach sequence space over $I : = V.$ Let $u\in V,$ and denote by $\chi_u$ the characteristic function of the vertex $u.$ Define $e_u: = \chi_u.$ Clearly, $(e_u)_{u\in V}$ is an OP-basis of $L^p(T, \lambda).$
In this interpretation, for $l=1,2,$ $S_l$ is a weighted pseudo-shift $T_{b^{(l)}, \varphi_l}$ with
\begin{eqnarray*}
b^{(l)}_v = 1 \mbox{ and } \varphi_l(v) = par^l(v)\;\;\;( v\in V).
\end{eqnarray*}
Thus for $l=1,2,$ the inverse $\psi_l = \varphi_l^{-1} : par^l(V)\rightarrow V$ is given by
\begin{eqnarray*}
\psi_l(v) = chi^l(v)\;\;\;\mbox{ for } v\in par^l(V).
\end{eqnarray*}
If we set
\begin{eqnarray*}
b^{(l)}_{chi(v)} = 0 \mbox{ and }\lambda_{chi(v)}=0 \;\;\mbox{ if } v\in V\backslash par(V),
\end{eqnarray*}
then by Theorem \ref{main 1} (applied with $r_1 = 1$ and $r_2 = 2$), $S_1, S_2^2$ are densely d-hypercyclic if and only if there exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that for every $v\in V$ and $l=1,2,$
\begin{eqnarray*}
\lim \limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{t = 0}^{l n_k -1} b_{\varphi^t_{l}(v)} ^{(l)}\right)^{-1} e_{\varphi^{l n_k}_{l}(v)}\right\|=\lim \limits_{k\rightarrow \infty}\left\|\chi_{par^{l^2n_k}(v)} \right\|=\lim \limits_{k\rightarrow \infty} \left(\lambda_{par^{l^2n_k}(v)}\right)^{\frac{1}{p}}=0,
\end{eqnarray*}
\begin{eqnarray*}
\lim \limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{t = 1}^{l n_k} b_{\psi^t_{l}(v)} ^{(l)}\right) e_{\psi^{l n_k}_{l}(v)}\right\|=\lim \limits_{k\rightarrow \infty}\left\|\chi_{chi^{l^2n_k}(v)} \right\|=\lim \limits_{k\rightarrow \infty} \left(\lambda_{chi^{l^2n_k}(v)}\right)^{\frac{1}{p}}=0
\end{eqnarray*}
and for $s=1, l=2,$
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{t = 0}^{s n_k -1} b_{\varphi^t_{s}(v)} ^{(s)}\right)^{-1}
\left( \prod\limits_{t = 1}^{l n_k} b_{\psi^t_{l} (\varphi^{s n_k}_{s}(v))}^{(l)} \right) e_{\psi_{l}^{ln_k}(\varphi_{s}^{s n_k}(v))} \right\|\\
&\;& = \lim \limits_{k\rightarrow \infty}\left\|\chi_{chi^{(l^2-s^2)n_k}(v)} \right\|
=\lim \limits_{k\rightarrow \infty} \left(\lambda_{chi^{3n_k}(v)}\right)^{\frac{1}{p}}=0,
\end{eqnarray*}
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty}\left\|\left( \prod\limits_{t = 0}^{l n_k -1} b_{\varphi^t_{l}(v)} ^{(l)}\right)^{-1}
\left( \prod\limits_{t = 1}^{s n_k} b_{\psi^t_{s} (\varphi^{l n_k}_{l}(v))} ^{(s)}\right) e_{\psi_{s}^{sn_k}(\varphi_{l}^{l n_k}(v))}\right\|\\
&\;& = \lim \limits_{k\rightarrow \infty}\left\|\chi_{par^{(l^2-s^2)n_k}(v)} \right\|
=\lim \limits_{k\rightarrow \infty} \left(\lambda_{par^{3n_k}(v)}\right)^{\frac{1}{p}}=0.
\end{eqnarray*}
Let $s_1, s_2\in \mathbb{R}$ with $1 < s_1 \leq s_2,$ and select an arbitrary fixed vertex, calling it $\omega^*.$ For each $u\in V,$ set
\begin{eqnarray*}
\lambda_u = \left\{\begin{array}{ll}
\frac{1}{s_1^d} \;\; \mbox{ if } \omega^* \mbox{ is a descendant of } u,\\
\\
\frac{1}{s_2^d} \;\; \mbox{ if } \omega^* \mbox{ is an ancestor of } u,\\
\end{array}\right.
\end{eqnarray*}
where $d = \mbox{ dist } (u, \omega^*).$\\
Fix $u\in V$ with $d = \mbox{ dist } (u, \omega^*)$ and let $n > d.$ We then have that
\begin{eqnarray*}
\lambda_{par^{n}(u)}= (\frac{1}{s_1})^{n\pm d},
\end{eqnarray*}
where the plus sign corresponds to the case where $\omega^*$ is a descendant of $u$ and the minus sign corresponds to the case where $\omega^*$ is an ancestor of $u.$
And
\begin{eqnarray*}
\lambda_{chi^{n}(u)} = \left\{\begin{array}{ll}
\frac{1}{s_2^{n - d}} \;\; \mbox{ if } \omega^* \mbox{ is a descendant of } u \mbox{ and } u\in par^n(V),\\
\\
\frac{1}{s_2^{n + d}} \;\; \mbox{ if } \omega^* \mbox{ is an ancestor of } u \mbox{ and } u\in par^n(V),\\
\\
0 \;\; \;\;\;\;\;\;\mbox{ if } u\in V\backslash par^n(V).
\end{array}\right.
\end{eqnarray*}
Therefore $\lim \limits_{n\rightarrow \infty}\lambda_{par^{l^2n}(u)} = \lim \limits_{n\rightarrow \infty}\lambda_{chi^{l^2n}(u)}=0$ for $l=1,2$ and $\lim \limits_{n\rightarrow \infty}\lambda_{par^{3n}(u)} = \lim \limits_{n\rightarrow \infty}\lambda_{chi^{3n}(u)}=0.$
\end{example}
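A numerical sanity check of the limits above, under the simplifying assumption (not from the paper) that the tree is the path $\mathbb{Z}$ with $par(v)=v+1$, $chi(v)=v-1$ and $\omega^* = 0$, so that $\mbox{dist}(u, \omega^*)=|u|$, $\lambda_u = s_1^{-|u|}$ when $\omega^*$ is a descendant of $u$ ($u \geq 0$), and $\lambda_u = s_2^{-|u|}$ when it is an ancestor ($u < 0$):

```python
# Illustrative sketch only: the path Z with par(v) = v + 1, chi(v) = v - 1
# and omega* = 0, so dist(u, omega*) = |u|.
s1, s2 = 2.0, 3.0                 # any reals with 1 < s1 <= s2

def lam(u):
    # omega* = 0 is a descendant of u when u >= 0, an ancestor when u < 0.
    return s1 ** (-abs(u)) if u >= 0 else s2 ** (-abs(u))

u = -4                            # fixed vertex, d = dist(u, omega*) = 4
# On the path, lambda_{par^n(u)} = lam(u + n), lambda_{chi^n(u)} = lam(u - n).
pars = [lam(u + n) for n in (10, 20, 40)]
chis = [lam(u - n) for n in (10, 20, 40)]
assert pars[0] > pars[1] > pars[2] and pars[2] < 1e-9   # -> 0 along parents
assert chis[0] > chis[1] > chis[2] and chis[2] < 1e-9   # -> 0 along children
```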
\section{Disjoint supercyclic powers of weighted pseudo-shifts}
In this section, we will extend the characterizations in Theorem \ref{main 1} from d-hypercyclicity to d-supercyclicity and will present some corollaries.
\begin{theorem}\label{Pd-superbi1}
Let $X$ be a Banach sequence space over $I$, in which $(e_i)_{i\in I}$ is an OP-basis, let $N\geq 2,$ and let integers $1\leq r_1 < r_2 < \cdots < r_{N}$ be given. For each $1 \leq l \leq N$, let $T_{l} = T_{b^{(l)}, \varphi_l} : X \rightarrow X$ be a weighted pseudo-shift with weight sequence $b^{(l)} = (b_i^{(l)})_{i\in I}$ and injective mapping $\varphi_{l}.$ Assume that for each $1 \leq l \leq N$ the sequence $(\varphi^n_l)_n$ is run-away and that for $1 \leq s < l \leq N$ the sequence $(\varphi_{s}^{r_sn})_n$ is run-away with $(\varphi_{l}^{r_ln})_n$. Then the following statements are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ have a dense set of d-supercyclic vectors.
\item There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that:
(H1) For any $i, j\in I$ and $1 \leq l, s\leq N,$
\begin{eqnarray*}
\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1} e_{\varphi_l^{r_l n_k}(i)} \right\|
\left\|\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s(j)} ^{(s)}\right) e_{\psi^{r_s n_k}_s(j)}\right\|=0.
\end{eqnarray*}
(H2) For every $i\in I$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
(i)\;\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))}^{(l)} \right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))} \right\|= 0,\\
\\
(ii)\;\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|=0.\\
\end{array}\right.
\end{eqnarray*}
\item $T_1^{r_1 }, T_{2} ^{r_2}, \ldots,T_{N }^{r_N }$ satisfy the d-Supercyclicity Criterion.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)\Rightarrow (2).$
Fix an enumeration
$$I : =\{i_1, i_2, \ldots, i_n,\ldots \}$$ and for each $k\in \mathbb{N}$ with $k\geq 1$ set $I_k: = \{i_1, i_2, \ldots, i_k\}.$
To prove $(2)$, it is enough to verify that for any positive integer $k\geq 1$ and any $N_0 \in \mathbb{N},$ there is an integer $n_k > N_0$ such that:
For any $i, j\in I_k$ and $1 \leq l, s\leq N,$
\begin{eqnarray*}
\left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}_l(i)} \right\|
\left\|\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s(j)} ^{(s)}\right) e_{\psi^{r_s n_k}_s(j)}\right\|< \frac{1}{k}.
\end{eqnarray*}
For every $i\in I_k$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
(i)\;\left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}\right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\| < \frac{1}{k}, \\
\\
(ii)\; \; \left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l (i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\| < \frac{1}{k}. \\
\end{array}\right.
\end{eqnarray*}
Let integers $k\geq 1$ and $N_0 \in \mathbb{N}$ be given. By assumption there is some $\delta_k > 0$ such that for any $x = (x_i)_{i\in I} \in X,$
\begin{eqnarray*}
||x_i e_i|| < \frac{1}{2k} \;\; \mbox{ for } i\in I, \mbox{ if } ||x|| < \delta_k.
\end{eqnarray*}
Let $\widetilde{N_0}\in \mathbb{N}$ be an integer such that
\begin{eqnarray*}
\left\{\begin{array}{ll}
(i)\;\varphi^{r_ln}_l(I_k)\cap I_k = \emptyset \; (1\leq l \leq N),\\
\\
(ii)\;\psi_{l}^{r_ln}(\varphi_{s}^{r_s n}(I_k)\cap \varphi^{r_ln}_l(I))\cap I_k = \emptyset \; (1\leq s < l \leq N)
\end{array}\right.
\end{eqnarray*}
for all $n\geq \widetilde{N_0}.$
Since the d-supercyclic vectors are dense in $X,$ there exists a d-supercyclic vector $x = (x_i)_{i\in I}\in X$ such that
\begin{eqnarray*}
\left\|x-\sum\limits_{i\in I_k}e_i\right\| < \delta_k.
\end{eqnarray*}
Now, let $A = \{\alpha (T_1^{r_1n}x, T_2^{r_2n}x,\ldots, T_N^{r_Nn}x) : \alpha \in \mathbb{C}, n\in \mathbb{N}\}.$ Obviously, $A$ is dense in $X^N.$ For every $p\in \mathbb{N},$ let $B_p = \{\alpha (T_1^{r_1n}x, T_2^{r_2n}x,\ldots, T_N^{r_Nn}x) : \alpha \in \mathbb{C}, n\in \mathbb{N}, n \leq p\}.$ Since $X$ is an infinite dimensional Banach space, $A\setminus B_p$ remains dense in $X^N$ for any $p\in \mathbb{N}.$ Thus we can find a complex number $\alpha \neq 0$
and an integer $n_k > \mbox{ max } \{N_0, \widetilde{N_0}\}$ such that for each $1\leq l \leq N,$
\begin{eqnarray*}
\left\|\alpha y^{(l)} -\sum_{i\in I_k}e_i\right\| < \delta_k,
\end{eqnarray*}
where $y^{(l)} := T_l^{r_ln_k}x = (\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)})_{i\in I}=(y^{(l)}_i)_{i\in I}.$\\
By the continuous inclusion of $X$ into $\mathbb{K}^I,$ we can in addition assume that
\begin{eqnarray*}
\left\{\begin{array}{ll}
(i) \; \sup\limits_{i\in I_k}|x_i-1|\leq \frac{1}{2},\\
\;\\
(ii)\;\sup\limits_{i\in I_k}|\alpha y_i^{(l)}-1|\leq \frac{1}{2} \;\;\mbox{ for }1\leq l \leq N.\\
\end{array}\right.
\end{eqnarray*}
It follows that for any $i\in I_k,$
\begin{eqnarray}\label{3.8}
\left\{\begin{array}{ll}
(i)\; 0 < \frac{1}{|x_i|} \leq 2, \\
\\
(ii)\;x_{\varphi^{r_ln_k}_l(i)} \neq 0\;\; (1\leq l \leq N),\\
\\
(iii)\;0<\left|\left(\alpha\prod\limits_{v=0}^{r_ln_k-1}b_{\varphi^v_l(i)}^{(l)}x_{\varphi^{r_ln_k}_l(i)}\right)^{-1}\right| \leq 2\;\; (1\leq l \leq N).\\
\end{array}\right.
\end{eqnarray}
Repeating an argument similar to that in the proof of Theorem \ref{main 1}, one obtains the following:
For each $i\in I_k$ and $1\leq l \leq N,$
\begin{eqnarray}\label{3.9}
\left\| \left(\alpha\prod \limits_{v=0}^{r_ln_k-1} b_{\varphi^{v}_l(i)}^{(l)}\right)^{-1}e_{\varphi^{r_ln_k}_l(i)}\right\|
< \frac{1}{k}
\end{eqnarray}
and
\begin{eqnarray}\label{3.10}
\left\| \left(\alpha\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)e_{\psi^{r_ln_k}_l(i)}\right\|<\frac{1}{k}.
\end{eqnarray}
For each $i\in I_k$ and $1\leq s < l \leq N,$
\begin{eqnarray}\label{3.11}
\left\|\alpha\prod\limits_{v=1}^{r_ln_k}b_{\psi^v_l(\varphi_s^{r_sn_k}(i))}^{(l)}x_{\varphi^{r_sn_k}_s(i)}e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|
< \frac{1}{2k}
\end{eqnarray}
and
\begin{eqnarray}\label{3.12}
\left\|\alpha\prod\limits_{v=1}^{r_sn_k}b_{\psi^v_s(\varphi_l^{r_ln_k}(i))}^{(s)}x_{\varphi^{r_ln_k}_l(i)}e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|
< \frac{1}{2k}.
\end{eqnarray}
Hence by \eqref{3.9} and \eqref{3.10} for any $i, j\in I_k$ and $1 \leq l, s\leq N,$
\begin{eqnarray*}
&\;&\left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}_l(i)} \right\|
\left\|\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s(j)} ^{(s)}\right) e_{\psi^{r_s n_k}_s(j)}\right\|\\
&\;& =\left\|\left(\alpha \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}_l(i)} \right\|
\left\|\left(\alpha \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s(j)} ^{(s)}\right) e_{\psi^{r_s n_k}_s(j)}\right\|
<\frac{1}{k^2} \leq \frac{1}{k}.
\end{eqnarray*}
By \eqref{3.8}, \eqref{3.11} and \eqref{3.12} for each $i\in I_k$ and $1\leq s < l \leq N,$
\begin{eqnarray}
&\;& \left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}\right) e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|\nonumber\\
&\;& =\left\|\left(\alpha \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v_s (i)} ^{(s)}x_{\varphi^{r_s n_k}_s(i)}\right)^{-1} \alpha \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}x_{\varphi^{r_sn_k}_s(i)} e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|\nonumber\\
&\;&\leq 2 \left\|\alpha\prod\limits_{v = 1}^{r_l n_k} b_{\psi^v_l (\varphi^{r_s n_k}_s(i))} ^{(l)}x_{\varphi^{r_sn_k}_s(i)} e_{\psi_{l}^{r_ln_k}(\varphi_{s}^{r_s n_k}(i))}\right\|< \frac{1}{k}
\end{eqnarray}
and
\begin{eqnarray*}
&\;&\left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l (i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) e_{ \psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|\\
&\;& = \left\|\left(\alpha \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v_l (i)} ^{(l)}x_{\varphi^{r_ln_k}_l(i)}\right)^{-1}\left(\alpha \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v_s (\varphi^{r_l n_k}_l(i))} ^{(s)}\right) x_{\varphi^{r_ln_k}_l(i)} e_{\psi_{s}^{r_sn_k}(\varphi_{l}^{r_l n_k}(i))}\right\|\\
&\;& <\frac{1}{k}.
\end{eqnarray*}
This completes the proof of $(1)\Rightarrow (2).$
$(2)\Rightarrow (3).$ Suppose (2) holds and let $(n_k)_{k \geq 1}$ be an increasing sequence of positive integers satisfying \textit{(H1)} and \textit{(H2)}.
We show that $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ satisfy the d-Supercyclicity Criterion. Set $X_0 = X_1 = \cdots = X_N = span \{ e_i : i\in I\}.$ For each $1 \leq l \leq N$ and $n\in \mathbb{N}$ with $n\geq 1,$ we consider the linear mappings $S_{l,n} : X_l \rightarrow X$ given by
$$ S_{l, n}(e_i) = \left(\prod \limits_{v=0}^{r_ln-1} b_{\varphi^{v}_l(i)}^{(l)}\right)^{-1}e_{\varphi^{r_ln}_l(i)}\;\;\;\;\;(i\in I).$$
The same argument as used in the proof of $(2)\Rightarrow (3)$ in Theorem \ref{main 1} yields that $(i)$ of Definition \ref{d-supercriterion} is satisfied. So we just need to check that $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ satisfy condition $(ii)$ in Definition \ref{d-supercriterion} with respect to $(n_k)_{k \geq 1}$. Let $y_0, y_1, \ldots, y_N \in span \{ e_i : i\in I\}.$ Then there exists a $k_0 \in \mathbb{N}$ such that
\begin{eqnarray*}
y_i = \sum\limits_{j\in I_{k_0}} y_{i,j}e_j \;\; (0 \leq i \leq N).
\end{eqnarray*}
Set $C : = \max \{|y_{i,j}| : 0 \leq i \leq N, j\in I_{k_0}\}.$
By \textit{(H1)}, for any $i,j \in I$ and $1 \leq l, s \leq N,$
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty}\|T_l^{r_l n_k} e_i\|\|S_{s, n_k}e_j\|\\
&\;&= \lim \limits_{k\rightarrow \infty}\left\|\left(\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(i)}^{(l)}\right)e_{\psi^{r_ln_k}_l(i)}\right\|
\left\|\left(\prod \limits_{v=0}^{r_s n_k-1} b_{\varphi^{v}_s(j)}^{(s)}\right)^{-1}e_{\varphi^{r_s n_k}_s(j)}\right\|=0.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
&\;&\left\|T_l^{r_l n_k} y_0\right\| \left\|\sum \limits_{s=1}^{N}S_{s, n_k}y_s\right\|\\
&=&\left\|\sum\limits_{j\in I_{k_0}}y_{0,j}\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(j)}^{(l)}e_{\psi^{r_ln_k}_l(j)}\right\|
\left\|\sum \limits_{s=1}^{N}\sum\limits_{j\in I_{k_0}}y_{s,j}\left(\prod \limits_{v=0}^{r_s n_k-1} b_{\varphi^{v}_s(j)}^{(s)}\right)^{-1}e_{\varphi^{r_s n_k}_s(j)}\right\|\\
&\leq&C^2 \left(\sum\limits_{j\in I_{k_0}}\left\|\prod \limits_{v=1}^{r_ln_k} b_{\psi^{v}_l(j)}^{(l)}e_{\psi^{r_ln_k}_l(j)}\right\|\right) \left(\sum \limits_{s=1}^{N}\sum\limits_{j\in I_{k_0}}\left\|\left(\prod \limits_{v=0}^{r_s n_k-1} b_{\varphi^{v}_s(j)}^{(s)}\right)^{-1}e_{\varphi^{r_s n_k}_s(j)}\right\|\right)\\
&\rightarrow & 0 \;\;\mbox{ as } k\rightarrow\infty.
\end{eqnarray*}
$(3)\Rightarrow (1).$ This implication is obvious.
\end{proof}
Next, we consider the special case $\varphi_1 = \varphi_2 = \cdots = \varphi_N$ in Theorem \ref{Pd-superbi1}.
\begin{corollary}\label{Pd-superbi}
Let $X$ be a Banach sequence space over $I$, in which $(e_i)_{i\in I}$ is an OP-basis. Let $N\geq 2$ and, for each $1 \leq l \leq N$, let $T_{l} = T_{b^{(l)}, \varphi} : X \rightarrow X$ be a weighted pseudo-shift with weight sequence $b^{(l)} = (b_i^{(l)})_{i\in I}.$ Then for any integers $1\leq r_1 < r_2 < \cdots < r_{N},$ the following are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ have a dense set of d-supercyclic vectors.
\item $(\alpha)$ The mapping $\varphi : I\rightarrow I$ has no periodic points.
\quad \; $(\beta)$ There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that:
(H1) For any $i, j\in I$ and $1 \leq l, s\leq N$ we have
\begin{eqnarray*}
\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}(i)} \right\|
\left\|\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v(j)} ^{(s)}\right) e_{\psi^{r_s n_k}(j)}\right\|=0.
\end{eqnarray*}
(H2) For every $i\in I$ and any $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
(i)\;\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v (\varphi^{r_s n_k}(i))}^{(l)} \right) e_{\psi^{(r_l-r_s) n_k}(i)} \right\|= 0,\\
\\
(ii)\;\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v (\varphi^{r_l n_k}(i))} ^{(s)}\right) e_{\varphi^{(r_l-r_s) n_k}(i)}\right\|=0.\\
\end{array}\right.
\end{eqnarray*}
\item $T_1^{r_1 }, T_{2} ^{r_2}, \ldots,T_{N }^{r_N }$ satisfy the d-Supercyclicity Criterion.
\end{enumerate}
\end{corollary}
\begin{proof}
By Theorem \ref{Pd-superbi1}, we just need to prove that $(1)$ implies $(\alpha).$ Suppose on the contrary that $\varphi$ has a periodic point; then there exist an $i\in I$ and an integer $M\geq 1$ such that $\varphi ^M (i) = i.$ For each $1 \leq l \leq N$ and any $x\in X,$ the entry of $T_l^{r_ln} x$ at position $i$ is $\left(\prod \limits_{v=0}^{r_ln-1}b_{\varphi^v(i)}^{(l)}\right)x_{\varphi^{r_ln}(i)}.$ Since $\varphi ^M (i) = i,$ both $\left(b_{\varphi^v(i)}^{(l)}\right)_v$ and $(x_{\varphi^{r_ln}(i)})_n$ are periodic sequences. This implies that $\left\{\frac{\left(\prod \limits_{v=0}^{r_1n-1}b_{\varphi^v(i)}^{(1)}\right)x_{\varphi^{r_1n}(i)}}{\left(\prod \limits_{v=0}^{r_2n-1}b_{\varphi^v(i)}^{(2)}\right)x_{\varphi^{r_2n}(i)}}\right\}_{n\in\mathbb{N}}$ cannot be dense in $\mathbb{K},$ and it follows that the set $$\left\{\left(\alpha\prod \limits_{v=0}^{r_1n-1}b_{\varphi^v(i)}^{(1)} x_{\varphi^{r_1n}(i)}, \alpha \prod \limits_{v=0}^{r_2n-1}b_{\varphi^v(i)}^{(2)}x_{\varphi^{r_2n}(i)}\right) : \alpha \in \mathbb{C},n\in\mathbb{N}\right\}$$ cannot be dense in $\mathbb{K}^2.$ By the continuous inclusion of $X$ into $\mathbb{K}^I,$ the set $$\{\alpha\left(T_1^{r_1n} x, T_2^{r_2n} x, \ldots, T_N^{r_Nn} x \right): \alpha\in \mathbb{C}, n\in \mathbb{N}\}$$ cannot be dense in $X^N,$ which contradicts condition (1). Hence $\varphi$ has no periodic points.
\end{proof}
Suppose that the mapping $\varphi$ satisfies that each $i\in I$ lies outside $\varphi^n(I)$ for all sufficiently large $n$; this implies in particular that the sequence $(\varphi^n)_n$ is run-away. In this case, for every $i\in I,$ $e_{\psi^n(i)} = 0$ when $n$ is large enough, by the definition of $\psi^n.$ Thus $(H1)$ and $(i)$ of $(H2)$ in Corollary \ref{Pd-superbi} are automatically satisfied. Therefore the following result holds.
\begin{corollary}\label{uni}
Let $X$ be a Banach sequence space over $I$, in which $(e_i)_{i\in I}$ is an OP-basis. Let integers $1\leq r_1 < r_2 < \cdots < r_{N}$ be given. For each $1 \leq l \leq N$, let $T_{l} = T_{b^{(l)},\varphi} : X \rightarrow X$ be a weighted pseudo-shift with weight sequence $b^{(l)} = (b_i^{(l)})_{i\in I},$ such that each $i\in I$ lies outside $\varphi^n(I)$ for all sufficiently large $n$. Then the following assertions are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-supercyclic.
\item There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that for every $i\in I$ we have:
\begin{eqnarray*}
\lim \limits_{k\rightarrow \infty} \left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v (\varphi^{r_l n_k}(i))} ^{(s)}\right) e_{\varphi^{(r_l-r_s) n_k}(i)}\right\|= 0\;\;(1 \leq s < l \leq N).
\end{eqnarray*}
\item $T_1^{r_1 }, T_{2} ^{r_2}, \ldots, T_{N }^{r_N }$ satisfy the d-Supercyclicity Criterion.
\end{enumerate}
\end{corollary}
\begin{example}
Let $X = \ell^2(\mathbb{N}),$ and integers $1\leq r_1 < r_2 < \cdots < r_{N}\;(N\geq 2)$ be given. For each $1\leq l \leq N,$ let $(a_{l,n})_{n=1}^\infty$ be a bounded sequence of nonzero scalars and $T_l$ be the unilateral backward weighted shift on $X$ defined by $$T_l e_0 =0 \mbox{ and } T_l e_j = a_{l,j}e_{j-1} \;\;\mbox{ for }j\geq 1,$$
where $(e_j)_{j\in \mathbb{N}}$ is the canonical basis of $\ell^2(\mathbb{N}).$ Clearly, in this case, $X$ is a Banach sequence space over $I = \mathbb{N}$ with OP-basis $(e_j)_{j\in \mathbb{N}}.$ Each $T_l$ is the pseudo-shift $T_{b^{(l)},\varphi}$ with
$$b_i^{(l)} = a_{l, i+1} \mbox{ and } \varphi (i) = i+1 \mbox{ for any } i\in \mathbb{N}.$$
By Corollary \ref{uni}, $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-supercyclic if and only if
there exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that for every $i\in I$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
&\;& \lim \limits_{k\rightarrow \infty} \left\| \left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v (\varphi^{r_l n_k}(i))} ^{(s)}\right) e_{\varphi^{(r_l-r_s) n_k}(i)}\right\| \\
&\;&= \lim \limits_{k\rightarrow \infty}\left| \left( \prod\limits_{v = i+1}^{i+r_l n_k } a_{l,v}\right)^{-1}
\left(\prod\limits_{v = i+(r_l-r_s)n_k+1}^{i+r_l n_k} a_{s,v}\right) \right| =0.
\end{eqnarray*}
\end{example}
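To make the characterization above concrete, the displayed limit can be evaluated numerically for specific weight sequences. The following sketch is our own illustration (the constant weights are hypothetical, not part of the example): for $N=2$, $r_1=1$, $r_2=2$ and constant weights $a_{1,v}=a_1$, $a_{2,v}=a_2$, the displayed quantity equals, in logarithms, $n_k\log a_1 - 2n_k\log a_2$ at $i=0$, so it tends to $0$ exactly when $a_1 < a_2^2$.

```python
import math

def log_ratio(i, n_k, r_s, r_l, a_s, a_l):
    """Log of the displayed quantity for index i and iterate n_k:
    |(prod_{v=i+1}^{i+r_l n_k} a_l(v))^{-1} * prod_{v=i+(r_l-r_s)n_k+1}^{i+r_l n_k} a_s(v)|."""
    top = sum(math.log(a_s(v)) for v in range(i + (r_l - r_s) * n_k + 1, i + r_l * n_k + 1))
    bot = sum(math.log(a_l(v)) for v in range(i + 1, i + r_l * n_k + 1))
    return top - bot

# a_1 = 2 < a_2^2 = 9: the log tends to -infinity, i.e. the product tends to 0 ...
vanishing = [log_ratio(0, n_k, 1, 2, lambda v: 2.0, lambda v: 3.0) for n_k in (1, 5, 25)]
assert vanishing[0] > vanishing[1] > vanishing[2] and vanishing[2] < -30
# ... while a_1 = 3 > a_2^2 = 1.44 makes the quantity blow up instead.
blowup = log_ratio(0, 25, 1, 2, lambda v: 3.0, lambda v: 1.2)
assert blowup > 0
```

Working in logarithms avoids floating-point overflow and underflow of the long products.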
\begin{remark}
Recently, disjoint hypercyclic and disjoint supercyclic weighted translations on a locally compact group $G$ were studied in \cite{CC} and \cite{ZZ}. It is easy to see that when $G$ is discrete, these weighted translations are special cases of pseudo-shifts. Therefore, by Theorem \ref{d-hyper1} and Corollary \ref{Pd-superbi}, we can also obtain equivalent conditions for weighted translations on locally compact discrete groups to be disjoint hypercyclic and disjoint supercyclic.
\end{remark}
\section{Disjoint supercyclic operator weighted shifts on $\ell^2(\mathbb{Z,\mathcal{K}})$ }
The bilateral operator weighted shifts on the space $\ell^2(\mathbb{Z,\mathcal{K}})$ were studied by Hazarika and Arora in \cite{HA}. In \cite{WZ}, we proved that bilateral operator weighted shifts are special cases of weighted pseudo-shifts. In this section, we will use Corollary \ref{Pd-superbi} to characterize the d-supercyclicity of bilateral operator weighted shifts on $\ell^2(\mathbb{Z,\mathcal{K}})$. First, let us recall some terminology.
Let $\mathcal{K}$ be a separable complex Hilbert space with an orthonormal basis $\{f_k\}_{k=0}^{\infty}.$ Define a separable Hilbert space
$$\ell^2(\mathbb{Z,\mathcal{K}}):=\{x=(\ldots, x_{-1}, [x_0], x_1, \ldots): x_i\in \mathcal{K} \mbox{ and } \sum\limits_{i\in\mathbb{Z}}||x_i||^2<\infty\}$$
under the inner product $\langle x, y\rangle=\sum\limits_{i\in\mathbb{Z}}\langle x_i, y_i\rangle_{\mathcal{K}}$.
Let $\{A_{n}\}_{n=-\infty}^{\infty}$ be a uniformly bounded sequence of invertible positive diagonal operators on $\mathcal{K}$. The bilateral forward and backward operator weighted shifts on $\ell^2(\mathbb{Z,\mathcal{K}})$ are defined as follows:
$(i)$ The bilateral forward operator weighted shift $T$ on $\ell^2(\mathbb{Z,\mathcal{K}})$ is defined by
$$ T(\ldots, x_{-1}, [x_0], x_1, \ldots)=(\ldots, A_{-2} x_{-2}, [A_{-1} x_{-1}], A_0 x_{0}, \ldots).$$
Since $\{A_{n}\}_{n=-\infty}^{\infty}$ is uniformly bounded, $T$ is bounded and $||T||=\sup \limits_{i\in \mathbb{Z}}||A_i||<\infty.$ For $n > 0$,
$$T^{n}(\ldots, x_{-1}, [x_0], x_1, \ldots)=(\ldots, y_{-1}, [y_0], y_1,\ldots),$$ where $y_j=\prod \limits_{s=0}^{n-1}A_{j+s-n}x_{j-n}.$
$(ii)$ The bilateral backward operator weighted shift $T$ on $\ell^2(\mathbb{Z,\mathcal{K}})$ is defined by
$$ T(\ldots, x_{-1}, [x_0], x_1, \ldots)=(\ldots, A_{0} x_{0}, [A_{1} x_{1}], A_2 x_{2}, \ldots).$$
Then
$$T^{n}(\ldots, x_{-1}, [x_0], x_1, \ldots)=(\ldots, y_{-1}, [y_0], y_1, \ldots),$$
where $y_j=\prod \limits_{s=1}^{n}A_{j+s}x_{j+n}.$
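As a quick sanity check of the iterate formula for the forward shift in $(i)$ (our own illustration, using scalar weights and a finite truncation of the bilateral sequence; entries whose preimage falls outside the window are simply dropped):

```python
import math
import random

# Forward shift: (T x)_j = A_{j-1} x_{j-1}, hence
# (T^n x)_j = (prod_{s=0}^{n-1} A_{j+s-n}) x_{j-n}.
random.seed(1)
m, n = 20, 3
A = {j: random.uniform(0.5, 2.0) for j in range(-m, m + 1)}
x = {j: random.gauss(0.0, 1.0) for j in range(-m, m + 1)}

def T(y):
    # one application of the forward weighted shift on the truncated window
    return {j: A[j - 1] * y[j - 1] for j in y if j - 1 in y}

y = x
for _ in range(n):
    y = T(y)
for j in y:
    prod = math.prod(A[j + s - n] for s in range(n))
    assert math.isclose(y[j], prod * x[j - n])
```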
Since each $A_n$ is an invertible diagonal operator on $\mathcal{K},$ we conclude that
\begin{eqnarray*}
||A_n||=\sup \limits_{k}||A_n f_k||\mbox{ and } ||A_n^{-1}||=\sup \limits_{k}||A_n^{-1} f_k||.
\end{eqnarray*}
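These identities can be checked in a finite-dimensional analogue (our own illustration): for a positive invertible diagonal matrix, the operator norm is attained on the basis vectors.

```python
import numpy as np

# A positive invertible diagonal "operator" on a 4-dimensional space.
d = np.array([0.5, 2.0, 3.0, 1.5])
A = np.diag(d)
Ainv = np.linalg.inv(A)

# ||A|| = sup_k ||A f_k|| and ||A^{-1}|| = sup_k ||A^{-1} f_k||.
assert np.isclose(np.linalg.norm(A, 2),
                  max(np.linalg.norm(A @ f) for f in np.eye(4)))
assert np.isclose(np.linalg.norm(Ainv, 2),
                  max(np.linalg.norm(Ainv @ f) for f in np.eye(4)))
```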
Now we are ready to state the main result.
\begin{theorem}\label{forward}
Let $N \geq 2$ and, for each $1 \leq l \leq N,$ let $T_l$ be a bilateral forward operator weighted shift on $\ell^2(\mathbb{Z,\mathcal{K}})$ with weight sequence $\{A_n^{(l)}\}_{n=-\infty}^{\infty},$ where $\{A_n^{(l)}\}_ {n=-\infty}^{\infty}$ is a uniformly bounded sequence of positive invertible diagonal operators on $\mathcal{K}.$
Then for any integers $1\leq r_1 < r_2 < \cdots < r_{N},$ the following statements are equivalent:
\begin{enumerate}
\item $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-supercyclic.
\item There exists an increasing sequence $(n_k)_{k \geq 1}$ of positive integers such that:
For every $i_1, i_2\in \mathbb{N},$ $j_1, j_2\in \mathbb{Z}$ and
$1 \leq l, s \leq N,$
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=j_1-r_ln_k}^{j_1-1}(A_v^{(l)})^{-1}\right)f_{i_1}\right\| \left\|\left(\prod\limits_{v=j_2}^{j_2+r_sn_k-1}A_v^{(s)}\right) f_{i_2}\right\|=0.
\end{eqnarray*}
For every $i\in \mathbb{N},$ $j\in \mathbb{Z}$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
\left\{\begin{array}{ll}
\lim\limits_{k\rightarrow\infty}\left\|\left(\prod\limits_{v=j-r_sn_k}^{j-1}(A_v^{(s)})^{-1}\right)\left(\prod\limits_{v=j-r_sn_k}^{j+(r_l-r_s)n_k-1}A_v^{(l)}\right) f_{i}\right\|=0,\\
\\
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=j-r_ln_k}^{j-1}(A_v^{(l)})^{-1}\right)\left(\prod\limits_{v=j-r_ln_k}^{j-(r_l-r_s)n_k-1}A_v^{(s)} \right) f_{i}\right\|=0.
\end{array}\right.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\begin{proof}
In \cite{WZ}, we proved that $\ell^2(\mathbb{Z,\mathcal{K}})$ is a Hilbert sequence space over $I:= \mathbb{N} \times \mathbb{Z}$ with an OP-basis $(e_{i,j})_{(i,j)\in I},$ where for $(i, j)\in I,$ $e_{i, j}:= (\ldots, z_{-1}, [z_0], z_1, \ldots) \in \ell^2(\mathbb{Z,\mathcal{K}})$ is defined by
\begin{eqnarray*}
z_{k} =\left\{\begin{array}{ll}
f_{i} \;\;\;\; \mbox{if } k=j,\\
\\
0 \;\;\;\;\;\; \mbox{if } k \neq j.
\end{array}\right.
\end{eqnarray*}
By assumption, for each $1 \leq l \leq N,$ $\{A_n^{(l)}\}_{n\in \mathbb{Z}}$ is a uniformly bounded sequence of positive invertible diagonal operators on $\mathcal{K},$ so there exists a uniformly bounded family of positive sequences $\{(a_{i, n}^{(l)})_{i\in \mathbb{N}}\}_{n\in \mathbb{Z}}$
such that for each $n\in \mathbb{Z},$
$$A_{n}^{(l)}f_i = a_{i,n}^{(l)}f_i \;\mbox{ and }\; (A_{n}^{(l)})^{-1}f_i = (a_{i,n}^{(l)})^{-1}f_i \;\;\;\mbox{ for } i\in \mathbb{N}.$$
In this interpretation each $T_l$ is a weighted pseudo-shift $T_{b^{(l)}, \varphi}$ on $\ell^2(\mathbb{Z,\mathcal{K}})$ with
$$b_{i,j}^{(l)} = a_{i, j-1}^{(l)} \;\mbox{ and }\; \varphi(i, j) = (i, j-1) \;\;\;\;\mbox{ for } (i, j) \in I.$$
It follows from Corollary \ref{Pd-superbi} that $T_{1} ^{r_1}, T_{2} ^{r_2}, \ldots, T_{N} ^{r_N}$ are densely d-supercyclic if and only if there exists an increasing sequence $(n_k)$ of positive integers such that:
For any $(i_1, j_1), (i_2, j_2) \in I$ and $1 \leq l, s\leq N,$
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i_1, j_1)} ^{(l)}\right)^{-1} e_{\varphi^{r_l n_k}(i_1, j_1)} \right\|
\left\|\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v(i_2, j_2)} ^{(s)}\right) e_{\psi^{r_s n_k}(i_2, j_2)}\right\|\\
&\;&=\lim \limits_{k\rightarrow \infty}\left\| \left(\prod \limits_{v=0}^{r_ln_k-1}a_{(i_1, j_1-v-1)}^{(l)}\right)^{-1}e_{(i_1, j_1-r_ln_k)}\right\| \left\|\left(\prod \limits_{v=1}^{r_sn_k}a_{(i_2, j_2+v-1)}^{(s)}\right)e_ {(i_2, j_2+r_sn_k)}\right\| \\
&\;&=\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=j_1-r_ln_k}^{j_1-1}(A_v^{(l)})^{-1}\right)f_{i_1}\right\| \left\|\left(\prod\limits_{v=j_2}^{j_2+r_sn_k-1}A_v^{(s)}\right) f_{i_2}\right\|=0.
\end{eqnarray*}
For every $(i, j)\in I$ and $1 \leq s < l \leq N,$
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_s n_k -1} b_{\varphi^v(i)} ^{(s)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_l n_k} b_{\psi^v (\varphi^{r_s n_k}(i))}^{(l)} \right) e_{\psi^{(r_l-r_s) n_k}(i)} \right\|\\
&\;&=\lim\limits_{k\rightarrow\infty}\left\|\left(\prod\limits_{v=j-r_sn_k}^{j-1}(A_v^{(s)})^{-1}\right)\left(\prod\limits_{v=j-r_sn_k}^{j+(r_l-r_s)n_k-1}A_v^{(l)}\right) f_{i}\right\|=0
\end{eqnarray*}
and
\begin{eqnarray*}
&\;&\lim \limits_{k\rightarrow \infty} \left\|\left( \prod\limits_{v = 0}^{r_l n_k -1} b_{\varphi^v(i)} ^{(l)}\right)^{-1}
\left( \prod\limits_{v = 1}^{r_s n_k} b_{\psi^v (\varphi^{r_l n_k}(i))} ^{(s)}\right) e_{\varphi^{(r_l-r_s) n_k}(i)}\right\|\\
&\;&= \lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=j-r_ln_k}^{j-1}(A_v^{(l)})^{-1}\right)\left(\prod\limits_{v=j-r_ln_k}^{j-(r_l-r_s)n_k-1}A_v^{(s)} \right) f_{i}\right\|=0.
\end{eqnarray*}
This completes the proof.
\end{proof}
In \cite{FS}, Feldman considered the hypercyclicity of bilateral weighted shifts on $\ell^2(\mathbb{Z})$ that are invertible. Motivated by Feldman's work, in \cite{WZ} we showed that if the weight sequence $\{A_n\}_{n=-\infty}^{\infty}$ is assumed to satisfy that there exists some $m > 0$ such that $||A_n^{-1}||\leq m$ for all $n < 0$ (or for all $n > 0$), then the characterizing conditions for d-hypercyclicity simplify. Now, we establish the d-supercyclic conditions for this special case. The proof is similar to that in \cite{WZ}, so we omit it here.
\begin{corollary}\label{same}
Let $T$ be a bilateral forward operator weighted shift on $\ell^2(\mathbb{Z,\mathcal{K}})$ with weight sequence $\{ A_n\}_{n=-\infty}^{\infty},$ where $\{A_n\}_{n=-\infty}^{\infty}$ is a uniformly bounded sequence of positive invertible diagonal operators on $\mathcal{K},$ and there exists $m > 0$ such that $||A_n^{-1}||\leq m$ for all $n < 0$ (or for all $n > 0$).
Then for any integer $N\geq 2,$ the following are equivalent:
\begin{enumerate}
\item $T, T^{2}, \ldots, T^N$ are densely d-supercyclic.
\item There exists an increasing sequence $(n_k)_{k\geq 1}$ of positive integers such that:
For every $i_1, i_2 \in \mathbb{N}$,
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{ln_k}(A_{-v})^{-1}\right) f_{i_1}\right\| \left\|\left(\prod\limits_{v=1}^{N n_k}A_{v}\right) f_{i_2}\right\| = 0 \;\;\;(1 \leq l \leq N).
\end{eqnarray*}
and
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{N n_k}(A_{-v})^{-1}\right) f_{i_1}\right\| \left\|\left(\prod\limits_{v=1}^{l n_k}A_{v}\right) f_{i_2}\right\| = 0 \;\;\;(1 \leq l \leq N).
\end{eqnarray*}
For every $i\in \mathbb{N}$,
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{ln_k}(A_{-v})^{-1}\right) f_{i}\right\| = \lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{ln_k}A_{v}\right) f_{i}\right\| = 0 \;\;\;(1 \leq l \leq N-1).
\end{eqnarray*}
\end{enumerate}
\end{corollary}
\begin{example}
For each $s\in \mathbb{N},$ let
$$ \mathcal{C}_s = \{ 2^{2s+1}-(2s+1),\ldots, 2^{2s+1}-1\}, $$
$$\mathcal{D}_s =\{2^{2s+1} ,\ldots, 2^{2s+1}+(2s+1)-1\}$$
and
$$ \mathcal{C} = \bigcup\limits_{s=0}^\infty \mathcal{C}_s, \; \mathcal{D} = \bigcup\limits_{s=0}^\infty \mathcal{D}_s, \; \mathcal{E} =\bigcup\limits_{s=0}^\infty \{-2^{2s+1}\}.$$
Let $\{ A_n\}_{n=-\infty}^{\infty}$ be a uniformly bounded sequence of positive invertible diagonal operators on $\mathcal{K},$ defined as follows:
\begin{eqnarray*}
&\mbox{If }& n\in \mathcal{C} : A_n(f_k)=\left\{\begin{array}{ll} \frac{1}{2}f_k,\;\;\;\;0\leq k \leq n,\\
\\
f_k,\;\;\;\;\;k > n.
\end{array}\right.\\
\\
&\mbox{If }& n\in \mathcal{D} : A_n(f_k)=\left\{\begin{array}{ll} 2f_k,\;\;\;\;0\leq k \leq n,\\
\\
f_k,\;\;\;\;\;k > n.
\end{array}\right.\\
\\
&\mbox{If }& n\in \mathcal{E} : A_n(f_k)=\left\{\begin{array}{ll} 2f_k,\;\;\;\;0\leq k \leq -n,\\
\\
f_k,\;\;\;\;\;k > -n.
\end{array}\right.\\
\\
&\mbox{If }& n\in \mathbb{Z}\backslash (\mathcal{C}\cup \mathcal{D}\cup\mathcal{E}) :
A_n(f_k) = f_k\;\;\;\;\mbox{for all } k\geq 0.
\end{eqnarray*}
Let $T$ be the bilateral forward operator weighted shift on $\ell^2(\mathbb{Z,\mathcal{K}})$ with weight sequence $\{ A_n\}_{n=-\infty}^{\infty}.$ Then $T, T^{2}$ are d-supercyclic.
\end{example}
\begin{proof}
Notice that $\frac{1}{2} \leq \|A_n\| \leq 2$ for any $n\in \mathbb{Z}.$ We use Corollary \ref{same} to give the proof. Let $(n_k)_{k\geq 1} = (2^{2k+1})_{k\geq 1}.$ Then for each $i\in \mathbb{N},$
\begin{eqnarray*}
\left\|\left(\prod\limits_{v=1}^{n_k}(A_{-v})^{-1}\right) f_{i}\right\|\leq (\frac{1}{2})^{k-i+1}\rightarrow 0 \mbox{ as } k\rightarrow \infty
\end{eqnarray*}
and
\begin{eqnarray*}
\left\|\left(\prod\limits_{v=1}^{n_k}A_{v}\right) f_{i}\right\|\leq (\frac{1}{2})^{2k-i}\rightarrow 0 \mbox{ as } k\rightarrow \infty.
\end{eqnarray*}
Since $2^{2k+1}+(2k+1)-1< 2\cdot2^{2k+1}<2^{2k+3}-(2k+3),$
\begin{eqnarray}\label{4.1}
(\frac{1}{2})^i\leq \left\|\left(\prod\limits_{v=1}^{2n_k}A_{v}\right) f_{i}\right\|\leq 2^i,
\end{eqnarray}
hence for any $i_1, i_2 \in \mathbb{N},$
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{n_k}(A_{-v})^{-1}\right) f_{i_1}\right\| \left\|\left(\prod\limits_{v=1}^{2 n_k}A_{v}\right) f_{i_2}\right\|=0,
\end{eqnarray*}
also by the fact $2^{2k+1}< 2\cdot2^{2k+1}<2^{2k+3},$ it is easy to see that
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{2n_k}(A_{-v})^{-1}\right) f_{i_1}\right\| \left\|\left(\prod\limits_{v=1}^{n_k}A_{v}\right) f_{i_2}\right\| = 0
\end{eqnarray*}
and
\begin{eqnarray*}
\lim\limits_{k\rightarrow \infty}\left\|\left(\prod\limits_{v=1}^{2n_k}(A_{-v})^{-1}\right) f_{i_1}\right\| \left\|\left(\prod\limits_{v=1}^{2n_k}A_{v}\right) f_{i_2}\right\| = 0.
\end{eqnarray*}
It follows from Corollary \ref{same} that $T, T^2$ are d-supercyclic.
But it follows from \eqref{4.1} that $\left\|\left(\prod\limits_{v=1}^{2n_k}A_{v}\right) f_{i}\right\| \nrightarrow 0 \mbox{ as } k\rightarrow \infty.$ By Theorem \ref{d-hyper1}, $T, T^{2}$ are not d-hypercyclic.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
Time-series data is frequently used in various real-world systems, especially in multivariate scenarios such as server machines, water treatment plants, spacecraft, etc. Detecting anomalous events in such time-series data is crucial to managing those systems~\cite{su2019smdomnianomaly,mathur2016swatwadi,hundman2018smapmsl,6684530,blazquez2021review}. To solve this problem, several classical approaches have been developed in the past~\cite{fox1972outliers,zhang2005network,ma2003time,liu2008isolationforest}. However, due to their limited capacity, these approaches could not fully capture the complex, non-linear, and high-dimensional patterns in time-series data.
Recently, various unsupervised approaches employing deep learning architectures have been proposed. Such works include adopting architectures such as recurrent neural networks (RNN)~\cite{hundman2018smapmsl}, variational autoencoders (VAE)~\cite{xu2018unsupervised}, generative adversarial networks (GAN)~\cite{madgan}, graph neural networks (GNN)~\cite{gdn}, and combined architectures~\cite{zong2018dagmm,su2019smdomnianomaly,shen2020thoc,audibert2020usad,park2018lstmvae}. These deep learning approaches have brought significant performance improvements in time-series anomaly detection. However, most deep learning-based methods have several downsides. First, they require a long training time due to complex calculations, hindering applications where fast and efficient training is needed. Second, they need a significant amount of effort to tune model hyperparameters (e.g., the sliding window size) for a given dataset, which can be costly in real-world applications.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\columnwidth]{Figure1.pdf}
\caption{Implicit neural representation for multivariate time-series data.}
\label{Figure1INRinTimeseries}
\end{figure}
In our paper, we propose Implicit Neural Representation-based Anomaly Detection (INRAD), a novel approach that performs anomaly detection in multivariate time-series data by adopting implicit neural representation (INR). Figure~\ref{Figure1INRinTimeseries} illustrates the approach of INR in the context of multivariate time-series data. Unlike conventional approaches, where the values are passed as input to the model (usually processed via a sliding window), we directly input time to a multi-layer perceptron (MLP) model. The model then tries to represent the values at that time, which is done by minimizing a mean-squared loss between the model output and the ground-truth values. In other words, we train an MLP model to represent the time-series data itself. Based on our observation that the INR represents abnormal data relatively poorly compared to normal data, we use the representation error as the anomaly score for anomaly detection. Adopting such a simple architecture design using an MLP naturally results in a fast training time. Additionally, we propose a temporal encoding technique that improves efficiency for the model to represent time-series data, resulting in faster convergence time.
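To make this idea concrete, the following self-contained sketch is our own illustration, not the authors' implementation: time is mapped directly to the multivariate values, and the per-timestep representation error serves as the anomaly score. For determinism we stand in for the trained MLP with a closed-form least-squares fit over low-frequency Fourier features of time (a common INR parameterization); all data, including the injected anomaly, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, K = 200, 3, 4
t = np.linspace(-1.0, 1.0, T)                       # normalized time input
x = np.sin(8.0 * t)[:, None] + 0.01 * rng.standard_normal((T, D))
x[150] += 5.0                                       # injected point anomaly

# Smooth "network": phi(t) = [1, cos(pi k t), sin(pi k t)]_{k=1..K}
phi = np.column_stack(
    [np.ones_like(t)]
    + [f(np.pi * k * t) for k in range(1, K + 1) for f in (np.cos, np.sin)])
w, *_ = np.linalg.lstsq(phi, x, rcond=None)         # fit representation to all data
recon = phi @ w

score = ((recon - x) ** 2).mean(axis=1)             # per-timestep representation error
assert score.argmax() == 150                        # the anomaly is represented worst
```

The smooth representation fits the regular pattern but cannot absorb the single-point spike, so the representation error peaks exactly at the anomalous timestep, which is the behaviour exploited by the anomaly score.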
In summary, the main contributions of our work are:
\begin{itemize}
\item We propose INRAD, a novel time-series anomaly detection method that only uses a simple MLP which maps time into its corresponding value.
\item We introduce a temporal encoding technique to represent time-series data efficiently.
\item We conduct extensive experiments while using the same set of hyperparameters over all five real-world benchmark datasets. Our experimental results show that our proposed method outperforms previous state-of-the-art methods in terms of not only accuracy, but also training speed in a highly robust manner.
\end{itemize}
\section{Related Work}
In this section, we review previous works for time-series anomaly detection and implicit neural representation.
\subsection{Time-Series Anomaly Detection}
Since the first study on this topic was conducted by \cite{fox1972outliers}, time-series anomaly detection has been a topic of interest over the past decades~\cite{6684530,blazquez2021review}. Traditionally, various methods, including autoregressive moving average (ARMA)~\cite{galeano2006outlier} and autoregressive integrated moving average (ARIMA)~\cite{zhang2005network} model-based approaches, the one-class support vector machine-based method~\cite{ma2003time}, and the isolation-based method~\cite{liu2008isolationforest}, have been widely applied to time-series anomaly detection. However, these classical methods either fail to capture complex and non-linear temporal characteristics or are very sensitive to noise, making them difficult to apply to real-world datasets.
Recently, various unsupervised deep learning-based approaches have successfully improved performance in complex multivariate time-series anomaly detection tasks. As one of the well-known unsupervised models, autoencoder (AE)-based approaches~\cite{sakurada2014anomaly} capture the non-linearity between variables. Recurrent neural networks (RNNs) are a popular architecture choice used in various methods~\cite{hundman2018smapmsl,malhotra2016lstm} for capturing temporal dynamics of time series data. Generative models are also used in the literature, namely generative adversarial networks~\cite{madgan} and variational autoencoder (VAE)-based approaches~\cite{xu2018unsupervised}. Graph neural network-based approach~\cite{gdn} is also proposed to capture the complex relationship between variables in the multivariate setting. Furthermore, methodologies combining multiple architectures are also proposed, such as AE with the Gaussian mixture model~\cite{zong2018dagmm} or AE with GANs~\cite{audibert2020usad}, stochastic RNN with a planar normalizing flow~\cite{su2019smdomnianomaly}, deep support vector data description~\cite{deepsvdd} with dilated RNN~\cite{drnn}, and VAE with long short term memory (LSTM) networks~\cite{park2018lstmvae}.
Despite remarkable improvements via those above deep learning-based approaches, most of the approaches produce good results at the expense of training speed and generalizability. Such long training time with costly hyperparameter tuning for each dataset results in difficulties applying to practical scenarios~\cite{audibert2020usad}.
\subsection{Implicit Neural Representation}
Recently, implicit neural representations (or coordinate-based representations) have gained popularity, mainly in 3D deep learning. Generally, an MLP is trained to represent a single data instance by mapping a coordinate (e.g., $xyz$-coordinates) to the corresponding values of the data. This approach has been proven to have expressive representation capability with memory efficiency. As one of the well-known approaches, occupancy networks~\cite{mescheder2019occunet} train a binary classifier to predict whether a point is inside or outside the data to represent. DeepSDF~\cite{park2019deepsdf} directly regresses a signed distance function that returns the signed distance to the closest surface given the position of a 3D point. Instead of occupancy networks or signed distance functions, NeRF~\cite{mildenhall2020nerf} trains an MLP to output the color and density of the scene to represent. SIREN~\cite{sitzmann2020siren} proposes using sinusoidal activation functions in MLPs to facilitate high-resolution representations. Since then, various applications, including view synthesis~\cite{martin2021nerfw} and object appearance~\cite{saito2019pifu}, have been widely studied.
However, the application of INR to time-series data has been relatively underdeveloped. Representation of time-varying 3D geometry has been explored~\cite{niemeyer2019occupancyflow}, but they do not investigate multivariate time-series data. Although SIREN~\cite{sitzmann2020siren} showed the capability to represent audio, its focus was limited to the high-quality representation of the input signals. To the best of our knowledge, this is the first work to use INR to solve the problem of time-series anomaly detection.
\section{INRAD Framework}
In this section, we define the problem that we aim to solve, and then we present our proposed INRAD based on the architecture proposed by \cite{sitzmann2020siren}. Next, we describe our newly designed temporal encoding technique in detail. Finally, we describe the loss function to make our model represent input time signals and describe the anomaly score used during the detection procedure. Figure~\ref{Figure2Overview} describes the overview of the proposed method.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{Figure2_5.pdf}
\caption{The overview of the proposed Implicit Neural Representation-based Anomaly Detection (INRAD). (a) From the given time-series data, we perform temporal encoding and represent time as a real-valued vector. (b) An MLP using periodic activation functions represents the given data by mapping the time processed by temporal encoding to the corresponding values. (c) After the model converges, we calculate the representation error and use this as the anomaly score for detection.}
\label{Figure2Overview}
\end{figure*}
\subsection{Problem Statement}
In this section, we formally state the problem of multivariate time-series anomaly detection as follows.
We first denote multivariate time-series data as $X = \{({t_1}, \mathbf{x}_{t_1}), ({t_2}, \mathbf{x}_{t_2}), ({t_3}, \mathbf{x}_{t_3}),...,({t_N}, \mathbf{x}_{t_N})\}$, where $t_i$ denotes a timestamp, $\mathbf{x}_{t_i}$ denotes the corresponding values at that timestamp, and $N$ denotes the number of observed values. As we focus on multivariate data, $\mathbf{x}_{t_{i}}$ is a $d$-dimensional vector representing multiple signals. The goal of time-series anomaly detection is to output a sequence, $Y = \{y_{t_1}, y_{t_2}, y_{t_3},...,y_{t_N}\}$, where $y_{t_i} \in \{0, 1\}$ denotes the abnormal or normal status at $t_i$. In general, 1 indicates the abnormal state while 0 indicates the normal state.
\subsection{Implicit Neural Representation of Time-Series Data}
To represent a given time-series data, we adopt the architecture proposed by \cite{sitzmann2020siren}, which leverages periodic functions as activation functions in the MLP model, resulting in a simple yet powerful model capable of representing various signals, including audio. After preprocessing the time coordinate input via an encoding function $\phi$, our aim is to learn a function $f$ that maps the encoded time $\phi(t_i)$ to its corresponding value $\mathbf{x}_{t_i}$ of the data.
We can describe the MLP $f$ by first describing each fully-connected layer and stacking those layers to get the final architecture. Formally, the $l$th fully-connected layer $f_{l}$ with hidden dimension $m_l$ can be generally described as $f_{l}(\mathbf{h}_{l-1}) = \sigma(\mathbf{W}_{l}\mathbf{h}_{l-1} + \mathbf{b}_l)$, where $\mathbf{h}_{l-1} \in \mathbb{R}^{m_{l-1}}$ represents the output of the previous layer $f_{l-1}$, $\mathbf{W}_{l} \in \mathbb{R}^{m_{l} \times m_{l-1}}$ and $\mathbf{b}_{l} \in \mathbb{R}^{m_{l}}$ are learnable weights and biases, respectively, and $\sigma$ is a non-linear activation function. Here, sine functions are used as $\sigma$, which enables accurate representation capabilities of various signals. In practice, a scalar $\omega_0$ is multiplied such that the $l$th layer is $f_l = \sin (\omega_0 \cdot \mathbf{W}_{l}\mathbf{h}_{l-1} + \mathbf{b}_l)$, in order for the input to span multiple periods of the sine function.
Finally, by stacking a total of $L$ layers with an additional linear transformation at the end, we now have our model $f(\phi(t_i)) = \mathbf{W}(f_{L} \circ f_{L-1} \circ \cdots \circ f_1)(\phi(t_i)) + \mathbf{b}$ which maps the input $t_i$ to the output $f(\phi(t_i)) \in \mathbb{R}^{d}$.
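To make the architecture concrete, the following is a minimal NumPy sketch of the sine-activated MLP described above. The parameter layout is an illustrative assumption; the frequency scalars (30, and 3000 for the first layer) match the values used later in our experiments.

```python
import numpy as np

def siren_forward(t_enc, weights, biases, w0_first=3000.0, w0=30.0):
    """Forward pass f(phi(t_i)): sine-activated hidden layers followed by a
    final linear layer.  `weights`/`biases` hold the parameters of the L
    hidden layers plus the output layer (illustrative layout)."""
    h = np.asarray(t_enc, dtype=float)
    for l, (W, b) in enumerate(zip(weights[:-1], biases[:-1])):
        omega = w0_first if l == 0 else w0       # omega_0 for this layer
        h = np.sin(omega * (W @ h) + b)          # f_l = sin(omega_0 * W h + b)
    return weights[-1] @ h + biases[-1]          # final linear map, output in R^d
```

Training then amounts to minimizing the mean-squared loss between `siren_forward(phi(t_i), ...)` and $\mathbf{x}_{t_i}$ over all timestamps.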
\subsection{Temporal Encoding}\label{temporal encoding}
As INR has been mainly developed to represent 2D or 3D graphical data, encoding the time coordinate for INR has rarely been studied. In graphical data, the number of points in each dimension is fairly limited (to around thousands), whereas the number of timestamps is generally much larger and varies among datasets. Also, training and test data need to be considered in their chronological order (training data usually comes first). These observations on real-world time-series data motivate us to design a new encoding strategy such that 1) the difference between $\phi(t_i)$ and $\phi(t_{i+1})$ is not affected by the length of the time sequence, 2) the chronological order between training and test data is preserved after encoding, and 3) timestamps from real-world data are naturally represented on their standard time scale rather than through the sequential index of the time series. These desired properties are not satisfied by the encoding strategy applied in~\cite{sitzmann2020siren} (which we call vanilla encoding), which normalizes coordinates to the range $[-1, 1]$.
We now describe our temporal encoding, a simple yet effective method which satisfies the conditions mentioned above. The key idea is to directly utilize the timestamp data present in the time-series data (we can assign arbitrary timestamps if none are given). We first represent $t_i$ as a 6-dimensional vector ${\bf k} = [k_{yr}, k_{mon}, k_{day}, k_{hr}, k_{min}, k_{sec}] \in \mathbb{R}^{6}$, each dimension representing year, month, day, hour, minute, and second, respectively. Here, $k_{yr},k_{mon},k_{day},k_{hr},k_{min}$ are all positive integers, while $k_{sec} \in [0,60)$. Note that this can flexibly change depending on the dataset. For instance, if the timestamp does not include minute and second information, we use a 4-dimensional vector (i.e., $[k_{yr},k_{mon},k_{day},k_{hr}]$ with $k_{hr} \in [0, 24)$).
Next, we normalize the vectorized time information. With a pre-defined year $k'_{yr}$, we first set $[k'_{yr},1,1,0,0,0]$ (January 1st 00:00:00 at year $k'_{yr}$) as [-1,-1,-1,-1,-1,-1]. Now, let us represent the current timestamp of interest as $\mathbf{k}^{curr}$. We normalize the $j$-th dimension of the current timestamp $\mathbf{k}^{curr}$ by the following linear equation:
\begin{equation}\label{temporalencoding}
n_j^{curr} = -1 + \dfrac{1 - (-1)}{N_{j}-1} \times (k_j^{curr}-\mathbb{I}(j=1)k'_{yr})
\end{equation}
where $n_j^{curr}$ is the $j$th dimension of the normalized vector $\mathbf{n}^{curr} \in \mathbb{R}^{6}$ and $\mathbb{I}$ is an indicator function. For the values of $N_j$, we set $N_2 = 12, N_3 = 31, N_4 = 24, N_5 = 60, N_6 = 60$ to match the standard clock system. We assume that $N_1$ is pre-defined by the user. In short, we define a temporal encoding function $\phi$ that transforms a scalar $t_i$ into $\mathbf{n}_i$ ($\phi: t_i \mapsto \mathbf{n}_i$). In our method, we will by default use this temporal encoding unless otherwise stated.
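A literal transcription of this encoding in Python might look as follows; the base year ($k'_{yr}$) and the number of years ($N_1$) are user-defined, so the values below are assumptions for illustration.

```python
from datetime import datetime

def temporal_encoding(ts: datetime, base_year: int = 2000, n_years: int = 50):
    """Encode a timestamp into n in R^6 following Eq. (1).
    `base_year` plays the role of k'_yr and `n_years` the role of N_1
    (both assumed values)."""
    k = [ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second]
    N = [n_years, 12, 31, 24, 60, 60]        # N_1 user-defined, rest fixed
    n = []
    for j, (k_j, N_j) in enumerate(zip(k, N)):
        offset = base_year if j == 0 else 0  # indicator I(j = 1) in Eq. (1)
        n.append(-1.0 + 2.0 / (N_j - 1) * (k_j - offset))
    return n
```

For example, `temporal_encoding(datetime(2000, 1, 1))` maps the year, hour, minute, and second components of the reference instant to $-1$.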
\subsection{Loss Function}
As we aim for the model to represent the input time-series data, we compare the predicted value at each timestamp $t_i$ to its ground-truth value $\mathbf{x}_{t_i}$. Therefore, we minimize the following loss function:
\begin{equation}
\mathcal{L} = \dfrac{1}{N} \sum_{i=1}^{N} ||\mathbf{x}_{t_{i}} - f({\phi}(t_{i}))||^{2}
\end{equation}
where $||\cdot||$ indicates the $\ell_2$ norm of a vector.
\subsection{Anomaly Score and Detection Procedure}
Our proposed representation error-based anomaly detection strategy is built on the observation that values at an anomalous time are difficult to represent, resulting in relatively high representation error. By our approach described above, the given data sequence $X$ is represented by an MLP function $f$. We now perform anomaly detection with this functional representation by defining the representation error as the anomaly score. Formally, the anomaly score $a_{t_i}$ at a specific timestamp $t_i$ is defined as $a_{t_i} = |\mathbf{x}_{t_i} - f({\phi}(t_i))|$, where $|\cdot|$ indicates the $\ell_1$ norm of a vector. Anomalies can be detected by comparing the anomaly score $a_{t_i}$ with a pre-defined threshold $\tau$.
In our approach, we first use the training data to pre-train our model $f$ and then re-train the model to represent the given test data to obtain the representation error as an anomaly score for the detection.
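The scoring and detection step above can be sketched as follows, where `x` holds the ground-truth values, `x_hat` the model's representation, and the threshold `tau` is a free parameter:

```python
import numpy as np

def anomaly_scores(x, x_hat):
    """Per-timestamp representation error a_t = |x_t - f(phi(t))|_1."""
    return np.abs(np.asarray(x) - np.asarray(x_hat)).sum(axis=1)

def detect(scores, tau):
    """y_t = 1 (anomalous) whenever the score exceeds the threshold tau."""
    return (np.asarray(scores) > tau).astype(int)
```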
\section{Experiments}
In this section, we perform various experiments to answer the following research questions:
\begin{itemize}
\item {\bf RQ1:} Does our method outperform various state-of-the-art methods, even with a fixed hyperparameter setting?
\item {\bf RQ2:} How does our proposed temporal encoding affect the performance and convergence time?
\item {\bf RQ3:} Does our method outperform various state-of-the-art methods in terms of training speed?
\item {\bf RQ4:} How does our method behave in different hyperparameter settings?
\end{itemize}
\begin{table}[t]
\centering
\begin{tabular}{r|c|c|c|c} \toprule
Datasets & Train & Test & Features & Anomalies \\\midrule
SMD & 708405 & 708420 & 28$\times$38 & 4.16 (\%)\\
SMAP & 135183 & 427617 & 55$\times$25 & 13.13 (\%)\\
MSL & 58317 & 73729 & 27$\times$55 & 10.72 (\%)\\
SWaT & 496800 & 449919 & 51 & 11.98 (\%)\\
WADI & 1048571 & 172801 & 123 & 5.99 (\%)\\
\bottomrule
\end{tabular}
\caption{Statistics of the datasets used in our experiments.}
\label{TableDatasetStatistics}
\end{table}
\subsection{Dataset}
We use five real-world benchmark datasets for anomaly detection in multivariate time-series data, SMD~\cite{su2019smdomnianomaly}, SMAP \& MSL~\cite{hundman2018smapmsl}, and SWaT \& WADI~\cite{mathur2016swatwadi}, all of which contain ground-truth anomaly labels. Table \ref{TableDatasetStatistics} summarizes the statistics of each dataset; further details are provided in the supplementary material.
In our experiments, we directly use the timestamps included in the SWaT and WADI datasets. We arbitrarily assign timestamps to the other three datasets since no timestamps representing actual-time information are given.
\begin{table*}[t]
\centering
\fontsize{9}{10}\selectfont
\begin{tabular}{r|ccc|ccc|ccc|ccc|ccc} \toprule
& \multicolumn{3}{c}{SMD} & \multicolumn{3}{c}{SMAP} & \multicolumn{3}{c}{MSL} & \multicolumn{3}{c}{SWaT} & \multicolumn{3}{c}{WADI}\\\cmidrule{2-16}
Method & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1\\ \midrule
IF & 59.4 & 85.3 & 0.70 & 44.2 & 51.1 & 0.47 & 56.8 & 67.4 & 0.62 & 96.2 & 73.1 & 0.83 & 62.4 & 61.5 & 0.62\\
LSTM-VAE & 87.0 & 78.8 & 0.83 & 71.6 & 98.8 & 0.83 & 86.0 & 97.6 & 0.91 & 71.2 & 92.6 & 0.80 & 46.3 & 32.2 & 0.38 \\
DAGMM & 67.3 & 84.5 & 0.75 & 63.3 & 99.8 & 0.78 & 75.6 & 98.0 & 0.85 & 82.9 & 76.7 & 0.80 & 22.3 & 19.8 & 0.21\\
OmniAnomaly & 98.1 & 94.4 & 0.96 & 75.9 & 97.6 & 0.85 & 91.4 & 88.9 & 0.90 & 72.2 & 98.3 & 0.83 & 26.5 & 98.0 & 0.41\\
USAD & 93.1 & 94.4 & 0.96 & 77.0 & 98.3 & 0.86 & 88.1 & 97.9 & 0.93 & 98.7 & 74.0 & 0.85 & 64.5 & 32.2 & 0.43\\
THOC & 73.2 & 78.8 & 0.76 & 79.2 & 99.0 & 0.88 & 78.9 & 97.4 & 0.87 & 98.0 & 70.6 & 0.82 & - & - & - \\ \midrule
$\text{INRAD}^{\text{c}}_{\text{van}}$ & 94.7 & 97.8 & {0.96} & 80.0 & 99.3 & 0.89 & 93.6 & 98.1 & {\bf 0.96} & 96.9 & 88.7 & 0.93 & 60.2 & 67.0 & 0.63 \\
$\text{INRAD}^{\text{c}}_{\text{temp}}$ & 98.0 & 98.3 & {\bf 0.98} & 83.2 & 99.1 & 0.90 & 92.1 & 99.0 & 0.95 & 93.0 & 96.3 & 0.95 & 78.4 & 99.9 & 0.88 \\ \midrule
$\text{INRAD}_{\text{van}}$ & 98.0 & 98.6 & {\bf 0.98} & 84.0 & 99.4 & {0.91} & 90.4 & 99.0 & {0.95} & 96.4 & 91.7 & {0.94} & 77.1 & 66.5 & {0.71} \\
$\text{INRAD}_{\text{van}^{*}}$ & 95.0 & 96.4 & {0.95} & 82.6 & 99.3 & {0.90} & 91.7 & 98.7 & {0.95} & 84.2 & 84.7 & {0.84} & 72.4 & 72.8 & {0.73} \\
$\text{INRAD}_{\text{temp}}$ & 98.2 & 97.5 & {\bf 0.98} & 85.8 & 99.5 & {\bf 0.92} & 93.3 & 99.0 & {\bf 0.96} & 95.6 & 98.8 & {\bf 0.97} & 88.9 & 99.1 & {\bf 0.94} \\ \bottomrule
\end{tabular}
\caption{Anomaly detection accuracy results in terms of precision(\%), recall(\%), and F1-score, on five real-world benchmark datasets. $\text{INRAD}_{\text{van}}$, $\text{INRAD}_{\text{van}^{*}}$, and $\text{INRAD}_{\text{temp}}$ adopts the vanilla, vanilla$^{*}$, and temporal encoding, respectively. Also, $\text{INRAD}^{\text{c}}_{\text{van}}$ and $\text{INRAD}^{\text{c}}_{\text{temp}}$ indicates that the experiment was run on the cold-start setting with each encoding.}
\label{TablePerformanceComparison}
\end{table*}
\subsection{Baseline methods}
We demonstrate the performance of our proposed method, INRAD, by comparing with the following six anomaly detection methods:
\begin{itemize}
\item {\bf IF}~\cite{liu2008isolationforest}: Isolation forests (IF) is the most well-known isolation-based anomaly detection method, which focuses on isolating abnormal instances rather than profiling normal instances.
\item {\bf LSTM-VAE}~\cite{park2018lstmvae}: LSTM-VAE uses a series of connected variational autoencoders and long-short-term-memory layers for anomaly detection.
\item {\bf DAGMM}~\cite{zong2018dagmm}: DAGMM is an unsupervised anomaly detection model which utilizes an autoencoder and the Gaussian mixture model in an end-to-end training manner.
\item {\bf OmniAnomaly}~\cite{su2019smdomnianomaly}: OmniAnomaly employs a stochastic recurrent neural network for multivariate time-series anomaly detection to learn robust representations with a stochastic variable connection and planar normalizing flow.
\item {\bf USAD}~\cite{audibert2020usad}: USAD utilizes an encoder-decoder architecture with an adversarial training framework inspired by generative adversarial networks.
\item {\bf THOC}~\cite{shen2020thoc}: THOC combines a dilated recurrent neural network~\cite{drnn} for extracting multi-scale temporal features with the deep support vector data description~\cite{deepsvdd}.
\end{itemize}
\subsection{Evaluation Metrics}
We use precision (P), recall (R), and F1-score (F1) for evaluating time-series anomaly detection methods. Since these measures depend on how the threshold is set on the anomaly scores, previous works proposed strategies such as applying extreme value theory~\cite{siffer2017anomaly} or using a dynamic error over a time window~\cite{hundman2018smapmsl}. However, not all methodologies develop a mechanism to select a threshold in different settings, and many previous works~\cite{audibert2020usad,su2019smdomnianomaly,xu2018unsupervised} adopt the best F1 score for performance comparison, where the optimal global threshold is chosen by trying all possible thresholds on the detection results. We also use the point-adjust approach~\cite{xu2018unsupervised}, widely used in evaluation~\cite{audibert2020usad,su2019smdomnianomaly,shen2020thoc}. Specifically, if any point in an anomalous segment is correctly detected, the other observations in the segment are also regarded as correctly detected.
Therefore, we adopt the best F1-score (F1 score for short hereafter) together with the point-adjust approach for evaluating anomaly detection performance, in order to directly compare with the aforementioned state-of-the-art methods.
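As a concrete illustration of the point-adjust evaluation described above, the helper below expands any hit inside a ground-truth anomalous segment to the whole segment before precision, recall, and F1 are computed (a sketch; implementations may differ in segment-handling details):

```python
import numpy as np

def point_adjust(pred, label):
    """If any point within a contiguous ground-truth anomaly segment is
    flagged, mark the entire segment as detected."""
    pred, label = np.asarray(pred).copy(), np.asarray(label)
    i, n = 0, len(label)
    while i < n:
        if label[i] == 1:
            j = i
            while j < n and label[j] == 1:   # find the end of the segment
                j += 1
            if pred[i:j].any():              # at least one correct detection
                pred[i:j] = 1
            i = j
        else:
            i += 1
    return pred
```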
\subsection{Hyperparameters and Experimental Setup}
To show the robustness of our proposed method, we conduct experiments using the same hyperparameter setting for all benchmark datasets.
The details of the experimental setting are as follows. For the model architecture, we use a 3-layer MLP with sinusoidal activations and 256 hidden dimensions per layer (refer to Figure~\ref{Figure2Overview}(b)). Following~\cite{sitzmann2020siren}, we set $\omega_0 = 30$ except for the first layer, which is set to $3000$. During training, we use the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.0001 and $(\beta_1, \beta_2) = (0.9, 0.99)$. Additionally, we use early stopping with patience 30. Our code and data are released at \url{https://github.com/KyeongJoong/INRAD}.
\subsection{RQ 1. Performance Comparison}
Table~\ref{TablePerformanceComparison} shows the performance comparison results of our proposed method $\text{INRAD}_{\text{temp}}$ and its variants, along with other baseline approaches on five benchmark datasets. We use the reported accuracy values of the baselines (except THOC~\cite{shen2020thoc}) from the previous work~\cite{audibert2020usad}, which achieves state-of-the-art performance in an experimental setting identical to ours in terms of datasets, train/test split, and evaluation metrics. Note that the results of THOC on the WADI dataset are omitted due to an out-of-memory issue.
Overall, our proposed $\text{INRAD}_{\text{temp}}$ consistently achieves the highest F1 scores over all datasets. In particular, the improvement over the next best method reaches 0.32 in terms of F1 score on the WADI dataset, where most other approaches show relatively low performance. On the other datasets, we still outperform the second-best baseline by 0.02 to 0.12. Considering that the single-hyperparameter-setting restriction is only applied to our method, this shows that our approach can provide superior performance in a highly robust manner on various datasets.
As we adopt the representation error-based detection strategy, it is possible for our method to detect anomalies without training data by directly representing the test set. We hypothesize that the test set already contains an overwhelming portion of normal samples, from which the model can still learn the temporal dynamics of normal patterns without any complex model architectures (e.g., RNNs and their variants). To distinguish it from the original method, we denote this variant as $\text{INRAD}^{\text{c}}_{\phi}$ ($\phi=$ 'van' or 'temp' for vanilla and temporal encoding, respectively), whose performance we also investigate. We observe that both $\text{INRAD}^{\text{c}}_{\text{van}}$ and $\text{INRAD}^{\text{c}}_{\text{temp}}$ generally achieve slightly inferior performance to $\text{INRAD}_{\text{temp}}$ except on WADI. This result shows that utilizing the training dataset has performance benefits, especially when the training data is much longer than the test data.
\begin{table}[t]
\centering
\fontsize{9}{10}\selectfont
\begin{tabular}{r|c|c|c} \toprule
Method & SMD & SMAP & MSL \\ \midrule
LSTM-VAE & 3.807 & 0.987 & 0.674 \\
OmniAnomaly & 77.32 & 16.66 & 15.55 \\
USAD & 0.278 & 0.034 & 0.029 \\
THOC & 0.299 & 0.07 & 0.066 \\\midrule
INRAD & 0.243 & 0.024 & 0.020 \\ \bottomrule
\end{tabular}
\caption{Comparison of training time (sec) per epoch.}
\label{TableTrainingTimePerEpoch}
\end{table}
\subsection{RQ 2. Effect of temporal encoding}
We study the effect of our temporal encoding method by comparing it with two encoding methods: vanilla and its variant, vanilla$^{*}$. Vanilla encoding normalizes the indices $[1,2, \cdots, M]$ of the training data to $[-1, 1]$, and the indices of the test data are also mapped to $[-1, 1]$. On the other hand, vanilla$^{*}$ encoding is derived from vanilla encoding to preserve the chronological order and unit interval of training and test data after encoding: it maps the indices of the test data to the range $[1,\infty)$ while keeping the difference between neighboring encodings consistent with the training data. We denote the variants using vanilla and vanilla$^{*}$ as $\text{INRAD}_{\text{van}}$ and $\text{INRAD}_{\text{van}^{*}}$, respectively.
Table~\ref{TablePerformanceComparison} shows that $\text{INRAD}_{\text{temp}}$ achieves slightly superior performance in general compared to $\text{INRAD}_{\text{van}}$ and $\text{INRAD}_{\text{van}^{*}}$. However, when the length of the dataset becomes long, the performance of vanilla and vanilla$^{*}$ degrades significantly while temporal encoding remains at 0.94, as shown in the case of WADI. The performance gap in WADI becomes even more significant in the case of cold-start settings, which is 0.25.
Also, Figure~\ref{Figure3Encoding} compares the convergence time for representing the test data between $\text{INRAD}_{\text{van}}$, $\text{INRAD}_{\text{van}^{*}}$, and $\text{INRAD}_{\text{temp}}$ on the MSL and SMAP datasets. The vanilla encoding shows the slowest convergence, while vanilla$^{*}$ and our temporal encoding show competitive results. This indicates that the representation of test data is learned faster when time coordinates in training and test data are encoded while preserving the chronological order. Overall, our temporal encoding strategy achieves superior performance and fast convergence compared to the vanilla encoding strategy.
\subsection{RQ 3. Training speed comparison}
Here, we study the training speed of $\text{INRAD}_{\text{temp}}$ and compare it to the four baselines that show good performance. Table~\ref{TableTrainingTimePerEpoch} summarizes the training time per epoch for $\text{INRAD}_{\text{temp}}$ along with the other baseline methods on three benchmark datasets. Specifically, the reported time is the average across all entities within each dataset (i.e., 28 entities for SMD, 55 for SMAP, and 27 for MSL). The results show that our method achieves the fastest training time, mainly because it only uses a simple MLP without any additional complex modules (e.g., RNNs).
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{Figure3_2.pdf}
\caption{Convergence time (sec) comparison for different encoding techniques.}
\label{Figure3Encoding}
\end{figure}
\subsection{RQ 4. Hyperparameter sensitivity}
In Figure~\ref{Hyperparameter_experiment}, we test the robustness of $\text{INRAD}_{\text{temp}}$ by varying different hyperparameter settings on the MSL dataset. In our experiment, we change the patience in early stopping in the range $\{30, 60, 90, 120, 150\}$, the size of the hidden dimension in the range $\{32, 64, 128, 256, 512\}$, $\omega_0$ of the first layer in the range $\{30, 300, 3000, 30000, 300000\}$, and the number of layers in the range $\{1, 2, 3, 4, 5\}$. Figures~\ref{Hyperparameter_patience},~\ref{Hyperparameter_hidden_dim}, and~\ref{Hyperparameter_num_layers} show that $\text{INRAD}_{\text{temp}}$ achieves high robustness under varying hyperparameter settings. As shown in Figure~\ref{Hyperparameter_omega}, the choice of $\omega_0$ also has minimal impact except at extremely low values, which suggests that the MLP struggles to differentiate neighboring inputs when $\omega_0$ is too small.
\begin{figure}
\centering
\begin{subfigure}[b]{0.475\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figure4_a.pdf}
\caption[Network2]%
{{\small Effect of patience}}
\label{Hyperparameter_patience}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figure4_b.pdf}
\caption[]%
{{\small Effect of hidden dimension}}
\label{Hyperparameter_hidden_dim}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figure4_c.pdf}
\caption[]%
{{\small Effect on $\omega_0$ in first layer}}
\label{Hyperparameter_omega}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figure4_d.pdf}
\caption[]%
{{\small Effect of number of layers}}
\label{Hyperparameter_num_layers}
\end{subfigure}
\caption[ The average and standard deviation of critical parameters ]
{\small Performance of $\text{INRAD}_{\text{temp}}$ for MSL under various hyperparameter settings.}
\label{Hyperparameter_experiment}
\end{figure}
\section{Conclusion}
In this paper, we proposed INRAD, a novel implicit neural representation-based method for multivariate time-series anomaly detection, along with a temporal encoding technique. Adopting a simple MLP, which takes time as input and outputs the corresponding values to represent a given time series, our method detects anomalies by using the representation error as the anomaly score. Experiments on five real-world datasets show that our proposed method achieves state-of-the-art performance in terms of both accuracy and training speed while using the same set of hyperparameters. For future work, we can consider additional strategies for online training in order to improve applicability in environments where prompt anomaly detection is needed.
\section{Introduction}
The emergence of autonomous vehicles (AVs) in urban networks leads to new operational challenges that have received growing attention over the past few years. Of particular interest is the future paradigm wherein legacy (or human-operated) vehicles have been fully replaced with AVs. In this futuristic context, several studies have shown that the management of urban networks may benefit from forecasted technological advancements, such as improved trajectory control, automatic collision avoidance or platooning. However, the intermediate traffic state wherein legacy and autonomous vehicles co-exist has not been examined nearly as much by researchers. In such an intermediate traffic context, urban networks will need to adapt to make the most out of AV technology and allow the transition towards fully autonomous traffic, if this is ever to happen. The first AVs evolving in urban traffic networks are likely to have to abide by the existing infrastructure and legislation. However, this picture may change rapidly. We conjecture that with the increase in AV demand, urban traffic networks will adapt and that AV-specific infrastructure will be available to improve network operations at traffic intersections.
In this paper, we hypothesize a hybrid traffic context wherein legacy vehicles (LVs) and AVs both have access to dedicated infrastructure in an urban transport network. We propose a new stochastic hybrid network control model to manage traffic intersections by combining traditional green signal phases with AV-restricted signal phases called \emph{blue} phases. During blue phases, only AVs are allowed to proceed through the intersection. On the other hand, green phases can be used by any type of vehicle. To avoid first-in-first-out restrictions on queue departures, we assume that AVs have dedicated lanes to access traffic intersections, hereafter referred to as AV-lanes. Legacy vehicle lanes (LV-lanes) can be used by both LVs and AVs to access traffic intersections, but for convenience we assume that AVs choose AV-lanes by default. We assume fixed route choice or known turning ratios and we prove that the proposed network control policy is stable under conventional travel demand conditions, \emph{i.e.} that queues are bounded.
We first discuss the literature on network traffic control in transportation before investigating the state of the art in intersection management with AVs. We then position our paper with respect to the field and highlight our contributions.
\subsection{Network Traffic Control}
Traffic signal timing has been studied for decades, and some of the classic methods such as \citet{webster1958traffic}'s delay formula are widely known today. Optimization of individual signals is well-established, but optimal coordination of signals across networks is more complex~\citep{gartner1975optimization,robertson1991optimizing} as complete two-way coordination even through grid networks can be impossible to achieve. Signal timing is further complicated by additions such as transit signal priority~\citep{skabardonis2000control,dion2002rule}. Nevertheless, several commercial software tools (\emph{e.g.} Synchro) are currently available to assist city engineers with signal timings.
The literature on network traffic control spans several fields such as optimization and control theory, transportation engineering and computer science. The seminal work of \citet{tassiulas1992stability} pioneered the research on stability conditions in network traffic control with an application to data packet routing in communication networks. The authors notably introduced the back-pressure algorithm as a method for decentralized control of a network of routers. In the context of urban transport networks, there is an extensive body of research on network traffic signal control. \citet{varaiya2013max} proposed a network traffic control policy based on max-pressure, a variant of the back-pressure algorithm wherein the control policy chooses the signal phase that maximizes the pressure at each intersection. Stability was proven assuming each turning movement has a distinct queue. \citet{wongpiromsarn2012distributed} proposed a pressure-based policy which maximizes throughput for unbounded queues. However, practical limitations such as link length require a careful choice of the pressure function to avoid queue spillback. Building on this effort, \citet{xiao2014pressure} proposed a pressure-releasing policy that accounts for finite queue capacities. Nonetheless, to more canonically apply the pressure-based routing they assumed that each turning movement has a separate queue, which is often not realistic. \citet{le2015decentralized} proposed a fixed-time policy wherein all phases are assigned a non-zero activation time and proved stability for fixed turning proportions and unbounded queues. Recently, \citet{valls2016convex} proposed a convex optimization approach to traffic signal control that decouples the stability of the system from the choice of traffic control policy.
Starting from the seminal work of \citet{smith1979traffic}, several efforts have also attempted to model the impact of traffic signal optimization on route choice \citep{zhang2012traffic, gregoire2013back,zaidi2016back}. Other studies used bi-level optimization for signal timings that assume a user equilibrium behavior~\citep{yang1995traffic, sun2006bi}. \citet{le2017utility} used utility functions in max-pressure control to influence routing. However, while \citet{tassiulas1992stability}'s policy is provably throughput-optimal, max-pressure route choice makes no guarantees on the efficiency of travel times. We assume fixed route choice in our network traffic control model to focus on green and blue phase signal control, and leave route choice for later studies.
\subsection{Intersection Management with Autonomous Vehicles}
With the advent of connected and autonomous vehicular technology over the past decades, researchers have progressively explored and proposed new intersection management paradigms instead of the traditional tricolor signal control system. \citet{dresner2004multiagent,dresner2005multiagent} proposed the autonomous intersection manager (AIM) protocol as an alternative to traffic signals for AVs. In AIM, vehicles use wireless communication channels to request a reservation from the intersection manager (IM). A reservation specifies the turning movement as well as the time at which the vehicle can enter the intersection. The IM simulates reservation requests on a space-time grid of tiles and accepts requests if they do not conflict with other reservations. If two vehicles will occupy the same tile at the same time, a potential conflict exists and the request must be rejected. The AIM protocol can be used with a First-Come-First-Served (FCFS) policy, wherein vehicles are prioritized based on their arrival time at the intersection. \citet{fajardo2011automated} and \citet{li2013modeling} compared FCFS with optimized traffic signals, and found that FCFS reduced delays in certain scenarios. However, \citet{levin2015paradoxes} found that FCFS created more delay than signals in certain common cases (\emph{e.g.} asymmetric intersections). Traffic signals are within the feasible region of AIM controls~\citep{dresner2007sharing}, so AIM can always perform at least as well as signals when optimized.
Naturally, FCFS is only one of many potential policies for reservations. \citet{schepperle2007agent,schepperle2008traffic} allowed vehicles to bid for priority: in addition to reservation requests, vehicles would also communicate a willingness to pay and received priority access accordingly. The authors found that auctions reduced delay weighted by vehicle value-of-time, and \citet{vasirani2010market} found similar results for a network of intersections. \citet{carlino2013auction} used system bids to further improve the efficiency of auctions. However, \citet{levin2015intersection} found that high value-of-time vehicles could become trapped behind low value-of-time vehicles, making it difficult to realize higher travel time savings for high-bidding vehicles.
Several efforts have focused on more microscopic vehicle trajectory optimization formulations to control AVs at traffic intersections. \citet{gregoire2013back} developed a cooperative motion-planning algorithm using a path velocity decomposition to optimally coordinate vehicles while preventing collisions. \citet{de2015autonomous} integrated a priority-based assignment into autonomous intersection simulation. \citet{altche2016analysis} developed a Mixed-Integer Linear Programming (MILP) formulation to coordinate vehicles through intersections. In this model, the IM decides when AVs enter the intersection and at which speed. The authors discretized time and tracked vehicle movement in continuous space variables. \citet{zhang2016optimal,zhang2017decentralized} focused on the decentralized control of traffic intersections based on First-In-First-Out (FIFO) conditions and considered fuel consumption, throughput and turning movements. This framework was extended by \citet{malikopoulos2018decentralized}, who proposed a decentralized energy-optimal control framework for connected and automated vehicles. Vehicle separation is ensured by rear-end and lateral collision avoidance constraints and the authors prove the existence of a nonempty solution space.
\citet{levin2017conflict} proposed a conflict point formulation for optimizing intersection throughput referred to as the AIM* protocol. As in \citet{altche2016analysis}, the IM decides on AVs' entry time and speed but instead of discretizing time, the authors discretize the intersection and focus on conflict points to ensure safety. The authors prove that there always exists a conflict-free, feasible solution to the proposed MILP. The present paper builds on this research to coordinate traffic operations during blue phases.
Only a few papers have addressed the configuration wherein AVs and legacy vehicles share traffic intersections. \citet{dresner2006human} proposed to periodically activate a traditional green phase to allow legacy vehicles to access the intersection. \citet{conde2013intelligent} suggested that legacy vehicles could reserve additional space to ensure safety while using the AIM protocol. \citet{qian2014priority} discussed the use of car-following models for collision avoidance among legacy vehicles and AVs. These efforts rely on the deployment of the AIM protocol to handle vehicle reservations but do not discuss their impact at a network level, \emph{i.e.} the effect of such policies on network throughput and stability. \citet{levin2016multiclass} found that reserving extra space for legacy vehicles created significant delays and that high AV market penetrations (around 80\%) were needed for AIM to improve over traffic signals. The hybrid network traffic control policy combining blue and green phases proposed in this paper may help retain efficiency at lower market penetrations.
\subsection{Our Contributions}
In this paper, we propose a new decentralized, stochastic model for coordinating traffic composed of LVs and AVs in a network of intersections. Our model assumes that vehicle route choice, or equivalently, turning proportions are known. We assume that lane queues are measured periodically and we propose a decentralized, pressure-based algorithm to optimize network throughput. We distinguish between traditional green phases and blue phases, which are available only to AVs. For this, we assume that intersections are accessible through two types of lanes: LV-lanes, which can be used by both LVs and AVs, and AV-lanes, which are restricted to AVs. During blue phases, all incoming lanes have access to the intersection and traffic is coordinated using the conflict-point formulation proposed by \citet{levin2017conflict}. At each intersection and each time period, a phase is activated based on the current state of the network using the proposed hybrid pressure-based policy.
We make the following contributions. 1) We propose a new model for network traffic control with LVs and AVs which combines traditional green phases with AV-restricted blue phases. 2) We present a new MILP formulation for green phase activation wherein turning movement capacity is determined endogenously. 3) We extend the max-pressure formulation proposed by \citet{varaiya2013max} to lane-based queues and we propose a new hybrid max-pressure network control policy wherein LVs and AVs share the infrastructure. 4) We characterize the stability region of this system and prove that the proposed hybrid max-pressure network control policy is stable, \emph{i.e.} stabilizes the largest possible set of demand rates in the network. 5) We conduct numerical experiments to test the proposed hybrid network control policy.
The remainder of the paper is organized as follows. We present our stochastic network traffic control model in Section \ref{net}. Intersection phase activation models are presented in Section \ref{phases}. The proposed control policy and its stability proof are introduced in Section \ref{policy}. Numerical results are presented in Section \ref{num} and we discuss our findings in Section \ref{con}.
\section{Stochastic Network Traffic Control}\label{net}
Consider a traffic network $\G = (\N,\A)$ with a set of intersections $\N$ connected by a set of links $\A$. The set of links is partitioned into three subsets: internal links that connect two intersections, denoted $\Ao \subset \A$, source links at which vehicles enter the network, denoted $\Ar \subset \A$, and sink links at which vehicles exit, denoted $\As \subset \A$. We model congestion at network intersections using point-queues of infinite size and we are interested in the evolution of queue lengths over the entire network.
We consider two classes of vehicles and lanes: autonomous vehicles (AVs), denoted $a$, and legacy (or human-driven) vehicles (LVs), denoted $l$. We assume that each link of the network consists of a set of lanes which are either restricted to AVs (AV-lanes) or available to both AVs and LVs (LV-lanes). We use $\A_a$ and $\A_l$ to denote AV-lanes and LV-lanes, respectively, and we assume that vehicle movements between different classes of lanes are forbidden. Although AVs can use LV-lanes, we do not model any type of interaction at the traffic-flow level among AVs and LVs on LV-lanes: if an AV uses an LV-lane we assume that it behaves as an LV. We assume that each class of lanes is served by a color-coded traffic phase. Specifically, we assume that LV-lanes are served exclusively by traditional green signal phases. In turn, we assume that AV-lanes are served exclusively by signal-free blue phases. Blue phases differ from green phases in that they directly control vehicles' trajectories within the intersection. The proposed hybrid network control policy presented hereafter chooses which phase (green or blue) should be activated at each intersection of the network and each time period over a discretized time horizon. Both traffic control phases are formally introduced in Section \ref{phases}. \newline
Let $\xit \in \Re_+$ be the number of vehicles on link $i \in \A$ seeking to enter the intersection at time $t$. Although we discretize vehicles in our numerical experiments, integer queue lengths are not necessary for the analytical results presented hereafter. Let $\xt$ be the array of all queue lengths at time $t$. $\xt$ is the state of the network, and the state space is $\X = \{\xit \geq 0 : i \in \A\}$. We consider discretized time and we assume a fixed phase duration $\dt$. Further, we assume that the state of the network $\xt$ is known at each time step $t = 0, \dt, 2\dt, \ldots$. The goal is to design a throughput-optimal network traffic control policy that optimally selects a traffic signal control at each time period $[t, t+\dt[$.
The proposed stochastic network traffic control formulation is based on the concept of vehicle \emph{movements} which are formally defined below.
\begin{defi}
A movement $(i,j) \in \A^2$ is a vehicle trajectory from lane $i$ to lane $j$ across a common intersection $n \in \N$ in the network. We denote $\M$ the set of all movements in the network.
\end{defi}
AVs communicate their positions to IMs to make use of the AIM protocol, hence we assume that movement-specific queues are known for AVs. In contrast, LV-queues can be detected through loop detectors and flow sensors currently in use for traffic signals but their destinations are assumed unknown. Specifically, let $\A_a \subset \A$ be the set of AV-restricted lanes. For these lanes, we assume known movement queues, \emph{i.e.} if $i \in \A_a$, then $\xit = \sum_{j \in \A_a} \xijt$ and $\xijt$ is known. In contrast, for other lanes $i \in \A_l = \A \setminus \A_a$, only $\xit$ is known since route choice for LVs is assumed unknown.\\
For each lane $i \in \A$ we assume that the lane capacity $C_i$ is known and determined assuming a triangular fundamental diagram relationship, that is $C_i =\frac{\maxU_i w K}{\maxU_i+w}$, where $\maxU_i$ is the free-flow speed on lane $i$, $K$ is the jam density and $w$ is the congestion wave speed. We also assume that the maximum, unconditional movement service rate $\usij$ is known for each movement and determined based on the lost time $L$: unconditional movement service rates for all movements $(i,j)$ are calculated as $\usij = \min\{C_i, C_j\} \frac{\Delta t - L}{\Delta t}$.
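For concreteness, these capacity computations can be sketched as follows (a minimal Python sketch in our notation; the free-flow speed, wave speed, jam density, phase length and lost time values are illustrative assumptions, not calibrated values):

```python
# Sketch of the capacity computations above: lane capacity from a triangular
# fundamental diagram, C_i = u_i * w * K / (u_i + w), and unconditional movement
# service rate s_ij = min(C_i, C_j) * (dt - L) / dt. All numbers are illustrative.

def lane_capacity(u_i, w, K):
    """Capacity of a lane with free-flow speed u_i, wave speed w and jam density K."""
    return u_i * w * K / (u_i + w)

def movement_service_rate(C_i, C_j, dt, L):
    """Max vehicles served on movement (i, j) in one phase of length dt with lost time L."""
    return min(C_i, C_j) * (dt - L) / dt

# Example: 50 km/h free-flow speed, 20 km/h wave speed, 150 veh/km jam density.
C = lane_capacity(50.0, 20.0, 150.0)             # ~2142.9 veh/h
s = movement_service_rate(C, C, dt=30.0, L=5.0)  # capacity scaled by (30 - 5) / 30
```

Taking the minimum of the upstream and downstream lane capacities ensures that a movement cannot serve more vehicles than either of the lanes it connects.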
Let $\pijt$ be a random variable denoting the turning proportion from lane $i$ to $j$ at $t$ with known mean $\pij$. Let $\dit$ be a random variable denoting the external incoming traffic onto lane $i$ at time $t$ with known mean $\di$. We denote $\bm{p}$, $\bm{d}$ and $\bm{\bar{s}}$ the vectors of mean turning proportions, mean demands and unconditional movement service rates, respectively. The unconditional movement service rates are upper bounds representing the maximum number of vehicles that can be moved from lane $i$ to lane $j$ during time period $t$ when no movement conflicting with $(i,j)$ is activated. From these rates, we can determine the maximum, unconditional lane service rates $\usi = \sum_{j \in \A} \usij$. For convenience, if $i$ and $j$ do not correspond to a possible movement in the network, \emph{e.g.} they belong to different intersections or they are both entry or exit lanes of the same intersection, we assume that $\pij = 0$. Hence, we can define the sets of AV-movements $\M_a \equiv \{(i,j) \in \A_a^2 : \pij \neq 0\}$ and LV-movements $\M_l \equiv \{(i,j) \in \A_l^2 : \pij \neq 0\}$. \\
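The construction of the movement sets from the mean turning proportions can be illustrated as follows (a Python sketch; the lane labels and proportions are hypothetical):

```python
# Sketch: derive the AV- and LV-movement sets from mean turning proportions
# p_ij and the lane classes, as in the text: M_a = {(i,j) in A_a^2 : p_ij != 0}
# and M_l = {(i,j) in A_l^2 : p_ij != 0}. Lane labels are illustrative.

p = {                                       # mean turning proportions (zero = no movement)
    ("n_in", "e_out"): 0.3,
    ("n_in", "s_out"): 0.7,
    ("a_in", "a_out"): 1.0,
}
A_a = {"a_in", "a_out"}                     # AV-restricted lanes
A_l = {"n_in", "e_out", "s_out"}            # LV-lanes (usable by both classes)

M_a = {(i, j) for (i, j), pij in p.items() if i in A_a and j in A_a and pij != 0}
M_l = {(i, j) for (i, j), pij in p.items() if i in A_l and j in A_l and pij != 0}
```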
Traffic at network intersections is coordinated by phases which are determined by the selection of the \emph{activation matrix}. In addition, we also introduce the concept of \emph{service matrix} which is used in the proposed traffic control formulations.
\begin{defi}
An activation matrix $\bt$ is a $|\A|\times|\A|$-matrix wherein all entries take value 0 (inactive) or 1 (active), \emph{i.e.} $\bijt \in \{0,1\}$ at time $t$.
\end{defi}
\begin{defi}
A service matrix $\at$ is a $|\A|\times|\A|$-matrix wherein all entries take a value between 0 (not serviced) and 1 (fully serviced), \emph{i.e.} $\aijt \in [0,1]$ at time $t$.
\end{defi}
The entries of an activation matrix characterize the \emph{activeness} of the corresponding phase: $\bijt = 1$ means that movement $(i,j)$ is active during phase $t$ whereas $\bijt = 0$ means that movement $(i,j)$ is inactive. The entries of the service matrix characterize the \emph{service level} of active movements during phase $t$: $\aijt = 1$ corresponds to a maximal service level for movement $(i,j)$, whereas $\aijt = 0$ means that movement $(i,j)$ cannot serve any vehicles during time period $t$. Fractional service level values model situations where conflicting movements, \emph{i.e.} movements posing a safety risk, simultaneously have non-zero activation values. Further, the activation and service matrices are linked through the movement-based constraints $\aijt \leq \bijt$. These linking constraints ensure that an intersection can only service vehicles on movement $(i,j)$ if this movement is active $\bijt = 1$. In the proposed traffic control policy, the selection of the activation matrix requires the solution of two mathematical optimization problems presented in Sections \ref{green} and \ref{blue}, respectively, and further details are discussed therein.\\
Let $\Yit$ be a random variable with mean $\yit$ denoting the number of vehicles serviced in lane $i$ at time $t$. The vector $\Yt$ is endogenous to the service matrix $\at$ selected by the control policy. Note that the mean $\yit$ of $\Yit$ is unknown and $\yit$ will be modeled as a control variable in the proposed traffic phase optimization formulation. Specifically, the dependency between the vector of lane service rates $\yt$ and $\at$ will be presented in detail in Sections \ref{green} and \ref{blue}, for green and blue phases, respectively. The proposed stochastic network traffic control model is summarized by the lane-queue evolution equation \eqref{eq:queue}.
\begin{equation}
\xjtt = \xjt - \Yjt + \sum_{i \in \A} \pijt \Yit + D_j(t) \quad \forall j \in \A
\label{eq:queue}
\end{equation}
Note that if $j \notin \A_r$ then $\djt = 0$. Conversely, if $j \in \A_r$ then $j$ has no predecessor links, thus $\sum_{i \in \A} \pijt = 0$. Although AV-lanes and other lanes have identical queue evolution equations, the information available is more accurate for AV-lanes, and this is exploited in the calculation of the network control policy.\\
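A single deterministic step of the lane-queue evolution equation \eqref{eq:queue}, with the random variables replaced by their means, can be sketched as follows (the lane labels and numbers are illustrative):

```python
# Sketch: one deterministic step of the lane-queue evolution equation
# x_j(t + dt) = x_j(t) - Y_j(t) + sum_i p_ij(t) Y_i(t) + D_j(t).
# Random variables are replaced by their means; data are illustrative.

def queue_step(x, y, p, d):
    """x, y, d: dicts lane -> queue length, serviced vehicles, external demand.
    p: dict (i, j) -> turning proportion. Returns the next queue vector."""
    lanes = x.keys()
    return {
        j: x[j] - y.get(j, 0.0)
           + sum(p.get((i, j), 0.0) * y.get(i, 0.0) for i in lanes)
           + d.get(j, 0.0)
        for j in lanes
    }

x = {"r": 4.0, "i": 6.0, "s": 0.0}      # source, internal and sink lane queues
y = {"r": 2.0, "i": 3.0, "s": 0.0}      # vehicles serviced this period
p = {("r", "i"): 1.0, ("i", "s"): 1.0}  # fixed turning proportions
d = {"r": 1.5}                          # external arrivals on the source lane
x_next = queue_step(x, y, p, d)         # e.g. x_next["i"] = 6 - 3 + 2 = 5
```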
In related works, the lane service rate vector $\yt$ is commonly calculated as the minimum between the supply (lane or movement service capacity) and the demand (lane or movement queue length) \citep{varaiya2013max,le2015decentralized}. However, this modeling approach does not always capture the interdependency among the activation of possibly conflicting movements, lane or movement service capacity and queue length. Precisely, previous efforts have assumed that a set of activation matrices is provided for each intersection and that lane or movement service capacities can be pre-processed accordingly. This overlooks the impact of queue length on intersection capacity. In contrast, our proposed integrated approach aims to accurately estimate the expected number of serviced vehicles at each time period by leveraging the available lane-queue length information (note that this information is also assumed available in the aforementioned papers).
\section{Traffic Control with Green and Blue Phases}\label{phases}
In this section, we present two intersection-based formulations to coordinate traffic at intersections during green (Section \ref{green}) and blue (Section \ref{blue}) phases, respectively. We then show that throughput-optimal network-wide traffic coordination can be decentralized by implementing a max-pressure-based policy that activates the highest pressure phase---among green and blue phases---at each intersection.
\subsection{Green Phase}\label{green}
For green phases, we propose a MILP approach that aims to identify the optimal activation matrix $\bt$ and service matrix $\at$ while accounting for movement capacity loss due to potential conflicting movements and known lane-queues. Specifically, we estimate the expected number of serviced vehicles (or lane service rates) for each lane in the network, where this value is endogenous to the selected activation matrix.
Let $\A_l^n \subset \A_l$ (respectively, $\M_l^n \subset \M_l$) be the set of LV-lanes (respectively, LV-movements) of intersection $n \in \N$. We make the following design assumptions:
\begin{enumerate}
\item The set of LV-movements $\M_l^n$ can be partitioned into two sets: priority $\P^n$ and yield $\Y^n$ movements, \emph{i.e.} $\M_l^n = \P^n \cup \Y^n$ and $\P^n \cap \Y^n = \emptyset$.
\item Two conflicting priority movements cannot be selected simultaneously.
\item Two conflicting yield movements cannot be selected simultaneously.
\item If selected, a priority movement $(i,j) \in \P^n$ has a known full movement service rate $\usij$.
\item If selected, a yield movement can have partial movement capacity.
\end{enumerate}
Assumptions 2 and 3 are motivated by the fact that at existing traffic intersections there needs to be a consensus between movements posing a safety risk: this consensus is traditionally resolved by ``right-of-way'' rules, which imply that one movement must have priority over the other. For instance, two through movements cannot be activated simultaneously unless they are parallel. In our numerical experiments we model through and right turns as priority movements whereas left turns are categorized as yield movements.
To implement the intersection design assumptions listed above, we introduce two binary matrices that can be pre-processed for all intersections in the network. Formally, let $\C_{ij}$ be the set of potentially conflicting movements with movement $(i,j) \in \M_l^n$. This set can be defined by examining the geometry of the intersection containing movement $(i,j)$ and identifying conflict points with other movements---an illustration is provided in Section \ref{example}. Given conflict sets $\C_{ij}$ for each movement, we can determine the values $\cijij$ for each pair of movements as follows:
\begin{equation}
\cijij = \begin{cases}
1 \text{ if } \C_{ij} \cap \C_{i'j'} \neq \emptyset \\
0 \text{ otherwise}
\end{cases}\quad \forall (i,j), (i',j') \in \M_l^n
\label{eq:conflicts}
\end{equation}
Let $g_{ij}^{i'j'}$ be a binary parameter indicating forbidden simultaneous movements, formally defined as:
\begin{equation}
\gijij = \begin{cases}
1 \text{ if } (i,j), (i',j') \in \P \text{ and } \cijij = 1 \\
1 \text{ if } (i,j), (i',j') \in \Y \text{ and } \cijij = 1 \\
0 \text{ otherwise}
\end{cases}\quad \forall (i,j), (i',j') \in \M_l^n
\label{eq:forbidden}
\end{equation}
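Both binary matrices can be pre-processed offline; the following Python sketch illustrates the computation, assuming each movement carries the set of intersection conflict points it crosses (the intersection geometry, movement labels and conflict points are illustrative assumptions):

```python
# Sketch: pre-processing the conflict indicators c_{ij}^{i'j'} and the forbidden-pair
# indicators g_{ij}^{i'j'}. Two movements conflict if their trajectories share a
# conflict point; a pair is forbidden if the movements conflict and belong to the
# same class (both priority or both yield). All labels are illustrative.

conflict_points = {            # movement -> conflict points along its trajectory
    ("n", "s"): {"c1", "c2"},  # through movement (priority)
    ("e", "w"): {"c2", "c3"},  # through movement (priority), crosses ("n","s") at c2
    ("s", "e"): {"c3"},        # left turn (yield), crosses ("e","w") at c3
}
priority = {("n", "s"), ("e", "w")}
yield_ = {("s", "e")}

movements = list(conflict_points)
c = {(m, m2): int(m != m2 and bool(conflict_points[m] & conflict_points[m2]))
     for m in movements for m2 in movements}

def same_class(m, m2):
    return (m in priority and m2 in priority) or (m in yield_ and m2 in yield_)

g = {(m, m2): int(c[m, m2] == 1 and same_class(m, m2))
     for m in movements for m2 in movements}
```

Here the two conflicting through movements yield a forbidden pair, whereas the yield left turn may be activated alongside the priority through movement it crosses.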
Let $\bijt \in \{0,1\}$ be the binary variable representing the activation of LV-movement $(i,j) \in \M_l^n$ in the activation matrix ($\bijt = 1$) or not ($\bijt = 0$). To forbid the simultaneous activation of two priority or two yield conflicting movements, as required by the design assumptions 2 and 3, we impose the following constraint:
\begin{equation}
\bijt + \bijpt \leq 1 \quad \forall (i,j), (i',j') \in \M_l^n : g_{ij}^{i'j'} = 1
\label{eq:forbid}
\end{equation}
We can use a supply-demand formulation to calculate the expected service rates $\yijt$ of movements $(i,j) \in \M_l^n$. Let $\aijt \in [0,1]$ be the decision variable representing the fraction of capacity allocation to LV-movement $(i,j) \in \M_l^n$. Recall that we have the relationship $\aijt \leq \bijt$. On the supply side, the expected movement capacity is $\aijt \usij$. Assuming known, exogenous turning proportions $\pij$, the demand for movement $(i,j)$ at time $t$ is upper-bounded by $\pij \xit$. However, FIFO behavior on lanes may lead to vehicle-blocking \citep{tampere2011generic,li2016effects}. To capture this FIFO behavior, let $\pit \in [0,1]$ be a variable representing the effect of FIFO behavior on vehicle movements from lane $i \in \A_l^n$. The expected service rate for movement $(i,j) \in \M_l^n$ can be calculated as:
\begin{equation}
\yijt = \min\{\aijt \usij, \pij \xit \pit\}
\label{eq:service}
\end{equation}
The values of the lane-based variables $\pit$ are adjusted based on the ratio of supply to demand for each movement. Specifically, we impose the constraints:
\begin{equation}
\pit \leq \frac{\aijt \usij}{\pij \xit} \quad \forall (i,j) \in \M_l^n
\label{eq:fifo}
\end{equation}
Recall that $\yt$ is the vector of lane service rates. Lane service rates can be defined as the sum of expected service rates over all movements from lane $i$, \emph{i.e.} $\yit = \sum_{j \in \A_l^n : (i,j) \in \M_l^n} \yijt$.\newline
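To illustrate Equations \eqref{eq:service} and \eqref{eq:fifo}, the following Python sketch computes, for a fixed capacity allocation, the largest feasible value of the FIFO factor $\pit$ (the tightest supply-to-demand ratio, capped at 1) and the resulting movement service rates; the queue length, capacities and turning proportions are illustrative:

```python
# Sketch: expected movement service rates with the lane FIFO factor pi_i.
# For a fixed allocation a_ij, the largest feasible pi_i under constraint
# pi_i <= a_ij * s_ij / (p_ij * x_i) is min over movements, capped at 1, and
# y_ij = min(a_ij * s_ij, p_ij * x_i * pi_i). Numbers are illustrative.

def lane_service(x_i, moves):
    """moves: list of (a_ij, s_ij, p_ij) for the movements out of lane i.
    Returns (pi_i, [y_ij for each movement])."""
    pi = min([1.0] + [a * s / (p * x_i) for a, s, p in moves if p * x_i > 0])
    y = [min(a * s, p * x_i * pi) for a, s, p in moves]
    return pi, y

# Lane with 10 queued vehicles: 70% go straight (full allocation, capacity 8),
# 30% turn left (a quarter of capacity 4 allocated).
pi, y = lane_service(10.0, [(1.0, 8.0, 0.7), (0.25, 4.0, 0.3)])
# The starved left turn (supply 1.0 against demand 3.0) blocks the lane: pi = 1/3.
```

This illustrates the vehicle-blocking effect: the under-served left turn reduces the throughput of the whole lane, including the through movement.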
For a priority movement $(i,j) \in \P^n$, variable $\aijt$ behaves as a binary variable, \emph{i.e.} $\aijt = 1$ implies that $(i,j)$ is selected in the activation matrix whereas $\aijt = 0$ implies that it is not. Although fractional values are permitted, there is no incentive to allocate a fraction of capacity to a priority movement. This is in compliance with design assumption 4. In turn, for a yield movement $(i,j) \in \Y^n$, the available capacity depends on the selection of conflicting movements. Note that, based on our design assumptions, a yield movement may conflict with any number of priority movements as long as these priority movements do not conflict with each other. To calculate the endogenous expected capacity of yield movements, we define $\mij \geq 0$ as the \emph{slack} of movement $(i,j) \in \M_l^n$ representing the expected available capacity for this movement if service matrix $\at$ is selected. The slack of movement $(i,j) \in \M_l^n$ is the positive part of the gap between supply and demand:
\begin{equation}
\mij \equiv \max\{\aijt \usij - \pij \xit \pit, 0\} = (\aijt \usij - \pij \xit \pit)^+
\label{eq:slack}
\end{equation}
Recall that $\C_{ij}$ is the set of potentially conflicting movements with movement $(i,j) \in \M_l^n$. In line with design assumption 5, we can define the endogenous expected capacity of yield movement $(i,j) \in \Y^n$ as:
\begin{equation}
\aijt \usij \leq \min\left\{\sum_{(i',j') \in \C_{ij}} \mijp, \usij \right\}
\label{eq:endo}
\end{equation}
Note that since $\aijt \in [0,1]$, the term $\aijt \usij$ is naturally upper-bounded by $\usij$ and the Constraint \eqref{eq:endo} can thus be linearized using the inequality: $\aijt \usij \leq \sum_{(i',j') \in \C_{ij}} \mijp$. Finally, the upper bound on $\aijt$ can be omitted since it is upper bounded by $\bijt$.\newline
Observe that binary variables are required to linearize the $\max$ function in the slack variables $\mij$. Let $\lij \in \{0,1\}$ be equal to 1 if $\aijt \usij - \pij \xit \pit \geq 0$ and 0 otherwise. Let $M_{ij} \geq 0$ be a large number such that $M_{ij} \geq \max\{\usij, \pij \xit\}$; the definitional constraint \eqref{eq:slack} can then be expressed using the set of integer-linear constraints:
\begin{subequations}
\begin{align}
(\lij - 1) M_{ij} &\leq \aijt \usij - \pij \xit \pit \\
\lij M_{ij} &\geq \aijt \usij - \pij \xit \pit \\
\mij &\leq \aijt \usij - \pij \xit \pit + (1 - \lij)M_{ij} \\
\mij &\geq \aijt \usij - \pij \xit \pit \\
\mij &\leq \lij M_{ij} \\
\mij &\geq 0 \\
\lij &\in \{0,1\}
\end{align}
\end{subequations}
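The behavior of this linearization can be checked by brute force (a Python sketch; $e$ stands for the expression $\aijt \usij - \pij \xit \pit$, and the coarse value grid is for checking purposes only):

```python
# Sketch: sanity check of the big-M linearization of the slack m = max(e, 0),
# where e = a * s - p * x * pi. For a fixed e and lambda in {0, 1}, the
# constraint set should admit m if and only if m equals the positive part of e.

def feasible(m, lam, e, M):
    """All seven linearization constraints for given m, lambda, e and big-M."""
    return ((lam - 1) * M <= e and lam * M >= e
            and m <= e + (1 - lam) * M and m >= e
            and m <= lam * M and m >= 0)

def slack_values(e, M=100.0):
    """All m on a coarse grid admitted by the linearization for some lambda."""
    grid = [k / 10.0 for k in range(0, 1001)]
    return {m for m in grid for lam in (0, 1) if feasible(m, lam, e, M)}

# e = 3.0 (supply exceeds demand): only m = 3.0 survives;
# e = -2.0 (demand exceeds supply): only m = 0.0 survives.
```

When $e > 0$ the constraints force $\lij = 1$ and pin $m$ to $e$; when $e < 0$ they force $\lij = 0$ and pin $m$ to zero, matching the definition of the slack.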
Our objective is to select a control policy that maximizes network throughput. We build on the max-pressure literature and define $\wit$, the \emph{weight} of lane $i \in \A$ at time $t$, based on current queue lengths \citep{tassiulas1992stability}:
\begin{equation}
\wit = \xit - \sum_{j \in \A : (i,j) \in \M} \pij \xjt
\label{eq:weight}
\end{equation}
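Computing the lane weights of Equation \eqref{eq:weight} is straightforward (a Python sketch with hypothetical queues and turning proportions):

```python
# Sketch: lane weights w_i(t) = x_i(t) - sum_j p_ij * x_j(t), used as pressure
# coefficients in the objective. Lane labels and numbers are illustrative.

def weights(x, p):
    """x: lane -> queue length; p: (i, j) -> mean turning proportion."""
    return {i: x[i] - sum(p.get((i, j), 0.0) * x[j] for j in x) for i in x}

x = {"up", }
x = {"up": 8.0, "down": 3.0}
p = {("up", "down"): 1.0}
w = weights(x, p)  # w["up"] = 8 - 3 = 5: serving "up" relieves more pressure
```

Intuitively, a lane's weight is large when its own queue is long and its downstream queues are short, so serving it moves vehicles toward less congested parts of the network.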
As in \citet{varaiya2013max}, we will later show that maximizing pressure locally, \emph{i.e.} at each intersection, maximizes network throughput. Hence, the objective function for the green phase activation program should maximize local pressure, \emph{i.e.} $\sum_{i \in \A_l^n} \wit \yit = \sum_{i \in \A_l^n} \wit \sum_{j \in \A_l^n : (i,j) \in \M_l^n} \yijt$. Note that since variables $\yijt$ appear in the objective function, the $\min$ function in Equation \eqref{eq:service} can be linearized using two linear constraints. Observe that if the coefficient $\wit$ of $\yijt$ is negative, there is no incentive to service any vehicle on lane $i$ and thus $\yijt = 0$ for all movements $(i,j) \in \M_l^n$. Let $\zgreen$ be the maximal local pressure that can be obtained using the green phase based on the network state $\xt$ at intersection $n \in \N$. The proposed mathematical programming formulation for identifying maximal pressure green phases is summarized below in \eqref{mod:green} and hereafter referred to as the \green.
\begin{subequations}
\begin{align}
\zgreen =\ & \max && \sum_{i \in \A_l^n} \wit \sum_{j \in \A_l^n : (i,j) \in \M_l^n} \yijt \\
& \mathrm{s.t.} && \yijt \leq \aijt \usij && \forall (i,j) \in \M_l^n \\
& && \yijt \leq \pij \xit \pit && \forall (i,j) \in \M_l^n \\
& && \pit \leq \frac{\aijt \usij}{\pij \xit} && \forall (i,j) \in \M_l^n : \pij \xit > 0 \\
& && \aijt \usij \leq \sum_{(i',j') \in \C_{ij}} \mijp && \forall (i,j) \in \Y^n \\
& && \bijt + \bijpt \leq 1 && \forall (i,j), (i',j') \in \M_l^n : g_{ij}^{i'j'} = 1 \\
& && \aijt \leq \bijt && \forall (i,j) \in \M_l^n \\
& && (\lij - 1) M_{ij} \leq \aijt \usij - \pij \xit \pit && \forall (i,j) \in \M_l^n \\
& && \lij M_{ij} \geq \aijt \usij - \pij \xit \pit && \forall (i,j) \in \M_l^n \\
& && \mij \leq \aijt \usij - \pij \xit \pit + (1 - \lij)M_{ij} && \forall (i,j) \in \M_l^n \\
& && \mij \geq \aijt \usij - \pij \xit \pit && \forall (i,j) \in \M_l^n \\
& && \mij \leq \lij M_{ij} && \forall (i,j) \in \M_l^n \\
& && \mij \geq 0 && \forall (i,j) \in \M_l^n \\
& && \lij \in \{0,1\} && \forall (i,j) \in \M_l^n \\
& && \bijt \in \{0,1\} && \forall (i,j) \in \M_l^n \\
& && \aijt \geq 0 && \forall (i,j) \in \M_l^n \\
& && \yijt \geq 0 && \forall (i,j) \in \M_l^n \\
& && 1 \geq \pit \geq 0 && \forall i \in \A_l^n
\end{align}
\label{mod:green}
\end{subequations}
The solution of \green gives an optimal service matrix at intersection $n$ for the green phase, denoted $\atng$ and an optimal (binary) activation matrix denoted $\btng$.
\subsection{Blue Phase}\label{blue}
To coordinate traffic during blue phases, we adapt a mixed-integer programming formulation from \citet{levin2017conflict} to maximize local pressure. For blue phases, lane service rates can be obtained by solving an optimization problem wherein collision avoidance constraints are imposed at all conflict points of the intersection. In contrast to \green, the blue phase model optimizes individual vehicle trajectories while ensuring traffic safety. Specifically, the blue phase model finds optimal AV speeds and departure times based on the current AV demand at each intersection.\\
Let $\Vt$ be the set of AVs in the network at time $t$ seeking to enter intersection $n \in \N$. Let $\A_a^n \subset \A_a$ (respectively, $\M_a^n \subset \M_a$) be the set of AV-lanes (respectively, AV-movements) of intersection $n \in \N$. For each intersection, the set of possible AV-movements is assumed known and intersecting AV-movements generate \emph{conflict-points}. Thus, an AV-movement $(i,j) \in \M_a^n$ can be viewed as a two-dimensional trajectory which consists of a sequence of conflict-points starting at the head node of lane $i$ denoted $i^+$ and ending at the tail node of lane $j$ denoted $j^-$. Since AVs' route choice is assumed to be known, we can map each vehicle to a trajectory. Let $\inc_v$ (respectively, $\out_v$) be the entry (respectively, exit) point of vehicle $v \in \Vt$ into the intersection. Let $\rho_v$ be the trajectory of $v$, \emph{i.e.} $\rho_v = \{\inc_v, \ldots ,\out_v\}$.\\
Recall that the current time period consists of the interval $[t, t + \dt[$. Let $t_v(c) \geq t$ be a decision variable representing the arrival time of vehicle $v$ at point $c \in \rho_v$ and let $\tau_v(c) \geq 0$ be a decision variable representing the time conflict point $c \in \rho_v$ is reserved for vehicle $v$. The values of these variables are determined by the speed and departure time assigned to $v$, as described in \citet{levin2017conflict}.\\
Let $z_v$ be a binary variable denoting if vehicle $v \in \V^n(t)$ traverses intersection $n$ during the current time period (1) or not (0). Formally,
\begin{equation}
z_v = 1 \quad \Leftrightarrow \quad t_v(\out_v) + \tau_v(\out_v) \leq t + \dt
\label{eq:bin}
\end{equation}
A relaxed form of this relationship can be modeled using integer-linear constraints as follows:
\begin{equation}
t_v(\out_v) + \tau_v(\out_v) \leq t + \dt + (1 - z_v)M_v
\label{eq:zt}
\end{equation}
where $M_v \geq 0$ represents a large number which can be set to the maximal exit time of vehicle $v \in \Vt$. Constraint \eqref{eq:zt} forces $z_v = 0$ whenever $t_v(\out_v) + \tau_v(\out_v) > t + \dt$ and leaves $z_v$ free otherwise. To ensure that $z$-variables are only activated if all predecessor vehicles in the same queue have traversed the intersection, we impose the constraints $z_{v'} \leq z_v$, for all vehicles $v, v' \in \Vt : \inc_v = \inc_{v'}, e_v < e_{v'}$. Let $\Vit = \{v \in \Vt : \inc_v = i^+\}$ be the set of vehicles seeking to travel from lane $i \in \A_a^n$ at time $t$. The number of AVs serviced on lane $i \in \A_a^n$ is $\sum_{v \in \Vit} z_v$. Similarly to the objective function of \green, the objective of the blue phase model is to maximize local pressure on AV-lanes. This can be formulated as: $\sum_{i \in \A_a^n} \wit \sum_{v \in \Vit} z_v$.
The remainder of the blue phase model is identical to that presented in \citet{levin2017conflict}. Note that movement capacity is only implicitly represented within the blue phase model. Recall that we assume a triangular fundamental diagram relationship in the intersection. This relationship is used to determine the reservation time $\tau_v(c)$ in Constraint \eqref{eq:tau}. Constraints \eqref{eq:speedbounds} and \eqref{eq:ctespeed} are used to impose lower ($\minU_v$) and upper ($\maxU_v$) bounds on the speed of vehicle $v$, and to impose that vehicles have a constant speed throughout the intersection. FIFO conditions at all shared conflict points for vehicles in the same lane queue are imposed by Constraints \eqref{eq:fifoblue}. Binary variables $\dvvp(c)$ are used to model the order of vehicles at conflict points, \emph{i.e.} $\dvvp(c) = 1$ (respectively, $\dvpv(c) = 1$) means that vehicle $v$ (respectively, $v'$) traverses conflict point $c$ before vehicle $v'$ (respectively, $v$), and disjunctive trajectory separation constraints (see \eqref{eq:dis1} and \eqref{eq:dis2} in the formulation below) are used to ensure that conflict points are reserved a sufficient amount of time to ensure traffic safety. This formulation builds on space-discretized collision avoidance formulations for air traffic control \citep{rey2015equity,rey2015subliminal}. For more details on this conflict-point formulation, we refer the reader to \citet{levin2017conflict}. Let $\zblue$ be the maximal local pressure that can be obtained using the blue phase based on the network state $\xt$ at intersection $n$. The MILP used to coordinate traffic during blue phases is summarized below in Formulation \eqref{mod:blue} and hereafter referred to as the \blue.
\begin{subequations}
\begin{align}
\zblue =\ & \max && \sum_{i \in \A_a^n} \wit \sum_{v \in \Vit} z_v \\
& \mathrm{s.t.} && t_v(\out_v) + \tau_v(\out_v) \leq t + \dt + (1 - z_v)M_v && \forall v \in \Vt \\
& && t_v(\inc_v)\geq e_v && \forall v \in \Vt \\
& && \tau_v(c) = \frac{L_v}{w} + \frac{L_v(t_v(\out_v)-t_v(\inc_v))}{d_v(\inc_v,\out_v)} && \forall v \in \Vt, \forall c\in\p_v \label{eq:tau}\\
& && \frac{d_v(\inc_v,\out_v)}{\maxU_v} \leq t_v(\out_v) - t_v(\inc_v) \leq \frac{d_v(\inc_v,\out_v)}{\minU_v} && \forall v \in \Vt \label{eq:speedbounds} \\
& && \frac{t_v(c)-t_v(\inc_v)}{d_v(\inc_v,c)} = \frac{t_v(\out_v)-t_v(\inc_v)}{d_v(\inc_v,\out_v)} && \forall v \in \Vt, \forall c\in\p_v \label{eq:ctespeed} \\
& && t_v(c) + \tau_v(c) \leq t_{v'}(c) && \forall v, v' \in \Vt : \inc_v = \inc_{v'}, e_v < e_{v'}, \nonumber \\
& && && \forall c\in\p_v\cap\p_{v'} \label{eq:fifoblue} \\
& && t_v(c)+\tau_v(c)-t_{v'}(c) \leq (1 - \dvvp(c))M_{vv'} && \forall v, v' \in \V^n(t): \inc_v\neq\inc_{v'}, \nonumber \\
& && && \forall c\in\p_v\cap\p_{v'} \label{eq:dis1}\\
& && \dvvp(c)+\dvpv(c)= 1 && \forall v, v' \in \V^n(t): \inc_v\neq\inc_{v'}, v<{v'}, \nonumber \\
& && && \forall c\in\p_v\cap\p_{v'} \label{eq:dis2}\\
& && z_{v'} \leq z_v && \forall v, v' \in \Vt : \inc_v = \inc_{v'}, e_v < e_{v'} \\
& && \dvvp(c) \in \{0,1\} && \forall v, v' \in \V^n(t): \inc_v\neq\inc_{v'}, \nonumber \\
& && && \forall c\in\p_v\cap\p_{v'} \\
& && z_v \in \{0,1\} && \forall v \in \V^n(t) \\
& && t_v(c)\geq t && \forall v \in \V^n(t), \forall c \in \p_v \\
& && \tau_v(c)\geq 0 && \forall v \in \V^n(t), \forall c \in \p_v
\end{align}
\label{mod:blue}
\end{subequations}
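To make Constraints \eqref{eq:tau} and \eqref{eq:ctespeed} concrete, the following Python sketch computes the conflict-point arrival and reservation times of a single vehicle crossing the intersection at constant speed. The function names are our own, and the numerical values (vehicle length 17.6 ft, wave speed 11 ft/s, intersection width 48 ft, free-flow speed 44 ft/s) are borrowed from the illustrative instance of Section \ref{example}.

```python
def reservation_time(L_v, w, d_total, t_in, t_out):
    # Constraint (eq:tau): how long a conflict point stays reserved for
    # vehicle v, derived from the triangular fundamental diagram with
    # congestion wave speed w and vehicle length L_v.
    return L_v / w + L_v * (t_out - t_in) / d_total

def arrival_time(t_in, t_out, d_total, d_to_c):
    # Constraint (eq:ctespeed): constant speed through the intersection,
    # so arrival at conflict point c scales linearly with distance.
    return t_in + d_to_c * (t_out - t_in) / d_total

# Hypothetical vehicle: enters at t_in = 100 s and crosses the 48 ft
# intersection at its free-flow speed of 44 ft/s.
L_v, w = 17.6, 11.0
d_total = 48.0
t_in = 100.0
t_out = t_in + d_total / 44.0
tau = reservation_time(L_v, w, d_total, t_in, t_out)      # 2.0 s
t_mid = arrival_time(t_in, t_out, d_total, 24.0)          # mid-way point
```

Note that even at free-flow speed the reservation time (2 s here) is a substantial fraction of the time period $\dt$, which is why conflict points are a binding resource in the blue phase model.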
To derive the service matrix $\atnb$ associated with the optimal solution of \blue at intersection $n$ and time $t$, we introduce movement-based vehicle sets $\Vijt = \{v \in \Vt : \inc_v = i^+, \out_v = j^-\}$ for each $(i,j) \in \M_a^n$. The service rate of movement $(i,j) \in \M_a^n$ is then calculated as the ratio of the number of vehicles serviced to the unconditional movement service rate, \emph{i.e.}:
\begin{equation}
\aijt = \frac{\sum_{v \in \Vijt}{z_v}}{\usij}
\label{eq:aijblue}
\end{equation}
The (binary) activation matrix $\btnb$ associated with the optimal solution of \blue at intersection $n$ and time $t$ can be defined based on $\atnb$ as follows:
\begin{equation}
\bijt = \begin{cases}
1 \text{ if } \aijt > 0 \\
0 \text{ otherwise}
\end{cases}
\label{eq:bijblue}
\end{equation}
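Equations \eqref{eq:aijblue} and \eqref{eq:bijblue} are straightforward post-processing of the optimal $z$-values; the sketch below illustrates them on hypothetical data (the movement keys and $z$-vectors are made up for illustration).

```python
def service_and_activation(z, s_bar):
    """Eq. (aijblue): movement service rate a_ij = serviced vehicles over
    the unconditional movement service rate s_bar_ij.
    Eq. (bijblue): binary activation b_ij = 1 iff a_ij > 0."""
    a = {m: sum(zs) / s_bar[m] for m, zs in z.items()}
    b = {m: 1 if a[m] > 0 else 0 for m in a}
    return a, b

# Hypothetical optimal z-values: two of three vehicles serviced on one
# movement, none on the other; s_bar = 4 vehicles per time period.
z = {("S-", "N+"): [1, 1, 0], ("S-", "W+"): [0]}
s_bar = {("S-", "N+"): 4, ("S-", "W+"): 4}
a, b = service_and_activation(z, s_bar)
```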
\subsection{Illustration of \green and \blue Traffic Control Formulations}\label{example}
The proposed traffic control formulation is illustrated in Figure \ref{fig:inter} for a typical traffic intersection connected to eight incoming and eight outgoing links, each of which is composed of one LV-lane and one AV-lane. A possible solution to \green is shown in Figure \ref{fig:interpy} with four priority LV-movements and two yield LV-movements. The blue phase is illustrated in Figure~\ref{fig:interblue} wherein AV-traffic is coordinated by solving \blue and reserving conflict points for each vehicle in order to maximize local pressure.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering\includegraphics[width=0.95\linewidth]{inter_default}
\caption{\label{fig:interall}}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering\includegraphics[width=0.95\linewidth]{priority_yield}
\caption{\label{fig:interpy}}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering\includegraphics[width=0.95\linewidth]{inter_blue}
\caption{\label{fig:interblue}}
\end{subfigure}
\caption{Figure~\ref{fig:interall} shows all possible LV- and AV-movements on a typical intersection. Figure~\ref{fig:interpy} depicts a possible green phase involving priority and yield LV-movements and Figure~\ref{fig:interblue} illustrates the conflict-point formulation during blue phase.\label{fig:inter}}
\end{figure}
To further illustrate the behavior of formulations \green and \blue, we examine a specific instance based on the intersection geometry illustrated in Figures~\ref{fig:interpy} and~\ref{fig:interblue}, respectively. We consider a 4-approach intersection with one LV-lane and one AV-lane per approach. Lanes are labelled based on their cardinal orientation. Incoming lanes are sub-scripted with a $-$ sign and outgoing lanes are sub-scripted with a $+$ sign. Each lane has three possible movements: right, through and left. For green phase purposes, right and through movements are assumed to be \emph{priority} ($\P$) movements whereas left movements are assumed to be \emph{yield} ($\Y$) movements.
We assume that the free-flow speed is $\maxU = 44$ ft/s and the congestion wave speed is $w = 11$ ft/s. We consider an intersection of width 48 ft and assume all vehicles to be of length 17.6 ft. We use a time period of $\Delta t = 10$ s and we assume a lost time of 2 s for green phase which represents all-red clearance time and vehicle start-up delay. Under this configuration, the unconditional movement service rate is 4 vehicles per time step. Turning proportions and movement specific information is detailed in Table \ref{tab:mov}.
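The stated value of 4 vehicles per time step can be recovered from the capacity of the triangular fundamental diagram, assuming a jam spacing equal to the vehicle length and applying the green phase's effective green time; this derivation is our reading of the parameters and is sketched below.

```python
def movement_service_rate(u_max, w, L, dt, lost_time):
    # Capacity of a triangular fundamental diagram with jam density 1/L:
    # q_max = u*w / ((u + w) * L) vehicles per second, applied over the
    # effective green time (dt - lost_time).
    q_max = u_max * w / ((u_max + w) * L)
    return q_max * (dt - lost_time)

# Parameters of the illustrative instance: 44 ft/s free-flow speed,
# 11 ft/s wave speed, 17.6 ft vehicles, 10 s period, 2 s lost time.
s_bar = movement_service_rate(u_max=44.0, w=11.0, L=17.6,
                              dt=10.0, lost_time=2.0)   # 4.0 veh per dt
```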
\begin{table}[b]
\centering
\begin{tabular}{lllll}
\toprule
& & & Unconditional Movement & Number of \\
Movement & Type & Turning Proportion ($\pij$) & Service Rate ($\usij$ in veh/$\Delta t$) & Conflicts Points ($|\rho_v|$)\\
\midrule
Right & Priority & 10\% & 4 & 2 \\
Through & Priority & 80\% & 4 & 6 \\
Left & Yield & 10\% & 4 & 6 \\
\bottomrule
\end{tabular}
\caption{Turning proportions, unconditional movement service rates and number of conflict points (blue phase only) based on movement type.}
\label{tab:mov}
\end{table}
Prior to solving model \green, it is necessary to identify the potential conflict sets $\C_{ij}$ of each movement $(i,j)$ of the intersection. In this illustration, we assume the following intersection configuration:
\begin{itemize}
\item \textbf{Right movements} are conflicting with through and left movements ending at the same lane.
\item \textbf{Through movements} are conflicting with right movements ending on the same lane, perpendicular through movements and left movements starting from a different lane.
\item \textbf{Left movements} are conflicting with right movements ending on the same lane, through movements starting from different lanes and left movements starting from perpendicular lanes.
\end{itemize}
To illustrate this intersection configuration, the conflict sets $\C_{ij}$ of movements emanating from lane $S_-$ are provided below (the conflict sets of movements emanating from other lanes are symmetrical):
\begin{itemize}
\item $\C_{S_-,E_+} = \left\{(W_-,E_+), (N_-,E_+)\right\}$
\item $\C_{S_-,N_+} = \left\{(E_-,N_+), (W_-,E_+), (E_-,W_+), (E_-,S_+), (N_-,E_+), (W_-,N_+)\right\}$
\item $\C_{S_-,W_+} = \left\{(N_-,W_+), (W_-,E_+), (N_-,S_+), (E_-,W_+), (E_-,S_+), (W_-,N_+)\right\}$
\end{itemize}
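These three rules can be encoded programmatically. The sketch below does so for a standard four-approach intersection under right-hand traffic; the movement tables and helper names are our own, and the code reproduces the conflict sets listed above for lane $S_-$.

```python
# Movement tables for a four-approach intersection (right-hand traffic);
# lane names follow the text: "-" marks incoming, "+" outgoing lanes.
RIGHT   = {"S-": "E+", "N-": "W+", "E-": "N+", "W-": "S+"}
THROUGH = {"S-": "N+", "N-": "S+", "E-": "W+", "W-": "E+"}
LEFT    = {"S-": "W+", "N-": "E+", "E-": "S+", "W-": "N+"}
PERP = {"S-": ("E-", "W-"), "N-": ("E-", "W-"),
        "E-": ("N-", "S-"), "W-": ("N-", "S-")}

def conflict_set(i, kind):
    """Build the conflict set C_ij from the three rules in the text."""
    if kind == "right":
        # Conflicts with through and left movements ending at the same lane.
        j = RIGHT[i]
        return ({(k, j) for k, out in THROUGH.items() if out == j} |
                {(k, j) for k, out in LEFT.items() if out == j})
    if kind == "through":
        # Rights ending on the same lane, perpendicular throughs,
        # and lefts starting from a different lane.
        j = THROUGH[i]
        return ({(k, RIGHT[k]) for k in RIGHT if RIGHT[k] == j} |
                {(k, THROUGH[k]) for k in PERP[i]} |
                {(k, LEFT[k]) for k in LEFT if k != i})
    if kind == "left":
        # Rights ending on the same lane, throughs from different lanes,
        # and lefts from perpendicular lanes.
        j = LEFT[i]
        return ({(k, RIGHT[k]) for k in RIGHT if RIGHT[k] == j} |
                {(k, THROUGH[k]) for k in THROUGH if k != i} |
                {(k, LEFT[k]) for k in PERP[i]})
```

For instance, `conflict_set("S-", "right")` returns the two movements listed above for $\C_{S_-,E_+}$.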
\begin{table}
\centering
\begin{tabular}{lll}
\toprule
Lane ($i$) & Demand ($x_i(t)$) & AV route choice (blue phase only)\\
\midrule
$S_-$ & 10 & $[N_+,E_+,N_+,N_+,W_+,N_+,N_+,N_+,N_+,N_+]$ \\
$W_-$ & 4 & $[E_+,E_+,S_+,W_+]$\\
$N_-$ & 2 & $[S_+,S_+]$\\
$E_-$ & 7 & $[W_+,W_+,W_+,N_+,S_+,W_+,W_+]$\\
\bottomrule
\end{tabular}
\caption{Lane-queues length and vehicle route choice (for blue phase only).}
\label{tab:demand}
\end{table}
For comparison purposes, we assume identical traffic demand for both LV and AV-lanes. We assume that there is no downstream traffic, hence pressure weights are equal to lane-demand, \emph{i.e.} $\xit = \wit$ for all incoming lanes into the intersection. Lane-demand and AV route choice information is provided in Table \ref{tab:demand}. The optimal objective function value of \green is $\zgreen = 54$ and that of \blue is $\zblue = 55$. The details of the optimal solutions of \green and \blue are summarized in Tables \ref{tab:movsol} (movement-based variables) and \ref{tab:lane} (lane service rates).
\begin{table}
\centering
\begin{tabular}{llllllll}
\toprule
& \multicolumn{5}{l}{\green} & \multicolumn{2}{l}{\blue} \\
\cmidrule(l){2-6} \cmidrule(l){7-8}
Movement $(i,j)$ & $\yijt$ & $\mij$ & $\lij$ & $\aijt$ & $\bijt$ & $\aijt$ & $\bijt$ \\
\midrule
$(S_-,E_+)$ & 0.5 & 0.2 & 1 & 0.175& 1 & 0.2 & 1 \\
$(S_-,N_+)$ & 4.0 & 0.0 & 0 & 1.0 & 1 & 0.2 & 1 \\
$(S_-,W_+)$ & 0.5 & 0.0 & 0 & 0.125& 1 & 0.0 & 0 \\
$(W_-,S_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.2 & 1 \\
$(W_-,E_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.4 & 1 \\
$(W_-,N_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.0 & 0 \\
$(N_-,E_+)$ & 0.2 & 0.0 & 0 & 0.05 & 1 & 0.0 & 0 \\
$(N_-,S_+)$ & 1.6 & 0.5 & 1 & 0.525& 1 & 0.2 & 1 \\
$(N_-,W_+)$ & 0.2 & 0.0 & 0 & 0.05 & 1 & 0.0 & 0 \\
$(E_-,N_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.0 & 0 \\
$(E_-,W_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.6 & 1 \\
$(E_-,S_+)$ & 0.0 & 0.0 & - & 0.0 & 0 & 0.0 & 0 \\
\bottomrule
\end{tabular}
\caption{Optimal value of movement-based decision variables obtained by solving \green and \blue for the illustrative instance. For \green, the optimal value of lane-based variables involving activated movements ($\bijt=1$) are $\phi_{N_-}(t)$=1.0 and $\phi_{S_-}(t)$=0.5. The symbol - means that the value of this variable is meaningless if the movement is not activated.}
\label{tab:movsol}
\end{table}
\begin{table}
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{2}{l}{\green} & \multicolumn{2}{l}{\blue} \\
\cmidrule(l){1-1} \cmidrule(l){2-3} \cmidrule(l){4-5}
Lane ($i$) & $\yit$ & $\wit \yit$ & $\yit$ & $\wit \yit$ \\
\midrule
$S_-$ & 5 & 50 & 2 & 20 \\
$W_-$ & 0 & 0 & 3 & 12 \\
$N_-$ & 2 & 4 & 1 & 2 \\
$E_-$ & 0 & 0 & 3 & 21 \\
\bottomrule
\end{tabular}
\caption{Optimal lane service rates obtained by solving \green and \blue for the illustrative instance. For green phase, lane service rates are calculated as $\yit = \sum_{j \in \A_l^n : (i,j) \in \M_l^n} \yijt$. For blue phase, lane service rates are calculated as $\yit = \sum_{v \in \Vit} z_v$.}
\label{tab:lane}
\end{table}
In green phase, all 6 movements from South and North lanes ($S_-$ and $N_-$) are activated as indicated in Table \ref{tab:movsol}. Of the activated movements, only movement $(S_-,N_+)$ has a full service rate ($\aijt = 1.0$) and is thus able to service $\usij = 4.0$ vehicles. Other movements have fractional service rates indicating that they are not operating at capacity, either due to low demand or to conflicting movements. Two priority movements, $(S_-,E_+)$ and $(N_-,S_+)$, have a non-zero slack ($\mij > 0$), which allows two left-turn movements, $(S_-,W_+)$ and $(N_-,W_+)$, to service 0.5 and 0.2 vehicles, respectively. Blocking effects due to FIFO conditions yield $\phi_{N_-}(t)$=1.0 and $\phi_{S_-}(t)$=0.5, indicating that there is no blocking on lane $N_-$ but that half of the demand on lane $S_-$ is affected. In contrast to green phase, blue phase activates at least one movement from each incoming lane to the intersection with a total of 6 movements as indicated in Table \ref{tab:movsol}. All activated movements have fractional service rates ($\aijt$) comprised between 0.2 and 0.6, which is a narrower range compared to the service rates of activated movements in green phase.
Optimal lane service rates and pressure-weighted lane service rates are reported in Table \ref{tab:lane}. These results highlight the difference between \green and \blue, with the former servicing only 2 lanes whereas the latter is capable of servicing some traffic on all 4 lanes of the intersection thanks to the conflict-point formulation, which allows multiple conflicting movements to be activated simultaneously.\\
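The reported objective values can be checked directly from the lane weights and the lane service rates of Table \ref{tab:lane}; the minimal check below (variable names are our own) recovers $\zgreen = 54$ and $\zblue = 55$.

```python
# Pressure weights equal lane demand here (no downstream traffic).
w = {"S-": 10, "W-": 4, "N-": 2, "E-": 7}
y_green = {"S-": 5, "W-": 0, "N-": 2, "E-": 0}  # green phase service rates
y_blue  = {"S-": 2, "W-": 3, "N-": 1, "E-": 3}  # blue phase service rates

def pressure(w, y):
    # Local pressure at the intersection: sum of weighted lane service rates.
    return sum(w[i] * y[i] for i in w)

z_green = pressure(w, y_green)  # 54
z_blue = pressure(w, y_blue)    # 55
```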
The MILPs \green and \blue can be solved to optimality using traditional branch-and-cut-and-bound techniques in near real-time, as shown in Section \ref{num}. Their solution provides the basis for the proposed online network traffic control policy which is introduced in the next section, along with its proof of stability.
\section{Hybrid Network Control Policy and Stability}\label{policy}
In this section, we present a new network traffic control policy for intersection control combining green (LV-lane restricted) and blue phases (AV-lane restricted) and prove that it maximizes throughput. The proposed network traffic control policy works by repeatedly solving \green and \blue at each time period $t$ based on the network state $\xt$ and combining local (intersection-level), optimal activation matrices into a network-wide activation matrix $\at$.
\subsection{Stability Region}
Let $\AM$ be the set of service matrices. A \textit{policy} is a function $\pi: \X \rightarrow \AM$ that chooses a service matrix $\at \in \AM$ for every state $\xt$. We use the concept of \emph{strong stability} to characterize the proposed stochastic queuing process \citep{leonardi2001bounds,wongpiromsarn2012distributed,zaidi2016back}:
\begin{defi}
A stochastic queue evolution process is \textit{strongly stable} under policy $\pi$ if and only if there exists $K < \infty$ such that the network state $\xt$ verifies:
\begin{equation}
\limsup_{t \rightarrow \infty} \mathbb{E}\left[\xt \right] < K
\label{eq:stability_sq}
\end{equation}
\label{defi:stability_sq}
\end{defi}
Let $\bar{t}$ be a time period index. The strong stability condition \eqref{eq:stability_sq} is equivalent to the following statement \citep{leonardi2001bounds}: there exists $K < \infty$ such that:
\begin{equation}
\limsup_{\bar{t} \rightarrow \infty} \mathbb{E}\left[\frac{1}{\bar{t}}\sum_{t=1}^{\bar{t}} \xt \right] < K
\label{eq:stability2_sq}
\end{equation}
For brevity, we hereafter refer to \emph{strong stability} simply as \emph{stability}. Recall that $\di$ is the demand rate on lane $i \in \A_r$ and let $\fii$ be the flow rate on lane $i \in \A$. Let $\A_{ro} = \A_r \cup \A_o$. We impose the following flow conservation constraints:
\begin{align}
\fii &= \di &&\quad \forall i \in \A_r \label{eq:flow1} \\
\fj &= \sum_{i \in \A_{ro}} \fii \pij &&\quad \forall j \in \A_o \label{eq:flow2}
\end{align}
We can now define the stability region of the system.
\begin{defi}
Let $\D$ be the set of demand rate vectors verifying flow conservation constraints \eqref{eq:flow1} and \eqref{eq:flow2} such that lane flow rates do not exceed the unconditional lane service rates $\usi$, \emph{i.e.}:
\begin{equation}
\D \equiv \left\{\bm{d} \in \Re_+^{|\A_r|} : (\fii \leq \usi, i \in \A_{ro}) \wedge \eqref{eq:flow1} \wedge \eqref{eq:flow2} \right\}
\label{eq:stabregion}
\end{equation}
We denote by $\R$ the interior of $\D$, \emph{i.e.} $\R = \D^\circ$, hereafter referred to as the stability region of the system.
\end{defi}
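Membership in $\D$ can be checked by propagating demand through the flow conservation constraints \eqref{eq:flow1}--\eqref{eq:flow2} and comparing each lane flow with its unconditional service rate. The sketch below does this with a fixed-point iteration on a toy instance; the helper name, the iteration scheme and the example network are our own and are not part of the formulation.

```python
def in_stability_region(d, p, s_bar, entry, n_iter=100):
    """Check d is in the interior of D: set f_i = d_i on entry lanes
    (eq:flow1), propagate f_j = sum_i f_i p_ij elsewhere (eq:flow2),
    then require f_i < s_bar_i on every lane (strict, for the interior)."""
    f = {i: (d[i] if i in entry else 0.0) for i in s_bar}
    for _ in range(n_iter):  # fixed-point iteration over the routing
        f = {j: (d[j] if j in entry else
                 sum(f[i] * p.get((i, j), 0.0) for i in s_bar))
             for j in s_bar}
    return all(f[i] < s_bar[i] for i in s_bar), f

# Toy network: one entry lane "r" splitting evenly into lanes "a" and "b".
d = {"r": 3.0}
p = {("r", "a"): 0.5, ("r", "b"): 0.5}
s_bar = {"r": 4.0, "a": 2.0, "b": 2.0}
stable, f = in_stability_region(d, p, s_bar, entry={"r"})
```

Here the flows $f = (3.0, 1.5, 1.5)$ sit strictly below the service rates, so the toy demand vector lies in the stability region.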
\subsection{Hybrid Max-pressure Network Control Policy}
The proposed hybrid max-pressure policy selects at each time period $t$ the service matrix $\at$ which maximizes the network-wide pressure among all possible green and blue phases.
\begin{defi}
The network traffic control policy $\pi^\star (\xt)$, defined as:
\begin{equation}
\pi^\star (\xt) = \argmax \left\{\sum_{n \in \N} \big(\zgreen \vee \zblue\big) : \at \in \AM\right\}
\label{eq:opt}
\end{equation}
is hereby referred to as the hybrid max-pressure network control policy for legacy and autonomous vehicles.
\end{defi}
At each intersection $n \in \N$, the hybrid max-pressure policy selects the phase (green or blue) and service matrix ($\at$) maximizing the local pressure. The service matrix is either $\atng$ if the green phase is activated or $\atnb$ if the blue phase is activated. The implementation of the hybrid max-pressure policy requires the resolution of the two MILPs \green and \blue at each intersection of the network. We next show that policy $\pi^\star$ is stabilizing for any demand rate in the stability region of the system $\R$.
\begin{theorem}
\label{theo}
The stochastic queue evolution process \eqref{eq:queue} is stable under the hybrid max-pressure policy $\pi^\star$ \eqref{eq:opt} for any demand rates vector $\bm{d} \in \R$.
\begin{proof}
To show that the stochastic queue evolution process \eqref{eq:queue} verifies the stability condition \eqref{eq:stability2_sq}, we first observe that this process is a discrete-time Markov chain (DTMC) and we will show that it satisfies the conditions of Theorem 2 in \citet{leonardi2001bounds}. Specifically, we need only show that there exists $\epsilon > 0$ such that the drift of a chosen Lyapunov function of the DTMC is upper bounded by $-\epsilon |\xt|$ plus a constant, where $|\xt| = \sum_{i \in \A} |\xit|$. Let $\xt \mapsto \left\|\xt\right\|^2 = \sum_{i \in \A} (\xit)^2$ be the chosen Lyapunov function (equivalent to $V()$ in the notation of \citet{leonardi2001bounds}). We show that the Lyapunov drift of the DTMC, $\mathbb{E}\left[\left\|\xtt\right\|^2 - \left\|\xt\right\|^2 |\ \xt\right]$, admits such a bound.
Let $\dxit = \xitt - \xit$. We have: $\left\|\xtt\right\|^2 - \left\|\xt\right\|^2$ = $\left\|\xt + \dxt\right\|^2 - \left\|\xt\right\|^2$ = $2\xt^\intercal \cdot \dxt + \left\|\dxt\right\|^2$. We next show that $\xt^\intercal \cdot \dxt$ and $\left\|\dxt\right\|^2$ are both upper-bounded.\\
Recall that $\A_{ro} = \A_r \cup \A_o$.
\begin{align*}
\xt^\intercal \cdot \dxt =& - \sum_{j \in \A_{ro}} \xjt \Yjt + \sum_{j \in \A_o} \sum_{i \in \A_{ro}} \xjt \pijt \Yit + \sum_{j \in \A_r} \xjt \djt \\
=& - \sum_{i \in \A_{ro}} \xit \Yit + \sum_{j \in \A_o} \sum_{i \in \A_{ro}} \xjt \pijt \Yit + \sum_{j \in \A_r} \xjt \djt \\
=& \sum_{i \in \A_{ro}} \Yit \left(-\xit + \sum_{j \in \A_o}\xjt \pijt \right) + \sum_{j \in \A_r} \xjt \djt
\end{align*}
Computing the conditional expected value $\mathbb{E}[\xt^\intercal \cdot \dxt |\ \xt]$:
\begin{equation}
\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] = - \sum_{i \in \A_{ro}} \mathbb{E}\left[\Yit |\ \xt\right] \wit + \sum_{j \in \A_r} \xjt \dj
\label{eq:exp}
\end{equation}
Using the flow conservation constraints \eqref{eq:flow1} and \eqref{eq:flow2}:
\begin{align*}
\sum_{j \in \A_r} \xjt \dj &= \sum_{j \in \A_r} \xjt \dj + \sum_{j \in \A_o} \xjt \fj - \sum_{j \in \A_o} \xjt \fj \\
&= \sum_{j \in \A_{ro}} \xjt \fj - \sum_{k \in \A_o} \xkt \fk \\
&= \sum_{j \in \A_{ro}} \xjt \fj - \sum_{k \in \A_o} \xkt \sum_{j \in A_{ro}} \fj \pjk \\
&= \sum_{j \in \A_{ro}} \fj \left(\xjt - \sum_{k \in \A_o} \xkt \pjk \right)\\
&= \sum_{j \in \A_{ro}} \wjt \fj
\end{align*}
Hence \eqref{eq:exp} can be re-written as:
\begin{equation}
\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] = \sum_{i \in \A_{ro}} \left(\fii - \mathbb{E}\left[\Yit |\ \xt\right] \right) \wit
\label{eq:exp2}
\end{equation}
\begin{lemma}
\label{lemma1}
If the hybrid max-pressure policy $\pi^\star$ as defined in \eqref{eq:opt} is used and the demand vector $\bm{d} \in \R$ then $\exists \epsilon > 0$ such that: $\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] \leq -\epsilon |\xt|$.
\begin{proof}
Let $\yits$ denote the optimal lane service rate if $\pi^\star$ is used at time $t$, \emph{i.e.} if $\at = \pi^\star (\xt)$, then $\mathbb{E}[\Yit |\ \xt] = \yits$. For each intersection $n$, if green phase is selected by the policy then lane service rates are given by the solution of Model \eqref{mod:green}, \emph{i.e.} $\yits = \sum_{j \in \A_l^n : (i,j) \in \M_l^n} \yijt$; otherwise, blue phase is selected and lane service rates are given by the solution of Model \eqref{mod:blue}: $\yits = \sum_{v \in \V_i^n(t)} z_v$. By definition, policy $\pi^\star$ imposes $\yijt \leq \usij$: this constraint is explicitly imposed in \green and implicitly imposed in \blue through \eqref{eq:aijblue} with $\yijt = \sum_{v \in \V_{ij}^n(t)} z_v$. Recall that $\usi = \sum_{j \in \A : (i,j) \in \M} \usij$, hence $\yits \leq \usi$ and since we assume $\bm{d} \in \R$, we can construct a feasible vector of lane service rates $\ytp$ such that $\yitp < \usi$ and:
\begin{equation}
\sum_{i \in \A_{ro}} \wit \yits \geq \sum_{i \in \A_{ro}} \wit \yitp
\label{eq:l1}
\end{equation}
The choice of the array $\ytp$ depends on the phase (green or blue) being activated at each intersection. Let $\A_G^n(t) = \{i \in \A_l^n : \zgreen \geq \zblue\}$ and $\A_B^n(t) = \{i \in \A_a^n : \zgreen < \zblue\}$ be the sets of incoming lanes to intersection $n \in \N$ if green phase and blue phase are activated at time $t$, respectively; and let $\A_G(t) = \cup_{n \in \N} \A_G^n(t)$ and $\A_B(t) = \cup_{n \in \N} \A_B^n(t)$.
If green phase is activated, then there exists $\epsilon' > 0$ such that we can choose:
\begin{equation}
\yitp \equiv
\begin{cases}
\fii + \epsilon' &\text{ if } \wit > 0 \\
0 &\text{ otherwise}
\end{cases}\quad \forall i \in \A_G(t)
\label{eq:l1green}
\end{equation}
The local (for intersection $n \in \N$) lane service rate vector defined by \eqref{eq:l1green} may not be feasible with regard to the feasible region of \blue. Instead, if blue phase is activated, we can choose:
\begin{equation}
\yitp \equiv
\begin{cases}
\left\lfloor \fii \right\rfloor + 1 &\text{ if } \wit > 0 \\
0 &\text{ otherwise}
\end{cases}\quad \forall i \in \A_B(t)
\label{eq:l1blue}
\end{equation}
Let $\Z_+$ denote the set of non-negative integers. If $\fii \in \Z_+$, then $\fii \leq \yits - 1$ since by assumption $\fii < \yits \in \Z_+$; hence, if $\wit > 0$, then $\yitp$ is an integer such that $\yitp \leq \yits$. Similarly, if $\fii \notin \Z_+$ and $\wit > 0$ then $\left\lfloor \fii \right\rfloor + 1 \leq \yits$. Hence the local lane service rate vector defined in \eqref{eq:l1blue} is a feasible solution of \blue since it corresponds to integer lane service rates equal to or smaller than the number of vehicles serviced per lane in the optimal solution, and it is always feasible to service fewer vehicles than the optimum.
Let $X^+ \equiv \max\{X,0\}$ and $X^- \equiv \max\{-X,0\}$. We use the identities $X = X^+ - X^-$ and $|X| = X^+ + X^-$ in the following. Inequality \eqref{eq:l1} then implies:
\begin{align}
\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] &= \sum_{i \in \A_{ro}} \left(\fii - \yits \right) \wit \nonumber \\
&\leq \sum_{i \in \A_{ro}} \left(\fii - \yitp \right) \wit \nonumber \\
&= \sum_{i \in \A_{ro} \cap \A_G(t)} \left(\fii - \yitp \right) \wit + \sum_{i \in \A_{ro} \cap \A_B(t)} \left(\fii - \yitp \right) \wit \nonumber \\
&= \sum_{i \in \A_{ro} \cap \A_G(t)} \left(\fii - \yitp \right) (w_i^+(t) - w_i^-(t)) + \sum_{i \in \A_{ro} \cap \A_B(t)} \left(\fii - \yitp \right) (w_i^+(t) - w_i^-(t)) \nonumber \\
&= \sum_{i \in \A_{ro} \cap \A_G(t)} -\epsilon' w_i^+(t) - \fii w_i^-(t) + \sum_{i \in \A_{ro} \cap \A_B(t)} -\left(\left\lfloor \fii \right\rfloor + 1 - \fii \right) w_i^+(t) - \fii w_i^-(t) \label{eq:last}
\end{align}
Assuming $\epsilon' < \fii$, we obtain:
\begin{equation}
\sum_{i \in \A_{ro} \cap \A_G(t)} -\epsilon' w_i^+(t) - \fii w_i^-(t) \leq \sum_{i \in \A_{ro} \cap \A_G(t)} -\epsilon' (w_i^+(t) + w_i^-(t)) = -\epsilon' \sum_{i \in \A_{ro} \cap \A_G(t)} |\wit|
\label{eq:AG}
\end{equation}
In addition, $\exists \epsilon'' > 0$ such that $\epsilon'' < \left\lfloor \fii \right\rfloor + 1 - \fii$ and $\epsilon'' < \fii$. Hence, we have:
\begin{equation}
\sum_{i \in \A_{ro} \cap \A_B(t)} -\left(\left\lfloor \fii \right\rfloor + 1 - \fii \right) w_i^+(t) - \fii w_i^-(t) \leq \sum_{i \in \A_{ro} \cap \A_B(t)} -\epsilon'' (w_i^+(t) + w_i^-(t)) = -\epsilon'' \sum_{i \in \A_{ro} \cap \A_B(t)} |\wit|
\label{eq:AB}
\end{equation}
Let $\underline{\epsilon} = \min\{\epsilon',\epsilon''\}$. Combining inequalities \eqref{eq:AG} and \eqref{eq:AB} with \eqref{eq:last} yields:
\begin{equation}
\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] \leq -\epsilon' \sum_{i \in \A_{ro} \cap \A_G(t)} |\wit| -\epsilon'' \sum_{i \in \A_{ro} \cap \A_B(t)} |\wit| \leq -\underline{\epsilon} \sum_{i \in \A_{ro}} |\wit| = -\underline{\epsilon} |\wt|
\end{equation}
The function $\xit \mapsto \wit$ is linear, hence there exists $\eta > 0$ such that $|\wt| \geq \eta |\xt|$ or equivalently $-|\wt| \leq -\eta |\xt|$ which implies $\mathbb{E}\left[\xt^\intercal \cdot \dxt |\ \xt\right] \leq -\underline{\epsilon} \eta |\xt| = -\epsilon |\xt|$.
\end{proof}
\end{lemma}
\begin{lemma}
\label{lemma2}
There exists $K' \geq 0$ such that $\mathbb{E}\left[\left\|\dxt\right\|^2 |\ \xt\right] \leq K'$.
\begin{proof}
Since $\Yit \geq 0$ and $\pijt \leq 1$, we have:
\begin{equation}
\dxjt = - \Yjt + \sum_{i \in \A} \pijt \Yit + \djt \leq \begin{cases}
\sum_{i \in \A_l : (i,j) \in \M_l} \Yit + \djt \text{ if } j \in \A_l \\
\sum_{i \in \A_a : (i,j) \in \M_a} \Yit + \djt \text{ if } j \in \A_a
\end{cases}
\label{eq:l2}
\end{equation}
Let $\bar{S}_i$ be the maximum value of the random variable $\Yit$. If $i \in \A_l$, then $\bar{S}_i$ can be determined based on the unconditional lane service capacities $\usi$; otherwise if $i \in \A_a$, then this bound can be derived from AVs' maximum speed under conflict-free traffic conditions. Let $\bar{S} = \max\{\bar{S}_i : i \in \A\}$ and let $K_j$ be the number of lanes permitted to reach lane $j$, \emph{i.e.} $K_j = |\{i \in \A_l : (i,j) \in \M_l\}|$ if $j \in \A_l$ or $K_j = |\{i \in \A_a : (i,j) \in \M_a\}|$ if $j \in \A_a$. In addition, let $\bar{D}_i$ be the maximum value of the random variable $\dit$. From \eqref{eq:l2} we get:
\begin{equation}
\dxjt \leq K_j \bar{S} + \bar{D}_j
\end{equation}
Which gives the following bound:
\begin{equation}
\left\|\dxt\right\|^2 \leq \sum_{j \in \A} (K_j \bar{S} + \bar{D}_j)^2 = K' \quad \Rightarrow \mathbb{E}\left[\left\|\dxt\right\|^2 |\ \xt\right] \leq K'
\label{eq:Z2}
\end{equation}
\end{proof}
\end{lemma}
Combining the upper bounds obtained from Lemmas \ref{lemma1} and \ref{lemma2}:
\begin{equation}
\mathbb{E}\left[\left\|\xtt\right\|^2 - \left\|\xt\right\|^2 |\ \xt\right] = \mathbb{E}\left[2\xt^\intercal \cdot \dxt + \left\|\dxt\right\|^2 |\ \xt\right] \leq -2\epsilon |\xt| + K'
\label{eq:condition3}
\end{equation}
Since $K'$ is a constant, Equation \eqref{eq:condition3} is equivalent to Condition (3) in Theorem 2 of \citet{leonardi2001bounds}, with the choice of $\xt \mapsto \left\|\xt\right\|^2$ as the Lyapunov function $V()$ and $B = 0$, which proves the theorem.
\end{proof}
\end{theorem}
A natural extension of Theorem \ref{theo} is that the pure network traffic control policies wherein only \green or \blue is used to coordinate traffic are also stable.
\begin{corollary}
\label{coro}
The pure pressure-based network traffic control policies consisting of policy $\pi^\star$ \eqref{eq:opt} with only green (respectively, blue) phases coordinated by \green (respectively, \blue) are stable for any demand rate in the stability region $\R$.
\begin{proof}
To prove that pure network traffic control policies are stable within the stability region $\R$, it suffices to observe that policy $\pi^\star$ \eqref{eq:opt} is defined based on a logical OR-condition taking the maximum local pressure among green and blue phases at each time period and intersection. Hence, Theorem \ref{theo} also applies to the pure case wherein only \green or \blue is used to coordinate traffic.
\end{proof}
\end{corollary}
Theorem \ref{theo} proves that the proposed hybrid network control policy $\pi^\star$ \eqref{eq:opt} stabilizes any demand vector in the stability region of the system $\R$ \eqref{eq:stabregion}. According to \citet{tassiulas1992stability}, and as discussed in \citet{wongpiromsarn2012distributed} and \citet{varaiya2013max} for the case of signalized traffic control, this is equivalent to throughput optimality since it shows that the stability region of policy \eqref{eq:opt} is a superset of the stability region of any other policy.
Corollary \ref{coro} establishes that pure policies based on \green or \blue traffic control models also maximize throughput. We note that stability of the pure green network traffic control case is an extension of the work of \citet{varaiya2013max}. \citet{varaiya2013max} proposed a network traffic control policy for a single class of vehicles and assumed that each movement had a dedicated queue. In addition, it was assumed that movement capacities are exogenous to the traffic signal control policy. We have relaxed this framework by requiring only knowledge of lane-queues and have extended the formulation to two classes of lanes. Further, we introduced a pressure-based formulation for green phases (\green) wherein movement capacities are calculated endogenously within the traffic signal control policy.
\subsection{Online Network Traffic Control Algorithm}
We are now ready to present our decentralized network traffic control algorithm used to implement the proposed hybrid max-pressure network control policy. The pseudo-code of the proposed policy is summarized in Algorithm \ref{algo:policy}. At each time period $t$, we calculate the optimal \green and \blue phases at each intersection of the network $n \in \N$ based on the current state of the network $\xt$. The phase with the highest local pressure is selected for each intersection.
\begin{algorithm}
\KwIn{$\G=(\N,\A)$, $\bm{d}$, $\bm{p}$, $\bar{\bm{s}}$, $t$, $\xt$}
\KwOut{$\at$}
\For{$n \in \N$}{
$\zgreen \gets$ Solve \green \eqref{mod:green} \\
$\zblue \gets$ Solve \blue \eqref{mod:blue} \\
$\atn \gets \argmax_{\atng, \atnb} \{\zgreen, \zblue\}$
}
$\at \gets [\atn]_{n\in\N}$
\caption{Hybrid max-pressure network control policy}
\label{algo:policy}
\end{algorithm}
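To make the selection step concrete, the following minimal sketch (in Python, for illustration only) mimics the per-period loop of Algorithm \ref{algo:policy}. Here solve_green and solve_blue are hypothetical stand-ins for solving the \green and \blue MILPs at an intersection (done with CPLEX in our experiments), each returning the optimal local pressure.

```python
from typing import Callable, Dict, List

def hybrid_control_step(
    intersections: List[str],
    solve_green: Callable[[str], float],  # optimal pressure of the green MILP at n
    solve_blue: Callable[[str], float],   # optimal pressure of the blue MILP at n
) -> Dict[str, str]:
    """One time period of the decentralized hybrid policy: at each
    intersection, solve both local MILPs and activate the phase with
    the larger optimal pressure (ties broken in favor of green)."""
    control = {}
    for n in intersections:
        z_green = solve_green(n)
        z_blue = solve_blue(n)
        control[n] = "green" if z_green >= z_blue else "blue"
    return control
```

Since intersections are treated independently, the loop parallelizes trivially, which is the sense in which the policy remains decentralized and scalable.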
\section{Numerical Experiments}\label{num}
In this section, we conduct numerical experiments to test the proposed hybrid network control policy and report our findings.
\subsection{Implementation Framework}
We implement the proposed hybrid network control policy on artificial datasets to test computational performance and analyze the algorithm's behavior. We use a synthesized grid network of size $5 \times 5$, wherein each of the 25 nodes corresponds to a controlled intersection and each edge represents a bidirectional link between adjacent nodes. All intersections have the same topology as that depicted in Figure \ref{fig:interall}, \emph{i.e.} each node has four incoming and four outgoing links, each of which has one LV-lane and one AV-lane. Each incoming lane allows three movements: through, left turn and right turn.
We assume that vehicles' routes in the network are fixed. In each instance generated, we randomly and uniformly assign an origin and a destination to each vehicle, a route among these nodes and a departure time within the considered time horizon. Origins and destinations are chosen among nodes at the edge of the grid. The level of travel demand is determined by the \emph{departure rate} of vehicles into the network and the impact of travel demand onto network performance is assessed through a sensitivity analysis.
The time period is set to $\Delta t = 10$ s. We assume that green phases have a \emph{lost time} of 2 s to account for vehicle start-up delays and signal clearance intervals and we conduct a sensitivity analysis on this input parameter. In turn, blue phases are assumed to have zero lost time.
We use point-queues for all links in the network and we assume that vehicles take three time steps to travel from an intersection to the next intersection on their route. Vehicles travel through 3 links, each with a 10 s free flow time, between each intersection. Hence, in this configuration, it takes 30 s for a vehicle to travel between two adjacent intersections at free flow. We set the time horizon to 30 minutes and we execute Algorithm \ref{algo:policy} periodically until all vehicles have exited the network.
Vehicles' speed limit through intersections is assumed to be uniform and equal to 44 ft/s and the wave propagation speed is taken as 11 ft/s. We assume that all vehicles have a length of 17.6 ft and that lanes have a width of 12 ft. We use the triangular fundamental diagram to determine lane capacity which results in 1,440 veh/h.
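For reference, the capacity figure can be recovered from the triangular fundamental diagram, $q_{\max}=k_{j}\,vw/(v+w)$, where $k_{j}$ is the jam density (the inverse of the jam spacing). The jam spacing is an assumption in the sketch below: a value of 22 ft (the 17.6 ft vehicle length plus a buffer) reproduces the reported 1,440 veh/h.

```python
def triangular_fd_capacity(v: float, w: float, jam_spacing: float) -> float:
    """Lane capacity (veh/h) from a triangular fundamental diagram.

    v: free-flow speed (ft/s), w: backward wave speed (ft/s),
    jam_spacing: assumed jam spacing per vehicle (ft).
    Capacity is q_max = k_j * v * w / (v + w) with k_j = 1 / jam_spacing.
    """
    k_jam = 1.0 / jam_spacing          # jam density (veh/ft)
    q_max = k_jam * v * w / (v + w)    # capacity (veh/s)
    return q_max * 3600.0              # convert to veh/h

# v = 44 ft/s, w = 11 ft/s, assumed jam spacing 22 ft  ->  approximately 1,440 veh/h
```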
In all experiments, we explore the sensitivity of the proposed policy with regards to the proportion of AVs by varying the proportion of AVs from 0\% to 100\% in increments of 10\%. To benchmark the performance of the proposed hybrid network control policy (summarized in Algorithm \ref{algo:policy}), we also simulate network traffic under a traditional traffic signal configuration wherein AV-lanes and blue phases are nonexistent. Under this pure network control policy all AV-lanes are treated as LV-lanes and we model this by using single-lane links with twice the lane capacity of the LV-lanes in the tested network configuration. This benchmark is hereby referred to as \textsf{2xGreen}. Each experiment is simulated 40 times and average performance is reported.
The simulation framework is implemented in Java on a Windows machine with 8 Gb of RAM and a CPU at 3 GHz. All MILPs are solved with CPLEX 12.8 (Java API) with a time limit of 60 s and default options.
The impact of the departure rate onto network performance is explored in Section \ref{deprate}, the impact of \emph{lost time} during green phases is assessed in Section \ref{losttime} and the activation pattern of green and blue phases is discussed in Section \ref{phase}.
\subsection{Impact of departure rate}\label{deprate}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\columnwidth]{Exp3_TSTT}
\end{center}
\caption{Total system travel time for a varying proportion of AVs and vehicle departure rate. The results illustrate the trend of the mean total system travel time over 40 simulations on a $5 \times 5$ grid network with each link having one AV and one LV lane. The \textsf{2xGreen} experiment (benchmark) corresponds to the scenario where each link has the capacity of two LV-lanes.}
\label{fig:exp3tstt}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Exp3_TT7}
\caption{Average vehicle travel time based on AV proportion and vehicle departure rate. The results depict the mean and standard deviation over 40 simulations on a $5 \times 5$ grid network with each link having one AV and one LV lane. The \textsf{2xGreen} experiment (benchmark) corresponds to the scenario where each link has the capacity of two LV-lanes.}
\label{fig:exp3TT}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Exp3_runtime7}
\caption{Average local MILP runtime to solve the \blue or \green MILP to optimality against departure rate (error bars represent standard deviation), over all 25 intersections in the network and all time periods required in the simulation. The proportion of AVs is 50\% and the lost time for green phases is 2 s.}
\label{fig:runtime}
\end{figure}
The evolution of the total system travel time (TSTT) for a varying departure rate is depicted in Figure \ref{fig:exp3tstt}. For this experiment, the green phase lost time is set to 2 s and we vary the departure rate from 4,000 veh/h (lowest demand) to 10,000 veh/h (highest demand). As expected, we observe that TSTT increases super-linearly with the departure rate, \emph{i.e.} travel demand. We find that the market penetration of AVs has a significant impact on TSTT. If less than 50\% of AVs are present in the system, we find that the use of dedicated AV lanes and the blue phase is not beneficial for the network in terms of TSTT, even at high demands, as shown by the benchmark, which outperforms the hybrid network control policy for these levels of AV market penetration. Recall that the benchmark represents the TSTT when the pure, green network traffic control policy is used with the equivalent of two LV-lanes. At high demands (\emph{i.e.} 8,000 veh/h and beyond), we find that a market penetration of at least 60\% of AVs outperforms the benchmark. At the highest departure rate tested (\emph{i.e.} 10,000 veh/h) the average TSTT is reduced by 1/3 for a market penetration of 70\%. In turn, higher market penetration rates slightly reduce this gain. This can be explained by the fact that network capacity is not fully used at 100\% of AVs compared to 70\%, since in the former configuration LV-lanes remain empty throughout the experiment.\\
To further investigate the behavior of the proposed hybrid network control policy, we examine average, vehicle-class travel times relative to the benchmark configuration. Figure \ref{fig:exp3TT} shows the average vehicle travel time for AVs (blue series), LVs (green series) and overall (orange series) in the network based on AV proportion and departure rate. The benchmark is shown as a dashed flat line in red. Three main trends can be identified: first, we find that increasing the departure rate mainly impacts the mean relative travel time of LVs with regards to the benchmark. Second, the proportion of AVs minimizing the mean vehicle travel time in the network is relatively robust to the travel demand and is found to be around 70\% of AVs. This suggests that at lower or higher market penetration rates, network capacity is not saturated and AV- or LV-lanes are under-utilized, respectively. This insight can help in identifying the optimal AV market penetration rate to deploy AV lanes and blue phases. Third, for high levels of travel demand, a sufficiently high proportion of AVs improves on the benchmark, \emph{i.e.} the average vehicle travel time (over both LVs and AVs) obtained using the hybrid network control policy is lower than the average vehicle travel time obtained with the pure, green network control policy.
For a departure rate of 4,000 veh/h, LVs' and AVs' average travel times remain similar and the hybrid network control policy performs similarly to the benchmark. Increasing the departure rate from 5,000 to 7,000 veh/h, we observe congestion effects impacting LVs' average travel time, while AVs' average travel time remains only slightly penalized. Further increasing the departure rate to 8,000 veh/h and beyond yields a new pattern: for a proportion of AVs greater than or equal to 60\%, the aggregate average vehicle travel time improves on the benchmark, although LVs' average travel time remains more penalized than that of AVs. At a departure rate of 10,000 veh/h, we find that both LVs' and AVs' average travel times are lower than the average travel time in the benchmark configuration, and considerable travel time reductions are achieved from 60\% of AVs and beyond. Specifically, at this departure rate and with a market penetration of AVs of 70\%, we find that the average vehicle travel time is reduced by approximately 1/3 compared to the benchmark. We also observe that the mean relative vehicle (LVs and AVs) travel time exhibits a convex-shaped profile with a minimum at 70\% AV market penetration. This highlights that for high departure rates, high AV market penetration rates (80\% and above) yield an imbalanced usage of network capacity due to the low demand of LVs. \\
The computational performance of the proposed hybrid network control policy is illustrated in Figure \ref{fig:runtime}, which depicts the average runtime of MILPs \blue and \green over all intersections and all simulations against the departure rate. The results are reported for a market penetration corresponding to 50\% of AVs. Computational runtime increases linearly with departure rate for the \green MILP. The \blue MILP computational performance profile exhibits a super-linear growth with departure rate. This is expected since the \blue MILP models each vehicle trajectory whereas the \green MILP uses aggregate flow variables to model vehicle movements. Nevertheless, all MILPs are solved in a few milliseconds. Further, the performance of the MILPs appears to be robust to vehicles' route choice and departure time, as demonstrated by the low variance of the computational runtime over intersections and simulations. In practice, since the proposed hybrid network control policy is decentralized, the system is easily scalable to arbitrary-size networks.
\subsection{Impact of green phase lost time}\label{losttime}
We next explore the impact of green phase lost time. For this sensitivity analysis we set the departure rate to 7,000 veh/h and compare the baseline configuration (lost time of 2 s) with the cases of null (0 s) and doubled (4 s) lost time. The impact of green phase lost time on TSTT is illustrated in Figure \ref{fig:exp1tstt}. We observe that green phase lost time has a super-linear effect on TSTT. For a high level of AV market penetration, \emph{i.e.} with 80\% of AVs or more, the resulting traffic configuration is almost insensitive to lost time. We also find that the hybrid network control policy outperforms the benchmark for a sufficiently high market penetration of AVs (in this case, a proportion of 60\% of AVs).
Figure \ref{fig:exp1tt} provides a more detailed outlook on the impact of green phase lost time by examining vehicle-class average travel time. If the green phase lost time is assumed null, we find that LVs' average travel time remains comparable to AVs' average travel time, especially if both classes of vehicles are in similar proportions in the network, \emph{i.e.} for proportions of AVs between 50\% and 70\%. In turn, doubling the baseline lost time (4 s) considerably penalizes LVs' average travel time for low proportions of AVs. For an AV market penetration of more than 60\%, increasing green phase lost time is better managed with the proposed hybrid network control policy than with the pure, green network traffic control policy. This can be attributed to the traditional green phase model used in the benchmark configuration: since all lanes are LV-lanes, increasing lost time for green phases results in a considerable loss of capacity compared to the hybrid network configuration.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\columnwidth]{Exp1_TSTT}
\end{center}
\caption{Total system travel time based on AV proportion and green phase lost time for a departure rate of 7,000 veh/h. The results illustrate the trend of the mean total system travel time over 40 simulations on a $5 \times 5$ grid network with each link having one AV and one LV lane. The \textsf{2xGreen} experiment corresponds to the scenario where each link has the capacity of two LV-lanes.}
\label{fig:exp1tstt}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Exp1_LT3}
\caption{Average vehicle travel time based on AV proportion and green phase lost time for a departure rate of 7,000 veh/h. The results depict the mean and standard deviation over 40 simulations on a $5 \times 5$ grid network with each link having one AV and one LV lane. The \textsf{2xGreen} experiment corresponds to the scenario where each link has the capacity of two LV-lanes.}
\label{fig:exp1tt}
\end{figure}
\subsection{Phase activation patterns}\label{phase}
To further analyze the behavior of the proposed hybrid network control policy, we report the number of consecutive green or blue phases activated based on the market penetration of AVs in the network. For this analysis, we focus on a departure rate of 7,000 veh/h and a green phase lost time of 2 s, which exhibited the most balanced level of congestion for the generated instances, and on three AV penetration rates: 20\%, 50\% and 80\%. Figure \ref{fig:phase} depicts the distribution of average consecutive phase activations over time periods per intersection and per simulation. For all proportions of AVs observed, we find that the blue phase is systematically activated for more consecutive time periods than the green phase. Since the blue phase admits more combinations of vehicle movements than the green phase, the latter often does not have greater pressure until queue lengths are longer. For a proportion of AVs of 20\%, the distribution of the consecutive activations of green phases shows streaks of up to 10 time periods, with a large majority of streaks of fewer than 5 time periods. In turn, blue phases can be activated consecutively for up to 43 time periods. Increasing the proportion of AVs to 50\% results in more frequent streaks between 5 and 20 time periods for blue phases and more frequent short streaks for green phases. For 80\% of AVs in the network, green phases are almost always activated for a single time period only, whereas blue phases may remain active for up to 50 time periods.
\begin{figure}
\includegraphics[width=\columnwidth]{Phase}
\caption{Distribution of average consecutive phase activations over time periods per intersection and per simulation. The departure rate is 7,000 veh/h and the lost time is set to 2 s.}
\label{fig:phase}
\end{figure}
\section{Discussion and Perspectives}\label{con}
In this paper, we proposed a new, pressure-based network traffic control policy for coordinating legacy vehicles (LV) and autonomous vehicles (AV). The proposed approach assumes that LVs and AVs share the urban network infrastructure. Specifically, we hypothesized that dedicated AV-lanes are available for AVs to access traffic intersections, thus obviating the inherent limitations of first-in-first-out (FIFO) lane queues, \emph{i.e.} vehicle blocking. This design assumption is plausible if the level of penetration of AVs is sufficiently high \citep{levin2016multiclass}. It should be noted that AV-lanes need not be added infrastructure: if the proportion of AVs in the network is high enough, some LV-lanes can be restricted to AV-traffic. To coordinate traffic at network intersections, we introduced two intersection-level MILP formulations for maximizing local pressure. \green is used to coordinate traffic among LV-lanes. The proposed formulation for green phases only requires knowledge of lane queues and conflict-free movement capacities, and estimates actual movement capacity endogenously based on movement activation. In addition, since route choice is assumed unknown for LVs, MILP \green accounts for vehicle-blocking effects due to FIFO conditions on lane-queues. To manage AV-traffic at network intersections, we introduced a so-called \emph{blue} phase during which only AVs are allowed to access the intersection. The resulting \blue MILP is adapted for max-pressure control from the conflict-point formulation introduced by \citet{levin2017conflict}. We characterized the stability region of the proposed queuing system and showed that the proposed decentralized hybrid network control policy is stable, \emph{i.e.} that it maximizes network throughput, under conventional travel demand conditions. 
Further, Theorem \ref{theo} and its proof show how traffic control formulations based on vehicle- or trajectory-level variables can be incorporated into stable network traffic control policies.
We conducted numerical experiments on randomly generated artificial instances on a grid network to test the proposed policy. We explored the sensitivity of the policy with regards to the proportion of AVs in the network as well as the departure rate, which corresponds to the level of travel demand. We also investigated the impact of green phase lost time and examined consecutive phase activation patterns. We found that different patterns emerged based on the level of congestion in the network. At low congestion levels, we observe that AVs' travel time remains close to the benchmark travel time whereas LVs' travel time is increasingly penalized when the proportion of AVs exceeds that of LVs. A low proportion of AVs penalizes AVs' travel time; conversely, a high proportion of AVs penalizes LVs. At higher congestion levels, the hybrid network control policy is seen to outperform the benchmark, thus quantifying the benefits of the blue phase model for coordinating AV-traffic. We also find that travel demand mainly impacts LVs' travel time whereas AVs' travel time is considerably less penalized. Identifying critical levels of penetration for AVs can provide insight into the management of urban infrastructure. For instance, this can help in assessing at which point it becomes beneficial to restrict specific lanes to AV-traffic.
The outcomes of the experiments also reveal that fairness should be taken into consideration when allocating lanes to vehicle classes (LVs, AVs). Indeed, for high proportions of AVs in the network, LVs' travel time may be considerably penalized, despite the overall average travel time improving. Exploring the trade-offs between network throughput and fairness will be addressed in future studies. Real-time lane allocation among vehicle classes (\emph{e.g.} LV, AV) can be expected to further impact route choice and travel behavior altogether. This more general network design and control problem can be modeled using bilevel or simulation-based optimization wherein users' departure time and route choice can be accounted for based on traffic equilibrium theory \citep{le2017utility}. Given that our study is focused on traffic control and assumes fixed route choice, we leave this investigation for future research.
\section{INTRODUCTION}
Quantum phase transition (QPT) \cite{sachdev_2011}, as a fundamental
phenomenon in quantum physics, is characterized by the sudden change
of the ground states of quantum systems, induced by the change of
system parameters. In general, for a phase transition to occur, a quantum
system needs to reach the thermodynamic limit, i.e., the number of components of the
system should tend to infinity~\cite{HEPP1973360,PhysRevA.7.831,PhysRevLett.90.044101,PhysRevE.67.066203,PhysRevB.81.121105_2010}.
Since the infinite-component systems are composed of numerous degrees
of freedom, it will take a long time to prepare the initial state
of the systems, and the systems are easily affected by their environments.
Consequently, an interesting question is whether a quantum phase transition
can take place in a finite-component system. It has recently been
shown that quantum phase transition can take place in simple systems~\cite{PhysRevA.54.R4657,Bishop_2001,PhysRevB.69.113203,PhysRevA.70.022303,PhysRevA.81.042311,PhysRevA.82.025802,PhysRevA.82.053841,PhysRevB.81.121105_2010,PhysRevA.87.013826,PhysRevLett.115.180404,PhysRevA.92.053823_2015,PhysRevLett.117.123602,PhysRevA.94.023835,PhysRevLett.119.220601,PhysRevA.95.013819,PhysRevA.95.043823}.
An advantage of this kind of system is that it has fewer
degrees of freedom, and hence the previously mentioned difficulties
of infinite-component systems can be alleviated~\cite{PhysRevLett.124.120504}.
The quantum Rabi model (QRM), as a typical finite-component quantum system,
describes the interaction between a single two-level atom (qubit)
and a single bosonic mode. As one of the most fundamental models in
quantum optics, the QRM has attracted much attention from the communities
of quantum physics, quantum information, and especially ultrastrong
couplings~\cite{NatureREVPHY2019,RevModPhys.91.025005,NaturePhysics6772}.
In the QRM, it has been shown that quantum criticality exists in a
limiting case, in which the ratio $\eta=\omega_{0}/\omega_{c}$ of the
qubit frequency $\omega_{0}$ to the bosonic-mode frequency $\omega_{c}$
tends to infinity. It has also been recognized that the quantum critical
position of the QRM depends on the frequency of the bosonic mode,
and that the coupling strength needs to enter the ultrastrong-coupling
regime~\cite{NatureREVPHY2019,RevModPhys.91.025005,NaturePhysics6772}.
Currently, with the improvement of experimental conditions,
ultrastrong and even deep-strong couplings have been realized
in various physical systems, such as superconducting quantum circuits~\cite{NaturePhysics6772,PhysRevLett.105.237001,PhysRevLett.105.060503,Forn2017Ultrastrong,2016Superconducting}
and semiconductor quantum wells~\cite{PhysRevB.79.201303,Nature7235178,PhysRevLett.105.196402}.
In the ultrastrong-coupling regime, the coupling strength is comparable
to the frequencies of the bosonic mode and the two-level atom. Furthermore,
the quantum criticality in finite-component quantum systems has been
expected and experimentally demonstrated in trapped-ion systems~\cite{Nat.Commun.2.377_2011,PhysRevLett.118.073001_2017,PhysRevX.8.021027_2018,PhysRevA.97.042317_2018,PhysRevA.100.022513_2019}.
All these advances motivate the experimental studies of quantum criticality
in various finite-component quantum systems, and hence how to observe
quantum criticality in realistic finite-component quantum systems
becomes an interesting task.
In this paper, we propose to show the dynamic sensitivity caused by
quantum criticality in the QRM by introducing an auxiliary atom coupled
to the cavity field of the QRM. Here, the auxiliary atom plays two
important roles in this system. The first role is a trigger, which
is used to excite the quantum critical dynamics of the system. The second
role is a sensor to detect the quantum critical behavior of the QRM.
Here, the decoherence of the auxiliary atom can reflect the dynamic
sensitivity of the QRM, and we use the Loschmidt echo (LE) to measure
the decoherence of the auxiliary atom~\cite{PhysRevLett.96.140604,PhysRevA.80.063829}.
In the short-time limit, the LE can be simplified to an exponential
function of the photon number variance in the ground state of the
QRM. As a result, we calculate the ground state of the QRM in both the
normal and superradiance phases, and compare the analytical ground
state in the infinite $\eta$ case with the numerical ground state
in the finite $\eta$ case~\cite{PhysRevLett.115.180404}. We also
analyze the LE of the auxiliary atom and find a dynamic sensitivity
around the critical point of the QRM. This feature provides a signature
to characterize the quantum criticality in this system.
The rest of this paper is organized as follows. In Sec.~\ref{section2},
we introduce the QRM and construct conditional Rabi interactions by
introducing an auxiliary two-level atom far-off-resonantly coupled
to the cavity field in the QRM. In Sec.~\ref{section3}, we present
the analytical result for the LE of the auxiliary atom. We also calculate
the ground state of the critical QRM when it works in both the normal
and the superradiance phases at both finite and infinite $\eta$.
In Sec.~\ref{section4}, we exhibit the dependence of the LE on the
system parameters and analyze the dynamic sensitivity of the QRM.
Finally, a brief summary of this paper is presented in Sec.~\ref{section5}.
\section{MODEL AND HAMILTONIAN}
\label{section2}
We consider the quantum Rabi model, which is composed of a single-mode
cavity field coupled to a two-level atom. The Hamiltonian of the QRM
reads ($\hbar=1$)~\cite{NatureREVPHY2019,RevModPhys.91.025005}
\begin{equation}
\hat{H}_{\mathrm{Rabi}}=\omega_{c}\hat{a}^{\dagger}\hat{a}+\dfrac{\omega_{0}}{2}\hat{\sigma}_{z}-g\hat{\sigma}_{x}(\hat{a}+\hat{a}^{\dagger}),\label{eq:1}
\end{equation}
where $\hat{a}\ (\hat{a}^{\dagger})$ is the annihilation (creation)
operator of the cavity field with resonance frequency $\omega_{c}$.
The two-level atom has the ground state $\vert g\rangle$ and excited
state $\vert e\rangle$ with transition frequency $\omega_{0}$, and
it is described by the Pauli operators $\hat{\sigma}_{x}\equiv\vert e\rangle\langle g\vert+\vert g\rangle\langle e\vert$,
$\hat{\sigma}_{y}\equiv i(\vert g\rangle\langle e\vert-\vert e\rangle\langle g\vert)$,
and $\hat{\sigma}_{z}\equiv\vert e\rangle\langle e\vert-\vert g\rangle\langle g\vert$.
The parameter $g$ denotes the coupling strength between the cavity
field and the atom. In the QRM, the parity operator $\hat{\Pi}=\exp\{i\pi[\hat{a}^{\dagger}\hat{a}+(1+\hat{\sigma}_{z})/2]\}$
is a conserved quantity owing to the commutation relation $[\hat{\Pi},\hat{H}_{\mathrm{Rabi}}]=0$;
hence the Hilbert space of the QRM can be divided into two subspaces
with odd and even parities. It is known that the QRM has a $Z_{2}$
parity symmetry and it is integrable~\cite{PhysRevLett.105.263603_2010,PhysRevLett.107.100401,PhysRevA.87.023835_2013}.
The analytic eigensystem is determined by a transcendental equation
given in Refs.~\cite{PhysRevLett.107.100401,PhysRevA.86.023822,PhysRevX.4.021046,PhysRevA.90.063839}.
Due to the lack of a closed-form solution, several methods for approximately
solving the QRM have also been proposed~\cite{PhysRevB.72.195410,PhysRevLett.99.173601,PhysRevA.83.065802,PhysRevA.86.015803,Zhong_2013,PhysRevA.92.053823_2015,Liu_2015,PhysRevA.91.053834,PhysRevA.94.063824}.
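The $Z_{2}$ parity symmetry can also be checked directly by exact numerics in a truncated Fock basis. The following sketch (with illustrative parameters, not tied to any experiment) verifies that the parity operator commutes with the Rabi Hamiltonian:

```python
import numpy as np

n_max = 30                      # Fock-space truncation (illustrative)
omega_c, omega_0, g = 1.0, 1.5, 0.7

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)      # annihilation operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2, If = np.eye(2), np.eye(n_max)

# H_Rabi = w_c a^dag a + (w_0/2) sigma_z - g sigma_x (a + a^dag)
H = (omega_c * np.kron(I2, a.T @ a)
     + 0.5 * omega_0 * np.kron(sz, If)
     - g * np.kron(sx, a + a.T))

# Parity Pi = exp{i pi [a^dag a + (1 + sigma_z)/2]}: on the atom this is
# diag(-1, 1) in the (e, g) basis, on the field it is diag((-1)^n)
Pi = np.kron(np.diag([-1.0, 1.0]), np.diag((-1.0) ** np.arange(n_max)))

print(np.linalg.norm(Pi @ H - H @ Pi))   # vanishes: [Pi, H_Rabi] = 0
```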
It has been found that a QPT takes place in the QRM at the critical
point $g=\sqrt{\omega_{c}\omega_{0}}/2$~\cite{PhysRevLett.115.180404}.
This feature motivates us to detect the dynamic sensitivity of quantum
criticality in the QRM by introducing an auxiliary atom $S$ coupled
to the cavity field of the QRM. The auxiliary atom and its interaction
with the cavity field are described by the Hamiltonian
\begin{equation}
\hat{H}_{I}=\dfrac{\omega_{s}}{2}\hat{\sigma}_{z}^{(s)}-g_{s}(\hat{a}^{\dagger}\hat{\sigma}_{-}^{(s)}+\hat{\sigma}_{+}^{(s)}\hat{a}),\label{eq:2}
\end{equation}
where $\hat{\sigma}_{z}^{(s)}=\vert e\rangle_{\!s\,s\!}\langle e\vert-\vert g\rangle_{\!s\,s\!}\langle g\vert$
is the $z$-direction Pauli operator, $\hat{\sigma}_{+}^{(s)}=\vert e\rangle_{\!s\,s\!}\langle g\vert$
and $\hat{\sigma}_{-}^{(s)}=\vert g\rangle_{\!s\,s\!}\langle e\vert$
are the raising and lowering operators of the auxiliary atom, respectively.
$\omega_{s}$ is the transition frequency between the ground state
$\vert g\rangle_{\!s}$ and excited states $\vert e\rangle_{\!s}$
of the auxiliary atom. $g_{s}$ is the coupling strength between the
cavity field and the auxiliary atom. Note that here we consider the
case where the interaction between the auxiliary atom and the cavity
field works in the Jaynes-Cummings (JC) coupling regime~\cite{1443594}
and then the rotating-wave approximation has been made in Hamiltonian~(\ref{eq:2}).
We assume that the auxiliary two-level atom $S$ is far-off-resonantly
coupled with the single-mode cavity field, namely the detuning $\Delta_{s}\equiv\omega_{s}-\omega_{c}$
is much larger than the coupling strength $g_{s}\sqrt{n}$ with $n$
being the involved photon number. In the large-detuning regime, the
interaction between the auxiliary atom $S$ and the cavity field is
described by the dispersive JC model~\cite{Wolfgang_2001}. Therefore,
the Hamiltonian of the whole system including the QRM and the auxiliary
atom reads
\begin{align}
\hat{H}_{\mathrm{eff}}= & \ \omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{0}}{2}\hat{\sigma}_{z}-g\hat{\sigma}_{x}(\hat{a}+\hat{a}^{\dagger})\nonumber \\
& +\frac{1}{2}\omega_{s}\hat{\sigma}_{z}^{(s)}+\chi\hat{\sigma}_{z}^{(s)}\hat{a}^{\dagger}\hat{a}+\chi\hat{\sigma}_{+}^{(s)}\hat{\sigma}_{-}^{(s)},\label{eq:3}
\end{align}
where $\chi\equiv g_{s}^{2}/\Delta_{s}$ is the dispersive JC coupling
strength between the auxiliary atom and the cavity field. The dispersive
JC coupling describes a conditional frequency shift for the cavity
field. To clearly see the dynamic sensitivity of the finite-component
system's response to the auxiliary atom, we rewrite Hamiltonian~(\ref{eq:3})
in the following form
\begin{equation}
\hat{H}_{\mathrm{eff}}=\hat{H}_{e}\otimes\vert e\rangle_{\!s\,s\!}\langle e\vert+\hat{H}_{g}\otimes\vert g\rangle_{\!s\,s\!}\langle g\vert,\label{eq:4}
\end{equation}
where
\begin{align}
\hat{H}_{e} & =\omega_{e}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{0}}{2}\hat{\sigma}_{z}-g\hat{\sigma}_{x}(\hat{a}+\hat{a}^{\dagger})+\frac{\omega_{s}}{2}+\chi,\label{eq:5}\\
\hat{H}_{g} & =\omega_{g}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{0}}{2}\hat{\sigma}_{z}-g\hat{\sigma}_{x}(\hat{a}+\hat{a}^{\dagger})-\frac{\omega_{s}}{2},\label{eq:6}
\end{align}
with the state-dependent cavity frequencies $\omega_{e}=\omega_{c}+\chi$
and $\omega_{g}=\omega_{c}-\chi$. Note that up to the constant terms,
both the two Hamiltonians in Eqs.~(\ref{eq:5}) and (\ref{eq:6})
describe the QRM with different cavity-field frequencies.
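For later use, note that up to a constant the two branch Hamiltonians in Eqs.~(\ref{eq:5}) and (\ref{eq:6}) differ only by a photon-number term:

```latex
\[
\hat{H}_{e}-\hat{H}_{g}
  =(\omega_{e}-\omega_{g})\hat{a}^{\dagger}\hat{a}+\omega_{s}+\chi
  =2\chi\,\hat{a}^{\dagger}\hat{a}+\omega_{s}+\chi.
\]
```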
\section{QUANTUM CRITICAL EFFECT }
\label{section3} In this section, we derive the relation between
the LE and the photon number variance of the cavity field. We also
calculate the expression of the photon number variance in the finite
and infinite $\eta$ cases when the QRM works in both the normal and
superradiance phases.
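The photon-number variance studied in this section can also be obtained numerically by exact diagonalization of the QRM in a truncated Fock basis. The following sketch (parameters and truncation chosen for illustration only) returns $\gamma=\langle(\hat{a}^{\dagger}\hat{a})^{2}\rangle-\langle\hat{a}^{\dagger}\hat{a}\rangle^{2}$ in the ground state:

```python
import numpy as np

def qrm_ground_variance(g, omega_c=1.0, eta=50.0, n_max=60):
    """Photon-number variance in the ground state of the quantum Rabi model,
    H = w_c a^dag a + (w_0/2) sigma_z - g sigma_x (a + a^dag), with
    w_0 = eta * w_c, by exact diagonalization in a truncated Fock basis."""
    omega_0 = eta * omega_c
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
    n_op = a.T @ a                                   # photon-number operator
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = (omega_c * np.kron(np.eye(2), n_op)
         + 0.5 * omega_0 * np.kron(sz, np.eye(n_max))
         - g * np.kron(sx, a + a.T))
    _, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]                                  # ground state
    N = np.kron(np.eye(2), n_op)
    n_mean = gs @ N @ gs
    return gs @ (N @ N) @ gs - n_mean ** 2
```

With these illustrative parameters the critical coupling is $g_{c}=\sqrt{\omega_{c}\omega_{0}}/2\approx3.54$, and the variance grows sharply when $g$ crosses this value.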
\subsection{Expression of the LE}
To study the quantum critical effect in this system, we investigate the
dynamic evolution of the system, which is governed by Hamiltonian~(\ref{eq:4}).
To this end, we assume that the QRM is initially in its ground state
$\vert G\rangle$ and the auxiliary atom is in a superposed state
$\alpha\vert g\rangle_{\!s}+\beta\vert e\rangle_{\!s}$, where $\alpha$
and $\beta$ are the superposition coefficients, satisfying the normalization
condition $\vert\alpha\vert^{2}+\vert\beta\vert^{2}=1$. Corresponding
to the auxiliary atom in states $\vert g\rangle_{\!s}$ and $\vert e\rangle_{\!s}$,
the evolution of the QRM is governed by the Hamiltonians $\hat{H}_{g}$
and $\hat{H}_{e}$, respectively. Then, the state of the total system
at time $t$ becomes
\begin{equation}
\left|\Psi(t)\right\rangle =\alpha\vert g\rangle_{\!s}\otimes\vert\Phi_{g}(t)\rangle+\beta\vert e\rangle_{\!s}\otimes\vert\Phi_{e}(t)\rangle,\label{eq:7}
\end{equation}
where $\vert\Phi_{g}(t)\rangle\equiv\mathrm{e}^{-i\hat{H}_{g}t}\vert G\rangle$
and $\vert\Phi_{e}(t)\rangle\equiv\mathrm{e}^{-i\hat{H}_{e}t}\vert G\rangle$.
The central task of this paper is to study the dynamic sensitivity
of the QRM with respect to the state of the auxiliary atom, which
could play the role of a sensor to detect the criticality of the QRM.
To show this physical mechanism, we trace over the degrees of freedom
of the QRM, and obtain the reduced density matrix of the auxiliary
atom as
\begin{equation}
\hat{\rho}_{s}(t)=\vert\alpha\vert^{2}\vert g\rangle_{\!s\,s\!}\langle g\vert+\vert\beta\vert^{2}\vert e\rangle_{\!s\,s\!}\langle e\vert+[D(t)\alpha^{\ast}\beta\vert e\rangle_{\!s\,s\!}\langle g\vert+\mathrm{H.c.}],\label{eq:8}
\end{equation}
where the decoherence factor $D(t)$ is defined by
\begin{equation}
D(t)=\langle\Phi_{g}(t)\vert\Phi_{e}(t)\rangle=\langle G\vert\mathrm{e}^{i\hat{H}_{g}t}\mathrm{e}^{-i\hat{H}_{e}t}\vert G\rangle.\label{eq:9}
\end{equation}
To understand the dynamic sensitivity in this finite-component system,
we calculate the LE of the auxiliary atom by
\begin{equation}
L(t)=\vert D(t)\vert^{2}=\vert\langle G\vert\mathrm{e}^{i\hat{H}_{g}t}\mathrm{e}^{-i\hat{H}_{e}t}\vert G\rangle\vert^{2}.\label{eq:10}
\end{equation}
In the short-time limit, the LE can be approximated as
\begin{equation}
L(t)\approx\exp(-4\gamma\chi^{2}t^{2}),\label{eq:11}
\end{equation}
where $\gamma=\langle G\vert(\hat{a}^{\dagger}\hat{a})^{2}\vert G\rangle-\langle G\vert\hat{a}^{\dagger}\hat{a}\vert G\rangle^{2}$
is the photon number variance. Note that the operator average here
is taken over the ground state of the QRM. Equation~(\ref{eq:11})
shows that the decay rate of the LE depends on $t^{2}$ and the photon
number variance $\gamma$. To obtain the LE, we need to know the ground
states of the QRM working in both the normal and the superradiance
phases.
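As a sanity check of Eq.~(\ref{eq:11}), the short-time Gaussian decay can be compared with the exact echo obtained by matrix exponentiation for a Fock-truncated QRM. The following sketch is not part of the original analysis: the cutoff $N$ and the parameter values ($\lambda$, $\eta$, $\chi$) are illustrative choices, and the constant terms in $\hat H_{g}$ and $\hat H_{e}$ are dropped since they only contribute a global phase to $D(t)$.

```python
# Numerical check of Eq. (11): L(t) = |<G| e^{i H_g t} e^{-i H_e t} |G>|^2
# should follow exp(-4*gamma*chi^2*t^2) at short times. Illustrative parameters.
import numpy as np
from scipy.linalg import eigh, expm

N = 40                                    # Fock-space cutoff (assumption)
wc, eta, chi, lam = 1.0, 50.0, 0.05, 0.5  # normal phase: lam < 1
w0 = eta * wc
g = lam * np.sqrt(w0 * wc) / 2

a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
n = a.T @ a
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def H(wcav):
    # Rabi Hamiltonian with (state-dependent) cavity frequency; constant
    # terms are dropped because they only give a global phase in D(t).
    return (wcav * np.kron(np.eye(2), n) + 0.5 * w0 * np.kron(sz, np.eye(N))
            - g * np.kron(sx, a + a.T))

G = eigh(H(wc))[1][:, 0]                  # ground state of the bare QRM
n_full = np.kron(np.eye(2), n)
gamma = G @ n_full @ n_full @ G - (G @ n_full @ G) ** 2   # photon variance

Hg, He = H(wc - chi), H(wc + chi)
def LE(t):
    return abs(G @ expm(1j * Hg * t) @ expm(-1j * He * t) @ G) ** 2

print(LE(2.0), np.exp(-4 * gamma * chi**2 * 2.0**2))
```

For these parameters the exact echo and the Gaussian approximation agree closely at short times, confirming that the decay rate is set by the ground-state photon number variance.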
The QRM undergoes a quantum phase transition from the normal phase
to the superradiance phase by increasing the coupling strength crossing
the critical point $g_{c}=\sqrt{\omega_{c}\omega_{0}}/2$. When the
QRM passes through the critical point, its ground state experiences
a drastic change. In this paper, we will exhibit some special features
around the critical point by calculating the photon number variance
$\gamma$ in these two phases. When $L(t)$ approaches zero, the QRM
will evolve into two orthogonal states $\vert\Phi_{g}(t)\rangle$
and $\vert\Phi_{e}(t)\rangle$. This feature could be used as a measurement
tool for detecting the state of the auxiliary atom.
\subsection{Photon number variance in the normal phase}
In this subsection, we calculate the photon number variance $\gamma$
in the ground state of the QRM working in the normal phase. Note that
the ground states of the QRM in the infinite and finite $\eta$ cases
have been calculated in Ref.~\cite{PhysRevLett.115.180404}. Here,
we reproduce the calculation of the ground state to keep this paper
self-contained.
\subsubsection{The infinite $\eta$ case}
We first consider the infinite-frequency limit, i.e., the ratio $\eta=\omega_{0}/\omega_{c}$
of the atomic transition frequency $\omega_{0}$ over the cavity-field
frequency $\omega_{c}$ approaches infinity. In this case, the quantum
criticality has been analytically found in QRM~\cite{PhysRevLett.115.180404}.
By introducing the unitary transformation operator $\hat{U}_{\mathrm{np}}=\exp[i(g/\omega_{0})(\hat{a}+\hat{a}^{\dagger})\hat{\sigma}_{y}]$,
the Hamiltonian~(\ref{eq:1}) can be transformed to a decoupled
form corresponding to the spin subspaces $\mathcal{H}_{e}$ and $\mathcal{H}_{g}$.
Keeping the terms up to the second order of $g/\omega_{0}$, the transformed
Hamiltonian becomes~\cite{PhysRevLett.115.180404}
\begin{align}
\hat{H}_{\mathrm{np}} & =\hat{U}_{\mathrm{np}}^{\dagger}\hat{H}_{\mathrm{Rabi}}\hat{U}_{\mathrm{np}}\nonumber \\
& \approx\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{c}\lambda^{2}}{4}(\hat{a}+\hat{a}^{\dagger})^{2}\hat{\sigma}_{z}+\frac{\omega_{0}}{2}\hat{\sigma}_{z}+\hat{\mathcal{O}}[(g/\omega_{0})^{2}],\label{eq:12}
\end{align}
where we introduce the dimensionless coupling strength $\lambda=2g/\sqrt{\omega_{0}\omega_{c}}$.
Hamiltonian~(\ref{eq:12}) can be diagonalized in terms of the squeezing
operator $\hat{S}(r_{\mathrm{np}})=\exp[r_{\mathrm{np}}(\hat{a}^{\dagger2}-\hat{a}^{2})/2]$
with $r_{\mathrm{np}}(\lambda)=-\frac{1}{4}\ln(1+\lambda^{2}\hat{\sigma}_{z})$.
The diagonalized Hamiltonian reads~\cite{PhysRevLett.115.180404}
\begin{equation}
\hat{H}_{\mathrm{np}}^{\mathrm{d}}=\hat{S}^{\dagger}(r_{\mathrm{np}})\hat{H}_{\mathrm{np}}\hat{S}(r_{\mathrm{np}})=\hat{\epsilon}_{\mathrm{np}}\hat{a}^{\dagger}\hat{a}+\hat{E}_{\mathrm{np}},\label{eq:13}
\end{equation}
where we introduce the conditional frequency $\hat{\epsilon}_{\mathrm{np}}=\omega_{c}\sqrt{1+\lambda^{2}\hat{\sigma}_{z}}$
and the spin-state dependent energy $\hat{E}_{\mathrm{np}}=(\hat{\epsilon}_{\mathrm{np}}-\omega_{c}+\omega_{0}\hat{\sigma}_{z})/2$.
By finding the minimum energy, the ground state of the diagonalized
Hamiltonian~(\ref{eq:13}) in the normal phase is $\vert0\rangle\vert g\rangle$.
To keep the Hamiltonian $\hat{H}_{\mathrm{np}}^{\mathrm{d}}$ in the
low spin subspace Hermitian, the coupling strength $g$ should
be smaller than $\sqrt{\omega_{0}\omega_{c}}/2$ such that $\lambda<\lambda_{c}=1$,
which defines the parameter space of the \textit{normal phase}. In
our model, the cavity frequency $\omega_{c}$ conditionally depends
on the states of the auxiliary atom, and hence the atom $S$ can be
used as a trigger to induce the criticality in the QRM.
Based on the above analyses, we know that the ground state of the
QRM is approximately expressed as~\cite{PhysRevLett.115.180404}
\begin{equation}
\vert\psi_{\mathrm{np}}^{G}(r_{\mathrm{np}})\rangle=\hat{U}_{\mathrm{np}}\hat{S}(r_{\mathrm{np}})\vert0\rangle\vert g\rangle,\label{eq:15}
\end{equation}
where the operators $\hat{U}_{\mathrm{np}}$ and $\hat{S}(r_{\mathrm{np}})$
have been defined before. In the ground state $\vert\psi_{\mathrm{np}}^{G}\rangle$,
the photon number variance in normal phase can be obtained as
\begin{equation}
\gamma_{\mathrm{np}}=\frac{1}{2}\sinh^{2}(2r_{\mathrm{np}})+\frac{g^{2}}{\omega_{0}^{2}}\mathrm{e}^{-2r_{\mathrm{np}}}.\label{eq:16}
\end{equation}
In terms of Eqs.~(\ref{eq:11}) and~(\ref{eq:16}), the analytical
result of the LE in the normal phase can be obtained.
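As an illustrative check (not taken from the text), the closed form~(\ref{eq:16}) with $r_{\mathrm{np}}=-\frac{1}{4}\ln(1-\lambda^{2})$ (i.e., $\hat\sigma_z\to-1$) can be compared against exact diagonalization of the truncated QRM at a large frequency ratio; the values of $N$, $\eta$, and $\lambda$ below are arbitrary test choices.

```python
# Compare the analytic normal-phase variance gamma_np of Eq. (16) with the
# exact ground-state photon number variance of the truncated QRM.
import numpy as np

N = 40                                    # Fock cutoff (assumption)
wc, eta, lam = 1.0, 1000.0, 0.6
w0 = eta * wc
g = lam * np.sqrt(w0 * wc) / 2

a = np.diag(np.sqrt(np.arange(1, N)), 1)
n = a.T @ a
H = (wc * np.kron(np.eye(2), n)
     + 0.5 * w0 * np.kron(np.diag([1.0, -1.0]), np.eye(N))
     - g * np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), a + a.T))
G = np.linalg.eigh(H)[1][:, 0]
n_full = np.kron(np.eye(2), n)
gamma_exact = G @ n_full @ n_full @ G - (G @ n_full @ G) ** 2

r_np = -0.25 * np.log(1 - lam**2)         # squeezing parameter, sigma_z -> -1
gamma_np = 0.5 * np.sinh(2 * r_np)**2 + (g / w0)**2 * np.exp(-2 * r_np)
print(gamma_exact, gamma_np)
```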
\subsubsection{The finite $\eta$ case}
The above discussions are valid in the infinite $\eta$ case. To go
beyond this limit, below we calculate the ground state of the QRM for
a large but finite $\eta$. To this end, we perform a unitary transformation
with the transformation operator
\begin{equation}
\hat{U}_{\mathrm{np}}^{\sigma}=\exp\left\{ i\left[\frac{g}{\omega_{0}}(\hat{a}+\hat{a}^{\dagger})-\frac{4g^{3}}{3\omega_{0}^{3}}(\hat{a}+\hat{a}^{\dagger})^{3}\right]\hat{\sigma}_{y}\right\} \label{eq:17}
\end{equation}
to the Hamiltonian $\hat{H}_{\mathrm{Rabi}}$~\cite{PhysRevLett.115.180404}.
Up to the fourth order of $g/\omega_{0}$, the transformed Hamiltonian
becomes~\cite{PhysRevLett.115.180404}
\begin{align}
\hat{\tilde{H}}_{\mathrm{np}}^{\sigma}= & \ (\hat{U}_{\mathrm{np}}^{\sigma})^{\dagger}\hat{H}_{\mathrm{Rabi}}\hat{U}_{\mathrm{np}}^{\sigma}\nonumber \\
= & \ \omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{g^{2}}{\omega_{0}}(\hat{a}+\hat{a}^{\dagger})^{2}\hat{\sigma}_{z}-\frac{g^{4}}{\omega_{0}^{3}}(\hat{a}+\hat{a}^{\dagger})^{4}\hat{\sigma}_{z}\nonumber \\
& +\frac{\omega_{0}}{2}\hat{\sigma}_{z}+\frac{g^{2}\omega_{c}}{\omega_{0}^{3}}+\hat{\mathcal{O}}[(g/\omega_{0})^{4}].\label{eq:18}
\end{align}
By projecting the effective Hamiltonian~(\ref{eq:18}) into the low
spin subspace $\mathcal{H}_{g}$, we obtain
\begin{align}
\hat{H}_{\mathrm{np}}^{\sigma}= & \ \langle g\vert\hat{\tilde{H}}_{\mathrm{np}}^{\sigma}\vert g\rangle\nonumber \\
= & \ \omega_{c}\hat{a}^{\dagger}\hat{a}-\frac{\omega_{c}\lambda^{2}}{4}(\hat{a}+\hat{a}^{\dagger})^{2}+\frac{\lambda^{4}\omega_{c}^{2}}{16\omega_{0}}(\hat{a}+\hat{a}^{\dagger})^{4}\nonumber \\
& -\frac{\omega_{0}}{2}+\frac{\lambda^{2}\omega_{c}^{2}}{4\omega_{0}}.\label{eq:19}
\end{align}
To know the ground state of the Hamiltonian $\hat{H}_{\mathrm{np}}^{\sigma}$,
we adopt the variational method and assume a trial wave function $\vert\Psi_{\mathrm{np}}^{G}(s_{\mathrm{np}})\rangle=\hat{S}(s_{\mathrm{np}})\vert0\rangle\vert g\rangle$,
where $\hat{S}$ is a squeezing operator, and $s_{\mathrm{np}}$ is
the undetermined variational squeezing parameter~\cite{PhysRevLett.115.180404,PhysRevA.94.063824}.
The ground-state energy can be calculated as
\begin{align}
E_{\mathrm{np}}^{G}(s_{\mathrm{np}})= & \ \omega_{c}\sinh^{2}s_{\mathrm{np}}-\frac{\omega_{c}\lambda^{2}}{4}\mathrm{e}^{2s_{\mathrm{np}}}+\frac{3\lambda^{4}\omega_{c}^{2}}{16\omega_{0}}\mathrm{e}^{4s_{\mathrm{np}}}\nonumber \\
& -\frac{\omega_{0}}{2}+\frac{\lambda^{2}\omega_{c}^{2}}{4\omega_{0}}.\label{eq:19-1}
\end{align}
Here, the second-order derivative of $E_{\mathrm{np}}^{G}(s_{\mathrm{np}})$
with respect to $s_{\mathrm{np}}$ is positive, and then the minimum
energy can be obtained by the zero point of the first-order derivative~\cite{PhysRevLett.115.180404},
namely
\begin{equation}
\frac{\mathrm{d}E_{\mathrm{np}}^{G}(s_{\mathrm{np}})}{\mathrm{d}s_{\mathrm{np}}}=\ \frac{\omega_{c}}{2\mathrm{e}^{2s_{\mathrm{np}}}}\left[\frac{3\lambda^{4}\mathrm{e}^{6s_{\mathrm{np}}}}{2\eta}+(1-\lambda^{2})\mathrm{e}^{4s_{\mathrm{np}}}-1\right]=0.\label{eq:21}
\end{equation}
By solving Eq.~(\ref{eq:21}), we obtain the only physical solution
as
\begin{equation}
s_{\mathrm{np}}=\frac{1}{2}\ln\left\{ \mathrm{Re}\left[\frac{\sqrt[3]{A}}{9\lambda^{4}}+\frac{2(\lambda^{2}-1)\eta}{9\lambda^{4}}+\frac{4(\lambda^{2}-1)^{2}\eta^{2}}{9\lambda^{4}\sqrt[3]{A}}\right]\right\} ,\label{eq:21-1}
\end{equation}
where we introduce
\begin{align}
A= & \ 9\sqrt{3}\sqrt{243\lambda^{16}\eta^{2}+(\lambda^{6}-3\lambda^{4}+3\lambda^{2}-1)16\lambda^{8}\eta^{4}}\nonumber \\
& +243\lambda^{8}\eta+(\lambda^{6}-3\lambda^{4}+3\lambda^{2}-1)8\eta^{3}.
\end{align}
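Equation~(\ref{eq:21}) is a cubic in $x=\mathrm{e}^{2s_{\mathrm{np}}}$, so instead of evaluating the closed form above one can solve it numerically and verify that $s_{\mathrm{np}}\to r_{\mathrm{np}}=-\frac{1}{4}\ln(1-\lambda^{2})$ as $\eta\to\infty$. The sketch below is an illustrative consistency check, not part of the original derivation.

```python
# Eq. (21) reduces to (3*lam^4/(2*eta)) x^3 + (1 - lam^2) x^2 - 1 = 0
# with x = exp(2*s_np); for lam < 1 it has a unique positive root.
import numpy as np

def s_np(lam, eta):
    roots = np.roots([3 * lam**4 / (2 * eta), 1 - lam**2, 0.0, -1.0])
    x = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return 0.5 * np.log(x)

lam = 0.9
r_np = -0.25 * np.log(1 - lam**2)   # infinite-eta squeezing parameter
print(s_np(lam, 1e8), s_np(lam, 1e3), r_np)
```

At finite $\eta$ the variational squeezing parameter lies below $r_{\mathrm{np}}$, since the quartic term penalizes squeezing, and it approaches $r_{\mathrm{np}}$ as $\eta\to\infty$.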
The ground state of the QRM in the finite $\eta$ case can be expressed
as
\begin{equation}
\vert\varphi_{\mathrm{np}}^{G}\rangle=\hat{U}_{\mathrm{np}}^{\sigma}\hat{S}(s_{\mathrm{np}})\vert0\rangle\vert g\rangle.
\end{equation}
Further, the average photon number in the ground state $\vert\varphi_{\mathrm{np}}^{G}\rangle$
can be calculated as
\begin{equation}
\langle\hat{a}^{\dagger}\hat{a}\rangle_{\mathrm{np}}=\sinh^{2}s_{\mathrm{np}}+\frac{g^{2}}{\omega_{0}^{2}}-\frac{8g^{4}}{\omega_{0}^{4}}\mathrm{e}^{2s_{\mathrm{np}}},\label{eq:23-1}
\end{equation}
and the photon number variance can be obtained as
\begin{equation}
\gamma'_{\mathrm{np}}\approx\frac{1}{2}\sinh^{2}(2s_{\mathrm{np}})+\frac{g^{2}}{\omega_{0}^{2}}\mathrm{e}^{-2s_{\mathrm{np}}}-\frac{8g^{4}\mathrm{e}^{4s_{\mathrm{np}}}}{\omega_{0}^{4}}.\label{eq:24-2}
\end{equation}
Then the LE can be calculated based on Eqs.~(\ref{eq:11}) and~(\ref{eq:24-2}).
\begin{figure}
\includegraphics[width=8.5cm]{Fig1} \caption{(Color online) (a) and (b) The ground-state energy and the average
photon number as functions of the frequency ratio $\eta$ in the normal
phase $(\lambda=0.99)$. Here, the exact and approximate results are
obtained based on the original Hamiltonian~(\ref{eq:1}) (black solid
line) and the approximate Hamiltonian~(\ref{eq:19}) (squares). The
variational results are obtained based on Eqs.~(\ref{eq:19-1}),~(\ref{eq:21-1}),
and~(\ref{eq:23-1}) (crosses). }
\label{Fig:1}
\end{figure}
We have used the variational method to solve the effective Hamiltonian
and obtained the photon number variance of the ground state in the
normal phase at a finite $\eta$. To check the validity of the effective
Hamiltonian and the approximate method, in Figs.~\ref{Fig:1}(a)
and~\ref{Fig:1}(b) we plot the ground-state energy and the average
photon number obtained by the variational and numerical methods, when
the QRM works in the normal phase $(\lambda=0.99)$. Meanwhile, we
present the exact result based on the original Hamiltonian~(\ref{eq:1})
for reference. In Fig.~\ref{Fig:1}(a), the ground-state energies obtained
by these three methods in the normal phase are consistent with each
other in the large $\eta$ case. As the ratio $\eta$ decreases, the
deviation between the approximate result and the exact result in the
finite $\eta$ case becomes large. However, the variational result
agrees well with the numerical result of the effective Hamiltonian.
The average photon numbers obtained with these three methods match
well in the large $\eta$ case as shown in Fig.~\ref{Fig:1}(b).
In contrast, the difference between the variational result and the
numerical results increases with the decrease of $\eta$. The reason
for this difference is that the trial wave function only preserves
the low spin state, while the excited spin state also becomes important in
the finite $\eta$ case. Nevertheless, the variational method can
still capture the main physics.
\subsection{Photon number variance in the superradiance phase}
In this subsection, we calculate the photon number variance $\gamma$
in the ground state of the QRM working in the superradiance phase.
Here, we follow some derivations of the ground state of the QRM in
the superradiance phase given in Ref.~\cite{PhysRevLett.115.180404}
to keep this paper self-contained.
\subsubsection{The infinite $\eta$ case}
Physically, when the light-matter coupling strength increases to be
larger than the critical coupling strength, the coupled system will
acquire macroscopic excitations. Then the high-order terms which contain
the average photon number cannot be ignored. In this case, the approximate
Hamiltonian~(\ref{eq:13}) will not be valid as $\lambda>\lambda_{c}$.
To achieve the effective Hamiltonian in this case, we introduce a
displacement operator $\hat{D}(\alpha)=\exp[\alpha(\hat{a}^{\dagger}-\hat{a})]$
to make a transformation upon Hamiltonian~(\ref{eq:1})~\cite{PhysRevLett.115.180404}
\begin{align}
\hat{\tilde{H}}_{\mathrm{Rabi}}(\alpha)= & \ \hat{D}^{\dagger}(\alpha)\hat{H}_{\mathrm{Rabi}}\hat{D}(\alpha)\nonumber \\
= & \ \omega_{c}(\hat{a}^{\dagger}+\alpha)(\hat{a}+\alpha)-g(\hat{a}+\hat{a}^{\dagger})\hat{\sigma}_{x}\nonumber \\
& +\dfrac{\omega_{0}}{2}\hat{\sigma}_{z}-2g\alpha\hat{\sigma}_{x}.\label{eq:22}
\end{align}
Further, we introduce new spin eigenstates $\vert\tilde{e}\rangle$
and $\vert\tilde{g}\rangle$ of the atomic Hamiltonian $\omega_{0}\hat{\sigma}_{z}/2-2g\alpha\hat{\sigma}_{x}$.
In terms of the new spin states $\vert\tilde{e}\rangle=\cos\theta\vert e\rangle+\sin\theta\vert g\rangle$
and $\vert\tilde{g}\rangle=-\sin\theta\vert e\rangle+\cos\theta\vert g\rangle$
with $\tan(2\theta)=-4g\alpha/\omega_{0}$, the Pauli operators in
the new spin space can be defined by $\hat{\tau}_{0}=\vert\tilde{e}\rangle\langle\tilde{e}\vert+\vert\tilde{g}\rangle\langle\tilde{g}\vert$,
$\hat{\tau}_{x}=\vert\tilde{e}\rangle\langle\tilde{g}\vert+\vert\tilde{g}\rangle\langle\tilde{e}\vert$,
and $\hat{\tau}_{z}=\vert\tilde{e}\rangle\langle\tilde{e}\vert-\vert\tilde{g}\rangle\langle\tilde{g}\vert$.
Then Hamiltonian~(\ref{eq:22}) can be expressed as~\cite{PhysRevLett.115.180404}
\begin{align}
\hat{\tilde{H}}_{\mathrm{Rabi}}(\alpha)= & \ \omega_{c}\hat{a}^{\dagger}\hat{a}-g(\hat{a}+\hat{a}^{\dagger})\cos(2\theta)\hat{\tau}_{x}+\frac{\tilde{\omega}_{0}}{2}\hat{\tau}_{z}+\omega_{c}\alpha^{2}\nonumber \\
& +[\omega_{c}\alpha\hat{\tau}_{0}-g\sin(2\theta)\hat{\tau}_{z}](\hat{a}+\hat{a}^{\dagger}).
\end{align}
To eliminate the block-diagonal perturbation term in the subspace
$\mathcal{H}_{\tilde{g}}$, we require $\omega_{c}\alpha+g\sin(2\theta)=0$,
which leads to the displacement parameters~\cite{PhysRevLett.115.180404}
\begin{equation}
\alpha=\pm\alpha_{\lambda}=\pm\sqrt{\omega_{0}(\lambda^{4}-1)/(4\lambda^{2}\omega_{c})}.
\end{equation}
In the infinite-frequency limit, the term $2\omega_{c}\alpha(\hat{a}+\hat{a}^{\dagger})\vert\tilde{e}\rangle\langle\tilde{e}\vert$
can be ignored in the new low spin subspace, then the reformulated
Hamiltonian becomes~\cite{PhysRevLett.115.180404}
\begin{equation}
\hat{\tilde{H}}_{\mathrm{Rabi}}(\pm\alpha_{\lambda})\approx\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\tilde{\omega}_{0}}{2}\hat{\tau}_{z}^{\pm}-\tilde{g}(\hat{a}+\hat{a}^{\dagger})\hat{\tau}_{x}^{\pm}+\omega_{c}\alpha_{\lambda}^{2},\label{eq:25}
\end{equation}
where we introduce the parameters $\tilde{\omega}_{0}=\lambda^{2}\omega_{0}$
and $\tilde{g}=\sqrt{\omega_{c}\omega_{0}}/2\lambda$. The signs “$\pm$”
in $\hat{\tau}_{x,z}^{\pm}$ denote the direction of the displacement.
Note that the two different signs of the displacement parameter $\alpha$
indicate that the ground state of the QRM is twofold degenerate~\cite{PhysRevLett.124.040404_2020}.
Since Hamiltonian~(\ref{eq:25}) has a structure similar to that of the
QRM, we can use a similar method to obtain the diagonalized Hamiltonian
by two unitary transformations as~\cite{PhysRevLett.115.180404}
\begin{align}
\hat{H}_{\mathrm{sp}}^{\mathrm{d}} & =\hat{S}^{\dagger}(r_{\mathrm{sp}})\hat{U}_{\mathrm{sp}}^{\dagger}\hat{\tilde{H}}_{\mathrm{Rabi}}\hat{U}_{\mathrm{sp}}\hat{S}(r_{\mathrm{sp}})\nonumber \\
& =\hat{\epsilon}_{\mathrm{sp}}\hat{a}^{\dagger}\hat{a}+\hat{E}_{\mathrm{sp}},\label{eq:29-1}
\end{align}
where we introduce the conditional frequency $\hat{\epsilon}_{\mathrm{sp}}=\omega_{c}\sqrt{1+\lambda^{-4}\hat{\tau}_{z}^{\pm}}$
and the new spin-state dependent energy $\hat{E}_{\mathrm{sp}}=(\hat{\epsilon}_{\mathrm{sp}}-\omega_{c}+\omega_{0}\hat{\tau}_{z}^{\pm})/2+\omega_{c}\alpha_{\lambda}^{2}$.
The unitary transformation is defined as $\hat{U}_{\mathrm{sp}}=\exp[i(\tilde{g}/\tilde{\omega}_{0})(\hat{a}+\hat{a}^{\dagger})\hat{\tau}_{y}^{\pm}]$,
and the squeezing parameter here is $r_{\mathrm{sp}}=-\frac{1}{4}\ln(1+\lambda^{-4}\hat{\tau}_{z}^{\pm})$.
The ground state of the diagonalized Hamiltonian~(\ref{eq:29-1})
in the superradiance phase is $\left|0\right\rangle \left|\tilde{g}^{\pm}\right\rangle $.
Similarly, to keep the Hamiltonian $\hat{H}_{\mathrm{sp}}^{\mathrm{d}}$
in the low spin subspace Hermitian, the coupling strength $g$
should be larger than $\sqrt{\omega_{0}\omega_{c}}/2$ such that $\lambda>\lambda_{c}=1$,
which defines the parameter space of the \textit{superradiance phase}.
The ground state of the QRM in the superradiance phase can be expressed
as~\cite{PhysRevLett.115.180404}
\begin{equation}
\vert\psi_{\mathrm{sp}}^{G}(r_{\mathrm{sp}})\rangle_{\pm}=\hat{D}(\pm\alpha_{\lambda})\hat{U}_{\mathrm{sp}}\hat{S}(r_{\mathrm{sp}})\vert0\rangle\vert\tilde{g}^{\pm}\rangle,\label{eq:28}
\end{equation}
where the transformation operators have been defined before. In the ground
state $\vert\psi_{\mathrm{sp}}^{G}\rangle_{\pm}$, the photon number
variance in the superradiance phase can be obtained as
\begin{equation}
\gamma_{\mathrm{sp}}=\frac{1}{2}\sinh^{2}(2r_{\mathrm{sp}})+\alpha_{\lambda}^{2}\mathrm{e}^{2r_{\mathrm{sp}}}+\frac{\tilde{g}^{2}}{\tilde{\omega}_{0}^{2}}\mathrm{e}^{-2r_{\mathrm{sp}}}.\label{eq:29}
\end{equation}
In terms of Eqs.~(\ref{eq:11}) and~(\ref{eq:29}), the analytical
result of the LE in the superradiance phase can be obtained.
\subsubsection{The finite $\eta$ case}
To go beyond the infinite-frequency case, below we calculate the photon
number variance $\gamma_{\mathrm{sp}}^{\prime}$ by the variational method.
To this end, we perform a unitary transformation with the transformation
operator
\begin{equation}
\hat{U}_{\mathrm{sp}}^{\sigma}=\exp\left\{ i\left[\frac{\tilde{g}}{\tilde{\omega}_{0}}(\hat{a}+\hat{a}^{\dagger})-\frac{4\tilde{g}^{3}}{3\tilde{\omega}_{0}^{3}}(\hat{a}+\hat{a}^{\dagger})^{3}\right]\hat{\tau}_{y}^{\pm}\right\}
\end{equation}
to Hamiltonian~(\ref{eq:25}) and projecting the transformed Hamiltonian
into the low spin subspace, then the effective Hamiltonian becomes~\cite{PhysRevLett.115.180404}
\begin{equation}
\hat{H}_{\mathrm{sp}}^{\sigma}=\omega_{c}\hat{a}^{\dagger}\hat{a}-\frac{\tilde{g}^{2}}{\tilde{\omega}_{0}}(\hat{a}+\hat{a}^{\dagger})^{2}+\frac{\tilde{g}^{4}}{\tilde{\omega}_{0}^{3}}(\hat{a}+\hat{a}^{\dagger})^{4}-\frac{\tilde{\omega}_{0}}{2}+\frac{\tilde{g}^{2}\omega_{c}}{\tilde{\omega}_{0}^{3}}+\omega_{c}\alpha_{\lambda}^{2}.\label{eq:31}
\end{equation}
Similar to the treatment in the normal-phase case, the trial wave
function in the superradiance phase is assumed as $\vert\Psi_{\mathrm{sp}}^{G}(s_{\mathrm{sp}})\rangle=\hat{S}(s_{\mathrm{sp}})\vert0\rangle\vert\tilde{g}^{\pm}\rangle$.
The corresponding ground state energy can be obtained as
\begin{align}
E_{\mathrm{sp}}^{G}(s_{\mathrm{sp}})= & \ \omega_{c}\sinh^{2}s_{\mathrm{sp}}-\frac{\omega_{c}}{4\lambda^{4}}\mathrm{e}^{2s_{\mathrm{sp}}}+\frac{3\omega_{c}^{2}}{16\tilde{\omega}_{0}\lambda^{8}}\mathrm{e}^{4s_{\mathrm{sp}}}\nonumber \\
& -\frac{\tilde{\omega}_{0}}{2}+\frac{\omega_{c}^{2}}{4\tilde{\omega}_{0}\lambda^{4}}+\omega_{c}\alpha_{\lambda}^{2},\label{eq:35}
\end{align}
where the parameter $s_{\mathrm{sp}}$ is determined by the zero point
of the first-order derivative,
\begin{equation}
\frac{\mathrm{d}E_{\mathrm{sp}}^{G}(s_{\mathrm{sp}})}{\mathrm{d}s_{\mathrm{sp}}}=\ \frac{\omega_{c}}{2\mathrm{e}^{2s_{\mathrm{sp}}}}\left[\frac{3\mathrm{e}^{6s_{\mathrm{sp}}}}{2\eta\lambda^{10}}+(1-\lambda^{-4})\mathrm{e}^{4s_{\mathrm{sp}}}-1\right]=0.\label{eq:33}
\end{equation}
The only physical solution of Eq.~(\ref{eq:33}) is given by
\begin{equation}
s_{\mathrm{sp}}=\frac{1}{2}\ln\left\{ \mathrm{Re}\left[\frac{\sqrt[3]{B}}{9}-\frac{2(\lambda^{4}-1)}{9\lambda^{-6}\eta^{-1}}+\frac{4(\lambda^{4}-1)^{2}}{9\sqrt[3]{B}\lambda^{-12}\eta^{-2}}\right]\right\} ,\label{eq:38}
\end{equation}
where we introduce
\begin{align}
B= & \ 9\sqrt{3}\sqrt{243\lambda^{20}\eta^{2}+(1-\lambda^{12}+3\lambda^{8}-3\lambda^{4})16\lambda^{28}\eta^{4}}\nonumber \\
& +243\lambda^{10}\eta+(-\lambda^{12}+3\lambda^{8}-3\lambda^{4}+1)8\lambda^{18}\eta^{3}.
\end{align}
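As in the normal phase, Eq.~(\ref{eq:33}) is a cubic in $x=\mathrm{e}^{2s_{\mathrm{sp}}}$ that can be solved numerically instead of evaluating the closed form above, and one can check that $s_{\mathrm{sp}}\to r_{\mathrm{sp}}=-\frac{1}{4}\ln(1-\lambda^{-4})$ as $\eta\to\infty$. The sketch below is an illustrative consistency check, not part of the original derivation.

```python
# Eq. (33) reduces to (3/(2*eta*lam^10)) x^3 + (1 - lam^-4) x^2 - 1 = 0
# with x = exp(2*s_sp); for lam > 1 it has a unique positive root.
import numpy as np

def s_sp(lam, eta):
    roots = np.roots([3 / (2 * eta * lam**10), 1 - lam**-4, 0.0, -1.0])
    x = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return 0.5 * np.log(x)

lam = 1.2
r_sp = -0.25 * np.log(1 - lam**-4)  # infinite-eta limit (tau_z -> -1)
print(s_sp(lam, 1e8), s_sp(lam, 1e3), r_sp)
```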
The ground state of the QRM in this case can be expressed as
\begin{equation}
\vert\varphi_{\mathrm{sp}}^{G}(s_{\mathrm{sp}})\rangle=\hat{D}(\pm\alpha_{\lambda})\hat{U}_{\mathrm{sp}}^{\sigma}\hat{S}(s_{\mathrm{sp}})\vert0\rangle\vert\tilde{g}^{\pm}\rangle.\label{eq:39-1}
\end{equation}
Based on the ground state~(\ref{eq:39-1}), the average photon number
in the superradiance phase can be calculated as
\begin{equation}
\langle\hat{a}^{\dagger}\hat{a}\rangle_{\mathrm{sp}}=\sinh^{2}s_{\mathrm{sp}}+\frac{\tilde{g}^{2}}{\tilde{\omega}_{0}^{2}}-\frac{8\tilde{g}^{4}}{3\tilde{\omega}_{0}^{4}}\mathrm{e}^{2s_{\mathrm{sp}}}+\alpha_{\lambda}^{2},\label{eq:39}
\end{equation}
and the photon number variance can be obtained
\begin{equation}
\gamma'_{\mathrm{sp}}\approx\frac{1}{2}\sinh^{2}(2s_{\mathrm{sp}})+\frac{\tilde{g}^{2}}{\tilde{\omega}_{0}^{2}}\mathrm{e}^{-2s_{\mathrm{sp}}}+\left(\alpha_{\lambda}^{2}-\frac{8\tilde{g}^{4}\mathrm{e}^{2s_{\mathrm{sp}}}}{3\tilde{\omega}_{0}^{4}}\right)\mathrm{e}^{2s_{\mathrm{sp}}}.
\end{equation}
\begin{figure}
\includegraphics[width=8.5cm]{Fig2} \caption{(Color online) (a) and (b) The ground-state energy and the average
photon number as functions of the frequency ratio $\eta$ in the superradiance
phase $(\lambda=1.01)$. Here, the exact and approximate results are
obtained based on the displaced Hamiltonian~(\ref{eq:22}) (black
solid line) and the approximate Hamiltonian~(\ref{eq:31}) (squares),
respectively. The variational results are obtained based on Eqs.~(\ref{eq:35}),~(\ref{eq:38}),
and~(\ref{eq:39}) (crosses). }
\label{Fig:2}
\end{figure}
In order to examine the validity of the effective Hamiltonian~(\ref{eq:31})
and the variational method in the superradiance phase, in Figs.~\ref{Fig:2}(a)
and~\ref{Fig:2}(b) we compare the variational result with the numerical
result through solving the effective Hamiltonian~(\ref{eq:31}) and
the displaced Hamiltonian (\ref{eq:22}), respectively. In Fig.~\ref{Fig:2}(a),
we plot the ground-state energy of the QRM solved based on the exact
displaced Hamiltonian, the approximate Hamiltonian, and the variational
method. Here, we find that the results based on these three methods
match better as the frequency ratio $\eta$ increases. For the average
photon number as plotted in Fig.~\ref{Fig:2}(b), we observe that
its average value is very large, which corresponds to the characteristics
of the system in the superradiance phase. We also see that the variational
results match the numerical results well in the superradiance phase
(inset), which indicates the validity of the variational method.
\begin{figure}
\center \includegraphics[width=8cm]{Fig3.eps} \caption{(Color online) The LE of the auxiliary atom as a function of the scaled
coupling strength $\lambda/\lambda_{c}$ and the scaled evolution
time $\omega_{c}t$ in both the normal phase ($\lambda<\lambda_{c}$)
and the superradiance phase ($\lambda>\lambda_{c}$). Here, we take
$\eta=5000$ and $\chi=g_{s}^{2}/\Delta_{s}=0.001\omega_{c}$.}
\label{Fig:3}
\end{figure}
So far, we have calculated the photon number variance of the
QRM in both the normal phase and the superradiance phase. In the next
section, we will study the LE of the auxiliary atom corresponding
to the QRM in the normal and superradiance phases.
\section{The LOSCHMIDT ECHO}
\label{section4}
In this section, we study how to exhibit the critical dynamics of
the QRM by checking the LE of the auxiliary atom when the system goes
across the critical point from the normal phase to the superradiance
phase. In the short-time limit, the LE has been simplified to Eq.~(\ref{eq:11}).
Here, the LE can reflect the main character of the QPT by the quantum
decoherence of the auxiliary atom. To show the dependence of the LE
on the criticality, in Fig.~\ref{Fig:3} we plot the LE versus the
dimensionless coupling strength $\lambda$ and the evolution time
$t$ based on Eqs.~(\ref{eq:11}), (\ref{eq:16}), and (\ref{eq:29})
in the normal and superradiance phases. Figure~\ref{Fig:3} shows
that, in the vicinity of the critical point, the LE experiences a
sharp change within a small range of $\lambda/\lambda_{c}$. In the
normal phase, the LE decays sharply to zero as the dimensionless coupling
strength $\lambda$ approaches the critical point $\lambda_{c}$.
In the superradiance phase, the LE decays faster as the parameter
$\lambda$ increases far away from the critical point $\lambda_{c}$,
and reaches the minimal value at a large coupling strength. The rapid
change indicates that the coherence of the auxiliary atom is supersensitive
to a perturbation inflicted on the QRM near the critical point. We
can measure the QPT of the QRM based on the supersensitive coherence of
the auxiliary atom. In addition, the coherence of the auxiliary
atom decreases sharply to zero with time at
the critical point $\lambda_{c}$. During this process, the detected
atom evolves from a pure state to a mixed one.
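The qualitative $\lambda$ dependence described above can be reproduced directly from the infinite-$\eta$ variances of Eqs.~(\ref{eq:16}) and~(\ref{eq:29}). The sketch below uses the Fig.~\ref{Fig:3} parameters ($\eta=5000$, $\chi=0.001\omega_{c}$) at $\omega_{c}t=60$; the grid is an arbitrary choice, while the identifications $(g/\omega_{0})^{2}=\lambda^{2}/(4\eta)$ and $(\tilde{g}/\tilde{\omega}_{0})^{2}=1/(4\lambda^{6}\eta)$ (in units $\omega_{c}=1$) follow from the definitions in the text.

```python
# Qualitative lambda-dependence of the LE at fixed omega_c*t, using the
# infinite-eta variances of Eqs. (16) and (29). Grid values are illustrative.
import numpy as np

eta, chi, t = 5000.0, 1e-3, 60.0   # Fig. 3 parameters, omega_c = 1

def gamma_np(lam):
    r = -0.25 * np.log(1 - lam**2)
    return 0.5 * np.sinh(2 * r)**2 + (lam**2 / (4 * eta)) * np.exp(-2 * r)

def gamma_sp(lam):
    r = -0.25 * np.log(1 - lam**-4)
    alpha2 = eta * (lam**4 - 1) / (4 * lam**2)     # alpha_lambda^2
    gt_over_wt2 = 1 / (4 * lam**6 * eta)           # (g~/omega~_0)^2
    return (0.5 * np.sinh(2 * r)**2 + alpha2 * np.exp(2 * r)
            + gt_over_wt2 * np.exp(-2 * r))

def LE(lam):
    gam = gamma_np(lam) if lam < 1 else gamma_sp(lam)
    return np.exp(-4 * gam * chi**2 * t**2)

lams = np.linspace(1.0002, 1.05, 200)              # superradiance side
vals = np.array([LE(l) for l in lams])
print(LE(0.5), LE(0.999), lams[vals.argmax()])
```

The echo stays near unity deep in the normal phase, drops sharply as $\lambda\to\lambda_{c}^{-}$, and on the superradiance side shows an interior revival peak before decaying again, in line with Figs.~\ref{Fig:3} and~\ref{Fig:4}.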
\begin{figure}[t]
\center \includegraphics[width=8.5cm]{Fig4} \caption{(Color online) The LE of the auxiliary atom versus the scaled coupling
strength $\lambda/\lambda_{c}$ at $\omega_{c}t=60$ when $\eta$
takes different values: $\eta=2000$ (green dashed line), $\eta=4000$
(blue dashed line), $\eta=6000$ (purple dashed line), $\eta=8000$
(orange dashed line), and $\eta=10000$ (black solid line). Other
parameters used are the same as those given in Fig.~\ref{Fig:3}.}
\label{Fig:4}
\end{figure}
In the previous sections, we have obtained the ground states of
the QRM in both infinite and finite $\eta$ cases. To know the influence
of the ratio $\eta$ on the LE, below we study the dependence of the
LE on the parameter $\lambda$ at different values of $\eta$. In
Fig.~\ref{Fig:4}, the LE is plotted as a function of $\lambda$
at a fixed time when the ratio $\eta$ takes different values: $\eta=2000,4000,6000,8000,$
and $10000$. Here, we can see that, in the normal phase, the LE decays
from a finite value to zero when the scaled coupling strength $\lambda/\lambda_{c}$
increases toward one. In particular, the LE is independent of
the ratio $\eta$ in the normal phase. In the superradiance phase,
as the ratio $\lambda/\lambda_{c}$ increases, the LE first rises
from zero to a peak value and then decays to zero. Different
from the normal phase, the revival peak value of the LE is smaller
for a larger value of $\eta$.
It should be pointed out that, though our analytical discussions are
valid for the infinite $\eta$ case, the dynamic sensitivity of the
quantum criticality also exists in the large finite $\eta$ case.
To show this point, in Fig.~\ref{Fig:5} we plot the LE of the auxiliary
atom as a function of $\lambda/\lambda_{c}$ at a large finite $\eta=10^{5}$.
For comparison, here we plot the LE using three different methods.
We numerically solve the dynamics governed by the effective Hamiltonians
given in Eqs.~(\ref{eq:19}) and~(\ref{eq:31}). The exact numerical
results are based on the original Rabi Hamiltonian~(\ref{eq:1})
in the normal phase and the displaced Hamiltonian~(\ref{eq:22})
in the superradiance phase. We also plot the LE based on the variational
method. In addition, we present the analytical result in the infinite
$\eta$ case for reference~\cite{PhysRevLett.115.180404}. Here,
we can see that, the quantum criticality exists in the large finite
$\eta$ case, and that the results obtained with the three methods are
consistent with each other. In the normal phase, the LE decays
from a finite value to zero, and there is no obvious revival in the
superradiance phase.
\begin{figure}[tbh]
\center \includegraphics[width=8.5cm]{Fig5} \caption{(Color online) The LE of the auxiliary atom versus the scaled coupling
strength $\lambda/\lambda_{c}$ at $\omega_{c}t=60$. These curves
are plotted with different methods: the numerical result based on
the effective Hamiltonians given by Eqs.~(\ref{eq:19}) and~(\ref{eq:31})
(squares), the numerical result based on the original Rabi Hamiltonian
given by Eq.~(\ref{eq:1}) in the normal phase and Eq.~(\ref{eq:22})
in the superradiance phase (green dashed line), and the variational
method (crosses) for $\eta=10^{5}$. We also present the analytical
result in the infinite $\eta$ case for reference (black solid line).
Other parameters used are the same as those given in Fig.~\ref{Fig:3}.}
\label{Fig:5}
\end{figure}
\section{SUMMARY}
\label{section5}
In summary, we have studied the dynamic sensitivity of quantum phase
transition in the QRM by checking the LE of an auxiliary atom, which
is far-off-resonantly coupled to the cavity field of the QRM. In the
vicinity of the critical point, the LE displays a sudden decay, which
is associated with the quantum decoherence of the auxiliary atom.
We have checked quantum criticality of the QRM in both the infinite
and finite $\eta$ cases. The analytical results in the infinite $\eta$
case clearly indicate the dynamic sensitivity in this model. Moreover,
when the ratio $\eta$ between the frequency of the two-level atom
and the frequency of the cavity field is finite but large, i.e., in the
finite $\eta$ case, the effective Hamiltonian can also reflect the
quantum criticality in the QRM. Our proposal provides a simple scheme
for observation of quantum criticality in the QRM by checking the
quantum coherence of an atomic sensor.
\begin{acknowledgments}
J.-Q.L. is supported in part by National Natural Science Foundation
of China (Grants No.~11822501, No.~11774087, and No.~11935006)
and Hunan Science and Technology Plan Project (Grant No.~2017XK2018).
J.-F.H. is supported in part by the National Natural Science Foundation
of China (Grant No.~12075083), Scientific Research Fund of Hunan
Provincial Education Department (Grant No.~18A007), and Natural Science
Foundation of Hunan Province, China (Grant No.~2020JJ5345). Q.-T.X.
is supported in part by National Natural Science Foundation of China
(Grants No.~11965011).
\end{acknowledgments}
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section*{Introduction}
Let $X$ be a smooth projective variety over an algebraically
closed field $\mathbb{K}$ of characteristic 0. If $X$ has trivial canonical bundle (torsion is enough), then the deformations of $X$ are unobstructed: this is
the well known Bogomolov-Tian-Todorov theorem. The first proofs of this theorem, due to Bogomolov \cite{bogomolov}, Tian \cite{Tian} and Todorov \cite{Todorov}, are transcendental and rely on the underlying differentiable structure of the variety $X$. More algebraic proofs, based on the $T^1$-lifting theorem and the degeneration of the Hodge-to-de Rham spectral sequence, are due to Ran \cite{zivran}, Kawamata \cite{Kaw1} and Fantechi-Manetti \cite{FM2}.
The Bogomolov-Tian-Todorov theorem is also a consequence of the stronger fact that the differential graded Lie algebra associated with the infinitesimal deformations of $X$ is homotopy abelian, i.e.,
quasi-isomorphic to an abelian differential graded Lie algebra.
For $\mathbb{K}=\mathbb{C}$, this was first proved in \cite{GoMil2}, see also \cite{manRENDICONTi}.
For any algebraically closed field $\mathbb{K}$ of characteristic 0, this was proved in a completely algebraic way in \cite{algebraicBTT}, using the degeneration of the Hodge-to-de Rham spectral sequence and the notion of Cartan homotopy.
The aim of this paper is then to extend these techniques to analyse the infinitesimal deformations of pairs; indeed, we prove that the DGLA associated with these deformations is homotopy abelian in many cases, and hence the deformations are unobstructed.
This extension can be viewed as an application of Iitaka's philosophy: \lq\lq whenever we have a theorem about non singular complete varieties whose statement is dictated by the behaviour of the regular differential forms (the canonical bundles), there should exist a corresponding theorem about logarithmic pairs (pairs consisting of nonsingular complete varieties and boundary divisors with only normal crossings) whose statement is dictated by the behaviour of the logarithmic forms (the logarithmic canonical bundles) and vice versa\rq\rq \cite[Principle 2-1-4]{matsuki}.
More precisely, let $X$ be a smooth projective variety, $D$ a smooth divisor and consider the deformations of the pair $(X,D)$, i.e., the deformations of the closed embedding $ j: D \hookrightarrow X$. As a first step, we give an explicit description of a differential graded Lie algebra controlling the deformations of $j$. Namely, let ${\Theta}_X(-\log D)$ be the sheaf of germs of the tangent vectors to $X$ which are tangent to $D$. Once we fix an open affine cover $\mathcal{U}$ of $X$, the Thom-Whitney construction applied to ${\Theta}_X(-\log D)$ provides a differential graded Lie algebra $TW( {\Theta}_X(-\log D)(\mathcal{U}))$ controlling the deformations of $j$ (Theorem \ref{teo. DGLA controlling def of j}).
If the ground field is $\mathbb{C}$, then we can simply take the DGLA associated with the Dolbeault resolution of ${\Theta}_X(-\log D)$, i.e., $A^{0,*}_X({\Theta}_X(-\log D))= \oplus_i \Gamma(X, \mathcal{A}^{0,i}_X({\Theta}_X(-\log D)))$ (Example \ref{exa DGLA (X,D) on C}).
Then, we provide a condition that ensures that the DGLA $TW( {\Theta}_X(-\log D)(\mathcal{U}))$ is homotopy abelian.
\begin{theorems}
Let $X$ be a smooth projective variety of dimension $n$, defined
over an algebraically closed field of characteristic 0, and let $D\subset X$ be a smooth divisor. If the contraction map
\[
H^*(X,\Theta_X (-\log D))\xrightarrow{\boldsymbol{i}}\operatorname{Hom}^*(H^*(X,\Omega^n_X (\log D)),H^*(X,\Omega^{n-1}_X (\log D)))\]
is injective, then the DGLA $TW( {\Theta}_X(-\log D)(\mathcal{U}))$ is homotopy abelian, for every affine open cover $\mathcal{U}$ of $X$.
\end{theorems}
As in \cite{algebraicBTT}, we recover this result using the power of the Cartan homotopy construction and the degeneration of the Hodge-to-de Rham spectral sequence associated in this case with the complex of logarithmic differentials $\Omega^\ast_X (\log D)$.
As a corollary, we obtain an alternative (algebraic) proof that, in the case of a log Calabi-Yau pair (Definition \ref{definiiton log calabi yau}), the DGLA controlling the infinitesimal deformations of the pair $(X,D)$ is homotopy abelian (Corollary \ref{corollari log calabi yau formal smooth}). In particular, we are able to prove the following result about smoothness of deformations (Corollary \ref{cor.log calabi yau no obstruction}).
\begin{theorems}
Let $X$ be a smooth projective $n$-dimensional variety defined over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. If $(X,D)$ is a log Calabi-Yau pair, i.e., the logarithmic canonical bundle $\Omega^n_X (\log D)\cong \mathcal{O}_X(K_X+D)$ is trivial, then the pair $(X,D)$ has unobstructed deformations.
\end{theorems}
The unobstructedness of the deformations of a log Calabi-Yau pair $(X,D)$ is also interesting from the point of view of mirror symmetry.
The deformations of the log Calabi-Yau pair $(X,D)$ should be mirror to the deformations of the (complexified) symplectic form on the mirror Landau-Ginzburg model; therefore, these deformations are also smooth \cite{Auroux, Auroux2,KKP}.
\smallskip
Then, we focus our attention on the deformations of pairs $(X,D)$, where $D$ is a smooth divisor in a smooth projective Calabi-Yau variety $X$. Also in this case, we provide an alternative (algebraic) proof that the DGLA controlling these infinitesimal deformations is homotopy abelian (Theorem \ref{theorem smoothness of D inside CY}). We also show the following statement about smoothness of deformations (Corollary \ref{cor. D in calabi yau no obstruction}).
\begin{theorems}
Let $X$ be a smooth projective Calabi-Yau variety defined over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. Then, the pair $(X,D)$ has unobstructed deformations.
\end{theorems}
The previous results are also sketched in \cite{KKP}, see also \cite{zivran,kinosaki}, where the authors work over the field of complex numbers and make deep use of transcendental methods.
More precisely, using Dolbeault type complexes, one can construct a differential Batalin-Vilkovisky algebra such that the associated DGLA controls the deformation problem (Definition \ref{def dbv}). If the differential Batalin-Vilkovisky algebra has a degeneration property, then the associated DGLA is homotopy abelian \cite{terilla, KKP, BraunLaza}.
Using our approach and the powerful notion of the Cartan homotopy, we are able to give an alternative proof of this result (Theorem \ref{theorem dbv degener implies homotopy abelian}).
\medskip
In a very recent preprint \cite{Sano}, the $T^1$-lifting theorem is applied in order to prove the unobstructedness of the deformations of $(X,D)$, for $X$ a smooth projective variety and $D$ a smooth divisor in $|-m K_X|$, for some positive integer $m$, under the assumption $H^1(X,\mathcal{O}_X)=0$ \cite[Theorem 2.1]{Sano}.
Inspired by this paper, we also study the infinitesimal deformations of these pairs $(X,D)$.
Using the cyclic covers of $X$ ramified over $D$, we relate the deformations of the pair $(X,D)$ with the deformations of the pair (ramification divisor, cover) and
we show that the DGLA associated with the deformations of the pair $(X,D)$ is homotopy abelian. In particular,
we can prove the following result about smoothness of deformations (Proposition \ref{proposition D in -mKx smooth pair}).
\begin{theorems}
Let $X$ be a smooth projective variety and $D$ a smooth divisor such that $D \in | -mK_X|$, for some positive integer $m$. Then, the pair $(X,D)$ has unobstructed deformations.
\end{theorems}
We refer the reader to \cite{Sano} for examples in the Fano setting and for the relation with the unobstructedness of weak Fano manifolds.
\smallskip
Once the unobstructedness of a pair $(X,D)$ is proved, then, studying the forgetful morphism of functors $\phi: \operatorname{Def}_{(X,D)} \to \operatorname{Def}_X$, one can prove the unobstructedness of $\operatorname{Def}_X$, for instance when $D$ is stable in $X$, i.e., when $\phi$ is smooth \cite[Definition 3.4.22]{Sernesi}.
\medskip
The paper goes as follows. With the aim of providing a full introduction to the subject, we include
Section \ref{section log diff} on logarithmic differentials and Section \ref{section back ground DGLA} on DGLAs, Cartan homotopies and cosimplicial constructions, such as the Thom-Whitney DGLA.
In Section \ref{section deformation}, we review the definition of the infinitesimal deformations of the pair $(X,Z)$, for any closed subscheme $Z \subset X$ of a smooth variety $X$, describing the DGLA controlling these deformations.
Section \ref{section obstruction computations} is devoted to the study of obstructions and it contains the proofs of the first three theorems.
In Section \ref{section cyclic cover}, we study cyclic covers of a smooth projective variety $X$ ramified on a smooth divisor $D$ and we prove the last theorem stated above.
In the last section, we apply the Cartan homotopy construction to the
differential graded Batalin-Vilkovisky algebra setting, providing a new proof of the fact that if a differential Batalin-Vilkovisky algebra has a degeneration property, then the associated DGLA is homotopy abelian (Theorem \ref{theorem dbv degener implies homotopy abelian}).
\medskip
\noindent{\bf{Notation.}}
Unless otherwise specified, we work over an algebraically closed field $\mathbb{K}$ of characteristic 0.
Throughout the paper, we also assume that
$X$ is always a smooth projective variety over $\mathbb{K}$. Actually, the main ingredient of the proofs is the degeneration at the $E_1$-level of some Hodge-to-de Rham spectral sequences, and this holds whenever $X$ is smooth and proper over a field of characteristic 0 \cite{DI}.
By abuse of notation, we denote by $K_X$ both the canonical divisor and the canonical bundle of $X$.
$\mathbf{Set}$ denotes the category of sets (in a fixed universe)
and $\mathbf{Art} $ the category of local Artinian
$\mathbb{K}$-algebras with residue field $\mathbb{K}$. Unless otherwise specified,
for every objects $A\in \mathbf{Art}$, we denote by
$\mathfrak{m}_A$ its maximal ideal.
\medskip
\begin{acknowledgement}
The author wishes to thank Richard Thomas for useful discussions and comments and for pointing out the papers \cite{kinosaki} and \cite{fujino1}, and Marco Manetti for drawing the author's attention to the paper \cite{Sano} and for useful suggestions and advice, especially on Section 5. In particular, M.M. shared with the author Theorem \ref{theorem dbv degener implies homotopy abelian}. The author also thanks Taro Sano for comments and for pointing out a mistake in a previous version.
The author is supported by the Marie Curie Intra-European Fellowship FP7-PEOPLE-2010-IEF Proposal $N^\circ$: 273285.
\end{acknowledgement}
\section{Review of logarithmic differentials}\label{section log diff}
Let $X$ be a smooth projective variety of dimension $n$ and $j:Z \hookrightarrow X$ a closed embedding of a closed subscheme $Z$. We denote by ${\Theta}_X(-\log Z)$ the sheaf of germs of the tangent vectors to $X$ which are tangent to $Z$ \cite[Section 3.4.4]{Sernesi}.
Note that, denoting by $\mathcal{I}\subset \mathcal{O}_X$ the ideal sheaf of $Z$ in $X$, then $\Theta_X(-\log Z)$ is the subsheaf of the derivations of the sheaf $\mathcal{O}_X$ preserving the ideal sheaf $\mathcal{I}$ of $Z$, i.e.,
\[
\Theta_X(-\log Z)=\{f\in \operatorname{Der}(\mathcal{O}_X,\mathcal{O}_X)\mid f(\mathcal{I}) \subset\mathcal{I}\}.
\]
\begin{remark}
If $Z$ is smooth in $X$, then we have the short exact sequence
\[
0 \to {\Theta}_X(-\log Z) \to {\Theta}_X \to N_{Z / X} \to 0.
\]
Note also that if the codimension of $Z$ is at least 2, then the sheaf $ {\Theta}_X(-\log Z)$ is not locally free, see also Remark \ref{remark T log dual sheaf locally free}.
\end{remark}
Next, assume to be in the divisor setting, i.e., let $D \subset X$ be a globally normal crossing divisor in $X$. With the divisor assumption, we can define the sheaves of logarithmic differentials, see for instance \cite[p. 72]{deligne}, \cite{Kawamata}, \cite[Chapter 2]{librENSview} or \cite[Chapter 8]{voisin}. For any $k \leq n$, we denote by $\Omega^k_X(\log D)$ the locally free sheaf of differential $k$-forms with logarithmic poles along $D$.
More explicitly, let $\tau: V= X-D \hookrightarrow X$ be the inclusion and $\Omega^k_X(\ast D)= \lim_{\stackrel{\to}{ \nu}}
\Omega^k_X(\nu \cdot D)=\tau_* \Omega^k_V$. Then, $(\Omega^\ast_X(\ast D),d)$ is a complex and $(\Omega^\ast_X(\log D),d)$ is the subcomplex such that, for every
open $U$ in $X$, we have
\[
\Gamma(U,\Omega^k_X(\log D))=\{\alpha \in \Gamma(U,\Omega^k_X(\ast D))\ | \, \alpha \mbox{ and } d \alpha \mbox{ have simple poles along } D \}.
\]
\begin{remark}\label{remark exac sequence Omega(logD)(-D)}
For every $p$, there is a short exact sequence of sheaves
\[
0\to \Omega^p_X (\log D) \otimes \mathcal{O}_X(-D) \to
\Omega^p_X \to \Omega^p_D \to 0,
\]
see \cite[2.3]{librENSview} or \cite[Lemma 4.2.4]{lazar}.
\end{remark}
\begin{example}\cite[Chapter 8]{voisin}
In the holomorphic setting,
$\Omega^k_X(\log D)$ is the sheaf of meromorphic differential $k$-forms $\omega$ that admit a pole of order at most 1 along (each component of) $D$, and the same holds for $d\omega$.
Let $z_1,z_2, \ldots, z_n$ be holomorphic coordinates on an open set $U$ of $X$, in which $D \cap U$ is defined by the equation $ z_1z_2\cdots z_r=0$. Then, $\Omega^k_X(\log D)_{\mid U}$ is a free $\mathcal{O}_U$-module, for which the forms $\displaystyle \frac{dz_{i_1}}{z_{i_1}}\wedge \cdots \wedge \frac{dz_{i_l}}{z_{i_l}}\wedge dz_{j_1} \wedge \cdots \wedge dz_{j_m} $
with $i_s \leq r$, $j_s >r$ and $l+m=k$ form a basis.
\end{example}
\begin{remark}\label{remark T log dual sheaf locally free}
The sheaves of logarithmic $k$-forms $\Omega^k_X(\log D)= \wedge^k \Omega^1_X(\log D)$ are locally free and the sheaf $\Theta_X (-\log D)$ is dual to the sheaf $\Omega^1_X(\log D)$; in particular, it is locally free whenever $D$ is a global normal crossing divisor.
The sheaf of logarithmic $n$-forms $\Omega^n_X(\log D)\cong \mathcal{O}_X(K_X+D)$ is a line bundle called the \emph{logarithmic canonical bundle} for the pair $(X,D)$.
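Indeed, in the local coordinates of the previous example, $\Omega^n_X(\log D)_{\mid U}$ is freely generated by the single form
\[
\frac{dz_1}{z_1}\wedge \cdots \wedge \frac{dz_r}{z_r}\wedge dz_{r+1} \wedge \cdots \wedge dz_n = \frac{dz_1\wedge \cdots \wedge dz_n}{z_1\cdots z_r},
\]
so that $\Omega^n_X(\log D)\cong \Omega^n_X \otimes \mathcal{O}_X(D)\cong \mathcal{O}_X(K_X+D)$.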
\end{remark}
\begin{definition}\label{definiiton log calabi yau}
A \emph{log Calabi-Yau pair} $(X,D)$ is a pair where $D$ is a smooth divisor in a smooth projective variety $X$
of dimension $n$, and the logarithmic canonical bundle $\Omega^{n}_X(\log D)$ is trivial.
\end{definition}
\begin{example}
Let $X$ be a smooth projective variety and $D$ an effective smooth divisor such that $D \in | -K_X|$.
Then, the sheaf $\Omega^n_X(\log D)\cong \mathcal{O}_X(K_X+D)$ is trivial, i.e., the pair $(X,D)$ is a log Calabi-Yau pair.
\end{example}
The complex $(\Omega^{\ast}_X(\log D),d)$ is equipped with the Hodge filtration, which induces a filtration on the hypercohomology $\mathbb{H}^{\ast}(X, \Omega^{\ast}_X(\log D)) $. As for the algebraic de Rham complex, the spectral sequence associated with the Hodge filtration on $\Omega^{\ast}_X(\log D)$ has its first term equal to $E_1^{p,q}=H^q(X,\Omega^{p}_X(\log D))$.
The following degeneration properties hold.
\begin{theorem}\label{teo degen deligne}
(Deligne) Let $X$ be a smooth proper variety and $D \subset X$ be a globally normal crossing divisor. Then, the spectral sequence associated with the Hodge filtration
\[
E_{1}^{p,q}=H^q(X,\Omega^{p}_X(\log D)) \Longrightarrow \mathbb{H}^{p+q}( X,\Omega^{\ast}_X(\log D))
\]
degenerates at the $E_1$-level.
\end{theorem}
\begin{proof}
This is the analogue of the degeneration of the Hodge-to-de Rham spectral sequence. As in that case, there is a completely algebraic way to prove it, avoiding analytic techniques, see
\cite[Section 3]{deligneII}, \cite{DI}, \cite[Corollary 10.23]{librENSview} or \cite[Theorem 8.35]{voisin}.
\end{proof}
\begin{theorem}\label{teo degen tensor}
Let $X$ be a smooth proper variety and $D \subset X$ be a globally normal crossing divisor. Then, the spectral sequence associated with the Hodge filtration
\[
E_{1}^{p,q}=H^q(X,\Omega^{p}_X(\log D) \otimes \mathcal{O}_X(-D)) \Longrightarrow
\mathbb{H}^{p+q}( X,\Omega^{\ast}_X(\log D) \otimes \mathcal{O}_X(-D))
\]
degenerates at the $E_1$-level.
\end{theorem}
\begin{proof}
See \cite[Section 2.29]{fujino1} or \cite[Section 5.2]{fujino2}.
\end{proof}
\section{Background on DGLAs and Cartan Homotopies}\label{section back ground DGLA}
\subsection{DGLA}
A \emph{differential graded Lie algebra} is
the data of a differential graded vector space $(L,d)$ together
with a bilinear map $ [- , - ] \colon L \times L \to L$ (called bracket)
of degree 0, such that the following conditions are satisfied:
\begin{enumerate}
\item (graded skewsymmetry) $[a,b]=-(-1)^{\bar{a} \; \bar{b}} [b,a]$.
\item (graded Jacobi identity)
$ [a,[b,c]] = [[a,b],c] + (-1)^{\bar{a}\;\bar{b}} [b, [a,c]]$.
\item (graded Leibniz rule)
$ d[a,b] =[ da,b]+ (-1)^{\bar{a}}[a, db]$.
\end{enumerate}
In particular, the Leibniz rule implies that the bracket of a DGLA induces
a structure of graded Lie algebra on its cohomology.
Moreover, a DGLA is \emph{abelian} if its bracket is trivial.
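A basic example, implicitly used throughout this paper, is the endomorphism DGLA of a differential graded vector space $(V,d_V)$: the graded vector space $\operatorname{Hom}^*_{\mathbb{K}}(V,V)$, with $\operatorname{Hom}^n_{\mathbb{K}}(V,V)=\prod_i\operatorname{Hom}_{\mathbb{K}}(V^i,V^{i+n})$, is a DGLA with differential and bracket given by
\[
\delta(f)=d_V\circ f-(-1)^{\bar{f}}f\circ d_V, \qquad [f,g]=f\circ g-(-1)^{\bar{f}\,\bar{g}}g\circ f.
\]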
A \emph{morphism} of differential graded Lie algebras $\chi \colon L \to M$ is a linear map
that commutes with brackets and differentials and preserves degrees.
A \emph{quasi-isomorphism} of DGLAs is a morphism
that induces an isomorphism in cohomology. Two DGLAs $L$ and $M$ are said to be
\emph{quasi-isomorphic}, or \emph{homotopy equivalent}, if they are equivalent under the
equivalence relation generated by: $L\sim M$ if there exists
a quasi-isomorphism $\chi\colon L\to M$.
A DGLA is \emph{homotopy abelian} if it is quasi-isomorphic to an abelian DGLA.
\begin{remark}
The category ${\bf{DGLA}}$ of DGLAs is too strict for our purposes, and we need to enlarge
this category by allowing $L_{\infty}$ morphisms of DGLAs. Therefore, we work in the category whose objects are DGLAs and whose morphisms are $L_{\infty}$ morphisms of DGLAs.
This category is equivalent to the homotopy category of ${\bf{DGLA}}$, obtained by inverting all quasi-isomorphisms.
Using this fact, we do not give the explicit definition of an $L_{\infty}$ morphism of DGLAs: by an $L_{\infty}$ morphism we mean a morphism in this homotopy category (a zig-zag of morphisms) and we denote it with a dashed arrow. We only emphasize that an $L_{\infty}$ morphism of DGLAs has a linear part that is a morphism of complexes and therefore induces a morphism in cohomology.
For the detailed descriptions of such structures we refer to
\cite{LadaStas,LadaMarkl,EDF,fukaya,K,getzler,manRENDICONTi,cone,IaconoIMNR}.
\end{remark}
\begin{lemma}\label{lem.criterioquasiabelianita}
Let $f_{\infty}\colon M_1 \dashrightarrow M_2$ be an $L_{\infty}$ morphism of
DGLAs
with $M_2$ homotopy abelian. If $f_{\infty}$ induces an injective morphism in cohomology, then $M_1$ is also homotopy abelian.
\end{lemma}
\begin{proof}
See \cite[Proposition 4.11]{KKP} or \cite[Lemma 1.10]{algebraicBTT}.
\end{proof}
The \emph{homotopy fibre} of a morphism of DGLAs $\chi\colon L\to M$ is the DGLA
\[TW(\chi):=\{(l, m(t,dt)) \in L \times M[t,dt] \ \mid \ m(0,0)=0, \, m(1,0)=\chi(l) \},
\]
where $M[t,dt]=M\otimes_{\mathbb{K}}\mathbb{K}[t,dt]$.
\begin{remark} \label{rem.quasiisoTWcono}
If $\chi\colon L\to M$ is an injective morphism of DGLAs,
then its cokernel $M/\chi(L)$ is a differential graded vector space and the map
\[
TW(\chi)\to (M/\chi(L))[-1], \qquad (l,p(t)m_0+q(t)dt m_1)
\mapsto
\left(\int_0^1q(t)dt\right) m_1 \pmod{\chi(L)},
\]
is a surjective quasi-isomorphism.
\end{remark}
\begin{lemma} \label{lem.criterio TW abelian}
Let $\chi \colon L\to M$ be an injective morphism of differential graded Lie algebras such that
the induced map $\chi \colon H^*(L)\to H^*(M)$ is injective. Then, the homotopy fibre $TW(\chi)$
is homotopy abelian.
\end{lemma}
\begin{proof} See \cite[Proposition 3.4]{algebraicBTT} or \cite[Lemma 2.1]{semireg}.
\end{proof}
\begin{example}
\cite[Example 3.5]{algebraicBTT}
Let $W$ be a differential graded vector space and let $U \subset W$
be a differential graded subspace. If the induced morphism in cohomology
$H^*(U)\to H^*(W)$ is injective, then the inclusion of DGLAs
\[
\chi \colon \{f\in \operatorname{Hom}^*_{\mathbb{K}}(W,W) \mid f(U) \subset U\} \to \operatorname{Hom}^*_{\mathbb{K}}(W,W)
\]
satisfies the hypothesis of Lemma~\ref{lem.criterio TW abelian} and so the DGLA $TW(\chi)$ is homotopy abelian.
\end{example}
\subsection{Cartan homotopies}\label{Section cartan homoto}
Let $L$ and $M$ be two differential graded Lie algebras. A \emph{Cartan homotopy} is a linear map of degree $-1$
\[ \boldsymbol{i} \colon L \to M \]
such that, for every $a,b\in L$, we have:
\[ \boldsymbol{i}_{[a,b]}=[\boldsymbol{i}_a,d_M\boldsymbol{i}_b] \qquad \text{and } \qquad [\boldsymbol{i}_a,\boldsymbol{i}_{b}] =0.\]
For every Cartan homotopy $\boldsymbol{i}$, one defines the Lie derivative map
\[ \boldsymbol{l} \colon L\to M,\qquad
\boldsymbol{l}_a=d_M\boldsymbol{i}_a+\boldsymbol{i}_{d_L a}.
\]
It follows from the definition of a Cartan homotopy $\boldsymbol{i}$ that $\boldsymbol{l}$ is a morphism of DGLAs.
In terms of $\boldsymbol{l}$, the Cartan homotopy conditions become
\[\boldsymbol{i}_{[a,b]}=[\boldsymbol{i}_a,\boldsymbol{l}_b] \qquad \text{and } \qquad [\boldsymbol{i}_a,\boldsymbol{i}_{b}]=0.\]
Note that, as a morphism of complexes, $\boldsymbol{l}$ is homotopic to 0 (with homotopy $\boldsymbol{i}$).
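For instance, the compatibility of $\boldsymbol{l}$ with the brackets can be checked directly: since $d_M\boldsymbol{l}_b=\boldsymbol{l}_{d_L b}$, the graded Leibniz rule gives
\[
\boldsymbol{l}_{[a,b]}=d_M\boldsymbol{i}_{[a,b]}+\boldsymbol{i}_{d_L[a,b]}
=[d_M\boldsymbol{i}_a,\boldsymbol{l}_b]+[\boldsymbol{i}_{d_L a},\boldsymbol{l}_b]=[\boldsymbol{l}_a,\boldsymbol{l}_b],
\]
the two remaining terms $(-1)^{\bar{a}-1}[\boldsymbol{i}_a,d_M\boldsymbol{l}_b]$ and $(-1)^{\bar{a}}[\boldsymbol{i}_a,\boldsymbol{l}_{d_L b}]$ cancelling each other.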
\begin{example}\label{exam.cartan su ogni aperto}
Let $X$ be a smooth algebraic variety. Denote by $\Theta_X$ the tangent
sheaf and by $(\Omega^{\ast}_X,d)$ the algebraic de Rham complex.
Then, for every open subset $U \subset X$, the contraction of a vector field with a differential form
\[
\Theta_X(U) \otimes \Omega^k_X(U) \xrightarrow{\quad{\mspace{1mu}\lrcorner\mspace{1.5mu}}\quad}
\Omega^{k-1}_X(U)
\]
induces a linear map of degree $-1$
\[
\boldsymbol{i} \colon \Theta_X(U) \to \operatorname{Hom}^*(\Omega^{*}_X(U),
\Omega^{*}_X(U)), \qquad \boldsymbol{i}_{\xi} (\omega) = \xi {\mspace{1mu}\lrcorner\mspace{1.5mu}}\omega
\]
that is a Cartan homotopy. Indeed, the above conditions coincide
with the classical Cartan homotopy formulas.
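Explicitly, since $\Theta_X(U)$ is concentrated in degree 0 and has trivial differential, the associated Lie derivative reduces to Cartan's magic formula
\[
\boldsymbol{l}_{\xi}=d\circ \boldsymbol{i}_{\xi}+\boldsymbol{i}_{\xi}\circ d=\mathcal{L}_{\xi},
\]
and the two defining conditions become the classical identities $\boldsymbol{i}_{[\xi,\eta]}=[\boldsymbol{i}_{\xi},\mathcal{L}_{\eta}]$ and $\boldsymbol{i}_{\xi}\boldsymbol{i}_{\eta}+\boldsymbol{i}_{\eta}\boldsymbol{i}_{\xi}=0$.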
\end{example}
We are interested in the logarithmic generalization of the previous example.
\begin{example}\label{exam.cartan relativo su ogni aperto}
Let $X$ be a smooth algebraic variety and $D$ a normal crossing divisor.
Let $(\Omega^{\ast}_X(\log D),d)$ be the logarithmic differential complex and $\Theta_X(-\log D)$ the subsheaf of the tangent sheaf $\Theta_X$ of the derivations that preserve the ideal sheaf of $D$ as in the previous section.
It is easy to prove explicitly that for every open subset $U\subset X$, we have
\[(\, \Theta_X(-\log D)(U)\ {\mspace{1mu}\lrcorner\mspace{1.5mu}}\ \Omega^k_X(\log D)(U)\,) \subset
\Omega^{k-1}_X(\log D)(U).\]
Then, as above, the induced linear map of degree $-1$
\[
\boldsymbol{i}\colon \Theta_X(-\log D)(U)\to \operatorname{Hom}^*(\Omega^{*}_X(\log D)(U),
\Omega^{*}_X(\log D)(U)),\qquad \boldsymbol{i}_{\xi}(\omega)=\xi{\mspace{1mu}\lrcorner\mspace{1.5mu}}\omega
\]
is a Cartan homotopy.
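For instance, in the local holomorphic coordinates of Section \ref{section log diff}, where $D\cap U$ is defined by $z_1\cdots z_r=0$, the sheaf $\Theta_X(-\log D)_{\mid U}$ is freely generated by $z_1\partial_{z_1},\ldots,z_r\partial_{z_r},\partial_{z_{r+1}},\ldots,\partial_{z_n}$, and a direct computation such as
\[
z_s\partial_{z_s}\,{\mspace{1mu}\lrcorner\mspace{1.5mu}}\,\frac{dz_s}{z_s}=1
\]
shows that the contraction of a logarithmic vector field with a logarithmic form produces no new poles along $D$.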
\end{example}
\begin{lemma} \label{lem.cartan induce morfismo TW}
Let $L,M$ be DGLAs and $\boldsymbol{i} \colon L\to M$ a Cartan homotopy. Let $N\subset M$ be a differential graded Lie subalgebra such that $\boldsymbol{l}(L)\subset N$ and
\[ TW(\chi)= \{(x,y(t))\in N\times M[t,dt]\mid y(0)=0,\; y(1)=x\}\]
the homotopy fibre of the inclusion $\chi \colon N\hookrightarrow M$. Then, there is a well-defined $L_{\infty}$ morphism $L\stackrel{(\boldsymbol{l},\boldsymbol{i})}{\dashrightarrow }TW(\chi)$.
\end{lemma}
\begin{proof}
See \cite[Corollary 7.5]{semireg} for an explicit description of this morphism. We only note that the linear part, i.e., the induced morphism of complexes, is given by $(\boldsymbol{l},\boldsymbol{i})(a):= (\boldsymbol{l}_a,t\boldsymbol{l}_a+dt\boldsymbol{i}_a)$, for any $a \in L$.
\end{proof}
\subsection{Simplicial objects and Cartan homotopies}
Let $\mathbf{\Delta}_{\operatorname{mon}}$ be the category whose objects are finite
ordinal sets and whose morphisms are order-preserving injective
maps between them.
A \emph{semicosimplicial differential graded Lie algebra} is a
covariant functor $\mathbf{\Delta}_{\operatorname{mon}}\to
\mathbf{DGLA}$. Equivalently, a
semicosimplicial DGLA ${\mathfrak g}^\Delta$ is a diagram
\[
\xymatrix{ {{\mathfrak g}_0}
\ar@<2pt>[r]\ar@<-2pt>[r] & { {\mathfrak g}_1}
\ar@<4pt>[r] \ar[r] \ar@<-4pt>[r] & { {\mathfrak g}_2}
\ar@<6pt>[r] \ar@<2pt>[r] \ar@<-2pt>[r] \ar@<-6pt>[r]&
\cdots},
\]
where each ${\mathfrak g}_i$ is a DGLA, and for each
$ i > 0 $, there are $ i + 1$ morphisms of DGLAs
\[
\partial_{k,i} \colon {\mathfrak g}_{i-1}\to {\mathfrak
g}_{i},
\qquad k=0,\dots,i,
\]
such that $\partial_{ k+1, i+1} \partial_{l , i}= \partial_{l,i+1}\partial_{k,i}$,
for any $k\geq l$.
In a semicosimplicial DGLA ${\mathfrak g}^\Delta$, the maps
\[
\partial_i=\partial_{0,i}-\partial_{1,i}+\cdots+(-1)^{i}
\partial_{i,i}
\]
endow the vector space $\prod_i{\mathfrak g}_i$ with the
structure of a differential complex. Moreover, being a
DGLA, each ${\mathfrak g}_i$ is in particular a differential
complex; since the maps $\partial_{k,i}$ are morphisms of DGLAs,
the space $
{\mathfrak g}^\bullet_\bullet$
has a natural bicomplex structure. We emphasise that the associated total complex
\[({\rm Tot}({\mathfrak g}^\Delta),d_{\operatorname{Tot}})\quad\text{where}\quad
{\rm Tot}({\mathfrak g}^\Delta)=\prod_{i}{\mathfrak
g}_i[-i],\quad d_{\operatorname{Tot}}=\sum_{i,j}\partial_i+(-1)^jd_j\] has no
natural DGLA structure.
However, there is another bicomplex naturally associated with a semicosimplicial DGLA, whose total complex is naturally a
DGLA.
For every $n\ge 0$, let
$(A_{PL})_n$ be the differential graded commutative algebra
of polynomial differential forms on the standard $n$-simplex
$\{(t_0,\ldots,t_n) \in \mathbb{K}^{n+1}\mid \sum t_i=1\}$ \cite{FHT}:
\[ (A_{PL})_n = \frac{\mathbb{K}[t_0,\ldots,t_n,dt_0,\ldots,dt_n]}
{(1-\sum t_i, \sum dt_i)}.\]
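For example, for $n=1$ one recovers
\[
(A_{PL})_1=\frac{\mathbb{K}[t_0,t_1,dt_0,dt_1]}{(1-t_0-t_1,\, dt_0+dt_1)}\cong \mathbb{K}[t,dt],
\]
the differential graded algebra already used in the definition of the homotopy fibre.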
Denote
by $\delta^{k,n} \colon (A_{PL})_n \to (A_{PL})_{n-1}$, $k = 0,\ldots,n$,
the face maps; then, there are well-defined
morphisms of differential graded vector spaces
\[
\delta^{k} \otimes Id \colon (A_{PL})_{n} \otimes \mathfrak{g}_n \to (A_{PL})_{n-1} \otimes
\mathfrak{g}_n,\]
\[Id \otimes \partial_{k} \colon
(A_{PL})_{n-1} \otimes \mathfrak{g}_{n-1} \to (A_{PL})_{n-1} \otimes \mathfrak{g}_{n},
\]
for every $0\le k\le n$.
The Thom-Whitney bicomplex is then defined as
\[
C^{i,j}_{TW}(\mathfrak{g}^\Delta)
=\{ (x_n)_{n\in {\mathbb
N}}\in \prod_n (A_{PL})_n^i\otimes {\mathfrak g}_n^j
\mid ( \delta^{k} \otimes Id)x_n=
(Id \otimes \partial_{k})x_{n-1},\; \forall\; 0\le k\le n\},
\]
where $(A_{PL})_n^i$ denotes the degree $i$ component of $(A_{PL})_n$.
Its total complex is denoted by $( {TW}(\mathfrak{g}^\Delta), d_{TW})$ and it
is a DGLA, called the \emph{Thom-Whitney} DGLA.
Note that the integration maps
\[ \int_{\Delta^n}\otimes \operatorname{Id}\colon (A_{PL})_{n}\otimes
{\mathfrak g}_n\to {\mathbb K}[n]\otimes {\mathfrak g}_n= {\mathfrak g}_n[n]\]
give a quasi-isomorphism
of differential graded vector spaces
\[
I\colon ( TW( {\mathfrak g}^\Delta), d_{TW})\to
({\operatorname{Tot}}( {\mathfrak g}^\Delta),d_{\operatorname{Tot}}).
\]
For more details, we refer the reader to \cite{whitney,navarro,getzler,cone,chenggetzler}.
\begin{remark}
For any semicosimplicial DGLA ${\mathfrak g}^\Delta$, we have just defined the Thom-Whitney DGLA. Therefore, using the Maurer-Cartan equation,
we can associate with any ${\mathfrak g}^\Delta$ a deformation functor, namely
\[
\operatorname{Def}_{{TW}(\mathfrak{g}^{\Delta})}: \mathbf{Art} \to \mathbf{Set},
\]
\[
\operatorname{Def}_{{TW}(\mathfrak{g}^{\Delta})}(A)=\frac{\operatorname{MC}_{{TW}(\mathfrak{g}^{\Delta})}(A)}{\text{gauge}}=\frac{\{ x \in {{TW}(\mathfrak{g}^{\Delta})}^1\otimes \mathfrak{m}_A \ |\ dx+
\displaystyle\frac{1}{2} [x,x]=0 \}}{\exp({{TW}(\mathfrak{g}^{\Delta})}^0\otimes \mathfrak{m}_A ) }.
\]
In particular, the tangent space to $\operatorname{Def}_{{TW}(\mathfrak{g}^{\Delta})}$ is
\[
T\operatorname{Def}_{{TW}(\mathfrak{g}^{\Delta})}:=\operatorname{Def}_{{TW}(\mathfrak{g}^{\Delta})}( \mathbb{K}[\epsilon]/ \epsilon^2 ) \cong H^1({TW}(\mathfrak{g}^{\Delta}))\cong H^1({\operatorname{Tot}}(\mathfrak{g}^{\Delta}))
\]
and obstructions are contained in
\[
H^2({TW}(\mathfrak{g}^{\Delta}))\cong H^2({\operatorname{Tot}}(\mathfrak{g}^{\Delta})).
\]
\end{remark}
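For instance, if $L$ is an abelian DGLA, then the Maurer-Cartan equation reduces to the linear equation $dx=0$ and the gauge action to translation by coboundaries, so that
\[
\operatorname{Def}_L(A)\cong H^1(L)\otimes \mathfrak{m}_A, \qquad \mbox{for every } A \in \mathbf{Art};
\]
in particular, $\operatorname{Def}_L$ is smooth. Since the deformation functor only depends on the quasi-isomorphism class of the DGLA, the same conclusion holds whenever the DGLA is homotopy abelian: this is the mechanism behind the unobstructedness results of this paper.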
\begin{example}\label{ex.cech semicosimplicial}
Let $\mathcal{L}$ be a sheaf of differential graded vector spaces over an algebraic variety $X$ and $\mathcal{U}=\{U_i\}$ an open cover of $X$; assume that the set of indices $i$ is totally ordered. We can then define
the semicosimplicial DG vector space of \v{C}ech cochains of $\mathcal{L}$ with respect to the cover $\mathcal{U}$:
\[ \mathcal{L}(\mathcal{U}):\quad \xymatrix{ {\prod_i\mathcal{L}(U_i)}
\ar@<2pt>[r]\ar@<-2pt>[r] & { \prod_{i<j}\mathcal{L}(U_{ij})}
\ar@<4pt>[r] \ar[r] \ar@<-4pt>[r] &
{\prod_{i<j<k}\mathcal{L}(U_{ijk})}
\ar@<6pt>[r] \ar@<2pt>[r] \ar@<-2pt>[r] \ar@<-6pt>[r]& \cdots},\]
where the coface maps $ \displaystyle \partial_{h}\colon
{\prod_{i_0<\cdots <i_{k-1}} \mathcal{L}(U_{i_0 \cdots i_{k-1}})}\to
{\prod_{i_0<\cdots <i_k} \mathcal{L}(U_{i_0 \cdots i_k})}$
are given by
\[\partial_{h}(x)_{i_0 \ldots i_{k}}={x_{i_0 \ldots
\widehat{i_h} \ldots i_{k}}}_{|U_{i_0 \cdots i_k}},\qquad
\text{for }h=0,\ldots, k.\]
The total complex $\operatorname{Tot}(\mathcal{L}(\mathcal{U}))$ is the associated \v{C}ech complex $C^*(\mathcal{U},\mathcal{L})$ and we denote by $TW(\mathcal{L}(\mathcal{U}))$ the associated Thom-Whitney complex. The integration map
$TW(\mathcal{L}(\mathcal{U}))\to C^*(\mathcal{U},\mathcal{L})$ is a surjective quasi-isomorphism.
If $\mathcal{L}$ is a quasicoherent DG-sheaf and every $U_i$ is affine, then the cohomology of $TW(\mathcal{L}(\mathcal{U}))$ coincides with the cohomology of $\mathcal{L}$.
\end{example}
\begin{example}\label{example.funtore se DGLA sono LIE} \cite{FIM,FMM}
If each $\mathfrak{g}_i$ is concentrated in degree zero, i.e.,
$\mathfrak{g}^\Delta$ is a semicosimplicial Lie
algebra, then the functor $\operatorname{Def}_{ TW(\mathfrak{g}^{\Delta})} $ has another explicit description; namely, it is isomorphic to the following functor:
\[
H^1_{\rm sc}(\exp \mathfrak{g}^\Delta): \mathbf{Art} \to \mathbf{Set}
\]
\[
H ^1_{sc}(\exp { \mathfrak{g}}^\Delta )(A)=\frac{\{ x \in {\mathfrak{g}}_1 \otimes
\mathfrak{m}_A \ |\ e^{\partial_{0}x}e^{-\partial_{1}x}e^{\partial_{2}x}=1
\}}{\sim},
\]
where $x \sim y$ if and only if there exists
$a\in {\mathfrak{g}}_0\otimes\mathfrak{m}_A$, such that
$e^{-\partial_{1}a}e^{x}e^{\partial_{0}a}=e^y$.
\medskip
In particular, let $Z \subset X$ be a closed subscheme of a smooth variety $X$, $\mathcal{U}=\{U_i\}$ an open affine cover of $X$ and consider
$\mathfrak{g}^\Delta = {\Theta}_X(-\log Z)(\mathcal{U})$. Then, for every $A \in \mathbf{Art}$, we have
\[
\operatorname{Def}_{TW(\mathfrak{g}^\Delta )}(A)\cong\frac{\{ \{x_{ij}\} \in \prod_{i<j} {\Theta}_X(-\log Z)(U_{ij})
\otimes
\mathfrak{m}_A \ |\ e^{x_{jk}} e^{-x_{ik}} e^{x_{ij}}=1
\}}{\sim},
\]
where $x \sim y$ if and only if there exists
$\{a_i\}_i \in \prod_i{\Theta}_X(-\log Z)(U_{i })\otimes\mathfrak{m}_A$, such that $e^{-a_i}e^{x_{ij}}e^{a_j}=e^{y_{ij}}$ \cite[Theorem 4.1]{FMM}.
\end{example}
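For instance, over $A=\mathbb{K}[\epsilon]/(\epsilon^2)$, the condition $e^{x_{jk}} e^{-x_{ik}} e^{x_{ij}}=1$ reduces to the \v{C}ech cocycle condition $x_{jk}-x_{ik}+x_{ij}=0$, and the equivalence relation to $y_{ij}=x_{ij}+a_j-a_i$; therefore,
\[
T\operatorname{Def}_{TW(\mathfrak{g}^\Delta)}\cong H^1(\mathcal{U},\Theta_X(-\log Z))\cong H^1(X,\Theta_X(-\log Z)),
\]
the last isomorphism holding because the cover $\mathcal{U}$ is affine and the sheaf $\Theta_X(-\log Z)$ is coherent.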
\bigskip
The notion of Cartan homotopy is related to the notion of calculus and it can be extended to the semicosimplicial setting.
\begin{definition}\label{def.contraction} \cite{TT05,semireg}
Let $L$ be a differential graded Lie algebra and $V$ a differential
graded vector space. A bilinear map
\[
L \times V \xrightarrow{\quad{\mspace{1mu}\lrcorner\mspace{1.5mu}}\quad}
V
\]
of degree $-1$ is called a \emph{calculus} if the induced map
\[ \boldsymbol{i} \colon L \to \operatorname{Hom}^*_{\mathbb{K}}(V,V), \qquad \boldsymbol{i}_l(v) = l{\mspace{1mu}\lrcorner\mspace{1.5mu}} v,\]
is a Cartan homotopy.
\end{definition}
\begin{definition}\label{def.contractioncosimpl}
Let $\mathfrak{g}^\Delta$ be a semicosimplicial DGLA and
$V^\Delta$ a semicosimplicial differential graded vector space.
A \emph{semicosimplicial Lie-calculus}
\[ \mathfrak{g}^\Delta\times V^\Delta\xrightarrow{\;{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;} V^\Delta,\]
is a sequence of calculi $\mathfrak{g}_n\times V_n\xrightarrow{\;
{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;} V_n$, $n\ge 0$,
commuting with coface maps, i.e., $\partial_k(l{\mspace{1mu}\lrcorner\mspace{1.5mu}} v)=\partial_k(l){\mspace{1mu}\lrcorner\mspace{1.5mu}}
\partial_k(v)$, for every $k$.
\end{definition}
\begin{lemma}\label{lem.TWforcontractions}
Every semicosimplicial calculus
\[ \mathfrak{g}^\Delta\times V^\Delta\xrightarrow{\;{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;} V^\Delta\]
extends naturally to a calculus
\[ {TW}(\mathfrak{g}^\Delta)\times {TW}(V^\Delta)
\xrightarrow{\;{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;} {TW}(V^\Delta).\]
Therefore, the induced map
\[ \boldsymbol{i}\colon {TW}(\mathfrak{g}^\Delta)
\to \operatorname{Hom}^*_{\mathbb{K}}({TW}(V^\Delta),{TW}(V^\Delta)) \]
is a Cartan homotopy.
\end{lemma}
\begin{proof}
\cite[Proposition 4.9]{algebraicBTT}.
\end{proof}
\begin{example}\label{exe. cartan homoto relativa TW}
Let $X$ be a smooth algebraic variety and $D$ a normal crossing divisor.
Denote by $(\Omega^{\ast}_X(\log D),d)$ the logarithmic differential complex and $\Theta_X(-\log D)$ the usual subsheaf of $\Theta_X$ preserving the ideal of $D$.
According to Example~\ref{exam.cartan relativo su ogni aperto},
for every open subset $U\subset X$, we have a contraction
\[\Theta_X(-\log D)(U)\ \times\ \Omega^*_X(\log D)(U)\, \xrightarrow{\;{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;}
\Omega^*_X(\log D)(U).\]
Since it commutes with restrictions to open subsets, for every affine open cover $\mathcal{U}$ of $X$, we have a
semicosimplicial contraction
\[
\Theta_X(-\log D)(\mathcal{U})\times\Omega^*_X (\log D)(\mathcal{U})
\xrightarrow{\;{\mspace{1mu}\lrcorner\mspace{1.5mu}}\;} \Omega^*_X (\log D)(\mathcal{U}).
\]
According to Lemma \ref{lem.TWforcontractions}, the Cartan homotopy
\[\boldsymbol{i}\colon {TW}( \Theta_X(-\log D)(\mathcal{U})) \longrightarrow
\operatorname{Hom}^*( {TW}(\Omega^*_X (\log D)(\mathcal{U})),
{TW}(\Omega^*_X (\log D)(\mathcal{U})))\]
is well defined.
\end{example}
\section{Deformations of pairs}\label{section deformation}
Let $Z \subset X$ be a closed subscheme of a smooth variety $X$ and denote by $j:Z \hookrightarrow X$ the closed embedding. Note that at this point we are assuming neither that $Z$ is a divisor nor that $Z$ is smooth.
We recall the definition of infinitesimal deformations of the closed embedding $j:Z \hookrightarrow X$, i.e., infinitesimal deformations of the pair $(X,Z)$; full details can be found for instance in \cite[Section 3.4.4]{Sernesi} or \cite{Kaw0}.
\begin{definition}
Let $A\in \mathbf{Art}$. An \emph{infinitesimal deformation} of $j:Z \hookrightarrow X$
over $\operatorname{Spec}(A)$ is a diagram
\begin{center}
$\xymatrix{\mathcal{Z} \ar[rr]^J\ar[dr]_p & &\mathcal{X} \ar[dl]^\pi \\
& \operatorname{Spec}(A), & & \\ }$
\end{center}
where $p$ and $\pi$ are flat maps, such that the diagram is isomorphic to
$j:Z \hookrightarrow X$ via the pullback $\operatorname{Spec}(\mathbb{K}) \to \operatorname{Spec}(A)$. Note that $J$ is also a closed embedding \cite[p.~185]{Sernesi}.
Given another infinitesimal deformation of $j$:
\begin{center}
$\xymatrix{\mathcal{Z}' \ar[rr]^{J'}\ar[dr]_{p'} & & \mathcal{X}' \ar[dl]^{\pi'} \\
& \operatorname{Spec}(A), & & \\ }$
\end{center}
an isomorphism between these two deformations is a pair of isomorphisms of deformations:
\[
\alpha: \mathcal{Z} \to \mathcal{Z}' , \qquad \beta: \mathcal{X} \to \mathcal{X}'
\]
such that the following diagram
\begin{center}
$\xymatrix{\mathcal{Z} \ar[rr]^{J}\ar[d]_{\alpha} & & \mathcal{X } \ar[d]^{\beta} \\
\mathcal{Z}' \ar[rr]^{J'} & & \mathcal{X}' \\ }$
\end{center}
is commutative.
The associated infinitesimal deformation functor is
$$
\operatorname{Def}_{(X,Z)} : \mathbf{Art} \to \mathbf{Set},
$$
$$
\operatorname{Def}_{(X,Z)}(A)= \{
\mbox{isomorphism classes of infinitesimal deformations of $j$ over
$\operatorname{Spec}(A)$} \}.
$$
\end{definition}
Furthermore, we define the sub-functor
$$
{\operatorname{Def}'}_{(X,Z)} : \mathbf{Art} \to \mathbf{Set},
$$
$$
{\operatorname{Def}'}_{(X,Z)}(A) =
\left\{\begin{array}{c} \mbox{ isomorphism classes of locally trivial}\\
\mbox{ infinitesimal deformations of $j$ over $\operatorname{Spec}(A)$}\end{array} \right\}.
$$
\begin{remark}
Since every nonsingular affine algebraic variety is rigid \cite[Theorem 1.2.4]{Sernesi}, whenever $Z\subset X$ is smooth, every deformation of $j$ is locally trivial and so ${\operatorname{Def}}_{(X,Z)}\cong {\operatorname{Def}'}_{(X,Z)}$.
\end{remark}
Let $\mathcal{U}=\{U_i\}_{i \in I}$ be an affine open cover of $X$ and $TW( {\Theta}_X(-\log Z)(\mathcal{U}))$ the DGLA associated with the sheaf of Lie algebras ${\Theta}_X(-\log Z)$ as in Example \ref{ex.cech semicosimplicial}.
\begin{theorem}\label{teo. DGLA controlling def of j}
Under the above assumptions, the DGLA $TW( {\Theta}_X(-\log Z)(\mathcal{U}))$ controls the locally trivial deformations of the closed embedding $j:Z \hookrightarrow X$, i.e., there exists an isomorphism of deformation functors
\[
\operatorname{Def}_{TW( {\Theta}_X(-\log Z)(\mathcal{U}))} \cong
\operatorname{Def}'_{(X,Z)}.
\]
In particular, if $Z\subset X$ is smooth, then
$\operatorname{Def}_{TW( {\Theta}_X(-\log Z)(\mathcal{U}))} \cong
\operatorname{Def}_{(X,Z)}$.
\end{theorem}
\begin{proof}
See also \cite[Proposition 3.4.17]{Sernesi}, \cite[Theorem 4.2]{donarendiconti}.
Denote by $\mathcal{V}=\{V_i=U_i\cap Z\}_{i \in I}$ the induced affine open cover of $Z$.
Every locally trivial deformation of $j$ is obtained by gluing the trivial deformations
\[
\xymatrix{ V_i \ar[rr] \ar[d]& & V_{i } \times \operatorname{Spec}(A) \ar[d]\\
U_i \ar[rr]& & U_{i } \times \operatorname{Spec}(A), \\ }
\]
in a compatible way along the double intersections $V_{ij} \times \operatorname{Spec}(A)$ and $U_{ij} \times \operatorname{Spec}(A)$.
Therefore, it is determined by automorphisms of the trivial deformations that glue over triple intersections, i.e., by pairs of automorphisms $(\varphi_{ij}, \phi_{ij})$, where
\[ \varphi_{ij} \colon V_{ij} \times \operatorname{Spec}(A) \to V_{ij} \times \operatorname{Spec}(A)
\qquad \mbox{ and } \qquad \phi_{ij} \colon U_{ij} \times \operatorname{Spec}(A) \to U_{ij} \times \operatorname{Spec}(A)
\]
are automorphisms of deformations satisfying the cocycle condition on triple intersections and such that the following diagram
\[
\xymatrix{ V_{ij} \times \operatorname{Spec}(A) \ar[rr]^{\varphi_{ij}} \ar[d]& & V_{ij} \times \operatorname{Spec}(A) \ar[d]\\
U_{ij} \times \operatorname{Spec}(A) \ar[rr]^{\phi_{ij}}& & U_{ij} \times \operatorname{Spec}(A) \\ }
\]
commutes.
Equivalently, we have ${\phi_{ij}}_{| V_{ij}}=\varphi_{ij}$.
Since we are in characteristic zero, we can take the logarithms
so that $\varphi_{ij}=e^{d_{ij}}$, for some
$d_{ij}\in\Theta_Z(V_{ij})\otimes\mathfrak{m}_A$, and $\phi_{ij}=e^{D_{ij}}$, for some $D_{ij}\in\Theta_X(U_{ij})\otimes\mathfrak{m}_A$. The compatibility condition is equivalent to the condition $D_{ij} \in \Gamma(U_{ij},{\Theta}_X(-\log Z))\otimes\mathfrak{m}_A$. Summing up, a deformation of $j$ over $\operatorname{Spec} (A)$ corresponds to the datum of a sequence $\{D_{ij}\}_{ij} \in \prod_{ij}{\Theta}_X(-\log Z)(U_{ij})\otimes\mathfrak{m}_A$ satisfying the cocycle condition
\begin{equation}\label{equa.cociclo auto Vij}
e^{D_{jk}} e^{-D_{ik}} e^{D_{ij}} =\operatorname{Id} , \qquad
\forall \ i<j<k \in I.
\end{equation}
Next, let $J'$ be another deformation of
$j$ over $\operatorname{Spec}(A)$. Giving an isomorphism of deformations between $J$ and $J'$ is equivalent to giving, for every $i$, an automorphism $\alpha_i$ of $V_{i } \times \operatorname{Spec}(A)$ and an automorphism $\beta_i$ of $U_{i } \times \operatorname{Spec}(A)$, that are isomorphisms of deformations of $Z$ and $X$, respectively, i.e., for every $i<j$,
$\varphi_{ij}= {\alpha_i}^{-1}
{\varphi_{ij}'}\alpha_j$ and $\phi_{ij}= {\beta_i}^{-1}
{\phi_{ij}'}\beta_j$.
Moreover, they have to be compatible, i.e., the following diagram
\begin{center}
$\xymatrix{V_i \times \operatorname{Spec}(A) \ar[rr] \ar[d]_{\alpha_i} & & U_i \times \operatorname{Spec}(A) \ar[d]^{\beta_i} \\
V_i \times \operatorname{Spec}(A) \ar[rr] & & U_i \times \operatorname{Spec}(A) \\ }$
\end{center}
has to commute, for every $i$.
Taking again logarithms, an isomorphism between the deformations $J$ and $J'$ is equivalent to the existence of a sequence $\{a_i\}_i \in \prod_i\Theta_X(-\log Z)(U_i)\otimes\mathfrak{m}_A$,
such that $e^{-a_i}e^{D'_{ij}}e^{a_j}=e^{D_{ij}}$. Then, the conclusion follows from
the explicit description of the functor $
\operatorname{Def}_{TW( {\Theta}_X(-\log Z)(\mathcal{U}))} $ given in
Example \ref{example.funtore se DGLA sono LIE}.
\end{proof}
\begin{remark}\label{remark def X as def of trivial pair}
If $Z=0$, then we are simply considering the infinitesimal deformations of the smooth variety $X$, which are controlled by the tangent sheaf, i.e.,
$ \operatorname{Def}_{TW( {\Theta}_X (\mathcal{U}))} \cong \operatorname{Def}_X$, for any open affine cover $\mathcal{U}$ of $X$
\cite[Theorem 5.3]{algebraicBTT}.
\end{remark}
\begin{example}\label{exa DGLA (X,D) on C}
In the case $\mathbb{K}=\mathbb{C}$, we can consider the DGLA $(A^{0,*}_X({\Theta}_X(-\log Z))= \oplus_i \Gamma(X, \mathcal{A}^{0,i}_X({\Theta}_X(-\log Z))), {\overline{\partial}}, [ , ])$ as an explicit model for $TW( {\Theta}_X(-\log Z)(\mathcal{U}))$ \cite[Section 5]{ManettiSemireg} \cite[Corollary V.4.1]{Iaconophd}.
\end{example}
\section{Obstructions of pairs}
\label{section obstruction computations}
In this section, we analyse the obstructions to the infinitesimal deformations of pairs, whenever the subvariety is a divisor, so that we can make use of the logarithmic differential complex $(\Omega^{\ast}_X(\log D),d)$.
\begin{theorem}\label{thm.maintheorem}
Let $X$ be a smooth projective variety of dimension $n$, defined
over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. If
the contraction map
\begin{equation}\label{eqazione semiregolarity}
H^*(X,\Theta_X (-\log D))\xrightarrow{\boldsymbol{i}}\operatorname{Hom}^*(H^*(X,\Omega^n_X (\log D)),H^*(X,\Omega^{n-1}_X (\log D)))
\end{equation}
is injective, then the DGLA $TW( {\Theta}_X(-\log D)(\mathcal{U}))$ is homotopy abelian, for every affine open cover $\mathcal{U}$ of
$X$.
\end{theorem}
\begin{proof}
According to Lemma~\ref{lem.criterioquasiabelianita}, it is
sufficient to prove the existence of a homotopy abelian
DGLA $H$ and an $L_\infty$-morphism
$TW( {\Theta}_X(-\log D)(\mathcal{U}))
\dashrightarrow H$, such that the induced map of complexes is injective in cohomology.
We use the Cartan homotopy to construct the morphism, as in Lemma \ref{lem.cartan induce morfismo TW}, and the homotopy fibre construction to provide a homotopy abelian DGLA $H$, as in Lemma \ref{lem.criterio TW abelian}.
Let $\mathcal{U}$ be an affine open cover of $X$. For every $i\le n$, denote by $\check{C}(\mathcal{U},\Omega^i_X (\log D))$ the \v{C}ech complex of the coherent sheaf $ \Omega^i_X (\log D)$, and
$\check{C}(\mathcal{U},\Omega^\ast_X (\log D))$ the total complex of the
logarithmic de Rham complex $\Omega^*_X (\log D)$ with
respect to the cover $\mathcal{U}$, as in Example \ref{ex.cech semicosimplicial}.
We note that
\[
\check{C}(\mathcal{U},\Omega^\ast_X (\log D))^i=
\bigoplus_{a+b=i}\check{C}(\mathcal{U},\Omega^a_X (\log D))^b,
\]
and that $\check{C} (\mathcal{U}, \Omega^n_X (\log D))$ is a subcomplex of
$\check{C} (\mathcal{U}, \Omega^\ast_X (\log D))$.
\smallskip
We also have a commutative diagram of complexes with horizontal
quasi-isomorphisms:
\[
\xymatrix{
\check{C}(\mathcal{U}, \Omega^n_X (\log D))
\ar[rr] \ar@{^{(}->}[d] & & {TW} ( \Omega^n_X (\log D)(\mathcal{U}))
\ar@{^{(}->}[d] \\
\check{C} (\mathcal{U},\Omega^\ast_X (\log D))
\ar[rr] & & {TW}(\Omega^\ast_X (\log D) (\mathcal{U})).
}\]
According to Theorem \ref{teo degen deligne}, the spectral sequence associated with the Hodge filtration degenerates at the $E_1$-level, where $E_1^{p,q}=H^q(X,\Omega^{p}_X(\log D))$;
this implies that we have the following injections:
\[ H^*(X,\Omega^n_X (\log D)) = H^*(\check{C} (\mathcal{U},\Omega^n_X (\log D)))
\hookrightarrow
H^*(\check{C} (\mathcal{U},\Omega^*_X (\log D))).\]
\[ H^*(X, \Omega^{n-1}_X (\log D)) = H^* ( \check{C} ( \mathcal{U}, \Omega^{n-1}_X (\log D) ) )
\hookrightarrow H^*\left ( \frac{ \check{C}(\mathcal{U},\Omega^*_X (\log D))}{
\check{C}(\mathcal{U},\Omega^n_X (\log D))}\right).\]
Thus, the natural inclusions of complexes
\[
{TW}(\Omega^n_X (\log D) (\mathcal{U})) \to {TW}(
\Omega^\ast_X (\log D) (\mathcal{U})),
\]
\[
{TW} (\Omega^{n-1}_X (\log D)(\mathcal{U}))\to \frac{{TW}(
\Omega^\ast_X (\log D)(\mathcal{U}))}{{TW}(\Omega^n_X (\log D)(\mathcal{U}))},\]
induce injective morphisms in cohomology.
Consider the differential graded Lie algebras
\[
M=\operatorname{Hom}^*( {TW}( \Omega^*_X (\log D)(\mathcal{U})), {TW}
(\Omega^*_X (\log D)(\mathcal{U}))),
\]
\[
L=\{f\in M \mid
f( {TW}(\Omega^n_X (\log D)(\mathcal{U})))\subset
{TW}(\Omega^n_X (\log D)(\mathcal{U}))\},
\]
and denote by $\chi\colon L \to M$ the inclusion. Lemma \ref{lem.criterio TW abelian} implies that the homotopy fibre $TW(\chi^{\Delta})$ is homotopy abelian.
Next, we establish the existence of a morphism to this homotopy abelian DGLA.
According to Example \ref{exe. cartan homoto relativa TW}, the Cartan homotopy
\[\boldsymbol{i}\colon {TW}( \Theta_X(-\log D)(\mathcal{U})) \longrightarrow
\operatorname{Hom}^*( {TW}(\Omega^*_X (\log D)(\mathcal{U})),
{TW}(\Omega^*_X (\log D)(\mathcal{U})))\]
is well defined.
In particular, for every $\xi\in {TW}(\Theta_X(-\log D)(\mathcal{U}))$
and every $k$, we note that
\[ \boldsymbol{i}_{\xi}( {TW}(\Omega^k_X (\log D)(\mathcal{U})))\subset
{TW}(\Omega^{k-1}_X (\log D)(\mathcal{U})),\]
\[ \boldsymbol{l}_{\xi}( {TW}(\Omega^k_X (\log D)(\mathcal{U})))\subset
{TW}(\Omega^k_X (\log D)(\mathcal{U})),\qquad \boldsymbol{l}_{\xi}
=d\boldsymbol{i}_{\xi}+\boldsymbol{i}_{d\xi}.\]
Therefore, $\boldsymbol{l}( {TW}(\Theta_X(-\log D)(\mathcal{U})))\subset L$ and so, by
Lemma \ref{lem.cartan induce morfismo TW}, there exists an
$L_\infty$-morphism
\[
{TW}( \Theta_X(-\log D)(\mathcal{U})) \stackrel{(\boldsymbol{l},\boldsymbol{i})}{ \dashrightarrow}
TW(\chi^{\Delta}).
\]
Finally, since the map $\chi$ is injective, according to Remark \ref{rem.quasiisoTWcono}, the homotopy fibre
$TW(\chi^{\Delta})$ is quasi-isomorphic to the suspension of its cokernel
\[\operatorname{Coker} \chi[-1]= \operatorname{Hom}^*\left( {TW}(\Omega^n_X (\log D)(\mathcal{U})),
\frac{ {TW}(\Omega^*_X (\log D)(\mathcal{U}))}{
{TW}(\Omega^{n}_X (\log D)(\mathcal{U}))}\right)[-1].\]
Summing up, since the $L_\infty$-morphism induces a morphism of complexes, we have the following commutative diagram of complexes
\[\xymatrix{ {TW}(\Theta_X(-\log D)(\mathcal{U}))
\ar[rr]^{(\boldsymbol{l},\boldsymbol{i})} \ar[d]^{\boldsymbol{i}} & & TW(\chi^{\Delta})
\ar[d]^{q-iso} \\
\operatorname{Hom}^*\left( {TW}(\Omega^n_X (\log D)(\mathcal{U})),
{TW}(\Omega^{n-1}_X (\log D)(\mathcal{U}))\right)
\ar[rr]^{\qquad\qquad\qquad\alpha\!\!\!\!} & & \operatorname{Coker} \chi[-1].
\\ }\]
By the assumption of the theorem, together with
\cite[3.1]{navarro}, the left-hand map is injective in
cohomology.
Since $\alpha$ is also injective in cohomology, we conclude that
the $L_\infty$-morphism $(\boldsymbol{l},\boldsymbol{i})$ is injective in cohomology.
\end{proof}
\begin{theorem}\label{thm.kodairaprinciple}
Let $X$ be a smooth projective variety defined over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. Then, the obstructions to the deformations of the pair $(X,D)$ are contained in the kernel of the contraction map
\[ H^2(X,\Theta_X(-\log D))\xrightarrow{\boldsymbol{i}}\prod_{p}
\operatorname{Hom}(H^p(X,\Omega^{n}_X (\log D)),H^{p+2}(X,\Omega^{n-1}_X (\log D))).\]
\end{theorem}
\begin{proof}
Following the proof of Theorem~\ref{thm.maintheorem},
for every affine open cover $\mathcal{U}$ of $X$,
there exists an $L_{\infty}$-morphism
$TW(\Theta_X(-\log D)(\mathcal{U}))\dashrightarrow TW
(\chi^{\Delta})$ such that
$TW(\chi^{\Delta})$ is homotopy abelian. Therefore, the deformation functor
associated with $TW(\chi^{\Delta})$ is unobstructed
and the obstructions of
$\operatorname{Def}_{(X,D)}\simeq\operatorname{Def}_{TW(\Theta_X(-\log D)(\mathcal{U}))}$ are
contained in the kernel of the obstruction map
$H^2(TW(\Theta_X(-\log D)(\mathcal{U})))\to H^2(TW(\chi^{\Delta}))$.
\end{proof}
\begin{remark}
In the previous theorem, we prove that all obstructions are annihilated by the contraction map; in general, the $T^1$-lifting theorem alone is insufficient for
proving this kind of result, see also \cite{IaconoSemireg,ManettiSeattle}.
\end{remark}
\begin{corollary} \label{corollari log calabi yau formal smooth}
Let $\mathcal{U}=\{U_i\}$ be an affine open cover of a smooth
projective variety $X$ defined over an algebraically closed field
of characteristic 0 and $D \subset X$ a smooth divisor. If $(X,D)$ is a log Calabi-Yau pair, then the DGLA $TW( {\Theta}_X(-\log D)(\mathcal{U}))$ is homotopy abelian.
\end{corollary}
\begin{proof}
Let $n$ be the dimension of $X$; then, by definition, the sheaf $\Omega^n_X (\log D)$ is trivial (Definition \ref{definiiton log calabi yau}).
Therefore, the cup product with a nontrivial section of it gives the isomorphisms
$H^i(X,\Theta_X (-\log D))\simeq H^i(X,\Omega^{n-1}_X (\log D))$. Then, the conclusion follows from Theorem~\ref{thm.maintheorem}.
\end{proof}
\begin{corollary}\label{cor.log calabi yau no obstruction}
Let $X$ be a smooth projective $n$-dimensional variety defined over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. If $(X,D)$ is a log Calabi-Yau pair, i.e., the logarithmic canonical bundle $\Omega^n_X (\log D)\cong \mathcal{O}(K_X+D)$ is trivial, then the pair $(X,D)$ has unobstructed deformations.
\end{corollary}
\begin{proof}
According to Theorem \ref{teo. DGLA controlling def of j}, for every affine open cover $\mathcal{U}$ of $X$, there exists an isomorphism of functors
$\operatorname{Def}_{(X,D)} \cong \operatorname{Def}_{TW(\Theta_X(-\log D)(\mathcal{U}))}$. Then,
Corollary~\ref{corollari log calabi yau formal smooth} implies that they are both smooth.
\end{proof}
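\begin{example}
As a standard illustration of Corollary \ref{cor.log calabi yau no obstruction} (recalled here only for the reader's convenience and not used in the sequel), take $X=\mathbb{P}^n$ and $D\subset \mathbb{P}^n$ a smooth hypersurface of degree $n+1$. Since $K_{\mathbb{P}^n}=\mathcal{O}_{\mathbb{P}^n}(-n-1)$, we have
\[
\Omega^n_X (\log D)\cong \mathcal{O}(K_X+D)\cong \mathcal{O}_{\mathbb{P}^n}(-n-1+n+1)\cong \mathcal{O}_{\mathbb{P}^n},
\]
so $(X,D)$ is a log Calabi-Yau pair and its deformations are unobstructed.
\end{example}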
\begin{remark}
For the degeneration of the spectral sequence associated with the logarithmic complex, it is enough to have a normal crossing divisor $D$ in a smooth proper variety $X$ (Theorem \ref{teo degen deligne}). Therefore, we can still perform the same computations as in Theorem \ref{thm.maintheorem} and prove that the obstructions to the locally trivial deformations of a pair $(X,D)$, with $X$ a smooth proper variety and $D$ a normal crossing divisor, are contained in the kernel of the contraction map (\ref{eqazione semiregolarity}).
Analogously, if the sheaf $\Omega^n_X (\log D)$ is trivial, the above computations prove the unobstructedness for the locally trivial deformations of the pair $(X,D)$.
\end{remark}
We end this section by proving that the DGLA associated with the infinitesimal deformations of a pair $(X,D)$, with $D$ a smooth divisor in a smooth projective Calabi-Yau variety $X$, is homotopy abelian; hence, the deformations of these pairs $(X,D)$ are unobstructed.
\begin{theorem}\label{theorem smoothness of D inside CY}
Let $\mathcal{U}=\{U_i\}$ be an affine open cover of a smooth
projective variety $X$ of dimension $n$ defined over an algebraically closed field
of characteristic 0 and $D \subset X$ a smooth divisor.
If $\Omega ^n_X$ is trivial, i.e., $X$ is Calabi-Yau, then the DGLA
$TW( {\Theta}_X(-\log D)(\mathcal{U}))$ is homotopy abelian.
\end{theorem}
\begin{proof}
The proof is similar to the one of Theorem \ref{thm.maintheorem}.
According to Theorem \ref{teo degen tensor}, the Hodge-to-de Rham spectral sequence associated with the complex $\Omega^*_X (\log D) \otimes \mathcal{O}_X(-D) $ degenerates at the $E_1$ level.
Therefore, we have injective maps
\[ H^*(\check{C}(\mathcal{U},\Omega^n_X (\log D) \otimes \mathcal{O}_X(-D)))
\hookrightarrow
H^*(\check{C}(\mathcal{U},\Omega^*_X (\log D) \otimes \mathcal{O}_X(-D))) \]
\[ H^*(\check{C}(\mathcal{U},\Omega^{n-1}_X (\log D) \otimes \mathcal{O}_X(-D)))
\hookrightarrow
H^*\left(\frac{\check{C}(\mathcal{U},\Omega^*_X (\log D) \otimes \mathcal{O}_X(-D))}{
\check{C}(\mathcal{U},\Omega^n_X (\log D) \otimes \mathcal{O}_X(-D))}\right),\]
and so the natural inclusions of complexes
\begin{equation}\label{eq tot injective in cohomology Omega tensor}
{TW}(\Omega^n_X (\log D) \otimes \mathcal{O}_X(-D)(\mathcal{U}))\to {TW}(
\Omega^*_X (\log D) \otimes \mathcal{O}_X(-D)(\mathcal{U})),
\end{equation}
\begin{equation}\label{eq tot injective in cohomology Omega n-1 tensor in quot}
{TW}(\Omega^{n-1}_X (\log D) \otimes \mathcal{O}_X(-D)(\mathcal{U}))\to \frac{ {TW}(
\Omega^*_X (\log D) \otimes \mathcal{O}_X(-D)(\mathcal{U}))}{ {TW}(\Omega^n_X (\log D) \otimes \mathcal{O}_X(-D)(\mathcal{U}))},
\end{equation}
are injective in cohomology. Observe that $\Omega^n_X (\log D)\otimes \mathcal{O}_X(-D) \cong \Omega^n_X$,
according to Remark \ref{remark exac sequence Omega(logD)(-D)}.
Consider the inclusion of DGLAs $\chi:L \to M$, where
\[
M=\operatorname{Hom}^*( {TW}( \Omega^*_X (\log D)\otimes \mathcal{O}_X(-D) (\mathcal{U})), {TW}
(\Omega^*_X (\log D)\otimes \mathcal{O}_X(-D) (\mathcal{U})))
\]
and
\[
L=\{f\in M \mid
f( {TW}( \Omega^n_X (\mathcal{U})))\subset
{TW}( \Omega^n_X (\mathcal{U}))\}
\]
is the sub-DGLA preserving $ {TW}( \Omega^n_X (\mathcal{U}))={TW}(\Omega^n_X (\log D)\otimes \mathcal{O}_X(-D) (\mathcal{U}))$.
Note that
\[\operatorname{Coker} \chi[-1]= \operatorname{Hom}^*\left( {TW}(\Omega^n_X( \mathcal{U})),
\frac{ {TW}(\Omega^*_X (\log D)\otimes \mathcal{O}_X(-D)(\mathcal{U}))}{
{TW}(\Omega^{n}_X (\mathcal{U}))}\right)[-1].
\]
Lemma \ref{lem.criterio TW abelian} together with Equation \eqref{eq tot injective in cohomology Omega tensor} implies that $TW(\chi^{\Delta})$ is homotopy abelian.
As in the proof of Theorem \ref{thm.maintheorem}, the Cartan homotopy
\[\boldsymbol{i}\colon {TW}( \Theta_X(-\log D)(\mathcal{U})) \longrightarrow M
\]
is well defined and, in particular, $\boldsymbol{l}( {TW}(\Theta_X(-\log D)(\mathcal{U})))\subset L$. Therefore,
Lemma \ref{lem.cartan induce morfismo TW} implies the existence of an
$L_\infty$-morphism
\[
{TW}( \Theta_X(-\log D)(\mathcal{U})) \stackrel{(\boldsymbol{l},\boldsymbol{i})}{ \dashrightarrow}
TW(\chi^{\Delta}).
\]
According to Lemma~\ref{lem.criterioquasiabelianita}, to conclude the proof it is enough to show that this map is
injective in cohomology.
As a morphism of complexes, the previous map fits in the following commutative diagram of complexes
\[\xymatrix{ {TW}(\Theta_X(-\log D)(\mathcal{U}))
\ar[rr]^{(\boldsymbol{l},\boldsymbol{i})} \ar[d]^{\boldsymbol{i}} & & TW(\chi^{\Delta})
\ar[d]^{q-iso} \\
\operatorname{Hom}^*\left( {TW}(\Omega^n_X(\mathcal{U})),
{TW}(\Omega^{n-1}_X (\log D)\otimes \mathcal{O}_X(-D) (\mathcal{U}))\right)
\ar[rr]^{\qquad\qquad\qquad\alpha\!\!\!\!} & & \operatorname{Coker} \chi[-1],
\\ }\]
where $\alpha$ is injective in cohomology by Equation \eqref{eq tot injective in cohomology Omega n-1 tensor in quot}.
At this point, we use the fact that $X$ is a smooth projective Calabi-Yau variety.
Since $\Omega ^n_X$ is trivial, the cup product with a nontrivial section gives the isomorphisms
$H^i(X,\Theta_X (-\log D))\simeq H^i(X,\Omega^{n-1}_X (\log D)\otimes \mathcal{O}_X(-D) )$, for every $i$.
Therefore, the left map in the diagram is injective in cohomology and so the same holds for ${(\boldsymbol{l},\boldsymbol{i})}$.
\end{proof}
\begin{corollary}\label{cor. D in calabi yau no obstruction}
Let $X$ be a smooth projective Calabi-Yau variety defined over an algebraically closed field of characteristic 0 and $D \subset X$ a smooth divisor. Then, the pair $(X,D)$ has unobstructed deformations.
\end{corollary}
\section{Application to cyclic covers}\label{section cyclic cover}
Let $X$ be a smooth projective variety over an algebraically
closed field $\mathbb{K}$ of characteristic 0. If $X$ has trivial canonical bundle, then the deformations of $X$ are unobstructed. It is actually enough that the canonical bundle $K_X$ is a torsion line bundle, i.e., there exists $m>0$ such that $K_X^{ m}= \mathcal{O}_X$, see for instance \cite[Corollary 2]{zivran}, \cite[Corollary B]{manetti adv}, \cite[Corollary 6.5]{algebraicBTT}.
Indeed, consider the unramified $m$-cyclic cover defined by the line bundle $L=K_X$, i.e.,
$\pi: Y= \operatorname{Spec} (\bigoplus_{i=0}^{m-1} L^{-i}) \to X $. Then, $\pi$ is a finite flat map of degree $m$ and $Y$ is a smooth projective variety with trivial canonical bundle ($K_Y \cong \pi^* K_X\cong \mathcal{O}_Y$) and so it has unobstructed deformations.
Let $\mathcal{U}=\{U_i\}_i$ be an affine cover of $X$ and fix $\mathcal{V}=\{\pi^{-1}(U_i)\}_i$ the induced cover of $Y$. Then, the pull back map induces a morphism of DGLAs
$ TW( \Theta_X(\mathcal{U})) \to TW( \Theta_Y(\mathcal{V})) $
that is injective in cohomology; since the DGLA $ TW( \Theta_Y(\mathcal{V})) $ is homotopy abelian, Lemma \ref{lem.criterioquasiabelianita} implies that $TW( \Theta_X(\mathcal{U}))$ is also homotopy abelian and so $\operatorname{Def}_X$ is unobstructed \cite[Theorem 6.2]{algebraicBTT}.
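\begin{example}
A classical instance of this situation (recalled only as an illustration) is given by an Enriques surface $X$: its canonical bundle satisfies $K_X\neq \mathcal{O}_X$ and $K_X^{2}=\mathcal{O}_X$, and the unramified double cover $Y$ defined by the $2$-torsion line bundle $K_X$ is a K3 surface, which has trivial canonical bundle; hence $\operatorname{Def}_X$ is unobstructed.
\end{example}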
As observed in Remark \ref{remark def X as def of trivial pair}, the infinitesimal deformations of $X$ can be considered as deformations of the pair $(X,D)$ with $D=0$.
Then, according to Iitaka's philosophy and inspired by \cite{Sano}, the idea is to extend the previous computations to the logarithmic case, i.e., to a pair $(X,D)$ with $D$ a smooth divisor, by considering
cyclic covers of $X$ branched on the divisor $D$ (indeed, if $D=0$ we obtain the unramified covers).
We first recall some properties of these covers; for full details see for instance \cite{pardini},
\cite[Section 3]{librENSview} or \cite[Section 2.4]{kollarmori}.
Suppose we have a line bundle $L$ on $X$, a positive integer $m \geq 1$ and a nonzero section $s \in \Gamma (X, L^{ m})$ which defines the smooth divisor $D \subset X$ (as usual $L^m$ stands for $L^{\otimes m}$).
The cyclic cover $\pi: Y \to X$ of degree $m$ and branched over $D$ is, in the language of \cite{pardini}, the simple abelian cover determined by its building data $L$ and $D$, such that $mL \equiv D$, associated with the cyclic group $G$ of order $m$.
More explicitly, the variety $Y=\operatorname{Spec} (\bigoplus_{i=0}^{m-1} L^{-i}) $ is smooth and there exists
a section $s' \in \Gamma(Y, \pi^* L)$, with $(s')^m = \pi^* s$. The divisor $\Delta= (s')$ is also smooth and maps isomorphically to $D$ so that $\pi^* D= m \Delta$ and $\pi^*L= \mathcal{O}_Y(\Delta)$. Moreover,
\[
\pi_* \mathcal{O}_Y = \bigoplus_{i=0}^{m-1} L^{-i},
\qquad K_Y=\pi^*K_X \otimes \mathcal{O}_Y((m-1)\Delta)= \pi^*(K_X \otimes L^{m-1})
\]
and
\[
\pi^* \Omega_X^i(\log D)\cong \Omega_Y^i(\log \Delta) \ \mbox{ for all } \ i;
\]
in particular, $ K_Y\otimes \mathcal{O}_Y(\Delta)=\pi^*(K_X) \otimes \mathcal{O}_Y(m\Delta)= \pi^*(K_X \otimes\mathcal{O}_X(D))$
\cite[Lemma 3.16]{librENSview} or \cite[Proposition 4.2.4]{lazar}.
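\begin{example}
As a sanity check of the above formulas (this standard example is not used in the sequel), let $X=\mathbb{P}^2$, $L=\mathcal{O}_{\mathbb{P}^2}(3)$, $m=2$ and $D$ a smooth sextic curve, so that $mL\equiv D$. The double cover $\pi:Y\to X$ branched over $D$ satisfies
\[
\pi_*\mathcal{O}_Y=\mathcal{O}_{\mathbb{P}^2}\oplus \mathcal{O}_{\mathbb{P}^2}(-3), \qquad
K_Y=\pi^*(K_X\otimes L)=\pi^*\mathcal{O}_{\mathbb{P}^2}(-3+3)\cong\mathcal{O}_Y,
\]
so $Y$ is a K3 surface, while $K_Y\otimes\mathcal{O}_Y(\Delta)=\pi^*(K_X\otimes \mathcal{O}_X(D))=\pi^*\mathcal{O}_{\mathbb{P}^2}(3)$ is not trivial, i.e., $(Y,\Delta)$ is not a log Calabi-Yau pair.
\end{example}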
\medskip
Since $\pi:Y \to X$ is a finite map, for any sheaf $\mathcal{F}$ on $Y$, the higher direct image sheaves vanish and so the Leray spectral sequence $E_2^{p,q} =H^p(X, R^q\pi_* \mathcal{F}) \Rightarrow H^{p+q}(Y,\mathcal{F})$ degenerates at level $E_2$; therefore, it induces isomorphisms:
$
H^p(X, \pi_* \mathcal{F}) \cong H^{p}(Y,\mathcal{F}), \ \forall \ p.
$
In particular, for any locally free sheaf $\mathcal{E}$ on $X$ we have:
\[
H^p(X, \pi_*\pi^* \mathcal{E}) \cong H^{p}(Y, \pi^*\mathcal{E}), \quad \forall \ p.
\]
By the projection formula
\[
\pi_*\pi^* \mathcal{E} \cong \pi_*(\pi^* \mathcal{E}\otimes \mathcal{O}_Y) \cong \mathcal{E}\otimes \pi_* \mathcal{O}_Y
\cong \mathcal{E}\otimes \bigoplus_{i=0}^{m-1} L^{-i} ;\]
then, for any locally free sheaf $\mathcal{E}$ on $X$
\begin{equation}\label{equation. comolo summand G action}
H^p(X, \mathcal{E}\otimes L^{ -i} ) \subseteq H^p(X, \pi_*\pi^* \mathcal{E})\cong H^{p}(Y, \pi^*\mathcal{E}) , \ \forall \ p, \, i,
\end{equation}
and in particular
\begin{equation}\label{equation. comolo summand invariant G action}
H^p(X, \mathcal{E}) \subseteq H^p(X, \pi_*\pi^* \mathcal{E})\cong H^{p}(Y, \pi^*\mathcal{E}) , \ \forall \ p.
\end{equation}
\begin{remark}
Note that the $m$-cyclic group $G$ acts on $\pi_*\pi^* \mathcal{E} $: the invariant summand of $\pi_*\pi^* \mathcal{E} $ is $(\pi_*\pi^* \mathcal{E} )^{inv} = \mathcal{E} $, while $ \mathcal{E} \otimes L^{ -i}$ is the direct summand of $\pi_*\pi^* \mathcal{E} $ on which $G$ acts via multiplication by $\zeta ^i$, where $\zeta$ is a primitive $m$-th root of unity.
\end{remark}
\begin{proposition}\label{proposition morismo DGLA cover}
In the above notation, let $\pi: Y \to X$ be the $m$-cyclic cover branched over $D$ with $\pi^* D= m \Delta$. Let $\mathcal{U}=\{U_i\}_i$ be an affine open cover of $X$ and $\mathcal{V}=\{\pi^{-1}(U_i)\}_i$ the induced affine open cover of $Y$; then, the pull back defines a morphism of DGLAs
\[
TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( {\Theta}_Y(-\log \Delta)(\mathcal{V}))
\]
that is injective in cohomology.
\end{proposition}
\begin{proof}
Let $U\subset X$ be an affine open subset and $ V=\pi^{-1}(U) $. Then, the pull back map induces a morphism
$ {\Theta}_X(-\log D)(U) \to \pi^* {\Theta}_X(-\log D)(V)$, which is compatible with restriction to open subsets.
Therefore, fixing an affine cover $\mathcal{U}=\{U_i\}_i$ of $X$ and denoting by $\mathcal{V}=\{\pi^{-1}(U_i)\}_i$ the induced affine cover of $Y$, the pull back map induces a morphism of DGLAs
\[
TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( \pi^* {\Theta}_X(-\log D)(\mathcal{V})) .
\]
Since $ \pi^* \Omega_X^1(\log D)\cong \Omega_Y^1(\log \Delta)$, we have $
\pi^*{\Theta}_X(-\log D)\cong {\Theta}_Y(-\log \Delta) $.
Moreover, the pull back morphism
\[
{\Theta}_X(-\log D)\stackrel{\pi^*}{\to} \pi^*{\Theta}_X(-\log D) \cong {\Theta}_Y(-\log \Delta)
\]
induces injective morphisms on the cohomology groups. Indeed,
$H^i(X, {\Theta}_X(-\log D))$ is a direct summand of $H^i(X, \pi_* \pi^* {\Theta}_X(-\log D)) \cong H^i(Y, \pi^* {\Theta}_X(-\log D)) \cong H^i(Y, {\Theta}_Y(-\log \Delta)) $.
It follows that the induced morphism of DGLAs
\[
TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( {\Theta}_Y(-\log \Delta)(\mathcal{V}))
\]
is injective in cohomology.
\end{proof}
\begin{remark}\label{remark no obstruct up implies no obstr down}
The morphism of DGLAs $TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( {\Theta}_Y(-\log \Delta)(\mathcal{V}))$ induces a morphism of the associated deformation functors
\[ \operatorname{Def}_{(X,D)} \to \operatorname{Def}_{(Y,\Delta)}.
\]
According to Lemma \ref{lem.criterioquasiabelianita}, if $TW( {\Theta}_Y(-\log \Delta)(\mathcal{V})) $ is homotopy abelian, so that the deformations of the pair $(Y, \Delta)$ are unobstructed, then $TW( \Theta_X(-\log D) (\mathcal{U}))$ is also homotopy abelian and so the deformations of the pair $(X,D)$ are also unobstructed.
In particular, this happens if the pair $(Y, \Delta)$ is log Calabi-Yau. Note that this is a sufficient but not necessary condition for the unobstructedness of $(X,D)$, as we can observe in the following proposition.
\end{remark}
\begin{proposition}\label{proposition D in -mKx smooth pair}
Let $X$ be a smooth projective variety and $D$ a smooth divisor such that $D \in | -mK_X|$, for some positive integer $m$. Then, for every affine open cover $\mathcal{U}$ of $X$, the DGLA $TW( \Theta_X(-\log D) (\mathcal{U}))$ is homotopy abelian and so the deformations of the pair $(X,D) $ are unobstructed.
\end{proposition}
\begin{proof}
Let $n$ be the dimension of $X$ and consider the $m$-cyclic cover $\pi: Y\to X$ branched over $D$ defined by the line bundle $L=K_X^{ - 1}$ together with a section $s \in H^0(X, L ^{ m})$ defining $D$. Note that $ \Omega_X^n(\log D)\otimes L^{ -m+1} \cong L^{ - m}\otimes \mathcal{O}_X(D)\cong \mathcal{O}_X (mK_X+D)\cong \mathcal{O}_X$.
Defining $\Delta$ as before, i.e., $\pi^* D= m \Delta$, we also have
\[
K_Y=\pi^*K_X \otimes \mathcal{O}_Y((m-1)\Delta)= \pi^*(K_X \otimes L^{ m-1})=\pi^*( L^{ m-2})
\]
and in particular,
\[
K_Y\otimes \mathcal{O}_Y(\Delta) = \pi^*(K_X \otimes\mathcal{O}_X(D))=\pi^*( L^{ m-1}).
\]
According to Equations \eqref{equation. comolo summand G action} and \eqref{equation. comolo summand invariant G action}, we have the following inclusions
\[
H^p(Y, {\Theta}_Y(-\log \Delta)) \supset H^p(X, \pi_*( {\Theta}_Y(-\log \Delta))^{inv}) \cong H^p(X, {\Theta}_X(-\log D)) \ \forall \, p, \]
\[
H^p(Y, \Omega_Y^a(\log \Delta) ) \supset H^p(X, \Omega_X^a(\log D)\otimes L^{ -i} ) \ \forall \ p,\ a, \ i;
\]
in particular for $a=n$, $p=0$ and $i=m-1$, we have
\[
H^0(Y, \Omega_Y^n(\log \Delta) ) \supset H^0(X, \Omega_X^n(\log D)\otimes L^{-m+1} ) \cong H^0(X, \mathcal{O}_X).
\]
Then the constant section of $\mathcal{O}_X $ gives a section $\omega$ of the logarithmic canonical bundle $ \Omega_Y^n(\log \Delta)$, vanishing only on $\Delta$ (of order $m-1$). In particular,
the cup product with $\omega \in H^0(X, \Omega_X^n(\log D)\otimes L^{ -m+1} )$ gives isomorphisms
$H^p(X, {\Theta}_X(-\log D) )\cong H^{p}(X,\Omega^{n-1}_X (\log D)\otimes L^{ -m+1})$, for all $p$.
Therefore, the following composition
\[\xymatrix{ H^p(Y,\Theta_Y (-\log \Delta)) \ar[rr] ^{\boldsymbol{i} \qquad \qquad \qquad }\ & &
\prod_j \operatorname{Hom}(H^j(Y,\Omega^n_Y (\log \Delta)),H^{j+p}(Y,\Omega^{n-1}_Y (\log \Delta))) \ar[d]^{{\mspace{1mu}\lrcorner\mspace{1.5mu}} \omega}\\
H^p(X,{\Theta}_X(-\log D) ) \ar@{^{(}->}[u]^{j}& & H^{p}(X,\Omega^{n-1}_X (\log D)\otimes L^{ -m+1}) \ }\]
is injective and in particular the composition $\boldsymbol{i} \circ j$ is injective, for all $p$.
According to Proposition \ref{proposition morismo DGLA cover},
fixing an affine cover $\mathcal{U}=\{U_i\}_i$ of $X$ and denoting by $\mathcal{V}=\{\pi^{-1}(U_i)\}_i$ the induced affine cover of $Y$, the pull back map induces a morphism of DGLAs
\[
TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( {\Theta}_Y(-\log \Delta)(\mathcal{V}))
\]
that is injective in cohomology.
Finally, as in the proof of Theorem \ref{thm.maintheorem} denote by $TW(\chi ^{\Delta})$ the homotopy abelian differential graded Lie algebra associated with the inclusion $\chi:L \to M$, with \[
M=\operatorname{Hom}^*( {TW}( \Omega^*_Y (\log \Delta)(\mathcal{V})), {TW}(
\Omega^*_Y (\log \Delta)(\mathcal{V}))),
\]
\[
L=\{f\in M \mid
f( {TW}(\Omega^n_Y (\log \Delta)(\mathcal{V})))\subset
{TW}(\Omega^n_Y (\log \Delta)(\mathcal{V}))\}.
\]
Then, the composition morphism
\[
TW( \Theta_X(-\log D) (\mathcal{U})) \to TW( {\Theta}_Y(-\log \Delta)(\mathcal{V}))
\dashrightarrow TW(\chi ^{\Delta})
\]
is injective in cohomology and so, by Lemma \ref{lem.criterioquasiabelianita}, the DGLA $TW( \Theta_X(-\log D) (\mathcal{U})) $ is homotopy abelian.
\end{proof}
\begin{remark}
In the case $m=2$, the result is a consequence of Theorem \ref{theorem smoothness of D inside CY} and Remark \ref{remark no obstruct up implies no obstr down}.
Indeed, in this case
the canonical line bundle $K_Y $ of $Y$ is trivial, i.e., $Y$ is a smooth Calabi-Yau variety, and so the DGLA associated with the pair $(Y, \Delta)$ is homotopy abelian.
\end{remark}
\begin{remark}
The previous proposition is a generalisation of \cite[Theorem 2.1]{Sano}, avoiding the assumption $H^1(X, \mathcal{O}_X)=0$.
Moreover, if $H^1(D, N_{D|X})=0$, then $D$ is stable in $X$, i.e., the forgetful morphism $\phi: \operatorname{Def}_{(X,D)}\to \operatorname{Def}_X$ is smooth; this implies that the deformations of $X$ are also unobstructed, e.g., deformations of weak Fano manifolds are unobstructed \cite[Theorem 1.1]{Sano}.
\end{remark}
\section{Application to differential graded Batalin-Vilkovisky algebras}\label{section dbv}
If the ground field is $\mathbb{C}$, we already noticed in Example \ref{exa DGLA (X,D) on C} that the differential graded Lie algebra $(A_X^{0,*}( \Theta_X(-\log D)), {\overline{\partial}}, [ , ])$ controls the deformations of the pair $(X,D)$, for $D$ a smooth divisor in a smooth projective manifold $X$.
In \cite{terilla, KKP, BraunLaza}, the authors use differential Batalin-Vilkovisky algebras and a degeneration property for these algebras to prove that the associated DGLA is homotopy abelian.
Using the notion of Cartan homotopy, we can give an alternative proof of these results, and so provide alternative proofs, over $\mathbb{C}$, of Corollary \ref{corollari log calabi yau formal smooth} and Corollary \ref{cor. D in calabi yau no obstruction}.
First of all, we recall the fundamental definitions in this setting; for more details we refer the reader to \cite{bavi,getzler94,KKP}.
\begin{definition}\label{def dbv}
Let $k$ be a fixed odd integer.
A \emph{differential Batalin-Vilkovisky algebra} (dBV for short) of degree $k$ over $\mathbb{K}$ is the data
$(A, d, \Delta)$, where $(A,d)$ is a differential $\mathbb{Z}$-graded commutative algebra with unit $1\in A$,
and $\Delta$ is an operator of degree $-k$, such that $\Delta^2=0$, $\Delta (1)=0$ and
\begin{multline*}
\Delta(abc)+\Delta(a)bc+(-1)^{\bar{a}\;\bar{b}}\Delta(b)ac+(-1)^{\bar{c}(\bar{a}+\bar{b})}
\Delta(c)ab=\\
=\Delta(ab)c+(-1)^{\bar{a}(\bar{b}+\bar{c})}\Delta(bc)a+(-1)^{\bar{b}\bar{c}}\Delta(ac)b.
\end{multline*}
\end{definition}
The previous equality is often called the seven-term relation.
It is well known (see \cite{koszul} or \cite[Section 4.2.2]{KKP}) that, given a graded dBV algebra
$(A, d, \Delta)$ of degree $k$, there is a canonically defined differential graded Lie algebra
$(\mathfrak{g},d,[-,-])$, where
$\mathfrak{g}=A[k]$, $d_{\mathfrak{g}}=-d_A$ and
\[
[a,b]= (-1)^{p}(\Delta(ab)-\Delta(a)b)-a\Delta(b),\qquad a\in A^p.
\]
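For a simple illustration of this bracket, consider the following toy example (with conventions chosen here for concreteness): take $A=\mathbb{K}[x]\otimes\bigwedge(\theta)$, with $x$ of degree $0$, $\theta$ of odd degree $k$, $d=0$ and $\Delta(f+g\theta)=\partial_x g$, for $f,g \in \mathbb{K}[x]$. Then $\Delta^2=0$, $\Delta(1)=0$, and the seven-term relation holds since $\Delta$ is a second-order differential operator; moreover, for $a=f\theta$ and $b=g\theta$, the formula above gives
\[
[f\theta,g\theta]=(-1)^{k}\bigl(\Delta(f\theta\, g\theta)-\Delta(f\theta)g\theta\bigr)-f\theta\,\Delta(g\theta)=(f'g-fg')\theta,
\]
since $\theta^2=0$. Identifying $f\theta$ with the vector field $f\partial_x$, this is, up to sign, the usual Lie bracket of vector fields on the affine line.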
\smallskip
Next, let $(A, d, \Delta)$ be a dBV algebra and
$t$ a formal central variable of (even) degree $1+k$. Denote by $A[[t]]$ the graded vector space of formal power series with coefficients in $A$ and
by $A((t))=\bigcup_{p\in\mathbb{Z}} t^pA[[t]]$ the graded vector space of formal Laurent power series.
We extend $d$ and $\Delta$ by setting $d(t)=\Delta(t)=0$, so that $d-t\Delta$ is a well-defined differential on $A((t))$.
\begin{lemma}\label{lem.cartanhomotopyforDBV}
In the above notation, the map
\[\boldsymbol{i}\colon \mathfrak{g} \to\operatorname{Hom}^*_{\mathbb{K}}(A((t)),A((t))),\qquad
a\longmapsto \boldsymbol{i}_a(b)=\frac{1}{t}ab \]
is a Cartan homotopy.
\end{lemma}
\begin{proof}
We have to verify the two conditions of being a Cartan homotopy, given in
Section \ref{Section cartan homoto}. The former identity $[\boldsymbol{i}_a,\boldsymbol{i}_b]=0$ is trivial.
As regards the latter identity, $[\boldsymbol{i}_a,\boldsymbol{l}_b]-\boldsymbol{i}_{[a,b]}=0$, we
recall that $\boldsymbol{l}_b=[d-t\Delta, \boldsymbol{i}_b] -\boldsymbol{i}_{db}$ (note that the differential changes sign on the $k$-fold suspension). Moreover, we have the following explicit description
\[
\boldsymbol{l}_b(c)=-\Delta(bc)+ (-1)^{\bar{b}} b \Delta(c).
\]
Indeed,
\[
\boldsymbol{l}_b(c)= [d-t\Delta, \boldsymbol{i}_b](c) -\frac{(db)c}{t}
= (d-t\Delta)\Bigl(\frac{bc}{t}\Bigr) - (-1)^{\bar{b}}\boldsymbol{i}_b(dc-t\Delta(c))-\frac{(db)c}{t}
\]
\[
= \frac{1}{t}\bigl(d(bc) -(-1)^{\bar{b}} b(dc) -(db)c \bigr) -\Delta(bc)+ (-1)^{\bar{b}} b \Delta(c),
\]
and the first summand vanishes by the graded Leibniz rule.
Then,
\[
[\boldsymbol{i}_a,\boldsymbol{l}_b](c)-\boldsymbol{i}_{[a,b]}(c)=
\boldsymbol{i}_a(-\Delta(bc)+ (-1)^{\bar{b}} b \Delta(c)) -(-1)^{\bar{a}(\bar{b}+1)}\boldsymbol{l}_b(\frac{ac}{t}) -\frac{1}{t}[a,b]c\]
\[=\frac{1}{t}(-a\Delta(bc) + (-1)^{\bar{b}} ab \Delta(c) -(-1)^{\bar{a}(\bar{b}+1)} (-\Delta(bac)+ (-1)^{\bar{b}} b \Delta(ac)) \]
\[-(-1)^{\bar{a}}(\Delta(ab)c-\Delta(a)bc)+a\Delta(b)c)=0.
\]
The last equality follows from the seven-term relation satisfied by $\Delta$ (after multiplying by $(-1)^{\bar{a}} t$).
\end{proof}
\begin{definition}
A dBV algebra $(A,d,\Delta)$ of degree $k$ has the \emph{degeneration property} if and only if every $a_0 \in A$
such that $d a_0=0$ extends to a sequence $(a_i)_{i\geq 0}$, with $\deg(a_i)=\deg(a_{i-1})-k-1$ for every $i\geq 1$,
such that
\[
\Delta a_i= da_{i+1}, \qquad i\geq 0.
\]
\end{definition}
\begin{example}\label{example E1 degen implies dbV degener}
Let $(A, d, \Delta)$ be a dBV algebra and suppose that it is bigraded, i.e., $A= \bigoplus _{i,j \geq 0}A^{i,j}$ and
$d: A^{i+1,j}\to A^{i,j}$ and $\Delta: A^{i,j}\to A^{i,j+1}$.
Then, the filtration $F_p = \bigoplus_{i,\, j\geq p} A^{i,j}$ defines a decreasing filtration of the double complex and therefore a spectral sequence.
If this spectral sequence degenerates at the first page $E_1$, then the dBV algebra $(A, d, \Delta)$ has the degeneration property
\cite[Lemma 1.5]{Morgan}, \cite[Proposition 1.5]{DSV12}.
\end{example}
\begin{example}\label{example A((t)) dbV}
Let $(A, d, \Delta)$ be a dBV algebra.
On the complex $(A((t)), d-t\Delta)$, consider the filtration $F^\bullet$, defined by
$F^p=t^pA[[t]]$, for every $p \in \mathbb{Z}$. Note that $A((t))=\bigcup _{p\in \mathbb{Z}} F^p$ and $F^0=A[[t]]$.
Then, the dBV algebra $(A, d, \Delta)$ has the degeneration property
if and only if the morphism of complexes $(A[[t]],d-t\Delta) \to (A,d)$, given by $t\mapsto 0$, is surjective in cohomology, if and only if the inclusion of complexes $(tA[[t]],d-t\Delta) \to (A[[t]],d-t\Delta)$ is injective in cohomology.
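To make the first equivalence explicit in one direction: given a $d$-closed $a_0\in A$ and a sequence $(a_i)_{i\geq 0}$ as in the degeneration property, the formal power series $a(t)=\sum_{i\geq 0}a_it^i \in A[[t]]$ satisfies
\[
(d-t\Delta)a(t)= da_0+\sum_{i\geq 1}(da_i-\Delta a_{i-1})t^i=0,
\]
so that every class in $H^*(A,d)$ lifts to $H^*(A[[t]],d-t\Delta)$.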
In particular, the degeneration property implies that the inclusion $F^p \to A((t))$ is injective in cohomology, for every $p$, and so $A[[t]]\to A((t))$ is also injective in cohomology.
\end{example}
\begin{theorem}\label{theorem dbv degener implies homotopy abelian}
Let $(A,d,\Delta)$ be a dBV algebra with the degeneration property. Then, the associated DGLA $\mathfrak{g}=A[k]
$ is homotopy abelian.
\end{theorem}
\begin{proof}
According to the previous Lemma \ref{lem.cartanhomotopyforDBV}, there is a well-defined Cartan homotopy
$\boldsymbol{i}\colon \mathfrak{g} \to\operatorname{Hom}^*_{\mathbb{K}}(A((t)),A((t)))$, whose associated Lie derivative has the following explicit expression
\[\boldsymbol{l}_b(c)=-\Delta(bc)+ (-1)^{\bar{b}} b \Delta(c).
\]
Therefore, considering the filtration $F^p=t^pA[[t]]$ of the complex $(A((t)), d-t\Delta)$ as in Example \ref{example A((t)) dbV}, we note that
\[\boldsymbol{i}: \mathfrak{g} \to \operatorname{Hom}^*(F^p,F^{p-1}) \qquad \mbox{and} \qquad \boldsymbol{l}: \mathfrak{g} \to \operatorname{Hom}^*(F^p,F^p),\quad \forall \ p.\]
Next, consider the differential graded Lie algebra
\[
M=\operatorname{Hom}^*_{\mathbb{K}}(A((t)),A((t))),
\]
the sub-DGLA
\[
N=\{\varphi \in M \mid \varphi(A[[t]]) \subset A[[t]] \},
\]
and let $\chi\colon N \to M$ be the inclusion. Since $\boldsymbol{l}(\mathfrak{g}) \subset N$, according to
Lemma \ref{lem.cartan induce morfismo TW}, there exists an induced $L_{\infty}$-morphism $\psi:\mathfrak{g} \dashrightarrow TW(\chi)$.
As observed in the Example \ref{example A((t)) dbV}, the degeneration property implies that the inclusion $A[[t]]\to A((t))$ is injective in cohomology. Therefore, the
DGLA $TW(\chi)$ is homotopy abelian by Lemma \ref{lem.criterio TW abelian}. According to Lemma \ref{lem.criterioquasiabelianita}, to conclude the proof it is enough to show that $\psi$ induces an injective morphism in cohomology.
As observed in Remark \ref{rem.quasiisoTWcono}, $TW(\chi)$ is quasi-isomorphic to
\[\operatorname{Coker}(\chi)[-1]=\operatorname{Hom}^*_{\mathbb{K}}\left(A[[t]], \frac{A((t))}{A[[t]]}\right)[-1];
\]
therefore, it is sufficient to prove that the morphism of complexes
\[\boldsymbol{i}\colon A\to \operatorname{Hom}^*_{\mathbb{K}}\left(A[[t]], \frac{A((t))}{A[[t]]}\right)[-k-1]\]
is injective in cohomology. It is actually enough
to prove the injectivity for the composition with the evaluation at $1\in A[[t]]$, i.e., that the map
\[ A\to \frac{A((t))}{A[[t]]},\qquad a\mapsto \frac{a}{t},\]
is injective in cohomology.
Note that this is equivalent to the statement that the inclusion
\[ \frac{F^{-1}}{F^0}\hookrightarrow \frac{A((t))}{F^0}\]
is injective in cohomology, since
the map $a\mapsto \dfrac{a}{t}$ defines an isomorphism of DG-vector spaces $A\to F^{-1}/F^0$. The claim follows considering the short exact sequences
\[ \xymatrix{ 0\ar[r]& F^0\ar[r]\ar@{=}[d]& F^{-1} \ar[d]_{j}\ar[r]& \dfrac{F^{-1}}{F^0} \ar[d]\ar[r]&0\\
0\ar[r]& F^0\ar[r]& A((t)) \ar[r]& \dfrac{A((t))}{F^0} \ar[r]&0\\
}
\]
and keeping in mind that the inclusion $j$ is injective in cohomology by the degeneration property (Example \ref{example A((t)) dbV}).
\end{proof}
\begin{remark}
The original proof of this theorem (for $k=1$) can be found in \cite[Theorem 1]{terilla} or \cite[Theorem 4.14]{KKP}. This proof was suggested to the author by Marco Manetti.
\end{remark}
\begin{example}
\cite[Theorem 4.18]{KKP} Let $X$ be a compact projective Calabi-Yau variety of dimension $n$ over $\mathbb{C}$.
In this situation, the relevant dBV algebra is $(A, d, \Delta)$ with
$A= \Gamma (X, \mathcal{A}_X^{0,*}( \wedge^\bullet \Theta_X))$,
$d= {\overline{\partial}}$ and $\Delta =\operatorname{div}_\omega= {i _ \omega}^{-1} \circ \partial \circ {i }_ \omega$. Here $\omega$ is a non-vanishing section of
$\Omega^n_X$ and ${i }_ \omega: \wedge^\bullet \Theta_X \to \Omega^{n-\bullet}_X $ is the isomorphism given by the contraction with $\omega$. The contraction ${i }_ \omega$
gives an isomorphism of bicomplexes between the dBV algebra $(A, d, \Delta)$
and the Dolbeault bicomplex $(A^{*,*}(X), {\overline{\partial}},\partial)$. According to Example \ref{example E1 degen implies dbV degener}, the degeneration of the Hodge-to-de Rham spectral sequence implies that $(A, d, \Delta)$ has the degeneration property.
Therefore, Theorem \ref{theorem dbv degener implies homotopy abelian} implies that the associated DGLA $L= \Gamma (X, \mathcal{A}_X^{0,*}( \wedge^\bullet \Theta_X))$
is homotopy abelian. The Kodaira--Spencer DGLA of $X$, $\Gamma (X, \mathcal{A}_X^{0,*}( \Theta_X))$,
is embedded in $L$ and this embedding is injective in cohomology. According to Lemma \ref{lem.criterioquasiabelianita}, the Kodaira--Spencer DGLA is also homotopy abelian and the deformations of $X$ are unobstructed.
\end{example}
\begin{example}
\cite[Section 4.3.3 (i)]{KKP}
Let $X$ be a smooth projective $n$-dimensional variety over $\mathbb{C}$ and $D$ a smooth divisor, such that $\Omega^n_X(\log D)$ is trivial.
In this case, the relevant dBV algebra is $(A, d, \Delta)$ with $A= \Gamma (X, \mathcal{A}_X^{0,*} (\wedge^\bullet \Theta_X(-\log D) ))$,
$d= {\overline{\partial}}$ and $\Delta =\operatorname{div}_\omega= {i _ \omega}^{-1} \circ \partial \circ {i }_ \omega$. Here $\omega$ is a non-vanishing section of $\Omega^n_X(\log D)$ and ${i }_ \omega: \wedge^\bullet \Theta_X(-\log D) \to \Omega^{n-\bullet}_X(\log D) $ is the isomorphism given by the contraction with $\omega$. The map ${i }_ \omega$ identifies $(A,d, \Delta)$ with the logarithmic Dolbeault bicomplex $(A^{*,*}(\log D), {\overline{\partial}},\partial)$.
Arguing as in the previous example and using the degeneration of the spectral sequence of Theorem \ref{teo degen deligne}, we can conclude that the DGLA $(A_X^{0,*}( \Theta_X(-\log D)), {\overline{\partial}}, [ , ])$ is homotopy abelian and so the deformations of the pair $(X,D)$ are unobstructed \cite[Lemma 4.19]{KKP}.
\end{example}
\begin{example}
\cite[Section 4.3.3 (ii)]{KKP}
Let $X$ be a smooth projective $n$-dimensional Calabi-Yau variety over $\mathbb{C}$ and $D$ a smooth divisor. In this case, the relevant dBV algebra $(A, d, \Delta)$ is similar to the one introduced in the previous example: indeed,
$A= \Gamma (X, \mathcal{A}_X^{0,*} (\wedge^\bullet \Theta_X(-\log D) ))$,
$d= {\overline{\partial}}$ and $\Delta =\operatorname{div}_\omega= {i _ \omega}^{-1} \circ \partial \circ {i }_ \omega$. Here $\omega$ is a non-vanishing section of
$\Omega^n_X$. The degeneration of the spectral sequence of Theorem \ref{teo degen tensor} implies that
$(A, d, \Delta)$ has the degeneration property and so that $(A_X^{0,*}( \Theta_X(-\log D)), {\overline{\partial}}, [ , ])$ is homotopy abelian \cite[Lemma 4.20]{KKP}.
\end{example}
\section{Introduction}
General Relativity is one of the most successful theories of nature, but there are compelling reasons to explore modifications to the behaviour of gravity on both large and small scales. Most of the precise predictions of General Relativity have consistently been demonstrated experimentally: among many others these include the perihelion shift of Mercury~\cite{will1990general} and the existence of gravitational waves~\cite{abbott2016observation}. Similarly, the current standard cosmological model, the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, is another of General Relativity's success stories. However, in order to match observation, $\Lambda$CDM requires a positive cosmological constant~\cite{Ade:2015rim,Aghanim:2018eyx}. This is backed up by observations of supernovae, which indicate that the Universe's expansion is accelerating~\cite{riess1998observational}. While a natural part of General Relativity, a cosmological constant poses a theoretical challenge to particle physics since the small observed value is inherently sensitive to high-energies, requiring delicate balancing~\cite{Padilla:2015aaa}. Furthermore, many theories of high energy physics that attempt to solve this and other problems -- such as building a consistent quantum theory of gravity -- predict deviations from General Relativity. These theories are collectively known as \emph{modified gravity theories}.
Modified gravity theories, however, typically face a difficult challenge in the form of solar system tests of Newton's laws. Models that differ from General Relativity significantly enough to explain the observed acceleration of the Universe on large scales are typically ruled out by their predicted deviations on smaller scales (solar system and laboratory tests)~\cite{Will:2005va,PhysRevLett.92.121101,2003Natur.425..374B}. There are a large variety of approaches to modified gravity -- see Koyama~\cite{Koyama:2015vza} for a comprehensive review -- but many models attempt to address the problem of solar system tests via a \textit{screening mechanism}~\cite{Joyce:2014kja}. Such mechanisms can be built into modified gravity theories to conceal deviations on solar system scales, without changing the large scale behaviour. An approach considered by many authors is the chameleon mechanism~\cite{Khoury:2003aq,PhysRevD.69.044026,Brax:2004qh}; the basic idea is to add a scalar field that couples directly to gravity in a manner that depends on the local density of matter. In high-density regions, such as inside a galaxy, the effects of modified gravity are screened out, allowing the theory to evade solar system tests. In the low-density void regions between galaxies, however, the effects of modified gravity would be unscreened.
If such a density-dependent gravity mechanism is at play, it ought to be detectable in principle by high-precision laboratory experiments. In particular, the fundamental sensitivity improvements offered by quantum systems are especially promising~\cite{giovannetti2006quantum}.
At the moment, the detection of modified gravity, and in particular, chameleon fields, has been explored through a diverse variety of methods. Searches with classical systems include theoretical proposals for torsion balance tests of fifth forces~\cite{upadhye2006unv,Mota2006strongly,Mota2007evading,Adelberger2007particle,PhysRevD.78.104021,upadhye2012dark,Upadhye:2012fz}, some of which have already been carried out as experiments~\cite{PhysRevD.70.042004, PhysRevLett.98.021101}.
Additional proposals suggest that experiments which measure Casimir forces may also be used to constrain chameleon theories~\cite{Mota2007evading,Brax2007detecting,Brax2010tuning,Almasi2015force,Brax2015casimir}.
In atom interferometry, which is already routinely used for quantum sensing, the uniformity of the atoms as well as the additional sensitivity gained from the superposition of flight-paths has led to impressive precision gravimetry sensitivities~\cite{peters2001high, mcguirk2002sensitive, bidel2013compact, hu2013demonstration}.
Several proposals have explored in depth the possibilities of searching for modified gravity and dark energy with atom interferometry~
\cite{Burrage:2014oza,Burrage:2015lya,elder2016cham,schloegel2016prob,burrage2016proposed,chiow2018multiloop,Hartley:2019wzu}, and some of the most stringent bounds on existing theories have been obtained in this way~\cite{hamilton2015atom,sabulsky2019exp}.
Further viable routes towards detecting modified gravity include ultra-cold neutron experiments~\cite{Brax:2014zta,Serebrov2011search,Brax2011strongly,Serebrov2014experimental,brax2014testing,Jenke2014gravity,Cronenberg:2016Pt} and neutron interferometry~\cite{Brax2013probing,pokotilovski2013strongly,brax2014testing,lemmel2015neutron,Li2016neutron}. Finally, tests of atomic transition frequencies~\cite{Brax2011atomic,Frugiuele2017constraining}, close examination of vacuum chambers and photo-detectors~\cite{Rybka2010search,upadhye2012design}, as well as tests of the electron magnetic moment~\cite{PhysRevD.97.084050} have also been proposed.
\begin{figure}[t!]
\includegraphics[width=0.8\linewidth]{Cavity.pdf}
\caption{A gold source mass attached to a shear piezo oscillates to create a time-varying gravitational field. The field, which potentially contains deviations from Newtonian gravity, is detected by an optomechanical probe system where the photon number $\hat a^\dag \hat a$ couples to the mechanical position $\hat x_{\mathrm{mech}}$ as $\hat a^\dag \hat a \hat x_{\mathrm{mech}}$, here presented as a moving-end mirror in a Fabry--P\'erot cavity. The amplitude $\epsilon x_0$ of the source mass oscillation is a fraction of the total distance $x_0$ between the systems. By accounting for the vacuum background density, we may also compute bounds on the parameters of the chameleon screening mechanism. }\label{fig:cavity}
\end{figure}
An additional approach to detecting the small-scale effects of modified gravity and screening is to take advantage of recent developments in the field of optomechanics, where a small mechanical element is coupled to a laser through radiation-pressure~\cite{bowenbook, aspelmeyer2014cavity}. Optomechanical systems encompass a diverse set of platforms which range from microscopic movable mirrors as part of a Fabry--P\'{e}rot cavity~\cite{favero2009optomechanics}, levitated particles~\cite{barker2010cavity}, clamped membranes~\cite{tsaturyan2017ultracoherent}, liquid Helium~\cite{shkarin2019quantum} and trapped cold atoms~\cite{purdy2010tunable}. When the mechanical element is cooled down to sufficiently low temperatures, it enters into a quantum state that can be manipulated through measurements and optical control techniques. Ground-state cooling has been demonstrated across a number of platforms, including clamped membranes~\cite{chan2011laser,teufel2011sideband} and recently also for levitated systems~\cite{Delic892}. Optomechanical systems show promising potential as both classical and quantum-limited sensors~\cite{arcizet2006high, geraci2010short, hempston2017force}, and recent studies have proposed their use as gravity sensors~\cite{qvarfort2018gravimetry, armata2017quantum, schneiter2020optimal, qvarfort2020optimal}. In fact, experimental searches for fifth forces with classical optomechanical setups have already been performed (see e.g.~\cite{PhysRevLett.117.101101,PhysRevD.96.104002}), where the bounds achieved fell within those excluded by atom interferometry. A key question, which we explore in this work, therefore becomes whether an optomechanical sensor in the quantum regime can improve on these bounds. For an overview of searches for new physics with levitated optomechanical systems, see the recent review by Moore \textit{et al}.~\cite{moore2020searching}.
The advantage of optomechanical sensors, as opposed to, for example, cold atom interferometry is that the sensitivity of the system can be improved while retaining the compact setup of the experiment. In contrast, improving the sensitivity of atom interferometry primarily relies on increasing the length of the flight-path of the atoms.
The key question we seek to answer in this work is: what fundamental range of parameters of modified gravity theories could ideally be excluded with a quantum optomechanical sensor? To address this question, we consider an idealised system described by a nonlinear, dispersive optomechanical Hamiltonian which couples the optical and mechanical degrees of freedom through a nonlinear radiation-pressure term. This Hamiltonian is often linearised for a strong coherent input drive; however, the fully nonlinear (in the sense of the equations of motion) Hamiltonian is a more fundamental description.
While all quantum systems are affected by noise, we here assume that the coherence times can be made long enough for the measurement protocol to be carried out. As a result, our analysis explores the bounds in the absence of environmental noise and decoherence.
We then consider the gravitational field that arises when a source mass is placed next to the sensor.
Since it is often difficult in experiments to distinguish a signal against a constant noise floor, we consider an oscillating source mass, which gives rise to a time-dependent gravitational field. Such a signal can then be isolated from other common low-frequency $1/f$ noise sources via a Fourier analysis of the data.
To determine whether our analysis is valid in the case of a chameleon field, we derive the time-dependent potential that results from the source mass from first principles, where we find that a potential that moves with the mass is the correct choice for non-relativistic velocities. Another key consideration for optomechanical systems is the relatively large size of the optomechanical probe. This has been found to be significant in previous classical experiments with chameleon fields, such as the MICROSCOPE experiment~\cite{pernot2019general,pernot2021constraints}, and we find that it also contributes significantly to the chameleon screening of the fifth force in the envisioned setup of the quantum experiment we consider here (as opposed to, for example, cold atoms, where the screening length of the atomic probes is very small). To take the finite screening length into account, we go beyond the common approximation that the probe radius is small compared to the range of the chameleon field and derive analytic expressions for the modified force seen by the probe.
Then, using tools from quantum information theory and quantum metrology such as the quantum Fisher information, we are able to estimate the fundamental sensitivity for detecting deviations from Newtonian gravity. To further improve the sensitivity, we also consider known ways to enhance the optomechanical sensor in the form of squeezed light and a modulated optomechanical coupling~\cite{qvarfort2020optimal}.
Our main results include the bounds presented in figure~\ref{fig:exclusion:plot:comparison}, which shows the parameter ranges of modified gravity theories that could potentially be excluded with an ideal optomechanical sensor. The bounds are computed for a specific set of experimental parameters. To facilitate investigations into additional parameter regimes, we have made the code used to compute the bounds available (see the Data Availability Statement). While experiments are unlikely to achieve the predicted sensitivities due to noise and systematic effects, our bounds constitute a fundamental limit for excluding effects beyond Newtonian gravity given the experimental parameters in question.
This work is structured as follows. In section~\ref{sec:experimental:setup} we present the proposed experimental setup and optomechanical Hamiltonian, and then we proceed to discuss Yukawa potentials as a modification to the Newtonian gravitational potential in section~\ref{sec:moving:mass:potential}. We consider those sourced by a chameleon field and provide a first-principles' derivation of the time-dependent potential that results from the mass oscillating around an equilibrium position. We also discuss screening effects inherent to chameleon fields and derive the screening effect that arises from the size of the optomechanical probe. In section~\ref{sec:linearised:potential}, we linearise the modified gravitational potential, and in section~\ref{sec:metrology}, we provide an introduction to quantum metrology and the quantum Fisher information. These tools allow us to present analytic expressions for the fundamental sensitivity of the system, which we do in section~\ref{sec:results}. The work is concluded by a discussion in section~\ref{sec:discussion} and some final remarks in section~\ref{sec:conclusions}.
\section{Optomechanical model and dynamics} \label{sec:experimental:setup}
In this section, we introduce the model of the optomechanical system and show how the effects of a time-varying gravitational field can be included in the dynamics.
\subsection{Experimental setup}
We envision an experimental setup similar to that used in~\cite{westphal2020measurement}, where an oscillating source mass made of solid gold is placed in a vacuum chamber adjacent to an optomechanical probe (see figure~\ref{fig:cavity}). We have chosen gold because we require the highest possible density in order to detect gravitational effects and maximise the effect of density-dependent screening mechanisms such as chameleon fields\footnote{While there are denser materials, such as depleted Uranium, gold is a stable material that has previously been used for small-mass sensing, see e.g. Ref.~\cite{westphal2020measurement}.}. The source mass oscillates back and forth, which can be achieved in a number of different ways~\cite{schmole2017development}. One such implementation is with the help of a shear piezo, which oscillates at a fixed frequency. The optomechanical probe is then allowed to move along the same axis as the oscillating mass. By injecting light into the cavity, the position of the mechanical element is dispersively coupled to the optical field through radiation pressure. The light then picks up a phase shift conditioned on the displacement of the mechanical mode, which has been influenced by the gravitational force. Therefore, information about the gravitational field is imprinted on the optical state. The light is then collected and measured either as it leaks from the cavity or through a scheme where the cavity is coherently opened to access the full intra-cavity state~\cite{tufarelli2014coherently}.
While the optomechanical interaction can generally be described with the same dynamics for a large range of systems, the force and strength of the coupling differ for each platform. In this work, we begin with a general description of a single interacting mode, but later specialise towards a spherical mechanical element since it allows for analytical treatments of some modified gravity potentials.
The optomechanical Hamiltonian, which governs the dynamics of the optomechanical probe, is given by (in the absence of an external gravitational field):
\begin{equation} \label{eq:basic:Hamiltonian}
\hat H_0 = \hbar \, \omega_{\mathrm{c}} \, \hat N_ a + \hbar \,\omega_{\mathrm{mech}} \, \hat N_b - \hbar \, k(t)\, \hat N_ a \, \bigl( \hat b^\dag + \hat b \bigr),
\end{equation}
where $\omega_{\mathrm{c}}$ and $\omega_{\mathrm{mech}}$ are the oscillation frequencies of the optical cavity mode and mechanical mode respectively, with annihilation and creation operators $\hat a ,\hat a ^\dag$ and $\hat b, \hat b^\dag$. We have also defined $\hat N_a = \hat a^\dag \hat a$ and $\hat N_b = \hat b^\dag \hat b$ as the photon and phonon number operators.
The coupling $ k(t)$ is the (potentially time-dependent) characteristic single-photon interaction strength between the number of photons and the position of the mechanical element. It takes on different forms depending on the optomechanical platform in question. Among the simplest couplings is that for a moving mirror, of mass $m$, that makes up one end of a cavity, $k = (\omega_l/L) \sqrt{\hbar /(2 m\omega_{\mathrm{mech}})}$, where $\omega_l $ is the laser frequency and $L$ is the length of the cavity. In this work, we also consider modulating the coupling in time; it has been previously found that such modulations can be used to enhance the sensitivity of the optomechanical sensor if they can be made to match the oscillation of the external force~\cite{qvarfort2020optimal}. Modulation of the optomechanical coupling can be introduced in different ways depending on the experimental platform in question. For example, the mechanical frequency of a cantilever can be modified by applying an oscillating electric field~\cite{rugar1991mechanical,szorkovszky2013strong}, and a modulated coupling arises naturally through the micro-motion of a levitated system in a hybrid electro-optical trap~\cite{Millen2015cavity,aranas2016split,fonseca2016nonlinear}.
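To give a sense of scale, the single-photon coupling can be evaluated for purely illustrative parameters (assumed here for concreteness, and not corresponding to a specific experimental implementation): a moving-end mirror with $m=10\,\mathrm{ng}$, $\omega_{\mathrm{mech}}=2\pi\times100\,\mathrm{kHz}$, $L=1\,\mathrm{cm}$ and a laser wavelength of $1064\,\mathrm{nm}$, so that $\omega_l=2\pi c/\lambda\approx1.8\times10^{15}\,\mathrm{rad\,s^{-1}}$. The mechanical zero-point motion is then
\[
x_{\mathrm{zpf}}=\sqrt{\frac{\hbar}{2m\omega_{\mathrm{mech}}}}\approx2.9\times10^{-15}\,\mathrm{m},
\]
and the coupling evaluates to
\[
k=\frac{\omega_l}{L}\sqrt{\frac{\hbar}{2m\omega_{\mathrm{mech}}}}\approx5\times10^{2}\,\mathrm{s^{-1}},
\]
that is, of order $k/2\pi\sim10^{2}\,\mathrm{Hz}$.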
All quantum systems are affected by noise due to their interaction with the environment. Such an interaction usually results in dissipation and thermalisation, which in turn leads to decoherence of the off-diagonal elements of the quantum state. For cavity optomechanical systems, common sources of noise include photons leaking from the cavity, as well as thermalisation of the mechanical element due to interactions with the surrounding residual gas, or from vibrations from the mount~\cite{aspelmeyer2014cavity}. The nature of the noise is unique to each experimental platform and must be carefully modelled in each case.
In this work, we are interested in deriving the best-possible sensitivity that an optomechanical system can achieve. We therefore assume that the $Q$-factor of the cavity is high enough that the system stays coherent throughout the duration of our measurement protocols. Recently, $Q$-factors of $10^9$ have been demonstrated in magnetically levitated meso-mechanical systems~\cite{cirio2012quantum}, and linewidths of $81\pm23\,\mu$Hz have been measured~\cite{pontin2020ultranarrow}. We also assume that the system has been cooled to temperatures such that the surrounding environment does not cause the mechanical mode to heat up during the protocol. To reduce unwanted vibrations or gravitational noise, it is also possible to add decoupling stages in the experiments~\cite{pitkin2011gravitational}, such as suspension stages made by fused silica fibres~\cite{penn2001high,cumming2020lowest}.
Under these conditions, it is possible to consider an approximately unitary description of the experiment, which we shall use to derive a fundamental limit of the sensitivity that could in principle be achieved with an optomechanical system. To then describe a realistic experiment, all of the above effects must be taken into account. We discuss this and other potential future work in section~\ref{sec:discussion}.
When treating the system in a closed and ideal setting, we can model the initial state as a separable state of the light and the mechanical element. For the optical state, we consider injecting squeezed light into the cavity. Squeezed light has been shown to fundamentally enhance the sensitivity to displacements~\cite{giovannetti2006quantum}. By including squeezing here, we generalise our scheme to include these input states. However, we note that in order to improve the sensitivity overall, it is always more beneficial to increase the number of photons than to squeeze the system. Squeezing also reduces quadrature noise~\cite{mehmet2011squeezed}.
The state of the mechanical element, on the other hand, is most accurately described as thermal at a non-zero temperature. With these assumptions, the initial state of the system can be written as
\begin{equation}\label{initial:state}
\hat \varrho(0) = \ketbra{\zeta}\otimes \sum_{n=0}^\infty \frac{\tanh^{2n}r_T}{\cosh^2 r_T}\ketbra{n}{n}\;,
\end{equation}
where $\ket{\zeta} = \hat S_\zeta \ket{\mu_{\mathrm{c}}}$ is a squeezed coherent state of the optical field, where $\hat S_\zeta = \mathrm{exp} \bigl[ (\zeta^* \hat a^2 - \zeta \hat a^{\dag 2})/2 \bigr]$ and where the coherent state satisfies $\hat a \ket{\mu_{\mathrm{c}}} = \mu_{\mathrm{c}} \ket{\mu_{\mathrm{c}}}$. The squeezing parameter can also be written in polar form as $\zeta = r_{\mathrm{sq}} \, e^{i \varphi}$. Squeezed states can be generated through four-wave mixing in an optical cavity~\cite{slusher1985observation} or parametric down-conversion~\cite{wu1986generation}. See also Ref.~\cite{andersen201630} for a review of squeezed state generation. The parameter $r_T$ of the thermal state arises from the Bose--Einstein distribution and is defined by $\tanh r_T=\exp[-\frac{\hbar\,\omega_\textrm{mech}}{2\,k_\textrm{B}\,T}]$, where $T$ is the temperature of the system and $k_\mathrm{B}$ is Boltzmann's constant.
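The thermal parameter and the corresponding mean phonon occupation $\bar n = \tanh^2 r_T/(1 - \tanh^2 r_T)$ can be computed directly from this definition; the mechanical frequency and temperature below are assumed, illustrative values:

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K

def thermal_parameters(omega_mech, T):
    """Thermal parameter r_T and mean phonon occupation of the mechanical mode."""
    q = math.exp(-hbar * omega_mech / (kB * T))  # q = tanh^2(r_T)
    r_T = math.atanh(math.sqrt(q))
    n_bar = q / (1.0 - q)                        # Bose-Einstein mean occupation
    return r_T, n_bar

# Illustrative, assumed values: 2*pi*100 kHz mechanical mode at 10 mK.
r_T, n_bar = thermal_parameters(2 * math.pi * 1e5, 1e-2)
print(r_T, n_bar)
```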
\subsection{Modelling the gravitational force}
In order to compute the sensitivity bounds for detecting modified gravity, we model the effect of the gravitational force from the moving source mass on the optomechanical system as a contribution to the dynamics. When the force is weak, it can be linearised and included as a displacement term in the optomechanical Hamiltonian in equation~\eqref{eq:basic:Hamiltonian}. In this section, we provide a general derivation of this linearised force, while in section~\ref{sec:linearised:potential} we specialise to Yukawa-like and chameleon modifications to the Newtonian potential. The linearisation is necessary to properly describe the quantum dynamics of the setup with the current theoretical machinery; we will however describe the chameleon field in full generality to allow for future work to improve the theoretical description.
We start by assuming that the source mass and the mechanical element of the optomechanical system are constrained to move along the $x$-axis. We let the mechanical element be subject to a harmonic potential centered at $x = 0$ and we label the position of the source mass $x_S(t)$. Then, we assume that there is a small perturbation to the centre-of-mass position of the mechanical element that we call $\delta x$, and, assuming that $|x_S(t)| \gg |\delta x|$ at all times, we write the relative distance between the systems as $x_S(t) - \delta x $.
Provided that $\delta x$ remains small, we can Taylor expand the system to first order in $\delta x$. Given a generic potential term $V(x_S(t) - \delta x)$, we find, to first order in $\delta x$:
\begin{equation} \label{eq:expanded:potential}
V(x_S(t) - \delta x ) = V(x_S(t)) - V'(x_S(t)) \, \delta x + \mathcal{O}[(\delta x)^2].
\end{equation}
The first term in equation~\eqref{eq:expanded:potential} represents a time-dependent shift of the overall energy, however it does not depend on the position of the optomechanical system. The second term describes the (potentially time-dependent) displacement of the mechanical element with $\delta x$. The second-order term in $\delta x$ leads to a shift in the mechanical frequency that we do not model here, but dynamics of this kind have been previously studied~\cite{qvarfort2020time}. The expansion in equation~\eqref{eq:expanded:potential} is valid as long as $\delta x$ remains small such that the higher-order terms can be neglected. We outline the conditions for this being true in the Discussion (see section~\ref{sec:discussion}).
We proceed to promote $\delta x$ to an operator $\delta x\rightarrow \hat x_{\mathrm{mech}}$, which can be written in terms of the annihilation and creation operators $\hat b$ and $\hat b^\dag$ of the mechanical element as
\begin{equation}
\hat x_{\mathrm{mech}} = x_{\mathrm{zpf}} \, \bigl( \hat b^\dag + \hat b \bigr),
\end{equation}
where $x_{\mathrm{zpf}} = \sqrt{\hbar/(2\,m\omega_{\mathrm{mech}})}$ is the zero-point fluctuation of the mechanical oscillator. It should be noted here that the dynamics of a nonlinear optomechanical system with a driving term proportional to $\hat x_{\mathrm{mech}}^2$ has been solved; however, the inclusion of these effects adds significant complexity to the mathematical treatment of the system~\cite{qvarfort2020time}, while it will likely not result in a significant improvement of the sensitivity.
The full optomechanical Hamiltonian including the modified gravitational potential can then be written as
\begin{align} \label{eq:cham:Hamiltonian}
\hat H(t) = \hat H_0 - V'(x_S(t)) \, x_{\mathrm{zpf}} \, \bigl( \hat b^\dag + \hat b \bigr),
\end{align}
where $\hat H_0$ is given in equation~\eqref{eq:basic:Hamiltonian} and where the time-dependent modified Newtonian gravitational force is contained in the second term.
The time-evolution of the system with the Hamiltonian in equation~\eqref{eq:cham:Hamiltonian} can be written as the following time-ordered exponential:
\begin{align}
\hat U(t) &= \overleftarrow{\mathcal{T}} \mathrm{exp}\left[ - \frac{i}{\hbar} \int^t_0 \mathrm{d}t' \, \hat H(t') \right],
\end{align}
where the time-dependence of the gravitational potential in $\hat H(t)$ requires careful consideration. Such dynamics have been studied previously~\cite{qvarfort2019enhanced, qvarfort2020time}, and we provide a short overview of the treatment in~\ref{app:sensitivity}. Later on in this work, we use the expression for $\hat U(t)$ to derive the sensitivity of the system to modifications of the Newtonian potential, but first, we will study the form of the modifications in-depth.
\vspace{0.8cm}
\section{Modified gravitational potential and screening from the source and the probe} \label{sec:moving:mass:potential}
In this section, we discuss an example of how the chameleon mechanism would alter the Newtonian force law on sub-millimetre scales. We write all equations in terms of SI units, but natural units (as used elsewhere in the literature) can be recovered by setting $\hbar = c = 1$ throughout.
\subsection{Yukawa modifications to the gravitational force law}
Although there are many ways of modifying Newton's laws on short distances, perhaps one of the best motivated theoretically is to add a Yukawa term to the potential. Yukawa potentials are ubiquitous in scalar field theories, since they are the solution to the sourced (inhomogeneous) Klein--Gordon equation for a massive field in the case of spherical symmetry. As a consequence of Lovelock's theorem~\cite{doi:10.1063/1.1665613}, modifications to General Relativity require either additional degrees of freedom such as a scalar field, or more exotic scenarios such as large extra dimensions, higher derivatives, or non-locality. Consequently, additional scalar fields are common in modified gravity theories. These act like a fifth force, and for a source of mass $M_{S}$ and test particle mass $m$, give rise to a gravitational potential of the form:
\begin{equation} \label{eq:modified:gravitational:potential}
V(r) = -\frac{G \, M_S m}{r}\left(1+\alpha \, e^{-r/\lambda}\right),
\end{equation}
where $\alpha$ parametrises the intrinsic difference in strength between the Yukawa-like fifth force and gravity, while $\lambda$ parametrises the range of this fifth-force. Note that it is possible to have $\alpha \gg 1$ and still agree with existing constraints, provided the force is sufficiently short range to have evaded tests of gravity on short distances.
For this work, we will consider a chameleon screening mechanism that gives rise to a Yukawa-like force. However, the methods we describe can be broadly applied to many different Yukawa-type modifications of the gravitational field on short distances. In the chameleon mechanism, short distance modifications to the Newtonian force law are \emph{screened} from the reach of solar system tests by the presence of a density-dependent scalar field, known as the chameleon field. In regions of relatively high average density -- such as can be found inside a galaxy -- the chameleon field has a high mass, making it hard to detect at colliders and altering the gravitational force law in such a way as to be consistent with solar-system experiments (this is the `screening' effect). However, in regions of low density -- such as in cosmological voids -- the field is lighter and the effects of modified gravity are unscreened. This allows modified gravity theories to have substantial effects on cosmological scales, while being difficult to detect on galactic or solar-system scales.
We review the properties of chameleon fields in~\ref{sec:cham:mech}. The net effect of the chameleon scalar field $\phi$ is to modify the effective Newtonian potential affecting a test particle. Specifically, the effective potential at position $\mathbf{X}$ is given by
\begin{equation}
\Phi_{\mathrm{eff}}(\mathbf{X}) = \Phi_N(\mathbf{X}) + \Phi_{\mathrm{C}}(\mathbf{X}) \approx \Phi_N(\mathbf{X}) + \frac{\phi(\mathbf{X})}{M},\label{eq:potential}
\end{equation}
where $\Phi_N$ is the standard Newtonian potential, and $\Phi_C$ is the modification to it arising from the chameleon field. The parameter $M$ (here chosen to be a mass to give the correct units for a potential) determines how strongly the chameleon field affects test particles and arises from the non-minimal coupling of the chameleon field to curvature as discussed in~\ref{sec:cham:mech}.
In this work, we consider a chameleon model with an effective interaction potential
\begin{equation}
V_{\mathrm{eff}}(\phi) = \frac{\Lambda^{4 + n}}{\phi^n} + \frac{\phi\rho}{M}(\hbar c)^3.
\end{equation}
where $\rho$ is the local mass density. We explore only the case $n=1$ in this work: other models and choices of $n$ are possible, but we choose this specific example to demonstrate how the method works in principle. This model has two parameters: $\Lambda$, which characterises the energy scale of the chameleon's self-interaction potential, and $M$, which is defined above.
For $n = 1$ the background value of the field, $\phi_{\mathrm{bg}}$, in an environment of constant mass density $\rho_{\mathrm{bg}}$ is given by
\begin{equation}
\phi_{\mathrm{bg}} = \sqrt{\frac{M\Lambda^5}{\rho_{\mathrm{bg}}(\hbar c)^3}}\label{eq:phibg}.
\end{equation}
In the centre of the source, the chameleon field reaches its minimum value of $\phi_S$ (which can be obtained by replacing the density $\rho_{\mathrm{bg}}$ in equation~\eqref{eq:phibg} with the source density $\rho_{S}$). The mass of the chameleon field, $m_{\mathrm{bg}}$, is density dependent (see~\ref{sec:cham:mech}) and given by
\begin{equation} \label{eq:mbg}
m_{\mathrm{bg}}c^2 = \left(\frac{4\,\rho_{\mathrm{bg}}^3(\hbar c)^9}{M^3\Lambda^5}\right)^{1/4}.
\end{equation}
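Equations~\eqref{eq:phibg} and~\eqref{eq:mbg} can be evaluated numerically in SI units. The sketch below uses illustrative, assumed inputs ($\Lambda$ at the dark-energy scale, $M$ equal to the reduced Planck mass, and a representative vacuum-chamber density); as an internal consistency check, the two expressions satisfy $(m_{\mathrm{bg}}c^2)^2 = 2\Lambda^5/\phi_{\mathrm{bg}}^3$ for $n = 1$:

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
hbarc = hbar * c         # J m
eV = 1.602176634e-19     # J

def chameleon_background(Lambda_J, M_kg, rho_bg):
    """phi_bg, m_bg*c^2 and force range lambda_bg for the n = 1 chameleon (SI units)."""
    phi_bg = math.sqrt(M_kg * Lambda_J**5 / (rho_bg * hbarc**3))        # J
    mc2 = (4.0 * rho_bg**3 * hbarc**9 / (M_kg**3 * Lambda_J**5))**0.25  # J
    lam_bg = hbarc / mc2                                                # m
    return phi_bg, mc2, lam_bg

# Illustrative assumptions: Lambda at the dark-energy scale (2.4 meV),
# M equal to the reduced Planck mass, and a chamber density of 1e-10 kg/m^3.
phi_bg, mc2, lam_bg = chameleon_background(2.4e-3 * eV, 4.341e-9, 1e-10)
print(phi_bg, mc2, lam_bg)
```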
The key question for us is how the field results in a force on the optomechanical sensor. This is what we consider next.
\subsection{Force on the optomechanical sensor}
The effect of a chameleon field is in principle detectable in a high-vacuum environment. In practice, this requires extremely precise acceleration measurements, which an optomechanical system can provide. While the optomechanical probe can come in many different shapes, in this work, for the sake of simplicity, we model both the source mass and the detector probe as spheres. This allows us to compute the sensitivity using the chameleon force between these two spheres. There are therefore two effects to consider: the response of the field in equation~\eqref{eq:potential} to the spherical source, and the response of the probe to that field. Due to the nature of the chameleon field, a non-point-like probe will not simply follow the gradient of equation~\eqref{eq:potential} as would a test particle: instead there is an additional screening effect due to the interactions of the probe itself with the field.
To derive the force that acts on the sensor, we consider the field inside the vacuum chamber. Burrage \textit{et al}.~\cite{Burrage:2014oza} derived the chameleon field around a spherical source of mass $M_S$ and radius $R_S$ as a function of distance from the centre of the sphere, $r$. They assumed in their derivation that the range of the chameleon force was large compared to the size of the source (that is, $m_{\mathrm{bg}}R_Sc/\hbar \ll 1$). To allow us to consider a broad parameter space, we assume neither $m_{\mathrm{bg}}R_Sc/\hbar \ll 1$ nor $m_{\mathrm{bg}}R_Pc/\hbar \ll 1$, where the indices $S$ and $P$ denote the source and probe, respectively. In what follows, we go beyond existing studies in this regard by including sources (probes) with non-negligible size compared to the range of the force.
We use the same asymptotic matching approach as Burrage \textit{et al}.~\cite{Burrage:2014oza} to obtain an expression for the chameleon field around a spherical matter distribution:
\begin{equation} \label{eq:static_field}
\phi(r) = \begin{cases}\phi_i, &r < S_i\\
\phi_i + \frac{\hbar c M_i}{8\pi R_i M} \frac{r^3 - 3S_i^2r + 2S_i^3}{rR_i^2}, &S_i < r < R_i\\
\phi_{\mathrm{bg}} - \frac{\hbar c M_i}{4\pi r M(1+m_{\mathrm{bg}}R_ic/\hbar)}\left(1 - \frac{S^3_i}{R_i^3}\right)e^{-m_{\mathrm{bg}}c(r - R_i)/\hbar},&r > R_i\end{cases}
\end{equation}
Here, $i = S, P$ for the source and probe respectively. $\phi_i$ is the equilibrium value of the chameleon field in a material with the source (probe) density: the field will attain this value at some radius $S_i \leq R_i$. There is then a transition layer where the field increases from $\phi_i$ to its surface value, before increasing further to the equilibrium value in the background density, $\phi_{\mathrm{bg}}$. The range of the force outside the source is controlled by the density-dependent chameleon mass, $m_{\mathrm{bg}}$. The full derivation of equation~\eqref{eq:static_field} is given in~\ref{app:chameleon_field}.
The value of the length scale $S_i$ depends on the source/probe properties, the chameleon model, and environmental properties. For the model we consider here, it is found by solving the following cubic equation:
\begin{equation} \label{eq:solve:for:S}
\frac{S_i^2}{R_i^2} + \frac{2}{3}\left[\frac{1}{1+m_{\mathrm{bg}}R_ic/\hbar} - 1\right]\frac{S_i^3}{R_i^3} = 1- \frac{8\pi M}{3M_i}\frac{R_i(\phi_{\mathrm{bg}} - \phi_i)}{\hbar c} + \frac{2}{3}\left[\frac{1}{1+m_{\mathrm{bg}}R_ic/\hbar} - 1\right].
\end{equation}
In the $m_{\mathrm{bg}}R_ic/\hbar \rightarrow 0$ and $\phi_{\mathrm{bg}} \gg \phi_i$ limits, this reduces to
\begin{equation} \label{eq:S}
S_i = R_i\sqrt{1-\frac{8\pi M}{3M_i}\frac{R_i\phi_{\mathrm{bg}}}{\hbar c}},
\end{equation}
which is the result found by Burrage \textit{et al}.~\cite{Burrage:2014oza}. $S_i$ parametrises the screening effect of the chameleon mechanism for a spherical source/probe: for example, when $S_i$ is much smaller than $R_i$, the field is effectively unscreened, while for $S_i \approx R_i$ the field is heavily screened. Outside of the source ($r > R_i$), we see from equation~\eqref{eq:static_field} that the scalar field, and thus the modified gravitational potential, has an effective Yukawa form. Thus, a chameleon field of this type would manifest as a Yukawa-like modification to the acceleration of a test particle. Since our proposed experimental setup will involve measuring acceleration outside of the sphere, we only need the $r > R_i$ part of the solution.
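Equation~\eqref{eq:solve:for:S} can be solved numerically. The sketch below uses our own dimensionless parametrisation $s = S_i/R_i$, $x = m_{\mathrm{bg}}R_ic/\hbar$ and $b = (8\pi M/(3M_i))\,R_i(\phi_{\mathrm{bg}} - \phi_i)/(\hbar c)$, together with simple bisection, and recovers the closed form of equation~\eqref{eq:S} in the appropriate limit:

```python
import math

def screening_ratio(b, x, tol=1e-12):
    """Solve s^2 + (2/3)*g*s^3 = 1 - b + (2/3)*g for s = S_i/R_i on [0, 1],
    with g = 1/(1 + x) - 1, x = m_bg*R_i*c/hbar, and b the dimensionless
    source term b = (8*pi*M/(3*M_i)) * R_i*(phi_bg - phi_i)/(hbar*c)."""
    g = 1.0 / (1.0 + x) - 1.0
    rhs = 1.0 - b + (2.0 / 3.0) * g

    def f(s):
        return s**2 + (2.0 / 3.0) * g * s**3 - rhs

    if f(0.0) > 0.0:          # rhs < 0: the body is fully unscreened, S_i -> 0
        return 0.0
    lo, hi = 0.0, 1.0         # f is monotonic on [0, 1], so bisection converges
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) <= 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Check against the closed-form limit sqrt(1 - b) for x -> 0.
print(screening_ratio(b=0.5, x=1e-9), math.sqrt(1.0 - 0.5))
```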
When viewed as a Yukawa-type force of the form considered in equation~\eqref{eq:modified:gravitational:potential}, and when the probe itself does not contribute to the screening, the resulting fifth-force strength $\alpha$ and range $\lambda$ are given by
\begin{align}
& \alpha_\mathrm{bg} =
\frac{2M_{\mathrm{P}}^2}{M^2} \, \xi_S , &&\mbox{and}
&&\lambda_{\mathrm{bg}} = \frac{\hbar}{m_{\mathrm{bg}}c}, \label{eq:lambda:bg}
\end{align}
where $M_{\mathrm{P}} = \sqrt{\hbar c/(8\pi G)} \approx 4.341\times 10^{-9}\mathrm{\,kg}$ is the reduced Planck mass (here expressed as a mass rather than an energy), and $\alpha_\mathrm{bg}$ depends on the background density through
$\xi_S$, which is given by~\cite{Burrage:2014oza}
\begin{equation} \label{eq:def:of:xi}
\xi_S =
\begin{cases}
1, & \rho_S R_S^2 < 3 M \, \phi_{\mathrm{bg}} /(\hbar c) , \\
1 - \frac{S_S^3}{R_S^3}, &\rho_S R^2_S > 3 M \phi_{\mathrm{bg}}/(\hbar c) \,.
\end{cases}
\end{equation}
As long as the optomechanical sensor can be approximated as a point particle, such that $R_P/\lambda_{\mathrm{bg}} \ll 1$, the force felt by the optomechanical probe can be written as
\begin{align} \label{eq:force:no:screening}
F&= -\frac{G\,M_Sm}{|\mathbf{X}_S(t)|^2}\biggl[1 + \alpha_{\mathrm{bg}} \left( 1 + \frac{|\mathbf{X}_S(t)|}{\lambda_{\mathrm{bg}}} \right) e^{-|\mathbf{X}_S(t)|/\lambda_{\mathrm{bg}}}\biggr],
\end{align}
where $\mathbf{X}_S$ is the vector-position of the source. The point-particle approximation is however quite severe, especially for an optomechanical probe, the radius of which can be quite large compared with the range of the force in question. We proceed to consider the screening from the probe in the following section.
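Equation~\eqref{eq:force:no:screening} is straightforward to implement; the source and probe masses, separation and fifth-force parameters below are illustrative assumptions only:

```python
import math

G = 6.67430e-11  # m^3 kg^-1 s^-2

def yukawa_force(x, M_S, m, alpha, lam):
    """Newtonian force plus Yukawa correction on a point-like probe at distance x."""
    newton = -G * M_S * m / x**2
    return newton * (1.0 + alpha * (1.0 + x / lam) * math.exp(-x / lam))

# Illustrative, assumed numbers: 1 g source, 1 ng probe, 1 mm separation,
# alpha = 1e6 and lam = 0.1 mm.
print(yukawa_force(1e-3, 1e-3, 1e-12, 1e6, 1e-4))
```

In the long-range limit $\lambda \to \infty$, the bracket tends to $1 + \alpha$, i.e. the fifth force simply rescales the Newtonian attraction.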
\subsection{Chameleon screening from the optomechanical probe} \label{sec:probe:screening}
Compared with the atoms used in alternative approaches to the detection of fifth-force modifications to gravity, such as atom interferometry, the optomechanical probe can potentially be relatively large compared to the range of fifth forces. This can result in significant contributions to the chameleon field screening. The screening depends strongly on the geometry of the system; in general, numerical methods are needed to compute the full screening~\cite{Burrage:2017shh}. As such, it is difficult to estimate the screening for say a Fabry--P\'{e}rot moving-end mirror; however the problem is simplified when both the source sphere and the probe are spherically symmetric. This is the case when the mechanical element in the optomechanical system is a levitated sphere, made, for example, of silica.
To estimate the extent of the screening for a spherical optomechanical probe, we consider the force on the probe that arises from the movement of the source mass. See~\ref{app:sec:screening:calculations} for the full calculation. In the limit where the probe radius is much smaller than the distance between the probe and the source sphere, $R_P \ll |\mathbf{X}_S(t)|$, we find the following expression for the force:
\begin{align} \label{eq:force:spheres}
F&= -\frac{G\,M_Sm}{|\mathbf{X}_S(t)|^2}\biggl[1 + \alpha_{\mathrm{bg},P} \left( 1 + \frac{|\mathbf{X}_S(t)|}{\lambda_{\mathrm{bg}}} \right) e^{-|\mathbf{X}_S(t)|/\lambda_{\mathrm{bg}}}f(R_P/\lambda_{\mathrm{bg}}, |\mathbf{X}_S(t)|/\lambda_{\mathrm{bg}}) \biggr],
\end{align}
where the sensor-dependent fifth-force strength is defined as
\begin{equation}\label{eq:alphabgP}
\alpha_{\mathrm{bg},P} = \frac{2M_{\mathrm{P}}^2}{M^2} \, \xi_S \, \xi_P\,,
\end{equation}
where we have added the subscript `$P$' to denote that screening from the probe is here taken into account. Furthermore, $\xi_S$ and $\xi_P$ (labelled $S$ for the source and $P$ for the probe, respectively) are given in equation~\eqref{eq:def:of:xi}. To compute $\xi_P$, we replace $M_S$, $R_S$ and $\rho_S$ with $M_P$, $R_P$ and $\rho_P$.
Finally, the function $f$ is a form-factor given by
\begin{align} \label{eq:form:factor}
f(u,v) = (1 + u) \, e^{-u}\left[\frac{\sinh(u)}{u} - \left(\frac{v}{1 + v} - 2\right)\frac{1}{v}\left(\cosh(u) - \frac{\sinh(u)}{u}\right)\right].
\end{align}
This approaches $1$ in the $u = m_{\mathrm{bg}} R_P c/\hbar = R_P/\lambda_{\mathrm{bg}} \rightarrow 0$ limit, in which case equation~\eqref{eq:force:spheres} reduces to the result of Burrage \textit{et al}.~\cite{Burrage:2014oza} for the force between two spheres. Since spherical probes or source masses generally maximise the screening~\cite{Burrage:2017shh}, equation~\eqref{eq:force:spheres} can be interpreted as a conservative estimate of the screening due to the shielding from the probe.
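A direct implementation of equation~\eqref{eq:form:factor} (our own sketch) confirms this limit numerically:

```python
import math

def form_factor(u, v):
    """f(u, v): u = R_P/lambda_bg, v = |X_S(t)|/lambda_bg."""
    sh, ch = math.sinh(u), math.cosh(u)
    return (1.0 + u) * math.exp(-u) * (sh / u - (v / (1.0 + v) - 2.0) * (ch - sh / u) / v)

# Point-probe limit u -> 0: the form factor tends to 1.
print(form_factor(1e-6, 2.0))
```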
Burrage \textit{et al}.~\cite{Burrage:2014oza} make use of the $R_P /\lambda_\mathrm{bg} \rightarrow 0$ limit, since the probe radius in the case of atom interferometry is typically much smaller than $\lambda_{\mathrm{bg}}$. For an optomechanical probe, however, the additional screening introduced by the probe can be substantial, but not, as we shall see, detrimental. In what follows, we compute the sensitivity both with and without the screening from the probe, where the latter corresponds to approximating the probe as a point particle.
\subsection{Potential from a moving source-mass}
In this work, we consider a moving source mass. This brings up a consideration of how the chameleon field responds to the motion of the source mass. For the gravitational field, we know that changes in the potential propagate outwards at the speed of light, and thus the appropriate potential to use is the retarded Newtonian potential. The situation is less clear for the scalar field, however. Since it is massive, it is not immediately obvious that information will propagate outwards at the speed of light. To get an idea of its behaviour we need to know the speed, $v_I$, at which information propagates through the scalar field. We show in~\ref{app:scalar_field_evolution} that this is also, in fact, the speed of light. Consequently, the potential at 3D position $\mathbf{X}$ can be approximated by the time dependent form
\begin{align} \label{app:chameleon:potential}
\Phi_{\mathrm{C}}(\mathbf{X},t) = \frac{\phi_{\mathrm{bg}}}{M}- \frac{GM_S}{|\mathbf{X} - \mathbf{X}_S(t)|} \alpha_\mathrm{bg} \, e^{-|\mathbf{X} - \mathbf{X}_S(t)|/\lambda_\mathrm{bg} } ,
\end{align}
where $t$ should be replaced with the retarded time given by equation~\eqref{eq:tret}, however, we can ignore this for the non-relativistic speeds and distances considered in this setup. Since both the chameleon field, $\phi$, and the metric, $g_{\mu\nu}$ (which gives rise to the Newtonian potential, $\Phi_N$) are well-defined dynamical quantities, the time-dependence of this potential is well-defined. We note at this point that if quantum corrections are large, the effective speed of information propagation for the scalar field, $v_I$, may differ from $c$~\cite{ellis2007causality}. However, large quantum corrections of this size would mean that we cannot readily use the effective field theory treatment of the chameleon field assumed throughout~\cite{Khoury:2013yya}, so we do not consider this effect here. We can therefore use equation~\eqref{app:chameleon:potential} in the discussion that follows to measure the values of $\alpha$ and $\lambda$, and thus the parameters $\Lambda$ and $M$ of the chameleon field.
\section{Linearised modified Newtonian potential} \label{sec:linearised:potential}
In order to compute the sensitivity of the optomechanical system, we need to include the force on the sensor shown in equation~\eqref{eq:force:spheres} into the dynamics of the optomechanical system. It is possible to obtain the solution numerically, but in order to obtain analytic expressions, we choose to linearise the Yukawa modification of the force for small oscillations of the source-mass. We let the time-dependent distance between the systems $x_S(t)$ be given by:
\begin{equation} \label{eq:time:dependent:distance}
x_S(t ) = x_0 \, \left( 1 - \epsilon \cos(\omega_0 \, t + \phi_0) \right) ,
\end{equation}
where $\epsilon$ is a dimensionless oscillation amplitude defined as a fraction of $x_0$, $\omega_0$ is the oscillation frequency, and $\phi_0$ is a phase shift that we specify later in order to maximise the sensitivity.
In the following two sections, we show the linearisation of the force for a generic Yukawa potential, and for the chameleon force with a large optomechanical probe that contributes to the screening.
\subsection{Linearising the Yukawa potential}
We now linearise the contributions from the Yukawa potential to equation~\eqref{eq:modified:gravitational:potential} for small oscillation amplitudes $\epsilon\ll 1$. We note that, for specific values of $\alpha$ and $\lambda$, higher order contributions to the Newtonian gravitational force may be larger than the first-order contributions to the Yukawa force. It is therefore important that, when taking data in an experiment, we determine the origin of the observed values (see the Discussion in section~\ref{sec:discussion}).
Linearising, we obtain
\begin{align} \label{eq:linearised:Yukawa:int}
\mathcal{G}_{\mathrm{Yuk}}(t) &\approx - \frac{G M_S m }{ \, x_S^2(t) } -
m g_\mathrm{N} \bigl[ \kappa + \sigma \,\epsilon \,\cos ( \omega_0 t + \phi_0 ) \bigr] \, ,
\end{align}
where $g_\mathrm{N}=G \, M_S/x_0^2$ is the Newtonian gravitational acceleration at the equilibrium distance $x_0$
and where we defined the two parameters
\begin{align} \label{eq:kappa}
&\kappa = \alpha \, e^{- x_0/\lambda} \, \left( 1 + \frac{x_0}{\lambda}\right) ,
&& \mbox{and}
&& \sigma = \alpha \, e^{- x_0/\lambda} \, \left(2 + 2 \frac{x_0}{\lambda} + \frac{ x_0^2}{\lambda^2} \right) \,,
\end{align}
which quantify the deviation of the constant and the time-dependent part of the force from the
Newtonian one, respectively. We make this distinction because in an experiment, it is often possible to isolate a time-dependent signal from a constant noise floor. In addition, systematic effects such as the Casimir effect can be effectively screened out in this way (we return to this point in the Discussion in Section~\ref{sec:discussion}). We will therefore focus on estimating $\sigma$ as part of our analysis.
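As a numerical sanity check of the linearisation (using illustrative, assumed parameter values), the exact Yukawa contribution to the force evaluated at the turning point $x_S = x_0(1 - \epsilon)$ agrees with $-m g_{\mathrm{N}}(\kappa + \sigma\epsilon)$ to first order in $\epsilon$:

```python
import math

G = 6.67430e-11  # m^3 kg^-1 s^-2

def yukawa_extra_force(x, M_S, m, alpha, lam):
    """Yukawa part of the force on a point probe at distance x."""
    return -G * M_S * m / x**2 * alpha * (1.0 + x / lam) * math.exp(-x / lam)

def kappa_sigma(alpha, x0, lam):
    """Constant (kappa) and oscillating (sigma) deviation parameters."""
    u = x0 / lam
    return (alpha * math.exp(-u) * (1.0 + u),
            alpha * math.exp(-u) * (2.0 + 2.0 * u + u**2))

# Illustrative, assumed values; evaluate at the turning point x_S = x0*(1 - eps).
M_S, m, alpha, lam, x0, eps = 1e-3, 1e-12, 1e4, 1e-4, 1e-3, 1e-3
gN = G * M_S / x0**2
kappa, sigma = kappa_sigma(alpha, x0, lam)
exact = yukawa_extra_force(x0 * (1.0 - eps), M_S, m, alpha, lam)
linear = -m * gN * (kappa + sigma * eps)
print(exact, linear)
```

For $\epsilon = 10^{-3}$, the two expressions agree to well within a part in $10^3$, as expected for a first-order expansion.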
\subsection{Linearising the chameleon potential from a screened spherical probe}
For a spherical optomechanical probe, the force between the probe and the source is given in equation~\eqref{eq:force:spheres}. We note that the form factor shown in equation~\eqref{eq:form:factor}, which arises due to the screening from the optomechanical probe, depends on $x_S(t)$ and is therefore time-dependent.
To determine the constant and time-dependent contributions, we again assume that the source sphere oscillates around the equilibrium distance $x_0$ according to equation~\eqref{eq:time:dependent:distance}. Noting that $x_S(t)$ only enters into the $v$-dependent terms in equation~\eqref{eq:form:factor}, we find
\begin{align} \label{eq:linearised:potential:chameleon}
\mathcal{G}_{\mathrm{Cha}}(t) &\approx - \frac{G M_S m }{ \, x_S^2(t) } - m g_{\mathrm{N}} \left( \kappa + \sigma \epsilon \cos(\omega_0 \, t + \phi_0) \right),
\end{align}
where the expressions for $\kappa$ and $\sigma$ now read
\begin{align} \label{eq:kappa:sigma:shielded}
&\kappa = \alpha_{\mathrm{bg},P} \, e^{- x_0/ \lambda_{\mathrm{bg}}} \biggl[ \left( 1 + \frac{x_0}{\lambda_{\mathrm{bg}}}\right)A( R_P/\lambda_{\mathrm{bg}}) + \left( 1 +2 \frac{\lambda_{\mathrm{bg}}}{x_0} \right)B(R_P/\lambda_{\mathrm{bg}}) \biggr],
\\
&\sigma = \alpha_{\mathrm{bg},P} \, e^{- x_0/\lambda_{\mathrm{bg}}} \biggl[ \left( 2 + 2 \frac{x_0}{\lambda_{\mathrm{bg}}} + \frac{ x_0^2}{\lambda_{\mathrm{bg}}^2} \right)A(R_P/\lambda_{\mathrm{bg}}) + \left( 4 +6 \frac{\lambda_{\mathrm{bg}}}{x_0} + \frac{x_0}{\lambda_{\mathrm{bg}}} \right) B( R_P/\lambda_{\mathrm{bg}}) \biggr], \nonumber
\end{align}
where we have defined:
\begin{align} \label{eq:A:B}
&A(u) = (1 + u) \, e^{- u} \, \frac{\sinh(u)}{u}, &&\mbox{and}
&&B(u) = (1 + u) \, e^{- u} \, \left( \cosh(u) - \frac{\sinh(u)}{u}\right).
\end{align}
The expressions in equation~\eqref{eq:A:B} arise from the form-factor in equation~\eqref{eq:form:factor}.
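A minimal numerical check (our own sketch) confirms the limiting behaviour of these functions for a small probe, $A(u) \to 1$ and $B(u) \to 0$ as $u \to 0$:

```python
import math

def A(u):
    """A(u) = (1 + u) e^{-u} sinh(u)/u."""
    return (1.0 + u) * math.exp(-u) * math.sinh(u) / u

def B(u):
    """B(u) = (1 + u) e^{-u} (cosh(u) - sinh(u)/u)."""
    return (1.0 + u) * math.exp(-u) * (math.cosh(u) - math.sinh(u) / u)

# Point-probe limit u = R_P/lambda_bg -> 0: A -> 1 and B -> 0.
print(A(1e-4), B(1e-4))
```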
For the parameter regimes considered in this work, we find that $ R_P/\lambda_{\mathrm{bg}} \ll 1$. This means that the form factors reduce to $A(R_P/\lambda_\mathrm{bg}) \approx 1$ and $B(R_P/\lambda_{\mathrm{bg}}) \approx 0$. As a result, $\kappa$ and $\sigma$ simplify to
\begin{align} \label{eq:kappa:sigma:pointlike}
&\kappa = \alpha_{\mathrm{bg},P} \, e^{- x_0/ \lambda_{\mathrm{bg}}} \left( 1 + \frac{x_0}{\lambda_{\mathrm{bg}}}\right), \nonumber
\\
&\sigma = \alpha_{\mathrm{bg},P} \, e^{- x_0/\lambda_{\mathrm{bg}}} \left( 2 + 2 \frac{x_0}{\lambda_{\mathrm{bg}}} + \frac{ x_0^2}{\lambda_{\mathrm{bg}}^2} \right),
\end{align}
which has the same form as equation~\eqref{eq:kappa}. We are now ready to compute the sensitivities of the optomechanical sensor, but first, we provide a brief introduction to the quantum metrology tools we use for this purpose.
\section{Quantum metrology and ideal bounds} \label{sec:metrology}
In this work, we are interested in the best-possible sensitivity that can be achieved with the optomechanical probe. To determine the sensitivity of the probe, we turn to tools from quantum metrology.
Specifically, we focus on computing the quantum Fisher information (QFI), which we denote $\mathcal{I}_\theta$, where $\theta$ is the parameter that we wish to estimate. Intuitively, the QFI can be seen as a measure of how much the quantum state of the system changes under a specific encoding of $\theta$, compared with the case when the state is unaffected. See also Ref.~\cite{meyer2021fisher} for an intuitive introduction to the QFI and related concepts in quantum metrology.
The connection to sensitivity stems from the fact that the QFI provides a lower bound to the variance $\mathrm{Var}(\theta)$ of $\theta$ through the quantum Cram\'{e}r--Rao bound~\cite{cramer1946contribution, rao1992information}:
\begin{equation}
\mathrm{Var} ( \theta) \geq \frac{1}{\mathcal{M} \, \mathcal{I}_\theta } ,
\end{equation}
where $\mathcal{M}$ is the number of measurements or probes used in parallel. The best standard deviation of $\theta$ allowed by this bound is then $\Delta \theta = 1/\sqrt{\mathcal{M}\, \mathcal{I}_\theta} $.
For unitary dynamics and mixed initial states written in the form of $\hat \varrho = \sum_n \lambda_n \ketbra{\lambda_n}$, the QFI can be cast as~\cite{pang2014,jing2014}:
\begin{align}\label{definition:of:QFI:appendix}
\mathcal{I}_\theta
=& \, 4\sum_n \lambda_n\left(\bra{\lambda_n}\mathcal{\hat H}_\theta^2\ket{\lambda_n} - \bra{\lambda_n}\mathcal{\hat H}_\theta\ket{\lambda_n}^2 \right) - 8\sum_{n\neq m}\frac{\lambda_n \lambda_m}{\lambda_n+\lambda_m}\left| \bra{\lambda_n}\mathcal{\hat H}_\theta \ket{\lambda_m}\right|^2,
\end{align}
where the operator $\mathcal{\hat H}_\theta$ is defined as $\mathcal{\hat H}_\theta = - i \hat U_\theta^\dag \partial_\theta \hat U_\theta $. Here, $\hat U_\theta$ is the unitary operator that encodes the parameter $\theta$ into the system.
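As a sanity check of equation~\eqref{definition:of:QFI:appendix}, it can be evaluated numerically for a minimal example. The sketch below is our own illustration and not part of the experimental proposal: for a qubit rotated about $z$ and prepared in a mixture of $\sigma_x$ eigenstates with weights $p$ and $1-p$, the formula should reduce to the known result $\mathcal{I}_\theta = (2p-1)^2$.

```python
import numpy as np

def qfi_mixed(rho, H):
    """QFI of rho under generator H, via the eigendecomposition formula."""
    lam, vecs = np.linalg.eigh(rho)
    n = len(lam)
    qfi = 0.0
    # Variance-like term: 4 * sum_n lam_n (<H^2> - <H>^2)
    for i in range(n):
        if lam[i] < 1e-12:
            continue
        v = vecs[:, i]
        h = (v.conj() @ H @ v).real
        h2 = (v.conj() @ (H @ H) @ v).real
        qfi += 4.0 * lam[i] * (h2 - h**2)
    # Correction term: -8 * sum_{n != m} lam_n lam_m / (lam_n + lam_m) |<n|H|m>|^2
    for i in range(n):
        for j in range(n):
            if i == j or lam[i] + lam[j] < 1e-12:
                continue
            ov = vecs[:, i].conj() @ H @ vecs[:, j]
            qfi -= 8.0 * lam[i] * lam[j] / (lam[i] + lam[j]) * abs(ov)**2
    return qfi

sz = np.diag([1.0, -1.0])                   # Pauli z; generator is sz / 2
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # eigenstates of sigma_x
minus = np.array([1.0, -1.0]) / np.sqrt(2)
p = 0.9
rho = p * np.outer(plus, plus) + (1 - p) * np.outer(minus, minus)
qfi = qfi_mixed(rho, sz / 2)
```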
In our case, $\hat U_\theta$ is the unitary operator that arises from the Hamiltonian in equation~\eqref{eq:cham:Hamiltonian}, and the effect we wish to estimate is that of the Yukawa potential on the probe. Therefore, in order to compute $\mathcal{I}_\theta$, we must first solve the time-evolution of the system, which is often challenging when the signal is time-dependent, as is the case for us here. Some of these challenges can, however, be addressed by making use of a previously established method for solving the Schrödinger equation using a Lie algebra approach~\cite{wei1963lie}. Details of this solution were first used to study a purely Newtonian time-dependent gravitational potential~\cite{qvarfort2020optimal} and can be found in~\ref{app:sensitivity}.
Using the expression for $\mathcal{I}_\theta$ in equation~\eqref{definition:of:QFI:appendix}, we can derive a compact expression for the QFI that represents the sensitivity with which modifications to Newtonian gravity can be detected. In our case, we let the parameter $\theta$ of interest be either $\kappa$ or $\sigma$ as defined in equation~\eqref{eq:kappa}. By then applying the Cram\'er--Rao bound, we can derive the standard deviation for each parameter. We then consider the ratios $\Delta \kappa/\kappa $ or $\Delta \sigma /\sigma$, which describe the relative error of the collective measurements.
In this work, we say that we can distinguish modifications to the Newtonian potential if the error in $\kappa$ or $\sigma$ is smaller than one, that is, when $\Delta \kappa/\kappa < 1$ or $\Delta \sigma / \sigma < 1$. Note that, to find the sensitivity to the actual values of, for example, $\alpha$ and $\lambda$, we would need a full multi-parameter likelihood analysis, which goes beyond the single-parameter error-propagation treatment we consider here. Such an analysis is currently beyond the scope of this work. Instead, we focus mainly on detecting $\sigma$, since it is the amplitude of the time-dependent signal.
Unfortunately, the QFI does not reveal the optimal measurement that saturates the quantum Cram\'{e}r--Rao bound. To obtain this information, one must compute the classical Fisher information for a particular measurement and examine whether it saturates the quantum Fisher information. It is known that a homodyne measurement of the optical field is optimal when the optomechanical coupling is constant and takes on specific values~\cite{qvarfort2018gravimetry,qvarfort2020optimal}. When the optomechanical coupling is modulated at resonance, as is the case here, the optimal measurement is not yet known. The gravitational interaction between the source and the optomechanical probe results in a phase shift of the optical state. Therefore, a homodyne measurement can be expected to be useful also for the case of modulated optomechanical coupling, but we leave this specific analysis to future work.
In practice, once the optomechanical probe has interacted with the source, the system is measured to extract information about the gravitational force. Standard measurements that are performed on the optomechanical system include homodyne and heterodyne measurements of the cavity field, as well as photon detection measurements, which can either be resolving (counting the number of photons) or non-resolving (merely detecting the presence of a photon). In a homodyne measurement, the output light from the optomechanical system is brought into interference with a local oscillator light field which comes from the same source as the input light field of the optomechanical system. This is the same measurement principle that is, for example, employed in a Mach--Zehnder interferometer to infer a phase shift on a light field. Heterodyne measurements, on the other hand, compare the collected light with a different coherent state reference. The usefulness of each measurement depends on the situation at hand. Since we focus on deriving the best-possible sensitivities in this work, we leave it to future work to analyse the sensitivity that can be gained from specific measurements.
\begin{figure*}
\centering
\subfloat[ \label{fig:force:alpha:lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 0mm 0mm 0mm 0mm]{alphalambdaforceplot.pdf}%
} $\qquad$
\subfloat[ \label{fig:force:M:Lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 0mm 0mm 0mm 0mm]{MLambdaforceplot.pdf}
}\hfill
\caption{Plots of the ratio of the modification to the Newtonian force, $ F_{\mathrm{mod}}/F_{\mathrm{N}} = \epsilon \sigma$. Plot (a) shows the ratio as a function of $\alpha$ and $\lambda$. Plot (b) shows the ratio as a function of $M$ and $\Lambda$ in units of the Planck mass $M_{\mathrm{P}}$ and eV, respectively. The filled-in contours show the ratio without screening from the probe. The lines instead show the ratio when the probe is spherical and contributes to the screening; as a result, the strength of the force is reduced. The parameters used to make these plots can be found in table~\ref{tab:Values}. The mapping between the $(\alpha,\lambda)$ and $(M,\Lambda)$ spaces is non-trivial: the left-hand figure shows the range of force modifications that can potentially be detected, which is different from the range of theoretically interesting chameleon parameters, shown on the right.}
\label{fig:force:plot}
\end{figure*}
\section{Results} \label{sec:results}
We are now ready to compute the sensitivities that can be achieved with an ideal optomechanical sensor for detecting modifications of gravity. Specifically, we consider a region of parameter space possible to exclude with the optomechanical sensor when the best achievable precision on the parameters $\alpha$ and $\lambda$ (or the chameleon parameters $M$ and $\Lambda$) is sufficient to distinguish them from zero, their values in ordinary General Relativity.
\subsection{Fundamental sensitivities}
We first present some simple expressions for the sensitivities that can be achieved, and we then proceed to compute the parameter regions that could potentially be excluded with an optomechanical sensor.
When the source mass oscillates at the same frequency as the optomechanical system, that is, when $\omega_0 =\omega_{\mathrm{mech}}$, the effects accumulate and cause the position of the optomechanical system to become increasingly displaced.
Following the outline in~\ref{app:sensitivity}, we obtain expressions for the sensitivities of $\kappa$ and $\sigma$ at time $t \, \omega_{\mathrm{mech}}= 2\pi n $ (see~\cite{qvarfort2020optimal} for a detailed derivation). For large enough temperatures of the mechanical state, such that $r_T\gg1$, the expressions simplify, and the sensitivities $\Delta \kappa$ and $\Delta \sigma$ are given by
\begin{align}
\Delta \kappa &= \frac{1}{\sqrt{\mathcal{M}} \, g_\mathrm{N} } \frac{1}{\Delta \hat N_a} \sqrt{\frac{ 2 \hbar \, \omega_{\mathrm{mech}}^5}{m}} \frac{1}{8 \pi\, n \, k_0 } , \label{eq:constant:sensitivity:kappa} \\
\Delta \sigma &= \frac{1}{\sqrt{\mathcal{M}} \, g_\mathrm{N} } \frac{1}{\Delta \hat N_a} \sqrt{\frac{ 2 \hbar \, \omega_{\mathrm{mech}}^5}{m}} \frac{1}{4 \pi\, n \, k_0 \,\epsilon }, \label{eq:constant:sensitivity:sigma}
\end{align}
where $n$ is an integer, the optomechanical coupling is constant, $k(t) \equiv k_0$, the phase is $\phi_0 = \pi$, and the variance $(\Delta \hat N_a)^2$ of the photon number is given by~\cite{qvarfort2020optimal}
\begin{align} \label{eq:photon:number:variance}
( \Delta \hat N_a)^2 &= |\mu_{\mathrm{c}}|^2 e^{ 4 r_{\mathrm{sq}}} + \frac{1}{2} \sinh^2(2\,r_{\mathrm{sq}}) - 2 \, \mathfrak{Re} [ e^{- i \varphi/2} \mu_{\mathrm{c}} ]^2 \sinh(4\,r_{\mathrm{sq}}),
\end{align}
where $r_{\mathrm{sq}}$ and $\varphi$ are the squeezing amplitude and phase, and where $\mu_{\mathrm{c}}$ is the coherent state amplitude of the optical mode. The expression in equation~\eqref{eq:photon:number:variance} is maximised when $e^{-i\varphi/2 }\mu_{\rm{c}} $ is purely imaginary, which causes the last term of equation~\eqref{eq:photon:number:variance} to vanish. This can be achieved by assuming that $\mu_{\rm{c}}\in \mathbb{R}$ and setting the squeezing phase to $\varphi = \pi$. The other parameters in equation~\eqref{eq:photon:number:variance} have been previously defined in the text (see also table~\ref{tab:Values} for a summary).
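This optimisation over the squeezing phase can be checked numerically. The sketch below is our own; the parameter values follow table~\ref{tab:Values}, with a real coherent amplitude $\mu_{\mathrm{c}}$:

```python
import numpy as np

def photon_number_variance(mu_c, r_sq, phi):
    """Photon-number variance of a squeezed coherent state (equation above)."""
    return (abs(mu_c)**2 * np.exp(4 * r_sq)
            + 0.5 * np.sinh(2 * r_sq)**2
            - 2 * (np.exp(-1j * phi / 2) * mu_c).real**2 * np.sinh(4 * r_sq))

mu_c = 1e3    # real coherent amplitude, |mu_c|^2 = 1e6
r_sq = 1.73
phis = np.linspace(0, 2 * np.pi, 1001)
variances = photon_number_variance(mu_c, r_sq, phis)
# Maximised when exp(-i phi/2) mu_c is purely imaginary, i.e. at phi = pi
best_phi = phis[np.argmax(variances)]
```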
The sensitivities can be improved by modulating the optomechanical coupling at the same frequency as the gravitational signal~\cite{qvarfort2020optimal}.
In this work, we choose a sinusoidal modulation with $k(t) = k_0 \cos(\omega_k \, t )$, where $k_0$ is the amplitude of the modulation and $\omega_k$ is the modulation frequency. At resonance, when $\omega_k = \omega_{\mathrm{mech}}$, and for the optimal phase choice $\phi_0 = \pi/2$, we find that the sensitivities for measuring $\kappa$ and $\sigma$ become
\begin{align}
\Delta \kappa^{(\mathrm{mod})} &= \frac{1}{\sqrt{\mathcal{M}} \, g_\mathrm{N} } \frac{1}{\Delta \hat N_a} \sqrt{\frac{ 2 \hbar \, \omega_{\mathrm{mech}}^5}{m}} \frac{1}{ 4 \pi\, n \, k_0 }, \label{eq:modulated:sensitivity:kappa} \\
\Delta \sigma^{(\mathrm{mod})} &= \frac{1}{\sqrt{\mathcal{M}} \, g_\mathrm{N} } \frac{1}{\Delta \hat N_a} \sqrt{\frac{ 2 \hbar \, \omega_{\mathrm{mech}}^5}{m}} \frac{1}{ 2 \pi^2 \, n^2 \, \, k_0 \,\epsilon }\label{eq:modulated:sensitivity:sigma} .
\end{align}
Here, equation~\eqref{eq:modulated:sensitivity:sigma} scales with $n^{-2}$ rather than $n^{-1}$. This enhancement arises from the additional modulation of the optomechanical coupling, and was already noted in the context of time-dependent gravimetry for a purely Newtonian potential~\cite{qvarfort2020optimal}. By now considering the cases where the uncertainty in the parameter is a fraction of the parameter itself, we are able to define the regions in which modifications to Newtonian gravity can be established with certainty.
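As a consistency check, the sensitivities quoted in table~\ref{tab:Values} can be reproduced by inserting the example parameters into equations~\eqref{eq:constant:sensitivity:kappa}--\eqref{eq:modulated:sensitivity:sigma}. The sketch below is our own; the variable names are illustrative:

```python
import numpy as np

hbar = 1.054571817e-34      # J s
G = 6.674e-11               # m^3 kg^-1 s^-2

# Example parameters from the table
M_meas = 1e3                # number of measurements
n = 10                      # measurement time t = 2 pi n / omega_mech
m = 1e-14                   # probe mass, kg
omega = 2 * np.pi * 100     # mechanical frequency, rad/s
k0 = 2 * np.pi * 10         # optomechanical coupling, rad/s
eps = 0.1                   # oscillation amplitude ratio
g_N = G * 1e-6 / (1e-3)**2  # Newtonian acceleration from a 1 mg source at 1 mm

# Photon-number standard deviation for |mu_c|^2 = 1e6, r_sq = 1.73, phi = pi
dN = np.sqrt(1e6 * np.exp(4 * 1.73) + 0.5 * np.sinh(2 * 1.73)**2)

prefactor = np.sqrt(2 * hbar * omega**5 / m) / (np.sqrt(M_meas) * g_N * dN)
dkappa = prefactor / (8 * np.pi * n * k0)                 # constant coupling
dsigma = prefactor / (4 * np.pi * n * k0 * eps)           # constant coupling
dkappa_mod = prefactor / (4 * np.pi * n * k0)             # resonant coupling
dsigma_mod = prefactor / (2 * np.pi**2 * n**2 * k0 * eps) # resonant coupling
```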
\begin{table}[h!]
\centering
\caption{ Example parameters used to compute the bounds on modified gravity for a generic optomechanical sensor. We denote the optomechanical (probe) mass by $m$ so as not to confuse it with the Planck mass $M_{\mathrm{P}}$. \vspace{0.2cm}}
\begin{tabular}{Sl Sc Sc} \hline \hline
\textbf{Parameter} & \textbf{Symbol} & \textbf{Value} \\
\hline
Source mass & $M_{\mathrm{S}}$ & $10^{-6}\,$kg \\
Source mass density & $\rho_{\mathrm{S}}$ & $19.3\times 10^3$\,kg\,m$^{-3}$ \\
Source mass radius & $R_{\mathrm{S}}$ & $2\times 10^{-4}$\,m \\
Equilibrium distance & $x_0$ & $ 10^{-3}$\,m \\
Source oscillation amplitude ratio & $\epsilon $ & 0.1 \\
Background density & $\rho_{\mathrm{bg}}$ & $8.27\times 10^{-14}$\,kg\,m$^{-3}$ \\ \hline
Optomechanical coupling & $ k_0/(2\pi)$ & $10$\,Hz \\
Mechanical frequency & $\omega_{\mathrm{mech}}/(2\pi)$ & $100$\,Hz \\
Probe mass & $m$ & $10^{-14}$\,kg \\
Oscillator (probe) mass density (silica) & $\rho_{\mathrm{P}}$ & $1\,538$\,kg\,m$^{-3}$ \\\hline
Coherent state parameter & $|\mu_{\mathrm{c}}|^2$ & $10^6$ \\
Squeezing parameter & $r_{\mathrm{sq}}$ & 1.73 \\
\hline
Number of measurements & $\mathcal{M}$ & $10^3$ \\
Time of measurement & $\omega_{\mathrm{mech}} \, t = 2\pi n $ & $n = 10$ \\ \hline
Newtonian gravitational force at equilibrium distance & $ m g_\mathrm{N} $ & $\sim 6.67 \times 10^{-25}$\,N \\\hline
\multicolumn{3}{c}{Sensitivities (constant coupling)} \\\hline
Sensitivity $\kappa$ & $\Delta \kappa$ & $1.36 \times 10^{-3}$ \\
Sensitivity for constant force & $m g_{\mathrm{N}} \Delta \kappa$ & $9.08 \times 10^{-28}$\,N \\
Sensitivity $\sigma$ & $\Delta \sigma$ & $27.1\times 10^{-3}$ \\
Sensitivity for res. oscillating force & $m g_{\mathrm{N}} \Delta \sigma \epsilon $ & $1.81 \times 10^{-27}$\,N \\\hline
\multicolumn{3}{c}{Sensitivities (resonant coupling)} \\ \hline
Sensitivity $\kappa$
& $\Delta \kappa^{(\mathrm{mod})} $ & $2.71 \times 10^{-3}$ \\
Sensitivity for constant force & $m g_{\mathrm{N}} \Delta \kappa^{(\mathrm{mod})}$ & $1.81 \times 10^{-27}$\,N \\
Sensitivity $\sigma$ &$\Delta \sigma^{(\mathrm{mod})} $ & $1.73 \times 10^{-3}$ \\
Sensitivity for res. oscillating force & $m g_{\mathrm{N}} \Delta \sigma^{(\mathrm{mod})} \epsilon $ & $1.15 \times 10^{-28}$\,N\\ \hline\hline
\end{tabular} \label{tab:Values}
\end{table}
\subsection{Experimental parameters}
We assume that the source mass oscillates at the resonant frequency of the optomechanical system. We further assume that the source mass is made of solid gold, which has a density of $\rho = 19.3\times10^3$\,kg\,m$^{-3}$. For a mass of $10^{-6}$\,kg ($1$\,mg), this translates into a source mass radius of $R_S = 2.3\times 10^{-4}$\,m.
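The quoted radius follows from the volume of a homogeneous sphere, $R_S = \left(3 M_S / 4\pi \rho_S\right)^{1/3}$; a quick check (our own sketch):

```python
import numpy as np

# Radius of a solid gold sphere of mass 1 mg
M_S = 1e-6       # source mass, kg
rho_S = 19.3e3   # density of gold, kg m^-3
R_S = (3 * M_S / (4 * np.pi * rho_S))**(1 / 3)
# R_S comes out at roughly 2.3e-4 m
```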
While this mass is very small compared with those currently used in atom interferometry experiments~\cite{hamilton2015atom}, gravitational fields from masses of slightly larger radii have recently been detected~\cite{westphal2020measurement}. The reason for choosing such a small mass is that the systems can be placed very close together while still achieving a significant oscillation amplitude. This allows us to probe parameter regimes of a short-ranged force. Due to the scaling of the sensitivity as $\Delta \theta \sim x_0^2$, choosing a smaller $x_0$ is always beneficial. We therefore set $x_0 = 10^{-3}$\,m and assume that the oscillation amplitude ratio is $\epsilon = 0.1$. This ensures that, when the source mass oscillates, it does not come into contact with the optomechanical system.\footnote{For the choice of such a small source mass, it might be the case that we must take the mass of the modulation mechanism into account, which would change both the effective mass seen by the optomechanical probe and the screening of the force. A standard piezo stack has a mass of 16\,g, for example.}
For the optomechanical probe, we use the following example parameters: we assume that the effective mass of the optomechanical probe is $ m = 10^{-14}$\,kg, and that the light-matter coupling has an amplitude of $ k_0/(2\pi) = 10$\,Hz. We then assume that the mechanical frequency can be made as low as $\omega_{\mathrm{mech}}/(2\pi) = 100 $\,Hz, which is important since the expressions for $\Delta \kappa$ and $\Delta \sigma$ scale with $\omega_{\mathrm{mech}}^{5/2}$. For the squeezed coherent state, we assume that the coherent state parameter is given by $|\mu_{\mathrm{c}}|^2= 10^6$ and that the phase of the squeezed light can be set to $\varphi = \pi$, which ensures that the photon number variance $(\Delta \hat N_a)^2$ shown in equation~\eqref{eq:photon:number:variance} is maximized. One of the highest squeezing factors that have been achieved to-date is $r_{\mathrm{sq}} = 1.73$~\cite{vahlbruch2016detection}, which is what we choose to include here. We also consider a protocol where we perform $\mathcal{M} = 10^3$ measurements at time $t \, \omega_{\mathrm{mech}} = 20 \pi$, which allows us to improve the sensitivity further.
To derive the bounds on the chameleon parameters $M$ and $\Lambda$, we assume that the optomechanical system can be operated in high vacuum. This also helps in terms of mitigating mechanical noise; in generic oscillators, damping effects are well-understood and largely absent below $10^{-7}$\,mbar~\cite{cole2011phonon}. On the other hand, it can be challenging to confine a levitated optomechanical system at high vacuum~\cite{pontin2020ultranarrow}. Recently, however, several works have demonstrated trapping at $10^{-7}$\,mbar of pressure~\cite{pontin2020ultranarrow, delic2020levitated}, even going as low as $9.2\times 10^{-9}$\,mbar~\cite{magrini2020optimal}. Using these values as our starting point, we note that $ 10^{-9}$\,mbar translates into a molecular background density of $\rho_{\mathrm{bg}} = 8.27\times 10^{-14}$\,kg\,m$^{-3}$. To derive this value, we have used the ideal gas law, which can be rewritten to give $\rho_{\mathrm{bg}} = P m_{H_2}/(k_{\mathrm{B}} T)$. Here, $P$ is the pressure (in Pascal), $k_{\mathrm{B}}$ is Boltzmann's constant, $T$ is the temperature (in Kelvin), and we have assumed that the vacuum chamber has been vented with hydrogen of molecular mass $m_{H_2} = 3.3\times10^{-27}$\,kg before being emptied (that is, it was filled with hydrogen gas, such that any residual particles inside the chamber are $H_2$ molecules).
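The conversion from pressure to background density can be sketched as follows (our own illustration; the temperature $T = 293$\,K is our assumption, as the text does not quote one):

```python
# Background gas density from the ideal gas law, rho_bg = P m / (k_B T),
# assuming residual H2 molecules and room temperature.
k_B = 1.380649e-23   # Boltzmann's constant, J/K
P = 1e-9 * 100.0     # 1e-9 mbar expressed in Pa (1 mbar = 100 Pa)
m_H2 = 3.347e-27     # mass of an H2 molecule, kg
T = 293.0            # K (assumed)
rho_bg = P * m_H2 / (k_B * T)
# rho_bg comes out at roughly 8.3e-14 kg m^-3
```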
All parameters are summarized in table~\ref{tab:Values}. There, we also give values for the Newtonian gravitational force between source and sensor at their respective equilibrium positions, which is approximately equivalent to the time-averaged Newtonian force, as well as
the sensitivities shown in equations~\eqref{eq:constant:sensitivity:kappa},~\eqref{eq:constant:sensitivity:sigma},~\eqref{eq:modulated:sensitivity:kappa}, and~\eqref{eq:modulated:sensitivity:sigma}. We find that for a constant optomechanical coupling, the sensitivities become $\Delta \kappa = 1.36 \times 10^{-3}$ and $\Delta \sigma = 27.1\times 10^{-3}$. For a time-dependent optomechanical coupling modulated sinusoidally at resonance, we find sensitivities $\Delta \kappa^{(\mathrm{mod})} = 2.71 \times 10^{-3}$ and $\Delta \sigma^{(\mathrm{mod})} = 1.73 \times 10^{-3}$, where $\Delta \kappa^{(\mathrm{mod})}$ is slightly worse than $\Delta \kappa$ and $\Delta \sigma^{(\mathrm{mod})}$ is slightly better than $\Delta \sigma$. In table~\ref{tab:Values}, we also give the corresponding force sensitivities.
To see how strong the modified contributions to the force are compared with just the Newtonian part, we plot the amplitude of the time-dependent modification $F_{\mathrm{mod}} = \frac{G m M_{\mathrm{s}}}{x_0^2} \epsilon \sigma $ as a fraction of the Newtonian force $F_{\mathrm{N}} = \frac{G m M_{\mathrm{s}}}{x_0^2} $. The result can be found in Figure~\ref{fig:force:plot}, where we have plotted contours for $ F_{\mathrm{mod}}/F_{\mathrm{N}} = \epsilon \sigma$ using the experimental parameters in table~\ref{tab:Values}. Figure~\ref{fig:force:alpha:lambda} shows $F_{\mathrm{mod}}/F_{\mathrm{N}}$ as a function of $\alpha$ and $\lambda$, and Figure~\ref{fig:force:M:Lambda} shows $F_{\mathrm{mod}}/F_{\mathrm{N}}$ as a function of $M$ and $\Lambda$. The filled-in contours in Figure~\ref{fig:force:M:Lambda} correspond to the force shown in equation~\eqref{eq:force:no:screening}, where the screening from the optomechanical probe itself has been ignored. The lines, on the other hand, correspond to the force shown in equation~\eqref{eq:force:spheres} where the screening from a spherical probe has been taken into account.
\subsection{Fundamental bounds for the Yukawa parameters $\alpha$ and $\lambda$}
We are now ready to compute the bounds on the parameter ranges that could potentially be tested with a quantum optomechanical system.
To find the bounds, we consider the ratios $\Delta \kappa/ \kappa$ and $\Delta \sigma/\sigma $ as functions of $\alpha$ and $\lambda$, where $\kappa$ and $\sigma$ were defined in equation~\eqref{eq:kappa} as the modification to the gravitational force at the equilibrium distance and the amplitude of the time-dependent contribution, respectively. The result can be found in figure~\ref{fig:bound:alpha:lambda}: the dark green dashed line shows where the relative error satisfies $\Delta \kappa/\kappa = 1$, and the dotted green line shows where $\Delta \kappa^{(\mathrm{mod})}/\kappa = 1$. Since $\kappa$ corresponds to the static modification of the gravitational force, modulating the optomechanical coupling does not improve the sensitivity. We instead focus on the dynamic contribution from $\sigma$. The lighter purple region shows where $\Delta \sigma^{(\mathrm{mod})}/\sigma<1 $, and the darker purple region shows where $\Delta \sigma/\sigma < 1$. The resonantly modulated optomechanical coupling provides a significant enhancement for $\Delta \sigma$.
The general features in figure~\ref{fig:bound:alpha:lambda} can be understood by examining the form of $\kappa$ and $\sigma$, which are shown in equation~\eqref{eq:kappa}. When $\lambda \gg x_0$, the exponential can be approximated as $e^{-x_0/\lambda} \sim 1$. This means that $\sigma$ becomes $\sigma \sim 2\alpha$, which is independent of $\lambda$ and thereby explains the straight line at $|\alpha| \sim 10^{-3}$. Once $\lambda < x_0$, which corresponds to a short-ranged Yukawa force, the effect can no longer be seen by the optomechanical probe. However, the bounds in figure~\ref{fig:bound:alpha:lambda} could be shifted to the left by decreasing $x_0$. Care must be taken that the two systems do not touch, which is limited by the source sphere and probe radii, as well as the oscillation amplitude $\epsilon x_0$. For the example parameters used here, the smallest distance between the two systems is $0.7$\,mm.
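The limiting behaviour described above can be verified directly from the expression for $\sigma$ in equation~\eqref{eq:kappa} (our own sketch; the values of $\alpha$ and $\lambda$ are illustrative):

```python
import numpy as np

def force_ratio(alpha, lam, x0=1e-3, eps=0.1):
    """Ratio F_mod / F_N = eps * sigma for a Yukawa modification."""
    u = x0 / lam
    sigma = alpha * np.exp(-u) * (2 + 2 * u + u**2)
    return eps * sigma

# Long-ranged force, lambda >> x0: sigma ~ 2 alpha, independent of lambda
ratio_long = force_ratio(1e-3, lam=1.0)
# Short-ranged force, lambda << x0: exponentially suppressed, invisible to the probe
ratio_short = force_ratio(1e-3, lam=1e-4)
```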
\begin{figure*}
\centering
\subfloat[ \label{fig:bound:alpha:lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 10mm 0mm -10mm 0mm]{alphalambdasensitivity.pdf}%
} $\qquad$
\subfloat[ \label{fig:bound:M:Lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 10mm 0mm -10mm 0mm]{ExclusionMLambdaSmallRange.pdf}
}\hfill
\caption{Ideal bounds for detecting modifications to Newtonian gravity with an optomechanical sensor. Each bound shows where the value of the modification is greater than the error bound. The parameters used in both plots are shown in table~\ref{tab:Values}. Plot (a) shows the bounds for the Yukawa parameters $\alpha$ and $\lambda$. The dashed dark green line indicates where $\Delta \kappa/\kappa = 1$, and the dotted lighter green line where $\Delta \kappa^{(\mathrm{mod})} / \kappa = 1$. The light purple area shows the parameter regime where $\Delta \sigma^{(\mathrm{mod})}/\sigma <1$ and the dark purple area shows where $\Delta \sigma / \sigma <1$. Since $\kappa$ is a constant effect, modulating the optomechanical coupling yields no improvement of the sensitivity. Plot (b) shows the bounds for the chameleon parameters $M$ in terms of the Planck mass $M_{\mathrm{P}}$ and $\Lambda$ in eV. The bounds include a point-particle approximation of the sensor (the two largest lighter purple areas) and the inclusion of screening from a spherical probe (darker purple lines). The magenta dotted line shows where the screening length of the source mass vanishes, $S_S = 0$, below which the screening of the source starts reducing the sensitivity. Similarly, the orange dashed line shows where the screening length of the probe vanishes, $S_P = 0$, below which the screening of the spherical probe reduces the sensitivity. We have refrained from plotting the bounds $\Delta \kappa/\kappa$ and $\Delta \kappa^{(\mathrm{mod})}/\kappa$ here as they roughly follow the outline of the bounds on $\sigma$.}
\label{fig:exclusion:plot}
\end{figure*}
\subsection{Fundamental bounds for the chameleon parameters $M$ and $\Lambda$}
To obtain the bounds on $M$ and $\Lambda$, we rescale $M$ in terms of the reduced Planck mass $M_{\mathrm{P}}$. We then compute the bounds for $M$ and $\Lambda$ by plotting $\Delta \sigma/\sigma$ as a function of $M$ and $\Lambda$ for the following two cases: (i) when the probe is approximated as a point-particle (no probe screening), and (ii) when the screening from the probe is taken into account. The latter we denote by $\Delta \sigma_{(\mathrm{scr})}$ and $\Delta \sigma_{(\mathrm{scr})}^{(\mathrm{mod})}$.
We compute these quantities by numerically solving equation~\eqref{eq:solve:for:S} for $S_S$ and $S_P$ at each point; the expression for $\sigma$ is given in equation~\eqref{eq:kappa:sigma:shielded}.
The result can be found in figure~\ref{fig:bound:M:Lambda}. Note that we do not plot the bounds for $\Delta\kappa$ and $\Delta\kappa^{(\mathrm{mod})}$, both for clarity and because, as static contributions, they are more difficult to distinguish from a constant noise floor. The lighter regions show the bounds when the optomechanical probe does not contribute to the screening of the fifth force, which is equivalent to approximating the probe as a point-particle. In contrast, the darker regions show the reduction in sensitivity due to the screening that arises from a spherical optomechanical probe.
To explain the features of the plot, we draw lines where the screening from the probe $S_P$ and source system $S_S$ vanishes.
The magenta line shows where $S_S = 0$ and the orange line shows where $S_P = 0$. Above each line, the screening is zero, while below the lines, the screening lengths increase and the modifications to Newtonian gravity can no longer be detected. Finally, the right-most boundary of the dark purple area can be understood as follows: the appearance of $M^{-3}$ in $m_{\mathrm{bg}}$ (see equation~\eqref{eq:mbg}) ensures that, when $M$ is large compared with the other quantities, $m_{\mathrm{bg}}$ is small. This, in turn, means that the range of the force $\lambda_{\mathrm{bg}}$, as shown in equation~\eqref{eq:lambda:bg}, will be large. It then follows that the amplitude $\sigma$ (see equation~\eqref{eq:kappa:sigma:shielded}) will be approximately $\sigma \approx \alpha_{\mathrm{bg}, P}$, where $\alpha_{\mathrm{bg}, P} = 2 M^2/ M_{\mathrm{P}}^2$ from equation~\eqref{eq:alphabgP} (note that $\xi_S = \xi_P = 1$ because we are considering the range of $\Lambda$ above the orange and magenta lines). This means that $\sigma$ is independent of $\Lambda$, and the boundary becomes a vertical line. The point at which the ratio $\Delta \sigma/\sigma = 1$ occurs is then $M/M_{\mathrm{P}} = 48.1$.
\begin{figure*}
\centering
\subfloat[ \label{fig:convex:hull:alpha:lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 10mm 0mm -10mm 0mm]{Exclusionalphalambda_excluded-region.pdf}%
} $\qquad$
\subfloat[ \label{fig:convex:hull:M:Lambda}]{%
\includegraphics[width=0.45\linewidth, trim = 10mm 0mm -10mm -0mm]{ExclusionLambdaM_excluded-region.pdf}
}\hfill
\caption{Comparison between predictions (this work) and known experimental bounds (pink regions). Both plots show the convex hull (yellow) of the bounds derived in this work in figure~\ref{fig:exclusion:plot}. Plot (a) shows the bounds in terms of the Yukawa parameters $\alpha$ and $\lambda$; the pink area represents the experimentally excluded region based on figure~8 of~\cite{Murata_2015} and recent results presented in~\cite{PhysRevLett.124.051301} (see figure~6). Plot (b) shows the bounds in terms of the chameleon parameters $M$ and $\Lambda$, which set the mass and energy scale of the chameleon screening mechanism; it includes the bounds (yellow) for when the optomechanical probe contributes to the screening of the chameleon field. The experimentally excluded region in plot (b) is based on those reported in Ref.~\cite{burrage2018tests}. }
\label{fig:exclusion:plot:comparison}
\end{figure*}
\subsection{Relation to existing experimental bounds}
To see how the theoretical bounds in figure~\ref{fig:exclusion:plot} relate to known experimental bounds on Newtonian gravity, we plot the convex hull of the shaded areas in figures~\ref{fig:bound:alpha:lambda} and~\ref{fig:bound:M:Lambda} against the bounds presented in Refs.~\cite{Murata_2015,PhysRevLett.124.051301,burrage2018tests}. By comparing with experimental results, we are able to demonstrate where optomechanical systems could help further constrain known bounds according to the results in this work. We emphasise, however, that this comparison is highly hypothetical, since experimental challenges such as noise, long-term stability, and integration over many runs of the experiment have not been included in our analysis. Much more work is required before it is known exactly how the optomechanical probe compares with other platforms (see section~\ref{sec:discussion}).
The bounds can be found in figure~\ref{fig:exclusion:plot:comparison}, where figure~\ref{fig:convex:hull:alpha:lambda} shows the bounds in terms of $\alpha$ and $\lambda$, and figure~\ref{fig:convex:hull:M:Lambda} shows the bounds in terms of $M$ and $\Lambda$. The yellow regions show the convex hull of the bounds derived in this work, and the pink regions show the combined parameter spaces that have been experimentally excluded. The orange area in figure~\ref{fig:convex:hull:M:Lambda} shows the excluded region for when the optomechanical probe is approximated as a point-particle, i.e.~the chameleon screening due to the finite size of the probe is neglected.
Our results indicate that, for the values used in this work, even the ideal realisation of a nonlinear optomechanical sensor achieves similar bounds on $\alpha$ and $\lambda$ to those already reported in the literature. The decoherence, dissipation, and thermalisation effects not accounted for in this description are likely to reduce the sensitivity further. This suggests that the sensitivity of the system must be improved further, should we wish to probe the hitherto unexplored regions in figure~\ref{fig:convex:hull:alpha:lambda}. From inspecting equations~\eqref{eq:constant:sensitivity:kappa},~\eqref{eq:constant:sensitivity:sigma},~\eqref{eq:modulated:sensitivity:kappa}, and~\eqref{eq:modulated:sensitivity:sigma}, we note that the strongest dependence is on the mechanical frequency $\omega_{\mathrm{mech}}$; thus, the lower $\omega_{\mathrm{mech}}$, the better the sensitivity. Another strategy would be to increase the strength of the light--matter coupling $k_0$; however, this is a long-standing challenge for many experimental platforms. More effective, perhaps, would be to decrease the separation distance $x_0$ between the probe and source systems, which would allow the optomechanical sensor to explore a larger range of $\lambda$, in particular smaller $\lambda$, since the Yukawa potential is less suppressed there. However, as the sensor is moved closer to the source sphere, the Casimir effect is expected to contribute strongly to the resulting acceleration (see below). On the other hand, our results in figure~\ref{fig:convex:hull:M:Lambda} indicate that optomechanical systems could be used to probe some hitherto unexplored regions of the chameleon parameters $M$ and $\Lambda$. The advances here likely depend on the quality of the background vacuum.
\section{Discussion} \label{sec:discussion}
In this section, we discuss the challenges that must be overcome when considering an experiment of this nature. They include systematics and noise that affect the experiment, as well as forces that arise from the Casimir effect.
\subsection{Examining the conditions for linearising the force}
In order to definitively rule out modifications to the Newtonian potential, we must experimentally determine whether the observed data deviate from the predictions of Newtonian gravity. Doing so requires extensive knowledge of the full dynamics of the system, including higher-order contributions from the Newtonian potential that we have neglected in our main analysis.
With this in mind, we examine the derivation of the linearised gravitational potential (see the expansion in equation~\eqref{eq:expanded:potential}) to determine when this linearisation breaks down. We assumed that the perturbation $\delta x$ to the position of the optomechanical element is small compared with $x_S(t)$ (the distance from the probe to the source mass) at all times. However, depending on the intended precision of the measurement of the force, Newtonian gravitational terms of second order in $\delta x$ may become relevant, that is, terms of the form $\propto\bigl( \hat b^\dag + \hat b\bigr)^2$. These terms can be included into the full dynamical analysis, which has been done in~\cite{bruschi2020time}. We leave performing the same analysis for modified gravity to future work.
Moreover, the radiation pressure found in an optomechanical setup has the explicit effect of displacing the mechanical element. When the light--matter coupling is modulated at mechanical resonance, the maximum position increases linearly as a function of time~\cite{qvarfort2020optimal}. Once this displacement grows too large, the approximation under which the optomechanical Hamiltonian in equation~\eqref{eq:basic:Hamiltonian} was derived is no longer valid (see e.g. Ref~\cite{law1995interaction} for details of how the optomechanical Hamiltonian is derived). One way to deal with a displacement driven by radiation pressure is to cancel the expected radiation pressure by manually introducing a time-dependent linear potential $\sim (\hat b^\dag + \hat b)$ into the dynamics~\cite{qvarfort2020optimal}. In this way, the displacement from the light-radiation pressure is cancelled, while the phase from the gravitational interaction is still imparted on the optical state. The drawback of this method is that it most likely introduces additional noise into the experimental setup from the linear driving term. We do, however, leave the full quantum metrology analysis to future work.
\subsection{Limitations due to the Casimir effect}
Due to the relative weakness of gravity compared with the electromagnetic force, electromagnetic effects are likely to dominate any experimental setting. Therefore, any stray electromagnetic effects must be controlled very precisely in order to detect deviations from Newtonian gravity. One of the most important effects that has to be taken into account is the Casimir force~\cite{casimir1948influence}, which becomes significant when the distance between the probe and the source mass is small. To estimate the effect of the Casimir force, we use an analytic formula given in~\cite{Bimonte2018beyond} (based on the results of~\cite{Bimonte2012exact}) for the force due to the Casimir effect between two homogeneous perfectly conducting spheres at a distance much larger than their radii. The model of two perfectly conducting spheres is unlikely to accurately describe the experimental realisation of the optomechanical setup described in this article, both in terms of geometry and material. Therefore, we will use this case only to give a first estimate of the effect and discuss how to suppress it.
We consider the Drude boundary-condition model for isolated conductors (see~\cite{Bimonte2018beyond} for details). When the distance between the probe and the source, $x_0-R_S-R_P$, is much larger than the thermal wavelength $\lambda_T = \hbar c/(2\pi k_B T)$ (about $1\,\mathrm{\mu m}$ at room temperature), the classical thermal contribution to the Casimir force dominates, which leads to the expression
\begin{equation}\label{eq:Casimir:effect:Dr}
F_C \approx 18k_{\mathrm{B}} T \frac{R_S^3 R_P^3}{(x_0-R_S-R_P)^7},
\end{equation}
where $R_P$ and $R_S$ are the radii of the probe and the source, respectively.
At room temperature and for the parameters given in table~\ref{tab:Values}, equation~\eqref{eq:Casimir:effect:Dr} leads to an acceleration of the order of $9\times 10^{-13}\,\mathrm{m\,s^{-2}}$ experienced by the probe mass, while the gravitational acceleration induced by the source mass is of the order of $6\times 10^{-11}\,\mathrm{m\,s^{-2}}$. The Casimir force is therefore about two orders of magnitude smaller than the main gravitational component. The size of the fifth-force corrections we consider here is largely controlled by the two parameters $\sigma$ and $\kappa$, as shown in equation~\eqref{eq:linearised:potential:chameleon}. At peak sensitivity, when $x_0\sim \lambda$, this means that the ratio of the Casimir force to the Newtonian gravitational force should be compared to $\alpha$. We see from figure~\ref{fig:bound:alpha:lambda} that this ratio can be as low as $10^{-3}$ at the edge of the detectable region; the corrections are even smaller at lower $\lambda$. Furthermore, since the Casimir force grows very strongly with the inverse distance between the source and probe mass, it quickly overshadows the fifth-force contributions by many orders of magnitude when the source-probe distance is decreased to achieve better sensitivities. This shows that the Casimir effect is a relevant systematic that has to be controlled, that is, either precisely quantified or reduced. One way to reduce the force is to lower the temperature of the setup.
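As a quick numerical sanity check, the thermal Casimir force in equation~\eqref{eq:Casimir:effect:Dr} can be evaluated directly. The sketch below is illustrative only: the radii, separation, and temperature are placeholder values, not the parameters from table~\ref{tab:Values}.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def casimir_force(T, R_S, R_P, x0):
    """Thermal (Drude-model) Casimir force between two conducting spheres,
    F_C = 18 k_B T R_S^3 R_P^3 / (x0 - R_S - R_P)^7, valid when the
    surface-to-surface separation greatly exceeds the thermal wavelength."""
    d = x0 - R_S - R_P  # surface-to-surface separation in metres
    return 18.0 * K_B * T * R_S**3 * R_P**3 / d**7

# Illustrative geometry only: 1 mm source sphere, 0.1 mm probe sphere,
# centres 5 mm apart, at room temperature.
F = casimir_force(T=300.0, R_S=1e-3, R_P=1e-4, x0=5e-3)
```

The steep $d^{-7}$ scaling made explicit here is what causes the Casimir force to overwhelm the fifth-force signal as the source-probe separation is decreased.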
Another option to suppress the Casimir effect is to place a material between the source mass and the sensor that acts as a shield to the Casimir effect~\cite{chiaverini2003new,munday_measured_2009}. The Casimir force of the shield will be stationary, while the un-shielded gravitational acceleration will be time-dependent and therefore clearly distinguishable~\cite{schmole2016micromechanical}. This approach is, however, limited by the size of the shield.
For example, in levitated optomechanics, the screening scheme can be naturally realised by placing the source mass behind one of the cavity end mirrors, such that the mirror serves as a shield. However, in the case of detecting modifications due to a chameleon field, the presence of the mirror might introduce additional screening effects that need to be accounted for. The Casimir effect may also be reduced by modulating or compensating for the Casimir force with radiation pressure~\cite{banishev_modulation_2012}, nano-structuring of the source and probe surfaces~\cite{intravaia2013strong}, or an optical modulation of the charge density~\cite{chen_control_2007}.
Further analysis of the impact of a shield, or of other techniques for accounting for the impact of the Casimir force, will require detailed numerical modelling. For example, Pernot-Borràs et al.~\cite{pernot2019general} considered the impact of cylindrical walls on the screening of a source, finding that it can depend strongly on the thickness of the wall used for screening. Since we here consider the fundamental limits of an optomechanical setup, we leave a numerical analysis of the impact of different approaches to future work.
\subsection{Improvements to the sensitivity}
There are a number of ways in which the sensitivity of the optomechanical system can be further improved. In this work, we considered spherical source masses and probes in order to analytically derive the screening from the probe; however, choosing a differently shaped source may improve the bounds that could be achieved. For example, a source mass in the form of a slab much larger than the probe system would mitigate gradient contributions from the Newtonian part of the potential, since the gravitational force from an infinite plane is constant. Furthermore, it was shown in Ref~\cite{Burrage:2017shh} that symmetric source masses tend to be much more strongly screened (and thus have smaller detectable effects) than asymmetric sources. Therefore, we would expect to obtain more favourable precision bounds than those presented in this work by considering asymmetric sources. An interesting prospect also arises from the fact that the optomechanical probe itself can be asymmetric, e.g. in the shape of a levitated rod~\cite{kuhn2017optically}, which offers an additional avenue compared with, for example, atomic systems. However, these non-spherical cases bring with them additional challenges. The approximation used in equation~\eqref{eq:static_field} assumes a spherical source (probe), and approximates the nonlinear solution of the chameleon field equation with an analytic expression derived by asymptotic matching. To accurately obtain measurements with a non-spherical setup would require precise numerical modelling of the chameleon field around a (non-spherical) source and probe, as done in Ref~\cite{Burrage:2017shh}. The precise effect on the sensitivity is left to future work.
As a final note, we mention that the nonlinear radiation-pressure term in the Hamiltonian in equation~\eqref{eq:basic:Hamiltonian} appears in many different contexts, not all of which fall under the category of optomechanics (such as, for example, electromechanical setups~\cite{tsang2010cavity}). Our results therefore apply to these systems as well, and we have a large range of systems to choose from when it comes to optimising the geometry and resulting sensitivity.
\subsection{Future work towards an experimental proposal}
The sensitivities calculated in this work give us an indication of the \textit{resolution} of the force that the optomechanical probe can achieve in principle. That is, we learn the magnitude of the gravitational force that can be detected. In practice, however, we must then determine whether this force is simply the Newtonian force, or whether it is due to the Newtonian force plus an additional force that arises from the modification. With a good enough resolution, such a modification can be detected even if the Newtonian force is much stronger than the modification.
There are several methods by which the modification can be detected. The first is to very carefully model the influence of the Newtonian force on the optomechanical dynamics and on the data that is collected through e.g. a homodyne measurement. If a deviation in the collected data is then seen, steps should be taken to rule out any other source. Another way is to carefully change the equilibrium separation distance $x_0$ between the source sphere and the optomechanical probe. Since the modifications considered in this work change quite drastically with distance due to the exponential term in equation~\eqref{eq:modified:gravitational:potential}, it should be possible to detect such an exponential change in the data. Both of these methods can be theoretically explored in future work.
Our results can be used to evaluate the fundamental ability of a quantum optomechanical system to probe a particular parameter regime of modified gravity theories. A realistic optomechanical system, however, will be affected by a number of systematics and noise sources, including optical dissipation from photons leaking from the cavity, mechanical thermal noise, Brownian motion noise, damping effects, and noise from the trapping or clamping mechanism, as well as radiation back-action noise and shot noise. Additional noise sources include external gravitational noise and environmental vibrations (see e.g.~\cite{schmole2016micromechanical,schmole2017development} for a discussion of a related experimental setup). Generally, such noise sources have spectral contributions at the resonant frequency of the sensor and are enhanced along with the signal from the source mass that we wish to detect. Therefore, in practice, it may be favourable to consider an off-resonant sensing scheme, such as those discussed in Refs~\cite{schmole2016micromechanical,schmole2017development}. We also note that such additional noise sources will be particularly dominant when the mechanical frequency is low; however, we see from equations~\eqref{eq:constant:sensitivity:kappa} and~\eqref{eq:constant:sensitivity:sigma} that a low mechanical frequency is a necessary requirement if we wish to achieve a high sensitivity. Finally, it is not clear how the sensitivity gained from e.g. modulating the optomechanical coupling changes when the $Q$-factors of the cavity and the oscillators are considered.
To model the noise and systematics mentioned in the previous paragraph, a plausible next step beyond this work involves linearising the optomechanical dynamics around a strong coherent input state~\cite{aspelmeyer2014cavity}. With the help of phase-space methods~\cite{serafini2017quantum}, it is then possible to include most of the systematics and noise terms mentioned above in the dynamics. In addition, a homodyne measurement could be modelled using input-output theory for the optical mode. One can then examine the susceptibility of the mode and determine the noise levels required for these effects to be detectable~\cite{motazedifard2021ultraprecision}. An important question that must be addressed is the laser power required to maximise the sensitivity.
Since the linearisation gives rise to equations of motion that differ from those used here, it is difficult to predict what the resulting bounds on modified gravity theories will look like compared with those presented here. Most likely, the presence of noise and absence of non-Gaussian resources (which arise from the nonlinear coupling) means that the prediction for the sensitivity is reduced.
To instead extend the analysis in this work even further in the nonlinear optomechanical regime, we must include noise in the solution of the dynamics for the nonlinear Hamiltonian in equation~\eqref{eq:basic:Hamiltonian}. However, since the resulting nonlinear Langevin equations are generally much more difficult to solve (although certain solutions in the weak-coupling limit and for systems with weak optical decoherence exist~\cite{rabl2011photon, nunnenkamp2011single}), we expect this to be challenging. A preliminary step towards modelling Markovian optical decoherence affecting the intra-cavity state was recently taken~\cite{qvarfort2020master}, and mechanical thermal noise has been modelled using a range of methods~\cite{bassi2005towards, bernad2006quest}. For a strongly coupled system, however, optical and mechanical noise cannot be treated separately, and must instead be considered together~\cite{hu2015quantum,betzholz2020breakdown}. To our knowledge, fundamental quantum metrology bounds in the noisy nonlinear regime have not yet been considered.
Another aspect that needs to be modelled is the additional screening that arises from the inclusion of a shield to block out Casimir forces. In addition, for a levitated optomechanical sphere, a mirror must be placed between the optomechanical probe and the source, which also contributes to the screening (but which can, at the same time, act as the Casimir shield). To carry out a full analysis of the screening, the geometry of the vacuum chamber, along with the trapping mechanism of the optomechanical system and the Casimir shield, must be carefully modelled. It is then possible to exactly predict the magnitude of the modified force that the optomechanical probe can detect.
\section{Conclusions} \label{sec:conclusions}
In this work, we derived the best-possible bounds for detecting modified gravity with a quantum optomechanical sensor. We modelled the effects of a force from an oscillating source mass on the optomechanical probe and estimated the sensitivity of the system by computing the quantum Fisher information. In particular, we considered the additional screening that arises due to the relatively large size of the optomechanical probe.
Our results show that optomechanical sensors could, in principle, be used to improve on existing experimental bounds for the chameleon screening mechanism, although more work is needed to evaluate the prospects for using experimental optomechanical systems as probes for modified gravity.
\section*{Data availability statement}
The code used to compute the screening and sensitivity to chameleon fields can be found in the following online GitHub repository: \href{https://github.com/sqvarfort/modified-gravity-optomech}{https://github.com/sqvarfort/modified-gravity-optomech}.
\section*{Acknowledgments}
We thank Markus Rademacher, Niall Moroney, David Edward Bruschi, Doug Plato, Alessio Serafini, Daniel Braun, Michael R. Vanner, Peter F. Barker, Witlef Wieczorek, Clare Burrage, and Hendrik Ulbricht for helpful comments and discussions. S.Q. was supported in part by an Engineering and Physical Sciences Research Council (EPSRC) Doctoral Prize Fellowship, the Wallenberg Initiative on Networks and Quantum Information (WINQ), and the Marie Skłodowska-Curie Action IF programme “Nonlinear optomechanics for verification, utility, and sensing” (NOVUS) -- Grant- Number 101027183. D.R. would like to thank the Humboldt Foundation for supporting his work with their Feodor Lynen Research Fellowship and acknowledges funding by the Marie Skłodowska-Curie Action IF programme -- Project-Name “Phononic Quantum Sensors for Gravity” (PhoQuS-G) -- Grant-Number 832250. The work of S.S. was supported by the G\"oran Gustafsson Foundation for Research in Natural Sciences and Medicine, by the Royal Society, and partially supported by the UCL Cosmoparticle Initiative and the European Research Council (ERC) under the European Community’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement number 306478-CosmicDawn.
1,477,468,750,649 | arxiv | \section{Introduction}
This paper introduces an event-based method to detect and track faces from the output of an event-based camera (samples are shown in Fig.\ref{fig:overview}). The method exploits the dynamic nature of human faces to detect, track and update multiple faces in an unknown scene. Although face detection and tracking is considered practically solved in classical computer vision, the use of conventional frame-based cameras does not allow dynamic features of human faces to be considered. Event-based cameras record changes in illumination and are therefore able to record the dynamics of a scene with high temporal resolution (in the range of 1\,$\mu$s to 1\,ms). In this work we rely on eye blink detection to initialise the position of multiple trackers and reliably update their positions over time. Blinks produce a unique space-time signature that is temporally stable across populations and can be reliably used to detect the position of eyes in an unknown scene. This paper extends the state of the art by:
\begin{itemize}
\item implementing a low-power human eye-blink detection that exploits the high temporal precision provided by event-based cameras.
\item detecting and tracking multiple faces simultaneously at $\mu$s precision.
\end{itemize}
The pipeline is entirely event-based in the sense that every event output by the camera is processed in an incremental, non-redundant scheme, rather than creating frames from the events to recycle existing image-based methodology. We show that the method is inherently robust to scale changes of faces by continuously inferring the scale from the distance between the two eyes. Compared with existing image-based face detection techniques such as~\cite{ViolaRobustrealtimeface2004}\cite{JiangFaceDetectionFaster2017}\cite{liu2016ssd}, we show in this work that we can achieve reliable detection at the native temporal resolution of the sensor without using costly computational techniques. Existing approaches usually need offline processing to build a spatial prior of what a human face should look like, or vast amounts of data in order to use machine learning techniques.
The method is tested on a range of scenarios to show its robustness in different conditions: indoor and outdoor scenes to test for the change in lighting conditions; a scenario with a face moving close and moving away to test for the change in scale, a setup of varying pose and finally a scenario where multiple faces are detected and tracked simultaneously.
In order to compare performance to frame-based techniques, we build frames at a fixed frequency (25fps) from the grey-level events provided by the event based camera. We then apply gold-standard and state-of-the-art face detection algorithms on each frame and the results are used to assess the proposed event-based algorithm.
\subsection{Event-based cameras}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{camera-principle.png}
\caption{Working principle of the event-based camera and two types of events. 1) A change event of type ON is generated at $t_0$ as the voltage generated by incoming light crosses a voltage threshold. 2) The time $t_2 - t_1$ taken to receive a certain amount of light is converted into an absolute grey-level value, emitted at $t_2$ and used for ground truth in this paper.}
\label{fig:ATIS}
\end{figure}
Event-based vision sensors are a new class of sensors based on an alternative signal acquisition paradigm. Rethinking the way visual information is captured, they increasingly attract attention from the computer vision community, as they provide many advantages that frame-based cameras cannot provide without drastically increasing computational resources. Redundancy suppression and low latency are achieved via precise temporal and asynchronous level-crossing sampling, as opposed to the classical spatially dense sampling at fixed frequency implemented in standard cameras.
Most readily available event-based vision sensors stem from the Dynamic Vision Sensor (DVS)~\cite{LichtsteinerTemporalContrastVision2008}. As such, they work in a similar manner of capturing relative luminance changes. As Fig.~\ref{fig:ATIS} shows, each time illuminance for one pixel crosses a predefined threshold, the camera outputs what is called an event. An event contains the spatial address of the pixel, a timestamp and a positive (ON) or negative (OFF) polarity that corresponds to an increase or decrease in illuminance. Formally, such an event is defined as the n-tuple:
$ev=(x,y,t,p)$,
where $(x,y)$ are the pixel coordinates, $t$ the time of occurrence and $p$ is the polarity.
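The event n-tuple maps naturally onto a small record type. The following sketch is purely illustrative of the data layout; the field names and polarity encoding are assumptions, not part of any sensor API.

```python
from typing import NamedTuple

class Event(NamedTuple):
    """One change event ev = (x, y, t, p) from the sensor."""
    x: int  # pixel column, 0..303 for the ATIS used here
    y: int  # pixel row, 0..239
    t: int  # timestamp in microseconds
    p: int  # polarity: +1 for ON (luminance increase), -1 for OFF (decrease)

# Example: an ON event at pixel (152, 120), roughly 1.25 s into a recording
ev = Event(x=152, y=120, t=1_250_431, p=+1)
```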
Variations of event-based cameras implement additional functionality. In this work, we are using the Asynchronous Time-based Image Sensor (ATIS)~\cite{PoschQVGA143dB2011} as it also provides events that encode absolute luminance information, as does \cite{Guo2017}. Here the time it takes to reach a certain threshold is converted into an absolute grey-level value. This representation allows for easier comparisons with the frame-based world. To compare the output of such cameras with conventional ones, artificial frames can be created by binning the grey-level events. A hybrid solution of event- and frame-based world captures grey-level frames like a regular camera on top of the events~\cite{brandli2014240}.
Inherently, no redundant information is captured, which results in significantly lower power consumption. The amount of generated events directly depends on the activity and lighting conditions of the scene. Due to the asynchronous nature and therefore decoupled exposure times of each pixel these sensors timestamp and output events with $\mu$s precision and are able to reach a dynamic range of up to 125\,dB. The method we propose can be applied to any event-based camera operating at sub-millisecond temporal precision as it only uses events that encode change information.
\subsection{Face detection}
The advent of neural networks enables state-of-the-art object detection networks that can be trained on facial images \cite{YangFacenessNetFaceDetection2017, JiangFaceDetectionFaster2017, sun2018face}, which rely on intensive computation on static images and need enormous amounts of data. Although ideas have been brought forward on how to optimise frame-based techniques for face detection on power-constrained phones, most of the time they have to use a dedicated hardware co-processor to enable real-time operation~\cite{ren2008real}. Nowadays dedicated chips such as Google's Tensor Processing Unit or Apple's Neural Engine have become an essential part of frame-based vision, specialising in executing the matrix multiplications necessary to infer neural networks on each frame as fast as possible. In terms of power efficiency, algorithms such as the one developed by Viola and Jones \cite{ViolaRobustrealtimeface2004} are still more than competitive.
Dedicated blink detection in a frame-based representation is a sequence of detections, one for each frame. To constrain the region of interest, a face detection algorithm is normally used beforehand. Blinks are then deduced from the coarse sequence of detection results, whose rate typically ranges from 15 to 25\,Hz depending on the camera\cite{NomanMobileBasedEyeBlinkDetection2018}. In an event-based approach, we turn the principle inside out and use blink detection as a mechanism to drive the face detection and tracking. With what is, to the best of our knowledge, the first real-time event-based face detector and tracker, we show that by manually labelling fewer than 50 blinks, we can generate sufficiently robust models that can be applied to different scenarios. These results clearly contrast with the vast amounts of data and GPUs needed to train a neural network.
\subsection{Human eye blinks}
We take advantage of the fact that adults blink synchronously and more often than required to keep the surface of the eye hydrated and lubricated. The reason for this is not entirely clear, research suggests that blinks are actively involved in the release of attention \cite{NakanoBlinkrelatedmomentaryactivation2013}. Generally, observed eye blinking rates in adults depend on the subject's activity and level of focus and can range from $3\,\sfrac{blinks}{min}$ when reading up to $30\,\sfrac{blinks}{min}$ during conversation (Table \ref{table_mean_blinking_rates}). Fatigue significantly influences blinking behaviour, increasing both rate and duration \cite{JohnA.SternBlinkRatePossible1994}.
Typical blink duration is between $100$ and $150\,\mathrm{ms}$ \cite{BenedettoDriverworkloadeye2011} and shortens with increasing physical workload or increased focus.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\centering
\setlength{\tabcolsep}{9pt}
\begin{tabular}{r|p{1cm}|p{1cm}}
Activity & \multicolumn{2}{c}{\# Blinks per min} \\
\hline
reading & 4.5 & 3-7\\
at rest & 17 &- \\
communicating& 26 & - \\
non-reading &- & 15-30 \\
\end{tabular}
\caption{Mean blinking rates according to \cite{BentivoglioAnalysisblinkrate1997} (left column) and \cite{JohnA.SternBlinkRatePossible1994} (right column)
\label{table_mean_blinking_rates}}
\end{table}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{single-blink.png}
\caption{Mean and variance of the continuous activity profile of averaged blinks in the outdoor data set with a decay constant of 50\,ms. a) Minimal movement of the pupil; almost no change is recorded. b) The eyelid closes within 100\,ms; many ON events (in white) are generated. c) The eye is in a closed state and a minimum of events is generated. d) The opening of the eyelid is accompanied by the generation of mainly OFF events (in black).}
\label{fig:single-blink}
\end{figure}
To illustrate what happens during an event-based recording of an eye blink, Fig. \ref{fig:single-blink} shows the different stages of eyelid closure and opening. If the eye is in a static state, few events are generated (a). The closure of the eyelid (b) happens within 100\,ms and generates a substantial number of ON events; the eye then remains closed (c) before a slower opening (d) accompanied by the generation of mainly OFF events. From this observation, we devise a method to build a temporal signature of a blink. This signature is then used to signal the presence of a pair of eyes in the field of view, hence the presence of a face.
\section{Methods}
\subsection{Temporal signature of an eye blink}
Eye blinks are a natural dynamic stimulus that can be represented as a temporal signature. While a conventional camera is not adequate to produce such a temporal signature because of its stroboscopic and slow acquisition principle, event-based sensors, on the contrary, are ideal for this task. The blinks captured by an event-based camera are patterns of events that possess invariance in time, because the duration of a blink is independent of lighting conditions and steady across the population. To build a canonical eye blink signature $A(t_i)$ of a blink, we convert events acquired from the sensor into temporal activity. For each incoming event $ev=(x_i,y_i,t_i,p_i)$, we update $A(t_i)$ as follows:
\begin{equation}
A(t_i)=
\left\{
\begin{matrix}
A_{on}(t_{u}) e^{-\frac{t_i-t_{u}}{\tau}} + \frac{1}{scale} & \textrm{if $p_i$=ON}\\
A_{off}(t_{v}) e^{-\frac{t_i-t_{v}}{\tau}} + \frac{1}{scale} & \textrm{if $p_i$=OFF}
\end{matrix}
\right.
\label{eq:activity_increase}
\end{equation}
where $t_u$ and $t_v$ are the times at which an ON or OFF event, respectively, occurred before $t_i$. The respective activity function is increased by $\frac{1}{scale}$ each time $t_i$ an ON or OFF event is registered. The quantity $\textrm{scale}$ acts as a corrective factor to account for a possible change in scale, as a face that is closer to the camera will inevitably trigger more events. Fig.~\ref{fig:activity-timeline} on top of the next page shows the two activity profiles for one tile that aligns with the subject's eye in a recording. Clearly visible are the profiles of the subject's five blinks, as well as much higher activities at the beginning and the end of the sequence, when the subject moves as a whole. From a set of manually annotated blinks we build such an activity model function, as shown in Fig.~\ref{fig:single-blink}, where the red and blue curves represent the ON and OFF parts of the profile, respectively.
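The update rule in Eq.~\ref{eq:activity_increase} amounts to an exponentially decayed event counter, one per polarity and per tile. The sketch below illustrates this; the 50\,ms decay constant is taken from the figure caption, while the class structure is an implementation assumption.

```python
import math

class ActivityTrace:
    """Exponentially decayed event activity A(t) for one polarity in one
    tile (Eq. 1); 'scale' corrects for closer faces triggering more
    events. The decay constant tau defaults to the 50 ms used in the
    figures."""
    def __init__(self, tau=50e-3, scale=1.0):
        self.tau = tau
        self.scale = scale
        self.value = 0.0
        self.t_last = None

    def update(self, t):
        """Register an event of this polarity at time t (in seconds) and
        return the updated activity."""
        if self.t_last is not None:
            # decay the previous activity down to the current time
            self.value *= math.exp(-(t - self.t_last) / self.tau)
        self.value += 1.0 / self.scale  # each event adds 1/scale
        self.t_last = t
        return self.value
```

Events arriving in rapid succession (as during an eyelid closure) accumulate activity faster than it decays, producing the peaks visible in the activity timeline.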
Our algorithm detects blinks by checking whether the combination of local ON- and OFF-activities correlates with that model blink that had previously been built from annotated material.
To compute that local activity, the overall input focal plane is divided into a grid of 16 by 16 tiles, overlapped with a second, similar grid made of 15 by 15 tiles. Each tile is a rectangular patch of $19 \times 15$ pixels, given the event camera's resolution of $304 \times 240$ pixels. The tile dimensions have been set experimentally to line up well with the eye's natural shape. The second grid is shifted by half the tile width and height to allow for redundant coverage of the focal plane.
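The two overlapping grids can be expressed as a simple index computation. This sketch assumes the shifted grid starts half a tile in from the top-left corner, which the text implies but does not state explicitly.

```python
SENSOR_W, SENSOR_H = 304, 240   # ATIS resolution
TILE_W, TILE_H = 19, 15         # 16 x 16 main grid of 19 x 15 pixel tiles
SHIFT_X, SHIFT_Y = TILE_W // 2, TILE_H // 2  # offset of the 15 x 15 grid

def tiles_for_pixel(x, y):
    """Return the tiles covering pixel (x, y): always one tile of the
    main 16 x 16 grid, plus one of the shifted 15 x 15 grid where that
    grid covers the pixel."""
    tiles = [("main", x // TILE_W, y // TILE_H)]
    xs, ys = x - SHIFT_X, y - SHIFT_Y  # coordinates in the shifted grid
    if 0 <= xs < 15 * TILE_W and 0 <= ys < 15 * TILE_H:
        tiles.append(("shifted", xs // TILE_W, ys // TILE_H))
    return tiles
```

Interior pixels thus update the activity of two tiles, while pixels near the border update only one, which is what makes the coverage redundant.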
An activity filter is applied to reduce noise:
For each incoming event, its spatio-temporal neighbourhood is checked for corresponding events. If there are no other events within a limited time or pixel range, the event is discarded. Events that pass this filter will update ON or OFF activity in their respective tile(s) according to Eq.~\ref{eq:activity_increase}. Due to the asynchronous nature of the camera, activities in the different tiles can change independently from each other, depending on the observed scene.
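A minimal version of this spatio-temporal correlation filter could look as follows; the time and distance thresholds are illustrative placeholders, as the text does not specify them.

```python
from collections import deque

def make_noise_filter(dt_max=10_000, r_max=2, memory=256):
    """Return a predicate accepting an event (x, y, t in microseconds)
    only if another recent event occurred within r_max pixels and dt_max
    microseconds. All thresholds are illustrative assumptions."""
    recent = deque(maxlen=memory)  # bounded history of past events

    def passes(x, y, t):
        ok = any(t - te <= dt_max
                 and abs(x - xe) <= r_max and abs(y - ye) <= r_max
                 for xe, ye, te in recent)
        recent.append((x, y, t))  # remember the event either way
        return ok

    return passes
```

Note that a rejected event is still remembered, so that it can later support a genuine burst of activity at the same location; whether the real implementation does this is an assumption of this sketch.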
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{activity-timeline.png}
\caption{ON (red) and OFF (blue) activity for one tile which lines up with one of the subject's eyes. Multiple snapshots of events accumulated over 250\,ms are shown, corresponding to the grey areas. \textbf{a--e)} The subject blinks. \textbf{f)} The subject moves as a whole and a relatively high number of events is generated.}
\label{fig:activity-timeline}
\end{figure*}
\subsubsection{Blink model generation}
The model blink is built from manually annotated blinks from multiple subjects. We use two different models for indoor and outdoor scenes, as the ratio between ON and OFF events differs substantially under natural lighting. Twenty blinks from four subjects were averaged into a model, as can be seen in Fig.~\ref{fig:single-blink}. The very centre of the eye is annotated, and events within a spatio-temporal window of one tile size and 250\,ms are taken into account to generate the activity for the model. This location does not necessarily line up with a tile of the previously mentioned grids. Due to the sparse nature of events, we might observe a similar envelope of activity for different blinks; however, the timestamps at which events are received will not be exactly the same. Since we want to obtain a regularly sampled, continuous model, we interpolate activity between events by applying Eq.~\ref{eq:activity_increase} at a given temporal resolution $R_t = 100\,\mu\mathrm{s}$. The continuous representations for ON (red curve) and OFF (blue curve) activity are then averaged across different blinks and smoothed to build the model. The grey area in Fig.~\ref{fig:sparse-correlation}, which represents such a continuous model, corresponds to the blue mean in Fig.~\ref{fig:single-blink}. We define the so-obtained time-continuous model:
\begin{equation}
B(t)=B_{ON}(t) \cup B_{OFF}(t).
\end{equation}
As part of the model and for implementation purposes, we also store $N= \frac{\#\textrm{events}}{T.scale}$, normalised by the scale term, which reflects the typical number of events triggered by a blink within the last $T$\,ms.
\subsubsection{Sparse cross-correlation}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\columnwidth]{model-comparison.png}
\caption{
Example of a sparse correlation for OFF activity of an actual blink. The grey patch represents $B_{OFF}$, the activity model for OFF events previously built for outdoor data sets. Blue triangles correspond to the activity $A(t_k)$ for which events have been received in the current time window. Black dots symbolise $B_{OFF}(t_k)$, the value of the model activity at the same timestamps as incoming events. Values for blue triangles and black dots are correlated to obtain the similarity score.
\label{fig:sparse-correlation}}
\end{center}
\end{figure}
When streaming data from the camera, the most recent activity within a $T=250$\,ms time window is taken into account in each tile to calculate the template matching score for ON and OFF activity.
However, the correlation score is only ever computed if the number of recent events exceeds $N$, to avoid costly and unnecessary calculations. To further alleviate computational burden, we harness the event-based nature of the recording by taking into account only values for which we have received events. Fig.\,\ref{fig:sparse-correlation} shows an example of a sparse correlation calculation. The cross-correlation score between the incoming stream of events and the model is given by:
\begin{equation}
C(t_k)=\alpha C_{on}(t_k)+(1-\alpha)C_{off}(t_k),
\label{eq:corr1}
\end{equation}
where
\begin{equation}
C_p(t_k) = \displaystyle\sum_{i=0}^{N} A_{p}(t_i)B_{p}(t_i-t_k),
\label{eq:corr2}
\end{equation}
with $p\in\{ON,OFF\}$. The ON and OFF parts of the correlation score are weighted by a parameter $\alpha$ that tunes the contribution of the ON/OFF events. This is necessary because, due to lighting and camera biases, ON and OFF events are usually not balanced. The weight $\alpha$ is set experimentally, with separate values for indoor and outdoor conditions.
For implementation reasons, it is important to calculate the correlation as in Eq.~\ref{eq:corr2}: while the value of the model $B(t_i-t_k)$ can be evaluated at any time, samples of $A$ are only known at the set of event times $\{t_i\}$.
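As an illustration, the sparse correlation of Eq.~\ref{eq:corr1} and Eq.~\ref{eq:corr2} can be sketched in a few lines. This is a minimal sketch, not the actual implementation: the function name, the representation of the model as callables $B_p(\cdot)$ and the per-polarity event lists are assumptions made for the example.

```python
def sparse_correlation(event_times, activities, model, t_k, alpha=0.5):
    """Sketch of Eqs. (corr1)-(corr2): the continuous model B_p is only
    sampled at the timestamps t_i of received events, so the sum runs
    over the sparse event set instead of a dense time grid."""
    score = 0.0
    for p, weight in (("ON", alpha), ("OFF", 1.0 - alpha)):
        # C_p(t_k) = sum_i A_p(t_i) * B_p(t_i - t_k)
        c_p = sum(a * model[p](t_i - t_k)
                  for t_i, a in zip(event_times[p], activities[p]))
        score += weight * c_p
    return score
```

Only the timestamps at which events were actually received enter the sum, which is what keeps the computation cheap.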
If $C(t_i)$ exceeds a certain threshold, we create what we call a blink candidate event for the tile in which the event that triggered the correlation occurred. Such a candidate is represented as the n-tuple $eb = (r, c, t)$, where $(r,c)$ are the coordinates of the grid tile and $t$ is the timestamp. We proceed this way because activity is correlated for each tile individually, and only in a subsequent step are candidates combined into a blink.
\subsubsection{Blink detection}
To detect the synchronous blinks of two eyes, blink candidates across grids generated by the cross-correlation are tested against additional constraints for verification. As a human blink has certain physiological constraints in terms of timing, we check for temporal and spatial coherence of candidates in order to find true positives. The maximum temporal difference between candidates will be denoted as $\Delta T_{max}$ and is typically 50\,ms, the maximum horizontal spatial disparity $\Delta H_{max}$ is set to 60\,pixels and maximum vertical difference $\Delta V_{max}$ is set to 20\,pixels. Algorithm~\ref{algo:blink-detection-pseudo-code} summarises the set of constraints to validate a blink. We trigger this check whenever a new candidate is stored. The scale factor here refers to a face that has already been detected.
\begin{algorithm}[htb]
\caption{Blink detection}
\label{algo:blink-detection-pseudo-code}
\textbf{Inputs:} A pair of consecutive blink candidate events $eb_u=(r_u,c_u,t_u)$ and $eb_v=(r_v,c_v,t_v)$ with $t_u > t_v$ \\
\If{ ($t_{u} - t_{v} < \Delta T_{max}$) AND ($|r_{u} - r_{v}| < \Delta V_{max} \times scale$) AND ($|c_{u} - c_{v}| < \Delta H_{max} \times scale$)}{
\eIf{face is a new face}{\textbf{return} 2 trackers with $scale = 1$}{\textbf{return} 2 trackers with previous $scale$}}{}
\end{algorithm}
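The pairing test of Algorithm~\ref{algo:blink-detection-pseudo-code} amounts to three threshold checks. A minimal sketch in Python follows; the function name and tuple layout are assumptions, while the default thresholds follow the values quoted above, with time in seconds.

```python
def is_blink(cand_u, cand_v, scale=1.0,
             dt_max=50e-3, dh_max=60.0, dv_max=20.0):
    """Two blink-candidate events eb = (r, c, t) are accepted as one blink
    if they are close enough in time, row (vertical) and column
    (horizontal), with the spatial thresholds scaled by the face scale."""
    r_u, c_u, t_u = cand_u
    r_v, c_v, t_v = cand_v
    return (abs(t_u - t_v) < dt_max
            and abs(r_u - r_v) < dv_max * scale
            and abs(c_u - c_v) < dh_max * scale)
```

The check is run whenever a new candidate is stored, against the most recent previous candidate.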
\subsection{Gaussian tracker}
Once a blink is detected with sufficient confidence, a tracker is initiated at each detected location.
Trackers such as the ones presented in~\cite{LagorceAsynchronousEventBasedMultikernel2015} are used with bivariate normal distributions to locally model the spatial distribution of events. For each event, every tracker is assigned a score that represents the probability of the event being generated by the tracker:
\begin{equation} \label{eq:tracker_probability}
p(u) = \frac{1}{2\pi}|\Sigma|^{-\frac{1}{2}} e^{-\frac{1}{2}(\mathbf{u-\mu})^T \Sigma^{-1}(\mathbf{u-\mu})}
\end{equation}
where $\mathbf{u}= [x,y]^T$ is the pixel location of the event and the covariance matrix $\Sigma$ is determined when the tracker is initiated and later updated according to the distance between the eyes. The tracker with the highest probability is updated, provided that this probability exceeds a specific threshold value. A circular bounding box for the face is drawn based on the horizontal distance between the two eye trackers. We shift the centre of the face bounding box by a third of the distance between the eyes to properly align it with the actual face.
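Eq.~\ref{eq:tracker_probability} is the standard bivariate normal density and can be evaluated directly. A sketch is given below; the names are illustrative, and a real implementation would cache $\Sigma^{-1}$ and $|\Sigma|$ between events rather than recompute them.

```python
import numpy as np

def tracker_score(u, mu, sigma):
    """Probability of Eq. (tracker_probability) that an event at pixel
    u = [x, y] was generated by a tracker with mean mu and 2x2
    covariance matrix sigma."""
    d = np.asarray(u, float) - np.asarray(mu, float)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d)
```

The event is assigned to the tracker maximising this score, provided the score exceeds the threshold.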
\subsection{Global algorithm}
The detection and tracking blocks put together allow us to achieve the following event-by-event global face tracking Algorithm~\ref{algo:general_pseudo_code}:
\begin{algorithm}[htb]
\caption{Event-based face detection and tracking algorithm} \label{algo:general_pseudo_code}
\For{each event ev(x, y, t, p)}{
\eIf{at least one face has been detected}{
update best blob tracker for $ev$ as in (\ref{eq:tracker_probability})
update $scale$ of face for which tracker has moved according to tracker distance
}{
update activity according to (\ref{eq:activity_increase})
correlate activity with model blink as in (\ref{eq:corr1})
run Algorithm~\ref{algo:blink-detection-pseudo-code} to check for a blink
}
}
\end{algorithm}
\section{Experiments and Results}
We evaluated the algorithm's performance on a total of 48 recordings from 10 different people. The recordings are divided into 4 sets of experiments to assess the method's aptitude under realistic constraints encountered in natural scenarios. The event-based camera is static, observing people interacting or going about their own tasks. The data set tested in this work includes the following parts:
\begin{itemize}
\item a set of indoor sequences showing individuals moving in front of the camera.
\item a set of outdoor sequences similarly showing individuals moving in front of the camera.
\item a set of sequences showing a single face moving back and forth w.r.t. the camera to test for scale change robustness.
\item a set of sequences with several people discussing, facing the camera to test for multi-detections.
\item a set of sequences with a single face changing its orientation w.r.t. the camera to test for occlusion resilience.
\end{itemize}
The presented algorithm has been implemented in C++ and runs in real-time on an Intel Core i5-7200U CPU. We quantitatively assess the proposed method's accuracy by comparing it with gold-standard and state-of-the-art face detection algorithms from frame-based computer vision. As these approaches require frames, we generate grey-levels from the camera when this mode is available. The Viola-Jones~\cite{ViolaRobustrealtimeface2004} algorithm provides the gold-standard face detector, while a Faster R-CNN and a Single Shot Detector (SSD) network trained on the Wider Face~\cite{YangWiderfaceface2016} data set enable comparison with state-of-the-art face detectors based on deep learning~\cite{ren2015faster, liu2016ssd}.
\subsection{Blink detection and face tracking}
The proposed blink detection and face tracking technique requires reliable detections, i.e. true positives. We do not actually need to detect all blinks, because one is already sufficient to initiate the trackers. Additional incoming blink detections are used to correct tracker drift from time to time and can decrease the latency until tracking starts. As we will show in the experimental results, blinks are typically detected at a rate of about 60\%, which ensures reliable tracking accuracy.
\subsection{Indoor and outdoor face detection}
The indoor data set consists of recordings in controlled lighting conditions. As blinking rates are highest during rest or conversation, subjects sitting in a chair in front of the camera were instructed not to focus on anything in particular and to gaze in a general direction. Fig.~\ref{fig:whole-data-recording} shows tracking data for such a recording. Our algorithm starts tracking as soon as one blink is registered (a). After an initial count to 10, the subject was asked to lean from side to side every 10 seconds in order to vary their face's position. Whereas the frame-based implementations update at a constant rate (25\,fps), our algorithm is updated event-by-event depending on the movements in the scene. If the subject stays still, computation is drastically reduced as there is a significantly lower number of events. Head movement causes the tracker to update within microseconds (b), incrementally changing its location in sub-pixel range. Eye trackers that go astray are rectified at the next blink.\\
Subjects in the outdoor experiments were asked to step from side to side in front of a camera placed in a courtyard under natural lighting conditions. Again they were asked to gaze in a general direction, partly engaged in a conversation with the person who recorded the video. As can be expected, Table~\ref{table-results-summary} shows that results are similar to indoor conditions. The slight difference is due to non-idealities and the use of the same camera parameters as in the indoor experiments. Event-based cameras still lack an automatic tuning system for their parameters, which will hopefully be developed in a future generation of cameras.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{whole-data-recording.png}
\caption{Face tracking of one subject over 45\,s. a) Subject stays still and eyes are being detected. Movement in the background to the right does not disrupt detection. b) When the subject moves, several events are generated.}
\label{fig:whole-data-recording}
\end{figure}
\subsection{Face scale changes}
In 3 recordings the scale of a person's face varies by a factor of more than 5 from the smallest to the largest detected occurrence. Subjects sitting on a movable stool were instructed to approach the camera to within 25\,cm from an initial position and then veer away again after 10\,s to about 150\,cm. Fig.~\ref{fig:scale-data-recording} shows tracking data for such a recording over time. The first blink is detected after 3\,s at roughly 1\,m in front of the camera (a). The subject then moves very close to the camera and to the left, so that not even the whole face bounding box is visible anymore (b). Since the eyes are still visible, this is not a problem for our tracker. However, the ground truth (GT) had to be partly annotated manually for this part of the recording, as two of the frame-based methods failed to detect a face that close to the camera. The subject then moves backwards and to the right, followed by further re-detections (c).
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{scale.png}
\caption{Verifying resistance to scale. a) first blink is detected at initial location. Scale value of 1 is assigned. b) Subject gets within 25cm of the camera, resulting in a three-fold scale change. c) Subject veers away to about 150cm, the face is now 35\% smaller than in a)}
\label{fig:scale-data-recording}
\end{figure}
\subsection{Multiple faces detection}
In order to show that the algorithm can handle multiple faces at the same time, we recorded 3 sets of 3 subjects sitting at a desk talking to each other. No instructions were given, as the goal was to record in a natural environment. Fig.~\ref{fig:multiple-data-recording} shows tracking data for such a recording. The three subjects stay relatively still, but look at each other from time to time as they are engaged in conversation or focus on a screen. Lower detection rates (see Table~\ref{table-results-summary}) are caused by increased pose variation; however, this does not result in an increase of the tracking errors.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{multiple.png}
\caption{Multiple face tracking in parallel. Face positions in X and Y show three subjects sitting next to each other, their heads are roughly on the same height. a) subject to the left blinks at first. b) subject in the centre blinks next, considerably varying their face orientation when looking at the other two. c) third subject stays relatively still. }
\label{fig:multiple-data-recording}
\end{figure}
\subsection{Occlusion sequences}
These last sequences aim to evaluate the robustness of the proposed method to pose variations that cause eye occlusions. The subjects in these sequences rotate their head from one side to the other until one eye is partly occluded. Our experiments show that our algorithm successfully recovers from occlusion to track the eyes. These experiments have been carried out with an event-based camera at VGA resolution. While this camera provides better temporal accuracy and spatial resolution, it does not provide grey-level event measurements. Although we fed frames built from the change detection events (which do not contain absolute grey-level information) to the frame-based methods, none of them could reliably detect a face. This can be expected, as the networks had been trained on grey-level images. We believe that if we retrained the last layers of the networks with manually labelled frames from change detection events, they would probably achieve similar performance. However, creating such a frame data set and the training are beyond the scope of this work.
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{pose-sequence.jpg}
\caption{Pose variation experiment. \textbf{a}) Face tracker is initialised after blink. \textbf{b}) subject turns to the left. \textbf{c-d}) One eye is occluded, but tracker is able to recover.}
\label{fig:pose-sequence}
\end{figure*}
\subsection{Summary}
Table~\ref{table-results-summary} summarises the accuracy of detection and tracking of the presented method, in comparison to Viola-Jones (VJ), Faster-RCNN (FRCNN) and Single Shot Detector (SSD) algorithms. The tracking errors are the deviations from the frame-based bounding box centre, normalised by the bounding box's width. The normalisation provides a scale invariance so that errors estimated for a large bounding box from a close-up face have the same meaning as errors for a small bounding box of a face further away.
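The normalised tracking error can be sketched as follows; this is a minimal illustration of the metric described above, and the function name and argument layout are assumptions.

```python
import numpy as np

def normalised_error(centre_event, centre_frame, bbox_width):
    """Deviation of the event-based bounding-box centre from the
    frame-based one, as a percentage of the frame-based bounding-box
    width, so that errors are comparable across face scales."""
    d = np.linalg.norm(np.asarray(centre_event, float)
                       - np.asarray(centre_frame, float))
    return 100.0 * d / bbox_width
```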
\begin{table}
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{p{1cm}|p{.8cm}|p{1.1cm}|p{0.8cm}|p{1.1cm}|p{0.9cm}}
& \small{\# of recordings} & \small{blinks detected (\%)}& \small{error VJ (\%)}& \small{error FRCNN (\%)}& \small{error SSD (\%)}\\
\hline
\small{indoor} & 21 & $68.4$ & $5.92$ & $9.42$ & $9.21$ \\
\small{outdoor} & 21 & $52.3$ & $7.6$ & $14.57$ & $15.08$ \\
\small{scale} & 3 & $62.6$ & $4.8$ & $10.17$ & $10.22$ \\
\small{multiple}& 3 & $36.8$ & $15$ & $16.15$ & $14.61$ \\
\hline
\small{total} & 48 & $59$ & $7.68$ & $11.77$ & $11.52$ \\
\end{tabular}
\caption{Summary of results for detection and tracking for 4 sets of experiments. \% of blinks detected relates to the total number of blinks in a recording. Tracking errors are Euclidean distances in pixel between the proposed and compared method's bounding boxes, normalised by the frame-based bounding box width and height.}
\label{table-results-summary}
\end{table}
\section{Conclusion}
The presented method for face detection and tracking is a novel method using an event-based formulation. It relies on eye blinks to detect and update the position of faces, making use of the dynamical properties of human faces rather than an approach that is purely spatial.
The face's location is updated with $\mu$s precision, which corresponds to the native temporal resolution of the camera. Tracking and re-detection are robust to more than a five-fold scale change, corresponding to a distance in front of the camera ranging from 25\,cm to 1.50\,m. A blink seems to provide a sufficiently robust temporal signature, as its overall duration changes little from subject to subject.
The amount of events received, and therefore the resulting activity amplitude, varies substantially only when the lighting of the scene is extremely different (i.e. indoor office lighting vs bright outdoor sunlight). The model generated from an initial set of manually annotated blinks proves robust to those changes across a wide set of sequences. Even so, we emphasise again that the primary goal of this work is not to detect 100\,\% of blinks, but to reliably track a face. The blink detection acts as an initialisation and recovery mechanism to allow that. This mechanism provides some resilience to eye occlusions when a face moves from side to side. In the most severe cases of occlusion, the tracker manages to reset correctly at the next detected blink.
The occlusion problem could be further mitigated by using additional trackers for more facial features (mouth, nose, etc) and by linking them to build a deformable part-based model of the face as it has been tested successfully in \cite{ReverterValeirasAsynchronousNeuromorphicEventDriven2015}. Once the trackers are initiated, they could more easily keep the same distances between parts of the face. This would also allow for a greater variety in pose variation and more specifically, this would allow us to handle conditions when subjects do not directly face the event-based camera.
The blink detection approach is simple and yet robust enough for the technique to handle several faces simultaneously. We expect to be able to improve detection accuracy even further by learning the dynamics of blinks via techniques such as HOTS~\cite{LagorceHOTSHierarchyeventbased2016}. At the same time, with increasingly efficient event-based cameras providing higher spatial resolution, the algorithm is expected to improve its performance and range of operation.
We roughly estimated the power consumption of the compared algorithms to provide numbers in terms of efficiency:
\begin{itemize}
\item The presented event-based algorithm runs in real-time on 70\% of a single core of an Intel i5-7200U mobile CPU, averaging 5.5\,W of power consumption estimated from \cite{7thGenerationIntel2017}.
\item The OpenCV Viola-Jones implementation is able to process 24 of the 25\,fps in real-time, using one full core at 7.5\,W, again inferred from the 15\,W full load for both cores\cite{7thGenerationIntel2017}.
\item The Faster R-CNN Caffe implementation running on the GPU uses 175\,W on average on a Nvidia Tesla K40c with 4-5\,fps.
\item The SSD implementation in Tensorflow runs in real-time, using 106\,W on average on the same GPU model.
\end{itemize}
Currently our implementation runs on a single CPU core, beating state-of-the-art neural networks by an estimated factor of 20 in terms of power efficiency. Due to the asynchronous nature of the input and of our method, it could easily be parallelised across multiple threads, even on current architectures that are still bound to synchronous processing of instructions and allocation of memory. A neural network model that runs on neuromorphic hardware could further improve power efficiency by a factor of at least 10.
{\small
\bibliographystyle{ieee}
}
Gravitational waves (GWs) \citep{Einstein:1918btx} are a new multi-frequency observational probe of the Universe which will explore a vast range of cosmological redshifts. The GW signal can be produced by astrophysical sources such as black holes (BHs), neutron stars (NSs), white dwarfs (WDs), and supernovae \citep{Vishveshwara:1970zz,Press:1971wr,Chandrasekhar:1975zza, Blanchet:1995ez, Buonanno:1998gg, Damour:2001bu, Blanchet:2004ek, Baker:2005vv,Pretorius:2005gq,Campanelli:2005dd, Buonanno:2004tp,Marassi:2011si, Zhu:2010af,Cusin:2018rsq}. Along with these astrophysical origins, GWs can also be generated in different cosmological scenarios, such as during the period of inflation \citep{Starobinsky:1979ty,Turner:1996ck, Martin:2013nzq}, from cosmic strings \citep{Kibble:1976sj,Damour:2004kw}, phase transitions \citep{Kosowsky:1992rz,Kamionkowski:1993fg}, etc. The observed GW signal can be classified as either a point source or stochastic in nature: the GW signal can be resolved individually in the former case, and is an unresolved continuous/discontinuous diffused background in the latter case.
The stochastic gravitational wave background (SGWB) \citep{Allen:1996vm, Maggiore:1999vm, Wu:2011ac,Regimbau:2007ed,Romano:2016dpx,Rosado:2011kv,Zhu:2011bd} spans a wide frequency range and is expected to have contributions both from astrophysical binary sources (BHs, NSs, WDs, BH-NS, NS-WD) and from cosmological origins.
One of the potential candidates for the cosmological SGWB is the primordial GW background produced during the epoch of inflation. This can be explored at low frequency ($f \sim 10^{-18}$ Hz) using the large angular scale B-mode polarization of the cosmic microwave background (CMB), which is accessible from CMB experiments such as BICEP-KECK \citep{Ade:2018iql}, Simons Observatory \citep{Ade:2018sbj}, and LiteBIRD \citep{2018JLTP..193.1048S}. The high-frequency primordial inflationary GW signal can in principle be directly probed if the astrophysical SGWB can be successfully distinguished from the inflationary GW signal.
In this work, we focus on the astrophysical SGWB which is produced from a large number of binary mergers of compact objects \citep{Rosado:2011kv, Regimbau:2008nj,Regimbau:2007ed, Wu:2011ac}. Such sources will contribute to the total GW energy background $\rho_{GW}$, which can be written in terms of the dimensionless quantity
\begin{eqnarray}\label{eqgw1}
\Omega_{GW}(f)= \frac{1}{\rho_cc^2} \frac{d \rho_{GW} (f)}{d\ln f},
\end{eqnarray}
where $c$ is the speed of light and $\rho_c= 3H_0^2/8\pi G$ is the critical density of the Universe, which depends on the present value of the Hubble constant $H_0$. The astrophysical SGWB is expected to be anisotropic and several previous methods \citep{Mitra:2007mc,Thrane:2009fp, Talukder:2010yd, Mingarelli:2013dsa, Romano:2016dpx} were developed to measure the signal from the GW data. The currently ongoing ground-based GW experiments (such as advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) (Hanford and Livingston) \citep{TheLIGOScientific:2014jea}, advanced VIRGO detectors \citep{TheVirgo:2014hva}) and the upcoming ground-based GW experiments such as KAGRA \citep{Akutsu:2018axf} and LIGO-India \citep{Unnikrishnan:2013qwa} are going to be operational in the coming decade to measure {the GW signal in the frequency range $f \approx 30-3000$ Hz with a strain noise $\sim 10^{-23}\, {Hz^{-1/2}}$.} The data from the advanced-LIGO's first observational run have imposed upper bounds on both spatially varying and non-varying contributions to $\Omega_{GW}(f)$ \citep{TheLIGOScientific:2016dpb,TheLIGOScientific:2016xzw}. {Another window to the GW signal is through} the Pulsar Timing Array (PTA) \citep{2010CQGra..27h4013H}, which are looking for GW signals in the frequency band $10^{-9}- 10^{-6}$ Hz and have imposed constraints on the strain of SGWB signal as $1.45 \times 10^{-15}$ at $f=1\, \text{yr}^{-1}$ \citep{Arzoumanian:2018saf}. In the future, space-based GW observatory Laser Interferometer Space Antenna (LISA) \citep{2017arXiv170200786A} will probe the frequency band $f \sim 10^{-5}-10^{-1}$ Hz of GW signals. {In even longer timescale, the third generation GW experiments \citep{Evans:2016mbw} such as Einstein Telescope and Cosmic Explorer are going to measure GW signal for frequencies above $10$ Hz with sensitivity better than $10^{-24}\, {Hz^{-1/2}}$ \citep{Evans:2016mbw}. 
The third generation detectors will be able to measure the GW sources up to a redshift $z \sim 20$ with an snr $\sim 10$ \citep{Evans:2016mbw}.}
In this paper, we discuss the origin of the temporal dependence of the astrophysical SGWB signal and show how this can be used to distinguish between the SGWB signal originating from astrophysical and cosmological sources. The temporal dependence is useful for probing the astrophysical event rates of the SGWB sources, the variation of GW emission with frequency, and the spatial positions of the sources. The study of temporal fluctuations along with spatial anisotropies brings a new dimension to exploring the SGWB and its statistical properties. We show that this avenue is going to be useful for observationally distinguishing between the cosmological and astrophysical SGWB. In Sec. \ref{time}, we discuss the origin of the temporal dependence of the SGWB signal. In Sec. \ref{aspects} and Sec. \ref{formalism}, we discuss the formalism and the corresponding estimator for studying the frequency and temporal fluctuations of the SGWB signal. The measurability of the time variability for a network of detectors such as advanced-LIGO (Hanford and Livingston), Virgo and Cosmic Explorer is shown in Sec. \ref{error-estimate}. The conclusion and future scope of this work are discussed in Sec. \ref{conclusion}.
\section{Origin of the temporal dependence in SGWB} \label{time}
Along the cosmic light-cone in a particular sky direction and over a particular observational time window $\Delta t$, the SGWB signal has contributions from all of the events coalescing along the line of sight. The number of coalescing events taking place at different times is governed by Poisson statistics as each of the events happens independently \footnote{It is a reasonable assumption to consider that one coalescing binary system is not triggering other binary events.}. The corresponding probability mass function of occurrence of $N$ mergers of binaries of mass $M$ in a time interval $\Delta t$ can be written as
\begin{equation}\label{poisson}
P (N, M)= \frac{e^{-\Lambda (M,\dot \rho, z)\Delta t} (\Lambda (M,\dot \rho, z) \Delta t)^{N}}{N!},
\end{equation}
where $\Lambda (M,\dot \rho,z)$ is the average event rate \citep{Kalogera:2006uj, OShaughnessy:2006uzj, Bulik:2008ab,Mandel:2009nx,Dvorkin:2017kfg}, which is still poorly known from astrophysical observations and is expected to depend on several parameters such as the mass of the binary compact objects $M$, the star formation history $\dot \rho$, and the source redshift $z$. In the above equation, the time interval in the observer's frame $\Delta t$ is related to the time interval in the source rest frame $\Delta t_r$ at redshift $z$ by the relation $\Delta t= \Delta t_r (1+z)$.
The Poisson nature of the GW events results in a variation in the number of GW sources in different time intervals $\Delta t$, with a standard deviation proportional to $(\Lambda (M,\dot \rho, z) \Delta t)^{1/2}$. As a result, the SGWB, which is the integrated effect of all of the events, will exhibit a temporal variation in a fixed direction in the sky. The average value of the event rate is expected to be constant, as it is governed by astrophysical and cosmological phenomena, and hence we can expect negligible evolution of the event rate $\Lambda (M, \dot \rho, z)$ over the observation time. This implies that the SGWB will be \textbf{time-dependent}, but will exhibit \textbf{statistical time translation symmetry} on averaging over a large observation time. For large event rates, the Poisson distribution tends towards a Gaussian distribution (central limit theorem), as the skewness and kurtosis decrease as $(\Lambda(M,\dot \rho, z) \Delta t)^{-1/2}$ and $(\Lambda(M,\dot \rho, z) \Delta t)^{-1}$ respectively. However, the variance of the signal grows with the event rate as $\Lambda(M,\dot \rho, z) \Delta t$, indicating that the rms time variability of the SGWB signal will not disappear at large event rates.
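The scalings quoted above can be checked with a short Monte-Carlo sketch; the rate and interval used in the example are arbitrary illustrative values, not fitted astrophysical rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_moments(rate, dt, n_draws=200_000):
    """Draw event counts N ~ Poisson(rate*dt) and return the relative rms,
    skewness and excess kurtosis; these scale as (rate*dt)^{-1/2},
    (rate*dt)^{-1/2} and (rate*dt)^{-1} respectively."""
    n = rng.poisson(rate * dt, n_draws)
    mu, sig = n.mean(), n.std()
    skew = ((n - mu) ** 3).mean() / sig ** 3
    kurt = ((n - mu) ** 4).mean() / sig ** 4 - 3.0
    return sig / mu, skew, kurt
```

For $\Lambda\Delta t = 100$ the distribution is already close to Gaussian (skewness $\approx 0.1$, excess kurtosis $\approx 0.01$), while the relative rms remains at the $10\%$ level.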
We can write the observed $\Omega_{GW} (f)$ at a frequency $f$ in terms of the energy emission from each GW source and the number of GW events $N(z)$ in a comoving volume between cosmic times $t(z)$ and $t(z+\Delta z)$ as \citep{2001astro.ph..8028P}
\begin{align}\label{basic-1}
\begin{split}
\Omega_{GW} (f)= \frac{1}{\rho_cc^2}\int dz \frac{N(z)}{(1+z)} \frac{dE_{GW}}{d\ln f_r}\bigg|_{f_r= (1+z)f},
\end{split}
\end{align}
where $\rho_c= 3H^2_0/ 8\pi G$ is the critical density and $c$ is the speed of light. The variation in the amplitude of the SGWB signal within the time interval $\Delta t$ will be reflected in the change in the number of GW events $\Delta N (z, \Delta t)/ N(z, t) = N(z, t+\Delta t)/N(z, t) -1$ in the time interval $\Delta t$.
Due to the Poisson nature of the GW events, if the variation in the number of sources $\Delta N (z, \Delta t)$ (within a time interval $\Delta t$) is comparable to the total number of GW sources $N (z, t)$ contributing to the SGWB signal at frequency $f$ and time $t$, then the relative fluctuations of the SGWB signal around the mean value of $\Omega_{GW} (f)$ are of the order $\approx \Delta N(z, \Delta t)/ N(z, t)$. This implies that the maximum time variability of the SGWB signal will be evident when the condition $\Delta N (z) \sim N (z)$ is satisfied.
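A toy numerical sketch of the redshift integral in Eq.~\ref{basic-1} is given below. The source distribution $N(z)$ and the overall normalisation are placeholders, and the spectrum uses the standard inspiral scaling $dE_{GW}/d\ln f_r \propto f_r^{2/3}$; none of these are fitted values.

```python
import numpy as np

def omega_gw(f, z_grid, n_of_z, dE_dlnf):
    """Toy version of Eq. (basic-1): trapezoidal integration over z of
    N(z)/(1+z) * dE/dln f_r evaluated at the source-frame frequency
    f_r = (1+z) f.  The 1/(rho_c c^2) prefactor is dropped
    (arbitrary normalisation)."""
    integrand = n_of_z(z_grid) / (1.0 + z_grid) * dE_dlnf((1.0 + z_grid) * f)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_grid))

z = np.linspace(0.0, 5.0, 500)
n_z = lambda zz: np.exp(-zz)          # placeholder source distribution
dE = lambda f_r: f_r ** (2.0 / 3.0)   # inspiral-like dE/dln f ~ f^{2/3}
```

With a pure power-law spectrum the background inherits the $f^{2/3}$ slope, irrespective of the assumed $N(z)$.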
The time variability of the SGWB signal depends primarily on three time-scales: (i) the duration $\tau_d$ which a GW source spends at a particular frequency $f$, (ii) the duration between consecutive events ($\Delta t_{event} \propto 1/\Lambda$), and (iii) the time-scale $\Delta t$ over which we estimate the variation of the sky signal. For the first two time-scales, $\tau_d$ and $\Delta t_{event}$, we can define the duty cycle of the GW signal as the ratio of the duration of the signal emitted between frequencies $f$ and $f+ \Delta f$ to the time difference between two GW events \citep{Wu:2011ac, Rosado:2011kv}
\begin{align}\label{basic-2}
\begin{split}
\frac{d\mathcal{D}}{df}= \int dz \, \dot n(z) \frac{d\tau_d}{df}, \end{split}
\end{align}
where $\dot n (z)$ is the event rate as a function of the cosmological redshift, and the duration of the signal $\frac{d\tau_d}{df}$ at frequency $f$ can be written in terms of the GW chirp mass according to the relation
\begin{eqnarray}\label{eqgw7}
\frac{d\tau_d}{df}= \frac{5c^5}{96\pi^{8/3} G^{5/3} \mathcal{M}_z^{5/3} f^{11/3}}.
\end{eqnarray}
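Eq.~\ref{eqgw7} can be evaluated directly; the sketch below uses SI constants and takes the chirp mass in solar masses, with the function name and units being choices made for the example.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg

def dtau_df(f, chirp_mass_msun):
    """Eq. (eqgw7): time a binary with redshifted chirp mass M_z spends
    per unit (observed) frequency at frequency f, in s/Hz."""
    m = chirp_mass_msun * M_SUN
    return (5.0 * c ** 5
            / (96.0 * np.pi ** (8.0 / 3.0) * G ** (5.0 / 3.0)
               * m ** (5.0 / 3.0) * f ** (11.0 / 3.0)))
```

The steep $f^{-11/3}$ dependence is what drives the duty cycle down at high frequencies, and the $\mathcal{M}_z^{-5/3}$ dependence makes heavy (BBH) systems sweep through a frequency band much faster than BNS systems.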
A large duty cycle implies that the duration of the GW signal at frequency $f$ is long compared with the time difference between events. So, if $\Delta t < \Delta t_{event}$, the sources of the GW signal do not change within the time $\Delta t$, resulting in an SGWB signal with no temporal fluctuations. Also, when $\Delta t < \tau_d$, the variation of the GW signal at a frequency $f$ is negligible within the time scale $\Delta t$. This implies that for the scenario $\tau_d \gg \Delta t_{event} \gg \Delta t$, the number of GW sources produced in the time $\Delta t$ is much smaller than the total number of GW sources contributing to the SGWB signal, and the intrinsic time variability of the signal on the time scale $\Delta t$ is also negligible. As a result, the fractional change in the SGWB signal with respect to the average background is tiny in this case.
On the other hand, if the three time-scales satisfy the condition $\Delta t \geq \Delta t_{event} \geq \tau_d$, then the SGWB signal shows time variability comparable to the mean signal on the time scale $\Delta t$. The GW sources satisfying this criterion are expected to contribute maximally to the time variability of the SGWB signal. For this kind of source, the duty cycle is of order unity or less, i.e. $\tau_d \leq \Delta t_{event}$. The condition for the time variability of the SGWB signal can also be written in terms of the overlap function as defined by \citet{Rosado:2011kv}. Non-overlapping GW sources will make the dominant contribution to the time variability of the SGWB signal, whereas overlapping sources, for which $\Delta N (z) \ll N (z)$, will produce negligible time variability in the SGWB.
In Fig. \ref{fig:duty}, we plot the duty-cycle as a function of the GW frequency for binary neutron stars (BNS), black hole-neutron star (BH-NS) systems and binary black holes (BBH). For this plot, we have taken the event rates of the GW sources as $1$ Mpc$^{-3}$ Myr$^{-1}$, $0.03$ Mpc$^{-3}$ Myr$^{-1}$ and $0.005$ Mpc$^{-3}$ Myr$^{-1}$ for BNS, BH-NS and BBH respectively. From Fig. \ref{fig:duty}, it is evident that BNS, followed by BH-NS systems, will have overlapping sources, resulting in smaller temporal fluctuations in the SGWB signal than for BBH systems. As a result, this avenue helps in distinguishing between the SGWB originating from BNS, BH-NS and BBH systems.
To show the time variability of the SGWB for BNS, BH-NS and BBH, we estimate the rms temporal fluctuations of the SGWB\footnote{Here the notation $\langle .\rangle_T$ implies an average in the time domain over different realizations of the SGWB.} $\Delta \Omega_{GW} (f)= \sqrt{\bigg\langle \bigg(\frac{\Omega_{GW} (t,f) - \bar \Omega_{GW} (f)}{\bar \Omega_{GW} (f)}\bigg)^2\bigg\rangle_T}$ from $100$ realizations of $\Omega_{GW} (t)$ drawn from a Poisson distribution with the fixed event rates $1\, \text{Mpc}^{-3}\,\text{Myr}^{-1}$, $0.03\, \text{Mpc}^{-3}\,\text{Myr}^{-1}$ and $0.005\, \text{Mpc}^{-3}\,\text{Myr}^{-1}$ for BNS, BH-NS and BBH respectively. The corresponding rms variations in the SGWB signal are shown in Fig. \ref{fig:rms} for the frequency range relevant for terrestrial GW observatories such as advanced-LIGO, Virgo, KAGRA and LIGO-India. The maximum relative temporal fluctuations of the SGWB signal are expected from BBHs, as the number of overlapping events is fewer, resulting from a smaller value of the duty cycle. On the other hand, BNS will exhibit less relative time variability due to their larger duty cycles. This behavior of the SGWB over these frequency ranges will allow us to distinguish between the different types of GW sources contributing to the signal. The amplitude of the time variability of the SGWB signal depends strongly on the event rate, the probability distribution of the compact-object masses and their redshift distribution. We show the changes in the rms fluctuation for the BBH case with variations in the event rates and in the probability distributions of the black hole masses in Fig. \ref{fig:rmsbh}. Fig. \ref{fig:rms} and Fig. \ref{fig:rmsbh} indicate that characterizing the time variability of the SGWB signal is a useful probe of the different populations of binary GW sources and their event rates.
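The Monte Carlo behind such an estimate can be sketched in a few lines. The toy model below (our simplification: every source contributes an equal weight, which the full calculation does not assume) illustrates how the fractional rms falls as the mean number of overlapping sources grows:

```python
import numpy as np

rng = np.random.default_rng(42)

def rms_fluctuation(n_bar, n_real=100):
    """Fractional rms of the SGWB amplitude over n_real realizations when
    the number of sources per time bin is Poisson with mean n_bar.
    All sources are given equal weight in this toy model."""
    counts = rng.poisson(n_bar, size=n_real).astype(float)
    return counts.std() / counts.mean()

# BBH-like (few overlapping events) vs BNS-like (many overlapping events)
print(rms_fluctuation(10.0), rms_fluctuation(1.0e4))
```

Both numbers track the expected $1/\sqrt{\bar n}$ scaling, which is why BBHs, with their small duty cycle, dominate the temporal fluctuations.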
\begin{figure}
\centering
\includegraphics[trim={0 0 0 0cm}, clip, width=0.5\textwidth]{Figure_1.png}
\caption{The duty-cycle for binary black holes (BBH), binary neutron stars (BNS) and black hole-neutron star (BH-NS) systems, for ground-based detectors.}
\label{fig:duty}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0 0 0 0cm}, clip, width=0.5\textwidth]{Figure_1a.png}
\caption{We show the rms fluctuations in the SGWB signal expected due to the Poisson nature of the GW binary sources, for three species of binaries: binary black holes (BBH), binary neutron stars (BNS) and black hole-neutron star (BH-NS) systems.}
\label{fig:rms}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0 0 0 0cm}, clip, width=0.5\textwidth]{figure_BH_BH_rates.png}
\caption{We show the rms fluctuations in the SGWB signal expected due to the Poisson nature of GW binary black holes with three different rates. With an increase in the number of events, the background rms fluctuations diminish across the frequency band $10$-$100$ Hz explored by ground-based GW detectors such as LIGO and Virgo.}
\label{fig:rmsbh}
\end{figure}
{The time variability of the SGWB signal is an independent avenue, alongside other existing methods, to infer the GW event rates from individual GW sources and the SGWB signal \citep{2013NJPh...15e3027M, 2015PhRvD..91b3005F, PhysRevX.8.021019}. These frameworks \citep{2013NJPh...15e3027M, 2015PhRvD..91b3005F, PhysRevX.8.021019} are able to measure the rates from the GW data and can successfully distinguish between the background and individual signals. The detection of individual GW events is more powerful for measuring the local GW event rates from the ``loudest individual events''. However, the study of the temporal variation of the SGWB signal will be powerful mainly in distinguishing the populations of different SGWB sources and understanding their distribution over the cosmic volume. The event rates inferred from the SGWB signal can probe the signal up to a high redshift, which cannot be inferred from the individual loud sources. The temporal dependence of the SGWB signal can also be a useful avenue for distinguishing between astrophysical and cosmological SGWB signals, since the latter are generally not expected to show sporadic behavior in the temporal domain.
}
The temporal fluctuations of $\Omega_{GW}(f, \hat n, t)$ can be written as
\begin{eqnarray}\label{eqgw4}
\begin{split}
\Omega_{GW}(f,\hat n, t)=& \frac{f}{4\pi\rho_c c^3} \iint dz\,d\mathcal{M} R_\mathcal{M}(z,\hat n, t)\frac{dV}{dz}\\ & \times \overbrace
{\bigg(\frac{(1+z)f_r}{d_L^2} \frac{dE_{GW}(f_r, \hat n)}{df_r}\bigg)}^{\text{GW source}},
\end{split}
\end{eqnarray}
where we can express the emission of the GW signal during the inspiral phase from an individual source in terms of the redshifted chirp mass $\mathcal{M}_z= \mathcal{M} (1+z)$ and redshifted frequency $f_r= (1+z)f$ by the relation
\begin{eqnarray}\label{eqgw4b}
\frac{dE_{GW}(f_r, \hat n)}{df_r}= \frac{(G\pi)^{2/3} \mathcal{M}_z^{5/3}}{3f_r^{1/3}}.
\end{eqnarray}
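The energy spectrum above is equally simple to code up; the sketch below (illustrative only, SI units, chirp mass in solar masses) makes the $\mathcal{M}_z^{5/3} f_r^{-1/3}$ scaling explicit:

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]

def dE_df(f_r, mchirp_z):
    """Inspiral energy spectrum dE_GW/df_r for a redshifted chirp mass
    mchirp_z (solar masses) at source-frame frequency f_r (Hz)."""
    M = mchirp_z * M_SUN
    return (G * np.pi)**(2.0 / 3.0) * M**(5.0 / 3.0) / (3.0 * f_r**(1.0 / 3.0))

# raising the frequency by a factor of 8 lowers dE/df by exactly a factor 2
print(dE_df(100.0, 1.2), dE_df(800.0, 1.2))
```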
The number of coalescing binaries $R_\mathcal{M}(z,\hat n, t)$ per unit earth time, per unit comoving volume, per unit mass-bin for binaries of chirp mass $\mathcal{M}$ can be written as
\begin{eqnarray}\label{eqgw4c}
R_\mathcal{M}(z,\hat n, t)= \frac{\overbrace{n_{gal}(\hat n, z)}^{\text{cosmology}}\overbrace{\mathcal{E}_\mathcal{M}(z,\hat n, t_r)}^{\text{astrophysics}}}{(1+z)},
\end{eqnarray}
where $n_{gal}(\hat n, z)$ is the number density of galaxies and $\mathcal{E}_\mathcal{M}(z,\hat n, t_r)$ is the number of GW binaries coalescing per galaxy, per unit cosmic time, per unit chirp mass. $dV/dz$ denotes the comoving volume element, which can be written as $4\pi c d_L^2/ ((1+z)^2H(z))$. The merger and ring-down phases of the individual GW signals can also be included to model the SGWB signal. As the GW events are expected to form a statistical (Poisson-distributed) process, $\mathcal{E}_\mathcal{M}(z,\hat n, t)$ is intrinsically time-dependent.
The average of $\Omega_{GW} (f)$ over a large observation time will exhibit statistical time translation symmetry which can be written as
\begin{eqnarray}\label{eqgw3}
\bar\Omega_{GW} (f)\equiv \langle \Omega_{GW}(f, \hat n, t) \rangle_T = \frac{1}{T}\int dt \frac{1}{\rho_c} \frac{d \rho_{GW} (f, \hat n, t)}{d\ln f}.
\end{eqnarray}
\section{Aspects of SGWB in frequency and temporal domains}\label{aspects}
We propose to study three aspects of the astrophysical SGWB: (i) \textit{Spectral derivative of the SGWB signal}, (ii) \textit{Time derivative of the SGWB signal} and, (iii) \textit{Tomographic redshift estimation of the SGWB signal}.
\subsection{Spectral derivative of the GW signal at a fixed observation time}
The spectrum of the GW signal is an indicator of the mass of the coalescing binaries, as the mergers of the binaries with heavier masses happen at a lower frequency. We can then construct the spectral derivative of the SGWB signal as
\begin{align}\label{sgwbft}
\begin{split}
\mathcal{F} (t, f_a, f_b, \hat n)\equiv & \Omega(t, f_a, \hat n)- \bigg(\frac{f_a}{f_b}\bigg)^{2/3}\Omega(t, f_b, \hat n) \\ =& \frac{(G\pi)^{2/3}f_a^{2/3}}{3\rho_c c^2 H_0} \iint dz \, d\mathcal{M} \, \frac{n_{gal}(\hat n, z)}{(1+z)^{4/3}E(z)} \\ &\bigg[\bigg(\mathcal{E}_{\mathcal{M}}(z,\hat n, t) \mathcal{M}_z^{5/3}(t, \hat n)\bigg) \bigg{|}_{f_a}\\ & - \bigg(\mathcal{E}_{\mathcal{M}}(z,\hat n, t) \mathcal{M}_z^{5/3}(t, \hat n)\bigg)\bigg{|}_{f_b}\bigg],
\end{split}
\end{align}
where $f_a= f_b+\Delta f$, and we have used the fact that the GW strains from the individual binaries are not correlated with those of other coalescing binaries. $E(z)= \sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda}$ describes the expansion history of the Universe. The maximum frequency ($f_{max}$) up to which the GW signal is emitted depends on the last stable orbit of the BBH, BH-NS, and BNS systems. It depends on the total mass $M$ of the coalescing binaries as $f_{max}= \frac{c^3}{6\sqrt{6}\pi G M}$. So the GW signal emitted from sources with total mass between $M^a\propto \frac{1}{f_a}$ and $M^b\propto \frac{1}{f_b}$
will contribute to the differential signal $\mathcal{F} (t, f_a, f_b, \hat n)$
\begin{align}\label{sgwbft2}
\begin{split}
\mathcal{F} (t, f_a, f_b, \hat n)=& \frac{(G\pi)^{2/3}f_a^{2/3}}{3\rho_c c^2 H_0} \int dz\, n_{gal}(\hat n, z)\\ & \times\int d\mathcal{M} \bigg(\frac{\mathcal{E}_{\mathcal{M}}(z,\hat n, t) \mathcal{M}_z^{5/3}(t, \hat n)}{(1+z)^{4/3}E(z)}\bigg) \\ &\times \bigg(H({M}^a_{z}-{{M}_{z}})-H({M}^b_{z}-{{M}_{z}})\bigg),
\end{split}
\end{align}
where $H(x)$ is the Heaviside function which is non-zero only for positive values of the argument $x$. For the same observation time and sky direction, the number of GW sources emitting in individual mass bins is fixed. As a result, the spectral derivative of the SGWB captures the sky luminosity in the redshift-averaged mass-bins corresponding to $\Delta f/f$. This will also exhibit temporal fluctuations due to the variation in the number of events with cosmological redshift at a fixed frequency bin-width. However, the temporal dependence can be weak, depending on the population of the sources and the event rate.
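The mass window selected by the Heaviside functions in Eq. \ref{sgwbft2} follows from inverting the last-stable-orbit relation. A minimal sketch (Python; the example masses and frequencies are arbitrary choices for illustration):

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def m_of_fmax(f):
    """Total binary mass (solar masses) whose last-stable-orbit frequency
    is f: f_max = c^3/(6*sqrt(6)*pi*G*M)  =>  M = c^3/(6*sqrt(6)*pi*G*f)."""
    return c**3 / (6.0 * np.sqrt(6.0) * np.pi * G * f) / M_SUN

def in_mass_window(M, f_a, f_b):
    """Heaviside selection of Eq. (sgwbft2): with f_a > f_b, only binaries
    with total mass between M^a = m_of_fmax(f_a) and M^b = m_of_fmax(f_b)
    contribute to the spectral-derivative signal."""
    return (M > m_of_fmax(f_a)) & (M <= m_of_fmax(f_b))

print(m_of_fmax(100.0))               # ~ 44 solar masses
print(in_mass_window(50.0, 100.0, 80.0))
```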
The spatial and time average of the spectral derivative signal in individual mass-bins can be expressed as
\begin{eqnarray}\label{sgwbft3}
\bar{\mathcal{F}} (f_a,f_b)= \frac{1}{4\pi T_{obs}}\int d^2\hat n \int_0^{{T}_{obs}} dt\, \mathcal{F} (t, f_a, f_b, \hat n).
\end{eqnarray}
$\bar{\mathcal{F}} (f_a,f_b)$ can be expected to be a constant over separate large observational windows due to the negligible variation of the astrophysical and cosmological phenomena over the observation time.
\subsection{Temporal derivative of the SGWB signal at fixed frequency}
The temporal dependence due to the Poisson distribution can be captured by estimating the variation in the SGWB amplitude between two epochs of time $t$ and $t'= t-\Delta t$ at a fixed GW frequency $f$
\begin{align}\label{sgwbtime1}
\begin{split}
\mathcal{T} (t, t', f, \hat n) \equiv\,& \Omega (t, f, \hat n)- \Omega (t', f, \hat n)\\ =& \frac{(G\pi)^{2/3}f^{2/3}}{3 \rho_c c^2} \iint dz \, d\mathcal{M}\frac{n_{gal}(\hat n, z)}{(1+z)^{4/3}E(z)}\\& \times \bigg[\bigg({\mathcal{E}_{\mathcal{M}}(z,\hat n, t)\mathcal{M}_z^{5/3}(\hat n)}\bigg) \\ & - \bigg({\mathcal{E}_{\mathcal{M}}(z,\hat n, t')\mathcal{M}_z^{5/3}(\hat n)}\bigg)\bigg].
\end{split}
\end{align}
This is the line-of-sight integrated quantity of the variation of the number of events in any sky direction. The spectrum of this quantity is a direct probe of the variation of the event rate across the GW source population.
\subsection{Tomographic redshift estimation of the spatial distribution of the GW sources}\label{position}
The three-dimensional spatial distribution of the GW sources depends upon the background cosmology, the redshift of the sources and the spatial distribution of the host galaxies. Cosmological information (such as the redshift distribution of the host galaxies of the GW sources) which is embedded in the SGWB signal is not time-dependent. The contribution to the SGWB signal from individual cosmological redshifts can be obtained by cross-correlating with redshift-known cosmic probes of the density field. From a galaxy survey, we obtain the spatial position of each galaxy along with its photometric or spectroscopic redshift. We can construct the tomographic density field at different cosmological redshifts as
\begin{eqnarray}\label{cosmo1}
\delta_{g} (\hat n, z)= \frac{n_{gal}(\hat n, z)}{\bar n_{gal}(z)}-1,
\end{eqnarray}
where $n_{gal}(\hat n, z)$ is the number of galaxies in the redshift bin $z$ and in the direction $\hat n$, and $\bar n_{gal}(z)$ is the average number of galaxies. The two-point correlation function of galaxies $\mathcal{W}_{gg}$ is a probe of the underlying dark matter correlation function $ \xi_{DM} (\chi, z)$ through the relation
\begin{align}\label{gal1}
\mathcal{W}_{gg}(\theta, z) \equiv & \langle \delta(\hat n, z) \delta(\hat n + \theta, z)\rangle\nonumber \\ =&\int d^2\hat n\, D^2(z)b^{2}_{g}(z)\xi_{DM}( \chi(z), \chi(z), \theta),
\end{align}
where the angular bracket $\langle.\rangle$ denotes the all-sky average, $D(z)$ is the growth function, $b_g(z) = \delta_g/\delta_{DM}$ is the galaxy bias with respect to the dark matter density field $\delta_{DM}$, and the correlation function $\xi_{DM}( \chi(z'), \chi(z), \theta)$ is related to the power spectrum of the dark matter distribution $P_{DM}(k)$ by
\begin{eqnarray}\label{corr}
\xi_{DM}( \chi(z'), \chi(z), \theta)= \frac{1}{2\pi^2} \int dk k^2 P_{DM}(k) j_0(k\chi),
\end{eqnarray}
where $j_0(k\chi)$ is the spherical Bessel function and $\chi= \sqrt{\chi(z')^2 + \chi(z)^2 - 2\chi(z')\chi(z)\cos{\theta}}$.
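Eq. \ref{corr} is a spherical Bessel transform that can be checked with elementary quadrature. The sketch below (Python; the input spectrum is a made-up toy shape, not a fitted $P_{DM}(k)$) evaluates $\xi$ on a $k$-grid:

```python
import numpy as np

def xi_from_pk(k, pk, chi):
    """xi(chi) = 1/(2 pi^2) Int dk k^2 P(k) j0(k chi), Eq. (corr),
    by trapezoidal quadrature; j0(x) = sin(x)/x."""
    x = k * chi
    j0 = np.sinc(x / np.pi)               # np.sinc(t) = sin(pi t)/(pi t)
    integrand = k**2 * pk * j0
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])
                  * np.diff(k)) / (2.0 * np.pi**2)

# toy spectrum, only to exercise the quadrature
k = np.linspace(1e-4, 10.0, 20000)
pk = k / (1.0 + (k / 0.1)**3)
print(xi_from_pk(k, pk, chi=0.001), xi_from_pk(k, pk, chi=100.0))
```

The oscillatory $j_0$ suppresses the integral at large separations, so $\xi$ decays with $\chi$ as expected.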
The cross-correlation of the SGWB with the tomographic redshift bins of the large-scale structure encodes the cosmological spatial distribution of the GW sources. The
time-dependent $\mathcal{F} (t, f, f', \hat n)$ and $\mathcal{T} (t, t', f, \hat n)$ signal intrinsically depends on the redshift distribution of the GW sources. So, the angular cross-correlation of $\mathcal{F} (t, f, f', \hat n)$ with the tomographic density field $\delta_g (\hat n, z)$ can be expressed as
\begin{align}\label{sgwbfco1}
\mathcal{W}_{\mathcal{F}g}(t, f, z, \theta) \equiv& \langle \mathcal{F} (t, f, \hat n) \delta(\hat n + \theta, z)\rangle\nonumber \\ =&\int d^2\hat n\int dz'\, D(z')D(z)b_{g}(z)b_{\mathcal{F}}(t, f,z',\hat n)\nonumber \\& \times \xi_{DM}( \chi(z'), \chi(z), \theta),
\end{align}
where $b_{\mathcal{F}}(t, f, z', \hat n) $ is the time-dependent GW bias for the spectral derivative which we can express in terms of the GW source properties by the relation
\begin{eqnarray}\label{sgwbfco1a}
b_{\mathcal{F}}(t, f, z, \hat n)=&\frac{(G\pi)^{2/3}f_a^{2/3}}{3\rho_c c^2 H_0} \int d\mathcal{M} \bigg(\frac{\mathcal{E}_{\mathcal{M}}(z,\hat n, t) \mathcal{M}_z^{5/3}(t, \hat n)}{(1+z)^{4/3}E(z)}\bigg)\nonumber \\ &\times \bigg(H({M}^a_{z}-{{M}_{z}})-H({M}^b_{z}-{{M}_{z}})\bigg).
\end{eqnarray}
Similarly, the cross-correlation of $\mathcal{T} (t, t', f, \hat n)$ with the galaxy distribution will be able to capture the time evolution of the binary events with redshift, which can be written as
\begin{align}\label{sgwbfco2}
\mathcal{W}_{\mathcal{T}g}(t, t', f', z, \theta) \equiv& \langle \mathcal{T} (t, t', f, \hat n) \delta(\hat n + \theta, z)\rangle\nonumber \\= & \int d^2\hat n\int dz'\, D(z)D(z')b_{g}(z)\nonumber \\& \times b_{\mathcal{T}}(t,t', f',z', \hat n) \xi_{DM}( \chi(z'), \chi(z), \theta),
\end{align}
where, we have defined $b_{\mathcal{T}}(t,t', f',z, \hat n)$ as
\begin{align}\label{sgwbfco1c}
b_{\mathcal{T}}(t,t', f',z, \hat n)=&\frac{(G\pi)^{2/3}f^{2/3}}{3 \rho_c c^2} \int \, d\mathcal{M}\frac{1}{(1+z)^{4/3}E(z)}\nonumber \\& \times \bigg[\bigg({\mathcal{E}_{\mathcal{M}}(z,\hat n, t)\mathcal{M}_z^{5/3}(\hat n)}\bigg) \nonumber \\ & - \bigg({\mathcal{E}_{\mathcal{M}}(z,\hat n, t')\mathcal{M}_z^{5/3}(\hat n)}\bigg)\bigg].
\end{align}
The two-point cross-correlations $\mathcal{W}_{\mathcal{T}g}(t, t', f', z, \theta)$ and $\mathcal{W}_{\mathcal{F}g}(t, f, z, \theta)$ are all-sky integrated quantities. For a statistically isotropic Universe, the all-sky average of the cross-correlation signal translates into a temporal average over different realizations of the events. As a result, we can expect the two-point correlation function to have statistical time translation symmetry. It will be related to the mean merger rate of GW sources per unit redshift per unit time, denoted by $\mathcal{E}_{\mathcal{M}}(z)$.
The two-point cross-correlation function, defined in real space, can also be obtained in the spherical harmonic basis of the field ($X_{lm}= \int d^2\hat{n} Y^*_{lm} (\hat n) X(\hat n)$) using the addition theorem of spherical harmonics
\begin{eqnarray}\label{sph-func}
\mathcal{W_\mathcal{XX'}}(\theta)= \sum_l \bigg(\frac{2l+1}{4\pi}\bigg) P_l(\cos(\theta)) C^{XX'}_l,
\end{eqnarray}
where $C_l^{XX'}= \frac{1}{2l+1}\sum_m X_{lm}X'^*_{lm}$ and $P_l(\cos(\theta))$ are the Legendre polynomials.
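This decomposition is easy to verify numerically. The sketch below (illustrative, using a monopole-only spectrum as the test case) implements Eq. \ref{sph-func} with NumPy's Legendre utilities:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def w_of_theta(theta, cl):
    """W(theta) = sum_l (2l+1)/(4 pi) P_l(cos theta) C_l, Eq. (sph-func).
    legval(x, coeff) evaluates sum_l coeff[l] * P_l(x)."""
    ell = np.arange(len(cl))
    coeff = (2.0 * ell + 1.0) / (4.0 * np.pi) * np.asarray(cl, dtype=float)
    return legval(np.cos(theta), coeff)

# a pure monopole gives the same W at every angle: C_0/(4 pi)
print(w_of_theta(0.3, [1.0, 0.0, 0.0]))   # ~ 0.0796
```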
In summary, the time dependence of the SGWB carries rich information about the astrophysical event rate of the GW sources. We show that a set of observables such as $\mathcal{F}$, $\mathcal{T}$, $\mathcal{W}_{\mathcal{F}g}$, and $\mathcal{W}_{\mathcal{T}g}$ are capable of probing the event rate as a function of cosmological redshift and of the chirp mass of the binaries from the SGWB signal. The auto-power spectrum of the SGWB is also an interesting probe. Recent studies have explored the auto-power spectrum and the impact of shot noise \citep{Jenkins:2019uzp, Jenkins:2019nks}. A recent study \citep{Canas-Herrera:2019npr} has explored the cross-correlation of the SGWB signal with galaxy surveys for a cosmic-variance-limited measurement of the SGWB, without considering the overlap reduction function of GW detectors.
\section{Estimators for the astrophysical SGWB}\label{formalism}
In this section, we devise the formalism which will be useful for extracting the astrophysical time-dependence of the SGWB signal from the GW data. The extraction of the stochastic GW signal can be devised in both pixel-space (real space) and spherical harmonic space. We show the estimators which can be used to study the quantities such as $\mathcal{F}$, $\mathcal{T}$, $\mathcal{W}_{\mathcal{F}g}$, and $\mathcal{W}_{\mathcal{T}g}$.
\subsection{Overview of the analysis technique}
In this section, we briefly explain the standard framework for the analysis of stochastic GW data \citep{Mitra:2007mc, Thrane:2009fp, Talukder:2010yd, Romano:2016dpx}.
The time-series data $d_i(t)$ from the detector $i$ can be written as
\begin{eqnarray}\label{es1}
d_i(t)= h_i(t) + n_i(t),
\end{eqnarray}
where, $h_i(t)$ and $n_i(t)$ are the signal and noise respectively. The observed signal can be written in terms of the true sky signal $h_p^s$ and the detector response $F^p_i (\hat n, t)$ as \citep{Romano:2016dpx, Thrane:2009fp}
\begin{eqnarray}\label{es1a}
h_i(t) = \int_{-\infty}^{\infty} df\int d^2\hat n F^p_i (\hat n, t)e^{2\pi i f(t- \vec x_i. \hat n/c)}h^s_{p} (f, \hat n).
\end{eqnarray}
The short-time Fourier transform of the sky signal at a particular time (t) can be written as
\begin{eqnarray}\label{ft}
d(t, f)=& \int_{t- \tau/2}^{t+\tau/2} d(t') e^{-2\pi ift'} dt',
\end{eqnarray}
where the choice of $\tau$ is made such that the detector response function does not change significantly within the time window $\tau$. It should also not be smaller than the travel time of the GW signal between a pair of detectors. The expectation value of the cross-correlation of $d_i(f)$ between two different detectors is
\begin{align}\label{es2}
\langle \Omega^d_{ij}(f, t) \rangle \equiv \frac{2}{\tau}\langle{d_i(f,t)d^*_j(f,t)}\rangle
= \frac{2}{\tau}\langle h_i(f, t)h^*_j(f,t)\rangle,
\end{align}
where the noise terms between the detectors are expected to be uncorrelated $\langle{n_i(f)n^*_j(f)}\rangle=0$. The corresponding noise covariance matrix of $\Omega^d_{ij}(f, t)$ is
\begin{eqnarray}\label{es2cov}
\mathcal{N}^d_{ij}(f, t, f',t') \equiv &\langle\Omega^d_{ij}(f, t)\Omega^d_{ij}(f', t') \rangle - \langle\Omega^d_{ij}(f, t)\rangle\langle\Omega^d_{ij}(f', t')\rangle,\nonumber \\
=& \mathcal{N}_i (f)\mathcal{N}_j (f) \delta_{tt'}\delta_{ff'}
\end{eqnarray}
where $\mathcal{N}_x$ is the one-sided noise power spectrum for detector $x$ which is uncorrelated with the noise of the other detectors. In this expression, the contribution of the signal is assumed to be negligible in comparison to the detector noise.
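The steps from Eq. \ref{ft} to Eq. \ref{es2} amount to segmenting the time streams, Fourier transforming each segment, and forming the cross-spectrum. A minimal sketch (plain NumPy, rectangular windows, no calibration or windowing corrections) is:

```python
import numpy as np

def cross_spectrum(d_i, d_j, fs, tau):
    """Short-time cross-spectra Omega^d_ij(f,t) = (2/tau) d_i(f,t) d_j*(f,t),
    Eqs. (ft) and (es2), for streams sampled at fs Hz in segments of tau s."""
    n_seg = int(tau * fs)
    n_t = len(d_i) // n_seg
    freqs = np.fft.rfftfreq(n_seg, d=1.0 / fs)
    out = np.empty((n_t, len(freqs)), dtype=complex)
    for t in range(n_t):
        seg = slice(t * n_seg, (t + 1) * n_seg)
        D_i = np.fft.rfft(d_i[seg]) / fs   # approximate continuous-time FT
        D_j = np.fft.rfft(d_j[seg]) / fs
        out[t] = (2.0 / tau) * D_i * np.conj(D_j)
    return freqs, out

# a common 100 Hz signal buried in independent noise shows up
# in the time-averaged cross-spectrum
rng = np.random.default_rng(0)
fs, tau = 1024.0, 1.0
t = np.arange(0.0, 8.0, 1.0 / fs)
s = np.sin(2.0 * np.pi * 100.0 * t)
f, omega = cross_spectrum(s + 0.5 * rng.normal(size=t.size),
                          s + 0.5 * rng.normal(size=t.size), fs, tau)
print(f[np.argmax(np.abs(omega.mean(axis=0)))])   # -> 100.0
```

The uncorrelated noise averages away in the cross-spectrum, which is exactly why the noise terms drop out of Eq. \ref{es2}.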
With knowledge of the detector response function $F^p_i (\hat n, t)$ for both the detectors, we can relate the observed cross-correlation spectrum with the true signal in the sky $\Omega^s(\hat n, f, t)$ as
\begin{eqnarray}\label{es3}
\Omega^d_{ij}(f,t)= \int d^2 \hat n \gamma(\hat n, f, t)\Omega^s(\hat n, f, t),
\end{eqnarray}
where, $\gamma(\hat n, f, t)$ is the geometric factor \citep{PhysRevD.48.2389, PhysRevD.46.5250}
\begin{eqnarray}\label{es3a}
\gamma(\hat n, f, t)= \frac{1}{2}F^p_i (\hat n, t)F^p_j (\hat n, t)e^{2\pi i f\hat n.(\vec x_i- \vec x_j)/c}.
\end{eqnarray}
In Eq. \ref{es3}, we differ from the previous method in two ways. Firstly, the SGWB signal has an intrinsic temporal dependence $\Omega^s(\hat n, f, t)$. Secondly, we have not separated the frequency and angular parts, but rather kept them together in $\Omega^s(\hat n, f, t)$. In the previous analysis \citep{Thrane:2009fp}, the temporal dependence of the data was attributed only to the temporal dependence of the geometric factor $\gamma(\hat n, f, t)$, and the sky signal was assumed to depend only on the sky direction.
\subsection{Estimators}
\textbf{Spectral-derivative: } The spectral-derivative of the SGWB signal is the difference of the SGWB signal between two frequency channels and at a fixed detector time. The corresponding estimator in terms of the cross-power spectrum of the GW data can be written as
\begin{align}\label{es5}
\Delta \hat \Omega_{\mathcal{F}} \equiv\, & \Omega^d(f, t)- \Omega^d(f', t), \nonumber \\=& \int d^2 \hat n \gamma(\hat n, f, t)(\Omega^s(\hat n, f, t)- \Omega^s(\hat n, f', t)),
\end{align}
where the frequency bands need to satisfy $|f-f'|= \Delta f \ll c/(2\pi D)$, such that the geometric factor does not change significantly over this narrow frequency band. With precise knowledge of the geometric factor, we can estimate the best-fit value of the differential sky signal $\hat{\mathcal{F}}$ using a log-likelihood of the form\footnote{Quantities with a hat ($\hat{X}$) denote the estimator of $X$ from the data.}
\begin{eqnarray}\label{es6}
\begin{split}
-2\mathcal{L}_{\mathcal{F}}\propto&\sum_t \sum_i\bigg(\Delta \hat \Omega_{\mathcal{F}}(\hat n_i, f, t)- \gamma(\hat n_i, f, t)\mathcal{F}(\hat n_i, f, t)\bigg)^\dagger \\& N_\mathcal{F}^{-1}\bigg(\Delta \hat \Omega_{\mathcal{F}}(\hat n_i, f, t)- \gamma(\hat n_i, f, t)\mathcal{F}(\hat n_i, f, t)\bigg),
\end{split}
\end{eqnarray}
where the sums over the indices $i$ and $t$ run over the number of pixels ($N_{pix}$) and the total number of time-bins $T_{bin}= T_{obs}/ \Delta t$ respectively, $\mathcal{F}(\hat n, f, t)$ denotes the model of the spectral-derivative signal given in Eq. \ref{sgwbft2}, and $N_\mathcal{F}$ denotes the covariance matrix given by
\begin{eqnarray}\label{es6cov}
N_\mathcal{F}= (\mathcal{N}_i(f)\mathcal{N}_j(f) + \mathcal{N}_i(f')\mathcal{N}_j(f')) \delta_{tt'}\delta_{ff'},
\end{eqnarray}
where we have assumed that the noise is the dominant contribution compared to the sky signal, and that the noise is uncorrelated between different frequency channels, different observation times and different pairs of detectors. As expected, the noise covariance matrix for the derivative signal receives contributions from both frequency channels $f$ and $f'$, and is therefore noisier than the measurement of $\Omega^s(f)$.
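For a diagonal covariance, minimizing Eq. \ref{es6} pixel by pixel reduces to an inverse-noise-weighted least-squares solution. The toy demonstration below (the injected value, noise level and mock geometric factor are arbitrary choices of ours) recovers an injected signal:

```python
import numpy as np

def best_fit_F(delta_omega, gamma, noise):
    """Maximum-likelihood solution of Eq. (es6) for one pixel with diagonal
    noise: F_hat = sum_t gamma*DeltaOmega/N / sum_t gamma^2/N."""
    w = gamma / noise
    return np.sum(w * delta_omega) / np.sum(gamma * w)

rng = np.random.default_rng(1)
gamma = rng.uniform(0.2, 1.0, size=500)     # mock geometric factor per time bin
noise = np.full(500, 0.5)                   # constant noise variance
data = 2.0 * gamma + rng.normal(0.0, np.sqrt(noise))   # injected F = 2
print(best_fit_F(data, gamma, noise))       # recovers ~ 2
```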
\textbf{Temporal-derivative: } The time-derivative signal of the SGWB can be estimated from the data using the form
\begin{align}\label{es7}
\Delta \hat \Omega_{\mathcal{T}} \equiv\, & |\Omega^d(f, t)- \Omega^d(f, t+\Delta t)|, \nonumber \\=& \bigg|\int d^2 \hat n \gamma(\hat n, f, t)(\Omega^s(\hat n, f, t)- \Omega^s(\hat n, f, t+\Delta t))\bigg|,
\end{align}
where $\Delta t$ is a small time interval, over which the detector response function has not changed significantly. The corresponding likelihood to obtain the best-fit value can be written as
\begin{align}\label{es8}
\begin{split}
-2\mathcal{L}_{\mathcal{T}}&\propto \sum_t \sum_i\bigg(\Delta \hat \Omega_{\mathcal{T}}(\hat n_i, f, t)- \gamma(\hat n, f, t)\mathcal{T}(\hat n, f, t, t')\bigg)^\dagger \\ & N_\mathcal{T}^{-1} \bigg(\Delta \hat \Omega_{\mathcal{T}}(\hat n_i, f, t,t')- \gamma(\hat n, f, t)\mathcal{T}(\hat n, f, t,t')\bigg),
\end{split}
\end{align}
where the index $t$ is summed over the number of temporal bins $T_{bin}= T_{obs}/\Delta t$, the index $i$ runs over the number of sky pixels, and $N_\mathcal{T}$ denotes the covariance matrix
\begin{align}\label{es8cov}
N_\mathcal{T}= 2\mathcal{N}_i(f)\mathcal{N}_j(f) \delta_{tt'}\delta_{ff'},
\end{align}
where we have again assumed that the noise of the two detectors is uncorrelated in time and frequency.
\textbf{Tomographic redshift estimate:} In order to separate the cosmological and astrophysical signals as discussed in Sec. \ref{position}, we work in spherical harmonic space and expand $\Omega_{\mathcal{X}}(\hat n, f, t)$ for $\mathcal{X}\in \{\mathcal{F}, \, \mathcal{T}\}$ and the cosmic density field $\delta(\hat n, z)$ as
\begin{eqnarray}\label{es9}
\Delta \Omega_{lm|\mathcal{X}}( f, t)= \int d^2\hat n\, \Omega_\mathcal{X}(\hat n, f, t) Y^*_{lm} (\hat n), \nonumber \\
\delta_{lm}(z)= \int d^2\hat n\, \delta(\hat n, z) Y^*_{lm} (\hat n).
\end{eqnarray}
The maximum value of $l$ is related to the smallest angular scale which can be resolved by the experiment. For GW interferometer detectors a distance $D$ apart from each other, the smallest angular scale is diffraction-limited, and this sets the maximum value of $l$ as $l_{max}= 2\pi f D/c$ (where $c$ is the speed of light). For a higher signal-to-noise ratio (snr), we need to go to high $l$ values; hence a large spatial separation between the pair of detectors is required.
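For concreteness, the diffraction-limited multipole reach of a detector pair can be computed directly; the baseline below is a rounded LIGO Hanford-Livingston separation assumed for illustration:

```python
import numpy as np

def l_max(f, baseline):
    """Diffraction-limited maximum multipole l_max = 2 pi f D / c for a
    detector pair with separation baseline (m) at GW frequency f (Hz)."""
    c = 2.998e8
    return int(2.0 * np.pi * f * baseline / c)

# roughly the LIGO Hanford-Livingston separation (~3000 km)
print(l_max(100.0, 3.0e6))   # only a handful of multipoles
```

A kilometre-scale network therefore resolves only the lowest multipoles, which is why widely separated detectors are needed for high-resolution SGWB maps.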
Using $\hat{\mathcal{T}}(\hat n, f, t, \Delta t)$ and $\hat{\mathcal{F}}(\hat n, f, t)$ in the spherical harmonic basis, we can define the cross-correlation with the cosmic density field as
\begin{eqnarray}\label{es10}
\hat{C}_l^{\mathcal{F}\delta}(f, t, z)= \sum_m\frac{\hat{\mathcal{F}}_{lm}(f, t)\delta^*_{lm}(z)}{2l+1},\nonumber \\
\hat{C}_l^{\mathcal{T}\delta}(f, t, \Delta t, z)= \sum_m\frac{\hat{\mathcal{T}}_{lm}(f, t, \Delta t)\delta^*_{lm}(z)}{2l+1}.
\end{eqnarray}
Using the cross-correlation power spectrum of the SGWB data with the galaxy surveys, we can estimate the best-fit astrophysical event-rate by using the likelihood
\begin{align}\label{es11}
-2\mathcal{L}\propto& \sum_t \sum_l\bigg[\bigg(\hat{C}_l^{\mathcal{X}\delta}(f, t, z)- C_l^{\mathcal{X}\delta}(f, t, z)\bigg)^\dagger N_{{ll'|\mathcal{X}\delta}}^{-1}\nonumber \\&\bigg(\hat{C}_{l'}^{\mathcal{X}\delta}(f, t, z)- C_{l'}^{\mathcal{X}\delta}(f, t, z)\bigg)\bigg],
\end{align}
where $N_{{ll'|\mathcal{X}\delta}}$ is the covariance matrix, which can be calculated from the GW detector noise, the network of GW detectors (which affects the value of $l_{max}$), the shot noise of the galaxy surveys, and the sky fraction of the galaxy surveys. $C_l^{\mathcal{X}\delta}$ denotes the model of the cross-correlation power spectrum given in Eq. \ref{sgwbfco1} and in Eq. \ref{sgwbfco2}, where the time-dependence arises only from the bias terms $b_\mathcal{F}$ and $b_{\mathcal{T}}$. However, the power spectrum of the dark matter distribution $P_{DM}(k)$ remains constant in time. As a result, the best-fit values of the bias parameters $b_\mathcal{F}$ and $b_{\mathcal{T}}$ and their time dependence can be inferred by minimizing Eq. \ref{es11}. Though this remains an interesting avenue to better understand the redshift dependence of the bias, its applicability will depend on the angular resolution $\Delta \theta$, the overlap reduction function $\gamma (\hat n, f, t)$, and the detector noise for different configurations of future GW detector networks. With the currently available network of detectors, LIGO Hanford (H), LIGO Livingston (L) and Virgo (V), we will not achieve the required sensitivity to measure the redshift-dependent bias of the GW sources from the data.
\section{Error estimation for measuring temporal fluctuations in SGWB}\label{error-estimate}
We perform a Fisher analysis to estimate the error bar, via the Cramér-Rao bound ($\sigma_{\Delta \Omega}\geq \sqrt{\mathbf{F}^{-1}}$), on the time-variability and frequency-variability of the SGWB signal that can be measured using a network of GW detectors such as advanced-LIGO Hanford (H), advanced-LIGO Livingston (L), advanced Virgo (V) {and Cosmic Explorer (CE)}. The rms fluctuation in the SGWB can be expressed as the modification in the amplitude given by $\Delta \Omega (f)= \sqrt{\bigg\langle \bigg(\frac{\Omega_{GW} (t,f) - \bar \Omega_{GW} (f)}{\bar \Omega_{GW} (f)}\bigg)^2\bigg\rangle_T}$. The corresponding Fisher estimate for a network of detectors (I, J) can be written as
\begin{align}
\begin{split}
F_{\alpha\beta}= \int_0^T dt & \sum_I^N\sum_{J>I}^N\bigg(\frac{3H_0^2}{2\pi^2}\bigg)^2 \\ & \int_0^\infty df \frac{\gamma^2(f)\partial_\alpha\Omega_{GW}(f)\partial_\beta\Omega_{GW}(f)}{f^6 \mathcal{N}_I(f)\mathcal{N}_J(f)},
\end{split}
\end{align}
where $\gamma(f)$ is the unnormalized overlap reduction function, which depends upon the locations and orientations of LIGO-H, LIGO-L and Virgo\footnote{The normalized overlap reduction function is obtained from this file \href{https://dcc.ligo.org/public/0022/P1000128/026/figure1.dat}{P1000128/026}.}. The overlap functions for LIGO-L and LIGO-H (L-H), LIGO-L and Virgo (L-V), and LIGO-H and Virgo (H-V) are shown in Fig. \ref{fig:gamma}. The first zero crossing of the overlap function takes place at a frequency $f_{char}= c/2D$, where $D$ is the distance between the two detectors \citep{PhysRevD.48.2389}. $f_{char}$ is smallest for LIGO-H and Virgo, followed by LIGO-L and Virgo, and then the LIGO-H and LIGO-L pair of detectors. Only for GW signal frequencies $f \leq f_{char}$ can we make coincident detections of the GW signal between a pair of detectors. As a result, most of the statistical power for the measurement of the SGWB signal comes from the frequency range $f \leq f_{char}$. This implies that most of the statistical power comes from the LIGO-H and LIGO-L pair of detectors, followed by LIGO-L and Virgo, and then LIGO-H and Virgo.
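This ordering of the detector pairs follows directly from the zero-crossing formula; a short check (the baselines are round numbers assumed for illustration, not surveyed distances):

```python
c = 2.998e8   # speed of light [m/s]

def f_char(D):
    """First zero crossing of the overlap reduction function,
    f_char = c / (2 D), for detectors separated by D metres."""
    return c / (2.0 * D)

# ~3000 km (H-L like) vs ~8000 km (H-V like) baselines
print(f_char(3.0e6), f_char(8.0e6))   # ~50 Hz vs ~19 Hz
```

Longer baselines push the zero crossing, and hence the bulk of the statistical power, to lower frequencies.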
\begin{figure}
\centering
\includegraphics[trim={0 0 0 0cm}, clip, width=0.5\textwidth]{gamma.png}
\caption{The normalized overlap reduction function as a function of frequency $f$ are shown for the three pairs of detectors LIGO Hanford- LIGO Livingston (H-L), LIGO Hanford- Virgo (H-V), and LIGO Livingston-Virgo (L-V).}
\label{fig:gamma}
\end{figure}
Using the instrument noise for advanced-LIGO and Virgo, we obtain the $1$-$\sigma$ error bar on $\Delta \Omega$ as a function of the observation time in Fig. \ref{fig:error}. For a network of three detectors such as LIGO-H, LIGO-L and Virgo, most of the statistical power in the measurement of the signal comes from LIGO-H and LIGO-L for SGWB frequencies $f> 30$ Hz. For a network of three HL-like detectors, we have assumed the same noise curves and overlap function $\gamma (f)$ as for LIGO-H and LIGO-L, and show the gain possible from such a configuration with the detector noise of current detectors.
{For the next generation of GW detectors such as Cosmic Explorer (CE) \citep{Evans:2016mbw}, we have assumed the same overlap function as between LIGO-H and LIGO-L, and obtain the $1$-$\sigma$ error-bar on $\Delta \Omega$ of the signal for the combination of three (LIGO-H, LIGO-L, and CE) and four (LIGO-H, LIGO-L, Virgo, and CE) detector networks. As expected, the measurability of the signal improves by more than an order of magnitude over the current ongoing GW experiments. As a result, we can measure the fluctuations in the SGWB with high snr with the future experiments. The difference in $\sigma_{\Delta \Omega}$ between the cases with and without Virgo is negligible for two reasons: (i) the corresponding $f_{char}$ is smaller than for the LIGO-H and LIGO-L pair, and (ii) the detector noise of the Virgo interferometer is higher than that of CE.}
\begin{figure}
\centering
\includegraphics[trim={0.5cm 0 0.5cm 0.5cm}, clip, width=0.5\textwidth]{Fisher_with_CE.png}
\caption{The $1$-$\sigma$ error bar on the fluctuations in the amplitude of the GW signal for different configurations of the GW detectors LIGO-Hanford (H), LIGO-Livingston (L) and Virgo (V). {For the case of a three HL-like detector network, we have assumed a configuration with the same detector noise and overlap reduction function as LIGO-H and LIGO-L. For next-generation GW detectors such as Cosmic Explorer (CE), we have considered the instrument noise according to \citet{Evans:2016mbw} and the same overlap function as for the LIGO-H and LIGO-L detector pair.}}
\label{fig:error}
\end{figure}
\section{Conclusion}\label{conclusion}
The SGWB is a rich source of cosmological and astrophysical information. {The astrophysical SGWB arises from a large number of coalescing binary systems of compact objects such as black holes, neutron stars and white dwarfs. Such events are expected to occur randomly in time, following a Poisson distribution with a constant event rate that depends upon astrophysical parameters such as the masses of the compact objects, the star formation history, the redshift of the source, etc. Due to the Poissonian nature of the GW events, the SGWB exhibits temporal fluctuations arising from the variation in the number of events, and is expected to attain time-translation symmetry only on averaging over a long observation time. We point out this temporal dependence of the SGWB and discuss the implications of measuring it with ongoing and future GW experiments.} We show that the time-dependence of the SGWB and its rms fluctuations can be a useful probe of several aspects such as the event rate of the GW sources for different chirp masses of the coalescing binaries, the duty cycle of GW signals, and the redshift distribution of the event rate.
{The temporal fluctuations of the SGWB can be studied in the frequency, temporal and spatial domains, as discussed in Sec. \ref{aspects}. By constructing quantities such as the spectral derivative of the SGWB, denoted by $\mathcal{F}$, we can estimate the redshift-integrated total GW energy density arising from different mass windows of the compact objects. The time derivative of the SGWB, denoted by $\mathcal{T}$, captures the temporal fluctuation of the SGWB signal, which is related to the event rate of the GW sources contributing at a particular frequency of the SGWB. The spatial distribution of the astrophysical GW sources is expected to follow the spatial distribution of the galaxies and can be estimated by cross-correlating with the galaxy distribution available from the upcoming cosmological surveys (such as DESI \citep{Aghamousa:2016zmz}, EUCLID \citep{2010arXiv1001.0061R}, LSST \citep{2009arXiv0912.0201L}, SPHEREx \citep{Dore:2018kgp}, WFIRST \citep{Dore:2018smn}). The cross-correlation of the SGWB with the galaxy field can probe the time-dependent bias of the SGWB signal at different cosmological redshifts, as we have discussed in Sec. \ref{formalism}. The statistical significance of the cross-correlation between the SGWB map and the galaxy distribution is limited by the angular resolution of the SGWB map. For interferometer-based GW detectors, the smallest angular scale $\Delta \theta$ that can be resolved in the SGWB sky is set by the diffraction limit, which can be written in terms of the frequency of the GW signal and the distance $D$ between two detectors as $\Delta \theta= c/2f D$. This implies that the detector pairs which are farthest apart are the best for making high-resolution maps of the SGWB signal.
However, such detector pairs also have an overlap reduction function with a smaller value of $f_{char}= c/2D$, which results in a larger noise contribution for frequencies greater than $f_{char}$.}
We propose that data analyses performed for ongoing GW experiments should search for temporal fluctuations alongside the algorithms that search for spatial fluctuations. This procedure will be potentially useful for identifying the event rates of different binary species such as BNS, BBH and BH-NS systems. The procedure outlined here will help establish the time-translation symmetry of the SGWB and inform us about the astrophysics related to the formation of these binary sources. This avenue of research will provide a useful tool for distinguishing between the astrophysical and cosmological SGWB signals. By exploring the time-dependence of the SGWB signal, we can remove contamination from the astrophysical SGWB and peer into the higher-frequency ($f \in [10, 1000]$ Hz) cosmological SGWB signals originating at different epochs of the Universe \citep{Starobinsky:1979ty,Turner:1996ck, Martin:2013nzq,Kibble:1976sj,Damour:2004kw,Kosowsky:1992rz,Kamionkowski:1993fg}.
\section*{Acknowledgement}
The results of this analysis are obtained using the Horizon cluster hosted by Institut d'Astrophysique de Paris. We thank Stephane Rouberol for smoothly running the Horizon cluster. The work of S.M. is supported by the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02.
\bibliographystyle{mnras}
Let $(e_i), 1 \leq i \leq n,$ be a basis for $\mathbb C^n$. Set
$\Gamma = \{e_i - e_{i+1}: 1 \leq i \leq n-1\}$. We will use the
notation $\alpha_i \equiv e_i - e_{i+1}$. Let $( , )$ denote the
inner product on $\mathbb C^n$ having $(e_i)$ as an orthonormal basis.
\begin{defe}
A Belavin-Drinfeld triple of type $A_{n-1}$ is a triple
$(\tau, \Gamma_1, \Gamma_2)$ where $\Gamma_1, \Gamma_2 \subset \Gamma$
and $\tau: \Gamma_1 \rightarrow \Gamma_2$ is a bijection, satisfying
two conditions:
(a) $\forall \alpha, \beta
\in \Gamma_1$, $(\tau \alpha,\tau \beta) = (\alpha, \beta)$.
(b) $\tau$ is nilpotent: $\forall \alpha \in \Gamma_1, \exists k
\in \mathbb N$ such that $\tau^k \alpha \notin \Gamma_1$.
\end{defe}
We employ three isomorphisms of Belavin-Drinfeld triples:
a) Any triple $(\tau, \Gamma_1, \Gamma_2)$ is isomorphic to the triple
$(\tau', \Gamma'_1, \Gamma'_2)$ obtained as follows: $\Gamma'_1 =
\{\alpha_m: \alpha_{n-m} \in \Gamma_1\}$, $\tau'(\alpha_m) = \alpha_k$
where $\tau(\alpha_{n-m}) = \alpha_{n-k}$.
b) Any triple $(\tau, \Gamma_1, \Gamma_2)$ is isomorphic to the triple
$(\tau^{-1}, \Gamma_2, \Gamma_1)$.
c) The product of isomorphisms (a), (b).
Modulo these isomorphisms, we found all Belavin-Drinfeld triples for
$n \leq 13$ by computer. The number of such triples is given below:
\vskip 12 pt
\begin{center}
\begin{tabular}{|c|c||c|c||c|c|}\hline
n & \# of triples & n & \# of triples & n & \# of triples \\ \hline
2 & 1 & 6 & 41 & 10 & 10434 \\ \hline
3 & 2 & 7 & 161 & 11 & 45069 \\ \hline
4 & 4 & 8 & 611 & 12 & 201300 \\ \hline
5 & 13 & 9 & 2490 & 13 & 919479 \\
\hline
\end{tabular}
\end{center}
\section{The GGS conjecture}
Let $\mathfrak g = {\mathfrak sl}(n)$ be the Lie algebra of $n \times
n$ matrices of trace zero. Set $\mathfrak h \subset \mathfrak g$ to be
the subset of diagonal matrices. Elements of $\mathbb C^n$ define
linear functions on $\mathfrak h$ by $\bigl( \sum_i \lambda_i e_i
\bigr) \bigl( \sum_i a_i e_{ii} \bigr)= \sum_i \lambda_i a_i$. Set
$\sigma = \sum_{1 \leq i,j \leq n} e_{ij} \otimes e_{ji}$, and let $P$
be the orthogonal projection of $\sigma$ to $\mathfrak g \otimes
\mathfrak g$ with respect to the form $(X,Y) = Tr(XY)$ on
$Mat_n(\mathbb C)$. Then, set $P^0$ to be
the projection of $P$ to $\mathfrak h \otimes \mathfrak h$. Thus $P^0
= \sum_i \frac{n-1}{n} e_{ii} \otimes e_{ii} - \sum_{i \neq j}
\frac{1}{n} e_{ii} \otimes e_{jj}$.
For any Belavin-Drinfeld triple, consider the following equations:
\begin{gather} \label{r01}
r^0_{12} + r^0_{21} = P^0. \\ \label{r02} \forall \alpha \in \Gamma_1,
(\tau \alpha \otimes 1)r^0 + (1 \otimes \alpha) r^{0} = 0.
\end{gather}
Belavin and Drinfeld showed that nonunitary solutions of the CYBE
correspond to solutions of these equations. Define $\tilde r^0 =
r^0-P^0/2$.
The GGS conjecture gives an explicit form of a matrix $R \in
Mat_n(\mathbb C) \otimes Mat_n(\mathbb C)$ for any given triple and
any given $r^0 \in \mathfrak h \otimes \mathfrak h$ satisfying
\eqref{r01}, \eqref{r02} as follows:
Set $\tilde \Gamma_1 = \{v \in \text{Span}(\Gamma_1): v = e_i - e_j, 1
\leq i < j \leq n\}$, and define $\tilde \Gamma_2$ similarly.
Then, extend $\tau$ to a map $\tilde \Gamma_1 \rightarrow \tilde
\Gamma_2$ so that $\tau$ is additive, i.e. $\tau(a+b) = \tau(a) +
\tau(b)$ provided $a,b,(a+b) \in \tilde \Gamma_1$. Further, define
$\alpha \prec \beta$ if $\alpha \in \tilde \Gamma_1$ and
$\tau^k(\alpha) = \beta$, for some $k \geq 1$. It is clear from the
conditions on $\tau$ that this means, given $\alpha = \alpha_i +
\ldots + \alpha_{i+p}$, that $\beta = \alpha_j + \ldots +
\alpha_{j+p}$, $0 \leq p \leq n-2, 1 \leq i,j \leq n, i \neq j$.
Assume $\beta = \tau^k(\alpha), k \geq 1$. If, in this case,
$\tau^k(\alpha_i) = \alpha_{j+p}$, that is, $\tau^k$ sends the left
endpoint of $\alpha$ to the right endpoint of $\beta$, then define
$\text{sign}(\alpha,\beta) = (-1)^p$. Otherwise, set
$\text{sign}(\alpha,\beta) = 1$.
We will use the notation $x \wedge y \equiv \frac{1}{2} (x \otimes y - y
\otimes x)$. Furthermore, for all matrices $x \in Mat_n(\mathbb C)
\otimes Mat_n(\mathbb C)$ we will use the notation $x = \sum_{i,j,k,l}
x_{ik}^{jl} e_{ij} \otimes e_{kl}$. Let $q$ be indeterminate and set
$\hat q \equiv q-q^{-1}$. Finally, for any $\alpha = e_i - e_j$, set
$e_{\alpha} = e_{ij}$, and say $\alpha > 0$ if $i < j$, otherwise
$\alpha < 0$. Now, we can define the matrix $R$ as follows:
\begin{gather} \label{ace}
a = 2 \sum_{\underset{\alpha \prec \beta}{\alpha, \beta > 0}}
\text{sign}(\alpha,\beta)\: e_{-\alpha} \wedge e_{\beta}, \quad c =
\sum_{\alpha > 0} e_{-\alpha} \wedge e_\alpha, \quad \epsilon = ac +
ca + a^2, \\ \label{tars}
\tilde a = \sum_{i,j,k,l} a_{ik}^{jl} q^{a_{ik}^{jl}
\epsilon_{ik}^{jl}}, \quad R_s = q \sum_{i} e_{ii} \otimes e_{ii} +
\sum_{i \neq j} e_{ii} \otimes e_{jj} + \hat q \sum_{i>j} e_{ij}
\otimes e_{ji}, \\ \label{r} R = q^{\tilde r^0} (R_s + \hat q \tilde a)
q^{\tilde r^0}.
\end{gather}
\begin{conj}{\bf (GGS)}
The matrix $R$ satisfies the quantum Yang-Baxter equation,
$R_{12} R_{13} R_{23} = R_{23} R_{13} R_{12}$, and $PR$ satisfies
the Hecke relation, $(PR-q)(PR+q^{-1}) = 0$.
\end{conj}
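As a basic consistency check, for the empty triple ($\Gamma_1 = \Gamma_2 = \emptyset$) one has $a = 0$ and may take $\tilde r^0 = 0$, so $R$ reduces to the standard matrix $R_s$; both conditions are then easy to confirm numerically at a generic numeric value of $q$ (a sketch, numpy assumed):

```python
import numpy as np

def unit(i, j, n):
    """Matrix unit e_ij."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def standard_Rs(n, q):
    # R_s = q sum_i e_ii x e_ii + sum_{i != j} e_ii x e_jj
    #       + (q - q^{-1}) sum_{i > j} e_ij x e_ji
    qh = q - 1.0 / q
    R = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            R += (q if i == j else 1.0) * np.kron(unit(i, i, n), unit(j, j, n))
            if i > j:
                R += qh * np.kron(unit(i, j, n), unit(j, i, n))
    return R

n, q = 3, 2.0                      # a generic numeric value of q
R = standard_Rs(n, q)
I = np.eye(n)
# Flip (permutation) operator P on C^n x C^n
P = sum(np.kron(unit(i, j, n), unit(j, i, n))
        for i in range(n) for j in range(n))

R12, R23 = np.kron(R, I), np.kron(I, R)
P12 = np.kron(P, I)
R13 = P12 @ R23 @ P12              # conjugating R23 by the flip of factors 1,2

qybe_ok = np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
B = P @ R
hecke_ok = np.allclose((B - q * np.eye(n * n)) @ (B + np.eye(n * n) / q), 0.0)
assert qybe_ok and hecke_ok
```

Replacing a numeric $q$ by a symbolic variable would turn this into an exact check, at the cost of symbolic arithmetic.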
\section{Checking GGS by computer}
We checked the GGS conjecture through a program written in C, which
takes as input any list of Belavin-Drinfeld triples. For each triple,
it finds a valid $\tilde r^0$, constructs the matrix $R$, and checks
the QYBE and Hecke conditions. Following is a more detailed
description of the procedure.
We will use the notation $\tau(\alpha) = 0$ if $\alpha \notin \tilde
\Gamma_1$. Given a triple, the first step is to find an appropriate
$\tilde r^0$. We rewrite the equations \eqref{r01}, \eqref{r02} as
follows:
\begin{gather} \label{tr01}
\tilde r^0_{12} + \tilde r^0_{21} = 0, \\ \label{tr02} \forall \alpha
\in \Gamma_1, ((\alpha - \tau \alpha) \otimes 1) \tilde r^0 =
\textstyle{\frac{1}{2}} ((\alpha + \tau \alpha) \otimes 1) P^0.
\end{gather}
As before, we view elements of $\mathbb C^n$ as linear functions on
$\mathfrak h$. Then, it is easy to check $(\alpha_i)_{1 \leq i \leq
n-1}$ and $(\alpha_i - \tau \alpha_i)_{1 \leq i \leq n-1}$ are bases
of $\mathfrak h^*$. Let $(g_i)$ and $(f_i)$ be dual to the bases
$(\alpha_i)$ and $(\alpha_i - \tau \alpha_i)$, respectively. Then, if
we view $\tilde r^0$ as an element of $Mat_{n-1}(\mathbb C)$ in the
basis $(f_i)$, it is clear that $\tilde r^0 = (b_{ij})$ where $b_{ij}
= \frac{1}{2} (\alpha_i + \tau \alpha_i, \alpha_j - \tau \alpha_j)$ for $i
\in \Gamma_1$, where the inner product is the one defined earlier
on $\mathbb C^n$, and $b_{ji} = -b_{ij}$ for $i \notin \Gamma_1, j \in
\Gamma_1$. Then, the free components of $\tilde r^0$ are those
$b_{ij}$ with $i,j \notin \Gamma_1, i < j$, which determine the
$b_{ij}$ with $i,j \notin \Gamma_1, i > j$ since $\tilde r^0$ is
skew-symmetric. Thus, the dimension of the space of all valid $\tilde
r^0$ is ${n-m-1 \choose 2}$, where $m = |\Gamma_1|$.
The computer program merely chooses $b_{ij} = 0$ whenever $i,j \notin
\Gamma_1$. It is known that it is sufficient to consider one element
from the family of possible $\tilde r^0$ in verifying the GGS
conjecture. Namely, this follows from
\begin{prop}
If $R$ of the form \eqref{r} satisfies the QYBE and PR satisfies the
Hecke relation for a given $\tilde r^0$ satisfying \eqref{tr01},
\eqref{tr02}, then for any other solution $\tilde r^0 + r'$ of
\eqref{tr01}, \eqref{tr02}, $q^{r'} R q^{r'}$ also satisfies the QYBE
and $P q^{r'} R q^{r'}$ satisfies the Hecke relation.
\end{prop}
{\it Proof.} It is clear that $P q^{r'} R q^{r'} = q^{r'_{21}} PR
q^{r'}$. Since $r'_{21} = -r'$ by \eqref{tr01}, the Hecke relation may
be rewritten as $q^{-r'} (PR - q) (PR + q^{-1}) q^{r'} = 0$, which is
true iff $PR$ satisfies the Hecke relation.
To see that $q^{r'} R q^{r'}$ satisfies the QYBE, we take the
following steps. By \eqref{tr02},
\begin{equation} \label{rp1}
((\alpha - \tau \alpha) \otimes 1) r' = 0.
\end{equation}
Suppose that $r' = \sum_i a_i \otimes b_i$ where the $b_i$ are
linearly independent. By \eqref{rp1}, we know that $\alpha(a_i) =
\beta(a_i)$ whenever $\alpha \prec \beta$. Then we consider the
commutator $[a_i \otimes 1 + 1 \otimes a_i, R]$ = $[a_i \otimes 1 + 1
\otimes a_i, q^{\tilde r^0}R_s q^{\tilde r^0} + \hat q q^{\tilde r^0}
\tilde a q^{\tilde r^0}]$. First note that $[a_i, e_{\alpha}] =
\alpha(a_i) e_\alpha$ for any $a_i \in \mathfrak h$. Then, it is
clear that $[a_i \otimes 1 + 1 \otimes a_i, q^{\tilde r^0} R_s q^{\tilde
r^0}] = [a_i \otimes 1 + 1 \otimes a_i, \sum_{k > l} d_{kl} e_{kl}
\otimes e_{lk}] = \sum_{k > l} d_{kl} (\alpha(a_i)-\alpha(a_i)) e_{kl}
\otimes e_{lk} = 0$, where $\alpha = e_k - e_l$, for the appropriate
coefficients $d_{kl}$. Now,
we see that
\begin{multline*}
[a_i \otimes 1 + 1\otimes a_i, q^{\tilde r^0} \tilde a q^{\tilde r^0}]
= [a_i \otimes 1 + 1 \otimes a_i, \sum_{\alpha, \beta > 0, \alpha
\prec \beta} (f_{\alpha,\beta} e_{-\alpha} \otimes e_{\beta} +
g_{\alpha, \beta} e_{\beta} \otimes e_{-\alpha})] \\ = \sum_{\alpha,
\beta > 0, \alpha \prec \beta} (\beta(a_i) - \alpha(a_i))
(f_{\alpha,\beta} e_{-\alpha} \otimes e_{\beta} + g_{\alpha, \beta}
e_{\beta} \otimes e_{-\alpha}) = 0.
\end{multline*}
This implies that $r' \in \Lambda^2 K$ where $K$ is the space of
symmetries of $R$, that is, $K = \{ x \in Mat_{n}(\mathbb C): [1
\otimes x + x \otimes 1, R] = 0\}$. Furthermore, it is well-known and
easy to check that if $x \in \Lambda^2 K$ and $R$ satisfies the QYBE,
then $e^x R e^x$ also satisfies the QYBE. Thus, in our case, we have
proved that $q^{r'} R q^{r'}$ satisfies the QYBE. The proposition is
proved.$\quad\square$
Now, given the chosen $\tilde r^0$ in the basis $(f_i)$, the computer
program changes bases to $(g_i)$. This is accomplished via the
transformation $[\tilde r^0]_{(g_i)} =
([(1-\tau)]_{(\alpha_i)}^{-1})^{T} [\tilde r^0]_{(f_i)}
[(1-\tau)]_{(\alpha_i)}^{-1}$ where $(1-\tau)$ is considered to be a
linear transformation on $\mathfrak h^*$, with $(1-\tau)\alpha_i =
\alpha_i - \tau \alpha_i$. Denote this new matrix by $(b'_{ij})$.
Then, the computer program obtains the matrix $[\tilde r^0]_{(e_{ii})}
\in Mat_n(\mathbb C)$ from this matrix by two quick transformations.
First it finds the intermediate matrix $(b''_{ij}) = [\tilde
r^0]_{(e_{ii}),(g_i)} \in Mat_{n \times (n-1)}(\mathbb C)$ by
$b''_{i1} = \frac{1}{n} ((n-1)b'_{i1} + (n-2)b'_{i2} + \ldots +
b'_{i,n-1})$, and the other terms follow easily. The same technique
on the other side finally gives $[\tilde r^0]_{(e_{ii})}$.
Once $\tilde r^0$ is obtained, the computer constructs the matrix $R
\in Mat_n(\mathbb C) \otimes Mat_n(\mathbb C)$ in the basis $e_{ij}
\otimes e_{kl}, 1 \leq i,j,k,l \leq n$. First it computes $a$, $c$,
and $\epsilon$ by \eqref{ace}. Then, formulas \eqref{tars}, \eqref{r}
are implemented for each entry separately. Elements $x \in
Mat_n(\mathbb C) \otimes Mat_n(\mathbb C)$ are implemented as
3-dimensional arrays $(x_{ik}^j)$, since all matrices presented in the
GGS conjecture take the form $\sum_{i,j,k} x_{ik}^j e_{ij} \otimes
e_{k,i+k-j}$. Polynomials in $q$ are implemented as structures
containing two arrays of integers, one for positive and one for
negative powers of $q$. The sizes of the arrays are determined in the
input of the program.
The computer checks the QYBE and Hecke conditions in the following
manner. For the QYBE condition, the corresponding entries of $R_{12}
R_{13} R_{23}$ and $R_{23} R_{13} R_{23}$ are computed and compared
individually; both take the form
$\sum_{i,j,k,l,m} d_{ikm}^{jl} e_{ij} \otimes e_{kl} \otimes
e_{m,i+k+m-j-l}$. The same method is applied to the Hecke condition
with matrices $\sum_{i,j,k} d_{ik}^j e_{ij} \otimes
e_{k,i+k-j}$. Explicitly, if $R = \sum_{i,j,k} r_{ik}^j e_{ij} \otimes
e_{k,i+k-j}$, the QYBE and Hecke conditions become, respectively:
\begin{gather}
\sum_p r_{ik}^{k+i-p} r_{k+i-p,m}^j r_{p,m+k+i-p-j}^l = \sum_p
r_{km}^p r_{i,m+k-p}^{j+l-p} r_{j+l-p,p}^j, \forall i,j,k,l,m. \\
\sum_l r_{ki}^l r_{k+i-l,l}^j = \delta_{ij} + \hat q r_{ki}^j, \forall
i,j,k.
\end{gather}
Then, the computer prints the matrices $\tilde r^0$ and $R$ and
reports whether or not the conditions passed.
After generating all Belavin-Drinfeld triples for $n \leq 13$ as
described in the previous section, all tests were performed on each
triple where $n \leq 12$ with this procedure, all of which passed.
Thus, by application of the previous proposition, we have the
following result:
\begin{prop}
The GGS conjecture is true for Lie algebras $\mathfrak{sl}(n)$ with $n
\leq 12$.
\end{prop}
The computer program is included with this paper, with instructions on
usage included with the program itself.
\section{Acknowledgements}
I would like to thank Pavel Etingof for his generous help and advice.
I would also like to thank the Harvard College Research Program for
their support.
Why there is so little anti-matter in the Universe, and why the
matter coalesced in the way it did are two of the major problems
facing cosmology. Predictably, both have attracted a great deal of
attention spawning a panoply of explanations and theories. Some of these
theories involve objects known as topological defects\cite{kibb}, regions of
trapped primordial vacuum, an example of which is the
cosmic string. A string approach is an
appealing one since it can be used to address both questions: the wake
left by strings moving through the Universe can produce fluctuations
which may lead to the accretion of matter into large scale
structures\cite{bt}\cite{vish},
whilst their interaction with particles and the
decay of string loops can provide mechanisms
leading to baryon number violation\cite{bviol1} and the observed
matter bias\cite{bviol2}.
Hence, it is important to understand how these strings
form, both to predict how many we
can expect to have been created and how likely the above processes
are.
The formation of topological defects is thought to proceed via
the Kibble mechanism\cite{kibb}.
Modern particle physics and the hot big bang model suggest that
as the Universe cooled it underwent several phase
transitions in which the symmetry of the vacuum was broken into a
successively smaller, and smaller group. During such a transition, it
is possible for fields to acquire non-zero vacuum expectation values.
How they do this depends on the order of the transition.
If we consider a transition where a U(1) symmetry is broken, then
following the transition all points in space will have a physically
identical, non-zero vacuum expectation value, the only variation being
in the difference in phase between any two points.
By causality, we expect the phases to be uncorrelated on
distances greater than the horizon length, and so there is a finite
probability that the phase along a closed path through a number of
horizon volumes will wind through some multiple of $2\pi$, an
indication that the loop contains a string. In practice, two points do
not need to be separated by a horizon length for their phases to be
uncorrelated; this should also be true if they are a
thermal correlation length (defined later) apart, usually a considerably
smaller distance than the horizon size.
This is the Kibble mechanism, and it relies upon the so-called geodesic
rule, which is that in passing between two domains of different phase,
the phase will follow the shortest route. In the global case this has
been verified both numerically\cite{ye} and
experimentally\cite{exp1}\cite{exp2},
though, for a local symmetry it has been
argued\cite{rusri} that the presence of
gauge fields may influence the path the
phase takes, and may actually prevent it from following a `geodesic'.
More recently, however, work has been done suggesting that, despite
this, the
geodesic rule holds in the local case for a first order
phase transition\cite{hab}.
As the
temperature falls below the critical temperature, for a while it is
still possible for thermal fluctuations to restore the broken
symmetry, and hence erase any topologically interesting configuration
present at the time. The
point at which it is thought that
this ceases to be possible is referred to as the
Ginzburg temperature, $T_{G}$, and is found by equating the free
energy with the thermal energy (for such a restoring fluctuation will
have a high probability while the former is considerably less
than the latter). Brandenberger and Davis\cite{rac} demonstrated that
given certain constraints on the parameters, the ratio of fluctuations
in the scalar field to the background is less than one beneath a
temperature just below the
Ginzburg temperature, regardless of whether gauge fields are, or are
not, present. This means that topologically non-trivial configurations
arising from thermal fluctuations will become stable to such
fluctuations just under the Ginzburg temperature. This adds weight to the
arguments in favour of the Kibble mechanism.
However, since Brandenberger and Davis considered a linearized model,
only valid for the short time immediately following a spinodal
decomposition, before non-linear effects start to
dominate, their analysis only holds for early times. To study the
evolution of the fields at later times it is necessary to include the
non-linearities. A flexible way to do this is to study the
Langevin equation associated with the classical field
equations\cite{bfmcg}.
This
is the purpose of this paper.
By studying the Langevin equation for the system,
we derive an equation for the probability distribution of the fields,
$P(\phi_{i},A_{i}^{\mu},t)$ which we use to analyse the evolution
of the expectation values of the classical fields coupled to a
thermal bath. This enables us to
study not only the stability of configurations to fluctuations at and below
the Ginzburg temperature, but also the long time evolution of the
fields. The flexibility of this method is demonstrated by the ease
with which it is modified to include the expansion of the Universe.
Our method seems to be the only one that allows the study of the effect that
thermal fluctuations have on the development of the field, and the
stability of defects formed, {\em throughout} the phase transition. Other
methods either concentrate on the start of the phase transition, or
near its completion.
\section{Global Symmetry}
Although our ultimate aim is to study the case of an expanding
universe with broken local symmetry, it is beneficial, for several reasons,
to start with the more
straightforward case of a global, non-expanding model. Firstly, since
the Kibble mechanism has already been verified for this case, we
know that any topologically non-trivial configurations present
should be stable to fluctuations below $T_{G}$,
and hence we have a benchmark to check
our results against. Although this provides a useful test of our
method, it is
by no means a proof of its validity. Secondly, since the amount of
algebra involved is very dependent on the number of fields present,
the global, non-expanding case provides the simplest,
and hence clearest, demonstration
of the method used throughout.
Consider the $U(1)$ toy model
\begin{equation}
{\cal{L}}=(D_{\mu}\phi)^{\dagger}(D^{\mu}\phi)
-V({|\phi|}^2)
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\end{equation}
where $\phi$ is a complex scalar field, $A_{\mu}$ is the U(1) gauge
connection (taken to be zero for now)
and $D_{\mu}=\partial_{\mu}-ieA_{\mu}$. We adopt an effective
potential of the form
\begin{equation}
V({|\phi|}^2)=\frac{\lambda}{4}(|\phi|^{2}-\eta^{2})^{2}
+\frac{\tilde{\lambda}}{2}T^{2}|\phi|^{2},
\end{equation}
where $\tilde{\lambda}=(4\lambda+6e^{2})/12$ and
the temperature dependence reflects the fluctuations on a scale
smaller than some correlation length, defined later.
For sufficiently high temperatures this is symmetric about a global
minimum at zero.
However, as the temperature passes through some critical temperature,
$T_{C}=( \lambda/\tilde{\lambda})^{\frac{1}{2}}\eta$,
the system undergoes a second order phase transition,
breaking the U(1) symmetry,
with new
minima appearing at
$|\phi|^{2}=\eta^{2}\left(1-T^{2}/T_{C}^{2}\right)$.
Any two points in the new vacuum will now
have non-zero vacuum expectation values of equal moduli but random phase.
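The location of these minima is easy to confirm numerically; a minimal sketch, with coupling values chosen purely for illustration (numpy assumed):

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch)
lam, lam_t, eta = 0.5, 0.3, 1.0
Tc = np.sqrt(lam / lam_t) * eta            # critical temperature

def V(rho2, T):
    # Effective potential as a function of |phi|^2
    return 0.25 * lam * (rho2 - eta**2)**2 + 0.5 * lam_t * T**2 * rho2

T = 0.5 * Tc                               # below the transition
rho2 = np.linspace(0.0, 2.0 * eta**2, 200001)
numeric_min = rho2[np.argmin(V(rho2, T))]
predicted = eta**2 * (1.0 - T**2 / Tc**2)  # minimum quoted in the text
assert abs(numeric_min - predicted) < 1e-4
```

For $T > T_c$ the same scan returns a minimum at $|\phi|^2 = 0$, i.e. the symmetric phase.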
Setting $\phi=\rho\exp{(i\alpha)}$,
our equations of motion for $\rho$ and $\alpha$ are
\begin{equation}
\begin{array}{r}
\partial_{\mu}\partial^{\mu}\rho-
(\partial_{\mu}\alpha\partial^{\mu}\alpha) \rho+
\frac{dV(\rho^{2})}{d\rho^{2}} \rho
=0,\\
\\
\partial_{\mu}\partial^{\mu}\alpha+
2\partial_{\mu}\alpha\partial^{\mu}\rho/\rho =0.\\
\end{array}
\label{octopus}
\end{equation}
We assume that $\alpha$ is time independent and varies spatially
over some length-scale $2\pi/k_{\alpha}$. Setting $R=\dot{\rho}$ we obtain
\begin{eqnarray}
\dot{R}+F\rho = 0 & , &
\dot{\rho}-R = 0,\\
\label{2eq}
\nonumber
\end{eqnarray}
where
\begin{equation}
F=\left(k_{c}^{2}-
\frac{\lambda}{2}\eta^{2}+
\frac{\tilde{\lambda}}{2}T^{2}-
(k_{\alpha}\alpha)^{2}+
\frac{\lambda}{2}\langle\rho^{2}\rangle\right),
\end{equation}
and $k_{c}$ is explained below.
Note that we have replaced $\lambda\rho^{3}/2$ with
$\lambda\langle\rho^{2}\rangle\rho/2$ c.f the mean square approximation
to make the resulting equations more accessible.
For the purpose of this analysis we consider an initial configuration
varying spatially on a scale of the correlation length. Since we do
not want to become embroiled in a discussion of effects due to
fluctuations on scales shorter than the thermal correlation length,
$\xi_{G}=1/[\eta\sqrt{\lambda(1-T_{G}^{2}/T_{C}^{2})}]$
, we consider a coarse-grained field where we have integrated out all modes
associated with such. This leads to the effective
potential mentioned earlier. Hence, if we were to perform a Fourier
decomposition, then it would be of the form
$\rho=\sum_{k\leq k_{c}}\rho_{k}\exp{(i\underline{k}.\underline{x})}$
for some $k_{c}\sim 1/\xi_{G}$. We also assume that the mode
corresponding to $k_{c}$ dominates (which we later show to be
self-consistent), and investigate a configuration
with a length scale $2\pi/k_{c}$.
By (\ref{octopus}) we see that the earlier
assumption that $\dot{\alpha}=0$ requires
$k_{\alpha}=2k_{c}$.
To incorporate thermal fluctuations into our model, we modify (\ref{octopus})
such that the equations describing
the evolution of $R$ and $\rho$ over some small time interval $\delta
t$ are
\begin{eqnarray}
R(t+\delta t) & = & R(t)-\delta t F\rho(t) +\delta R,\\
\rho(t+\delta t) & = & \rho(t) + \delta tR(t) +\delta\rho,\\
\nonumber
\end{eqnarray}
where $\delta\rho$ and $\delta R$ are the thermal fluctuations in
$\rho$ and $R$ respectively.
Defining $\rho_{\delta t}=\rho(t+\delta t)$, $\rho=\rho(t)$, and
similarly $R_{\delta t}$ and $R$, we can write
\begin{eqnarray}
P(\rho_{\delta t},R_{\delta t},t+\delta t) & = &
\int d(\delta \rho) d(\delta R)
P_{1}(\delta \rho)P_{2}(\delta R)
\nonumber\\
& & \times P(\rho_{\delta t}-\delta tR_{\delta t}-\delta\rho,
R_{\delta t} +\delta tF\rho_{\delta t}-\delta R,t)
\nonumber\\
& & \times\frac{\partial}{\partial\rho_{\delta t}}
(\rho_{\delta t}-\delta tR_{\delta t}-\delta\rho)
\nonumber\\
& & \times\frac{\partial}{\partial R_{\delta t}}
(R_{\delta t} +\delta tF\rho_{\delta t}-\delta R),\\
\nonumber
\end{eqnarray}
where $P_{1}$ and $P_{2}$ are the probability measures for
$\delta\rho$ and $\delta R$ respectively.
Since $\delta\rho$ and $\delta R$ are random fluctuations, we may
assume that
$\langle\delta\rho\rangle=\langle\delta R\rangle=0$.
Expanding the integrands as Taylor series, we find, after
considerable algebra, that
\newline
\begin{eqnarray}
P_{\delta t} & = & P
-\delta t \frac{\partial}{\partial\rho}(RP)
+\delta t \frac{\partial}{\partial R}(F\rho P)\nonumber\\
& & +\frac{\partial}{\partial\rho}
\left(
\frac{1}{2}\langle\delta\rho^{2}\rangle
\frac{\partial P}{\partial\rho}
\right)
+\frac{\partial}{\partial R}
\left(
\frac{1}{2}\langle\delta R^{2}\rangle
\frac{\partial P}{\partial R}
\right)\\
\nonumber
\end{eqnarray}
from which, assuming that $\delta\rho$ and $\delta R$ are independent of
$\rho$ and $R$, we obtain the
following differential equation for P:
\begin{eqnarray}
\frac{\partial P}{\partial t} & = &
-\frac{\partial}{\partial\rho}(RP)
+\frac{\partial}{\partial R}(F\rho P)\nonumber\\
& &
+\frac{1}{2}
\frac{\langle\delta\rho^{2}\rangle}{\delta t}
\frac{\partial^{2} P}{\partial\rho^{2}}
+\frac{1}{2}\frac{\langle\delta R^{2}\rangle}{\delta t}
\frac{\partial^{2} P}{\partial R^{2}}.\\
\nonumber
\end{eqnarray}
One interpretation of this equation is as follows. If we move the
first two terms on the right hand side over to the left, then we have
a full derivative of $P$. Liouville's theorem states
that for a closed system this derivative should be zero. However, our
system is coupled to a thermal bath and so there is a flow of
probability between the two, as demonstrated by the two non-zero noise
terms, due to the bath, on the right hand side.
This is clearly a rather forbidding equation to solve analytically.
However, we can use it to derive equations governing the quadratic
moments
\begin{eqnarray}
\langle\rho^{2}\rangle =
\int d\rho dR P(\rho,R,t)\rho^{2}, &
\langle\rho R\rangle =
\int d\rho dR P(\rho,R,t)\rho R, &
\langle R^{2}\rangle =
\int d\rho dR P(\rho,R,t)R^{2}.\nonumber\\
\nonumber
\end{eqnarray}
We choose to investigate these moments because their equations form a
closed system under the mean-field approximation we have taken.
It is convenient to set
\begin{equation}
\begin{array}{ccccccc}
u=\langle\rho^{2}\rangle/\eta^{2} & , &
v=\langle\rho R\rangle/\eta^{3} & , &
w=\langle R^{2}\rangle/\eta^{4} & , &
\tau=\eta t,\\
\end{array}
\end{equation}
since this normalises $u$, $v$ and $w$, and gives the equations for
the moments in
a form where the relative sizes of terms are much more apparent;
\begin{eqnarray}
\dot{u} & = & 2v+\delta_{1},\label{neq1}\\
\dot{v} & = & - f_{o}u - \frac{1}{2}\lambda u^{2} + w,
\label{neq2}\\
\dot{w} & = & - 2f_{o}v - \lambda uv +
\delta_{2},
\label{neq3}\\
\nonumber
\end{eqnarray}
where $^{.}$ denotes $\frac{d}{d\tau}$,
\begin{eqnarray}
\delta_{1}=
\frac{1}{\eta^{3}}
\frac{\langle\delta\rho^{2}\rangle}{\delta t}, & &
\delta_{2}=\frac{1}{\eta^{5}}
\frac{\langle\delta R^{2}\rangle}{\delta t}\\
\nonumber
\end{eqnarray}
are the fluctuations and
\begin{eqnarray}
f_{o} & = & \left(\frac{k_{c}^{2}}{\eta^{2}}
-\frac{\lambda}{2}
+\frac{\tilde{\lambda}}{2}\frac{T^{2}}{\eta^{2}}
-\frac{(k_{\alpha}\alpha)^{2}}{\eta^{2}}\right).\\
\nonumber
\end{eqnarray}
It should be noted that these equations are harder than they
look at first glance,
since $f_{o}$ contains a term proportional to $T^{2}$ and is therefore
implicitly time dependent. However,
by assuming that the temperature varies at a much slower rate than the
fields (something which we will see is self-consistent later on) it is
possible to make some progress analytically.
For the time being, we consider the case when fluctuations are absent,
and treat $f_{o}$ as constant over a small time period.
After a bit of
substitution, we integrate the equations to obtain
\begin{eqnarray}
\dot{u}^{2} & = & -\lambda u^{3}
-4f_{o}u^{2}
+\Lambda_{1}u
+\Lambda_{2},\\
\nonumber
\end{eqnarray}
where $\Lambda_{1}=4(w_{i}+f_{o}u_{i})+\lambda u^{2}_{i}$,
$\Lambda_{2}=4(v_{i}^{2}-u_{i}w_{i})$ and $u_{i}$, $v_{i}$ and $w_{i}$
are the initial values of $u$, $v$ and $w$.
Let the three roots of
the polynomial on the right hand side be
$\mu_{1}>\mu_{2}>\mu_{3}$. Hence
\begin{equation}
\begin{array}{ccc}
\mu_{1}\mu_{2}\mu_{3}=\frac{\Lambda_{2}}{\lambda}, &
\mu_{1}\mu_{2}+
\mu_{2}\mu_{3}+
\mu_{3}\mu_{1}=-\frac{\Lambda_{1}}{\lambda}, &
\mu_{1}+\mu_{2}+\mu_{3}=-\frac{4f_{o}}{\lambda}.\\
\end{array}
\end{equation}
Taking initial conditions such that, at $T_{G}$,
$u_{i}=1-t_{C}/t_{G}=\lambda^{2}/[g(1+\lambda^{2}/g)]$,
which is the minimum of the effective potential
at that time, and $v_{i}=w_{i}=0$, we find $\Lambda_{2}=0$,
implying that one of the roots is zero. Furthermore, $\Lambda_{1}$ and
$f_{o}$ are
found to be negative, so the two non-zero roots must be positive. Since
$\dot{u}^{2}$ is seen to be positive between $\mu_{1}$ and $\mu_{2}$
we expect $u$ to oscillate between these two.
We can also make an
estimate of the time period of these oscillations, $t_{P}$, since
\begin{equation}
t_{P}=
\frac{4}{\sqrt{\lambda\mu_{1}}}K(\kappa),
\end{equation}
where
\begin{eqnarray}
K(\kappa) & = &
\frac{\sqrt{\mu_{1}}}{2}
\int^{\mu_{1}}_{\mu_{2}}
\frac{du}{\sqrt{(\mu_{1}-u)(u-\mu_{2})(u-\mu_{3})}},\nonumber\\
\nonumber
\end{eqnarray}
is the complete elliptic integral of the first kind and
$\kappa=\sqrt{[(\mu_{1}-\mu_{2})/\mu_{1}]}$.
These results closely agree with the corresponding numerical calculations.
However, they are only valid for very small fluctuations, and so we
turn to a numerical approach for a more detailed analysis.
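As a concrete cross-check of the fluctuation-free analysis, the short Python sketch below integrates the three moment equations with $\delta_{1}=\delta_{2}=0$ and a frozen $f_{o}$, and compares the oscillation period measured from the integration with the elliptic-integral expression for $t_{P}$, with $K(\kappa)$ evaluated via the arithmetic-geometric mean. All parameter values are illustrative choices, not values taken from the text.

```python
import math

# Illustrative parameters (not from the text): lam is the self-coupling,
# f_o is frozen at a negative constant, fluctuations are switched off.
lam, f_o = 1.0, -0.5
u0, v0, w0 = 0.5, 0.0, 0.0   # start at a turning point (v = w = 0)

def rhs(u, v, w):
    # The three moment equations with delta_1 = delta_2 = 0.
    return (2.0 * v,
            -f_o * u - 0.5 * lam * u**2 + w,
            -2.0 * f_o * v - lam * u * v)

def rk4(u, v, w, dt):
    k1 = rhs(u, v, w)
    k2 = rhs(u + 0.5*dt*k1[0], v + 0.5*dt*k1[1], w + 0.5*dt*k1[2])
    k3 = rhs(u + 0.5*dt*k2[0], v + 0.5*dt*k2[1], w + 0.5*dt*k2[2])
    k4 = rhs(u + dt*k3[0], v + dt*k3[1], w + dt*k3[2])
    return (u + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            v + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0,
            w + dt*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])/6.0)

# Non-zero roots of udot^2 = -lam*u^3 - 4*f_o*u^2 + L1*u (L2 = 0 here).
L1 = 4.0 * (w0 + f_o * u0) + lam * u0**2
mu2, mu1 = sorted((-4.0*f_o + s * math.sqrt(16.0*f_o**2 + 4.0*lam*L1))
                  / (2.0 * lam) for s in (1.0, -1.0))
kappa = math.sqrt((mu1 - mu2) / mu1)

def K(kappa):
    # Complete elliptic integral of the first kind via the AGM.
    a, g = 1.0, math.sqrt(1.0 - kappa**2)
    for _ in range(60):
        a, g = 0.5 * (a + g), math.sqrt(a * g)
    return math.pi / (2.0 * a)

t_P = 4.0 / math.sqrt(lam * mu1) * K(kappa)

# Measure the period directly as the time between successive maxima
# of u (v crossing zero from above).
u, v, w = u0, v0, w0
t, dt, maxima = 0.0, 1e-3, []
while t < 4.0 * t_P and len(maxima) < 2:
    u_n, v_n, w_n = rk4(u, v, w, dt)
    if v > 0.0 >= v_n:
        maxima.append(t + dt * v / (v - v_n))  # linear interpolation
    u, v, w, t = u_n, v_n, w_n, t + dt
t_P_num = maxima[1] - maxima[0]
```

For these values the two positive roots are $\mu_1=1.5$ and $\mu_2=0.5$, and the period measured from the integration agrees with $t_P$ to better than one part in $10^{3}$.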
In order to study the effect of fluctuations on the evolution of the
fields, it is necessary to
make some estimate of the
fluctuation terms. To do this we imagine $\phi$ coupled
to some other field, $\psi$, in
thermal equilibrium, via an extra term in the Lagrangian of the form
\begin{eqnarray}
{\cal{L}}_{I} & = & \frac{1}{2}g|\phi|^{2}|\psi|^{2}.\\
\nonumber
\end{eqnarray}
By comparing the resulting equations of motion with those already
obtained we find that
\begin{eqnarray}
\delta\rho=0 & , & \delta R=g\rho\psi^{2}\delta t.\nonumber\\
\nonumber
\end{eqnarray}
Since $\psi$ is in thermal equilibrium, we also have that $\psi\sim
T$, with corresponding number density
$n_{\psi} = 1.202g_{*}T^{3}/\pi^{2}$
where $g_{*}$ is the number of internal degrees of freedom
(107 for a Grand Unified Theory). Taking
$\delta t$ to be a typical interaction time, such that $\delta
t\sim n_{\psi}^{-1}\sigma_{I}^{-1}$, where $\sigma_{I}\sim gk^{-2}$ is
the interaction cross-section, we find the following approximation for
the fluctuation terms
\begin{eqnarray}
\delta_{1}=0 & , &
\delta_{2}=
g \left(\frac{k^{2}}{\eta^{2}}\right)\frac{T}{\eta}u.\\
\nonumber
\end{eqnarray}
As expected, the size of the fluctuations decreases with temperature.
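The quoted form of $\delta_{2}$ is a scaling estimate, and the chain of approximations behind it can be assembled numerically. The Python sketch below (natural units with $\eta=1$; all parameter values illustrative) builds $\langle\delta R^{2}\rangle/\delta t$ from $\delta R=g\rho\psi^{2}\delta t$ with $\psi^{2}\sim T^{2}$ and checks that the result is proportional to the quoted expression with a temperature-independent constant, confirming the linear decrease with $T$.

```python
import math

# Assembling the fluctuation estimate numerically (natural units,
# eta = 1; all values illustrative): delta R = g*rho*psi^2*dt with
# psi^2 ~ T^2, dt ~ 1/(n_psi*sigma_I), n_psi = 1.202*g_*T^3/pi^2,
# and sigma_I ~ g*k^-2.
g_star = 107.0        # internal degrees of freedom for a GUT
g = 0.1 / 3.0         # fluctuation coupling (illustrative)
eta, k = 1.0, 0.3     # symmetry-breaking scale and wavenumber

def delta2_assembled(T, u):
    n_psi = 1.202 * g_star * T**3 / math.pi**2
    sigma_I = g / k**2
    dt = 1.0 / (n_psi * sigma_I)           # typical interaction time
    rho2 = u * eta**2                      # <rho^2>
    dR2_over_dt = g**2 * rho2 * T**4 * dt  # <(delta R)^2>/(delta t)
    return dR2_over_dt / eta**5

def delta2_quoted(T, u):
    # The form quoted in the text, which absorbs O(1) factors.
    return g * (k**2 / eta**2) * (T / eta) * u

ratios = [delta2_assembled(T, 0.5) / delta2_quoted(T, 0.5)
          for T in (0.1, 0.5, 1.0)]
```

The ratio is the same at every temperature, equal to $\pi^{2}/(1.202\,g_{*})$, so the assembled estimate and the quoted form share the same $T$ and $u$ dependence.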
Before we can carry out a calculation,
we need to address the problem of choice of parameter values.
Since the model we are
considering is a global one, we see from the definition of the
potential that we must have $\tilde{\lambda}=\lambda/3$.
We now demonstrate why we are justified in assuming that the mode
corresponding to $k_{C}$ dominates.
Figure 1 shows the evolution of the quadratic moment $u$,
(corresponding to $\langle \rho^{2}\rangle$) from the Ginzburg time
onwards, where we have taken $\lambda=0.1$,
fluctuation coupling, $g=\lambda/3$ and a range of different
wavelengths.
The most obvious feature is that
the mode varying with wavenumber $k_{C}$ dominates those with longer
wavelengths, consistent with our earlier assumption. However, one may
be slightly alarmed at the fact that at least two of the curves look
like they have no intention of converging to one (corresponding to
$\langle\rho^{2}\rangle\rightarrow\eta^{2}$) as one might expect. The
reasons for this are twofold, and both somewhat of our own creation.
The first is that in assuming that $\rho$ and $\alpha$ vary spatially
with fixed wavenumber, we alter the value of $\rho$ for which
$\dot{\rho}=0$ since we have essentially added two terms onto the
derivative of the potential. This has the effect of raising the
equilibrium value of $\rho$. The second is that we have to make an
arbitrary choice of $\alpha$ (taken throughout as $\alpha=1$),
which effectively scales $k_{\alpha}$,
and so has a similar effect to the first. Conversely it could also be
used to tune the expected equilibrium value to one by choosing a
sufficiently small value of $\alpha$.
This may raise questions over
the validity of this method for studying the evolution of fields.
However, the evolution of the fields is not qualitatively changed by
taking different values of $k_{C}$, $k_{\alpha}$ or $\alpha$ and so we
argue that, as an approximation, our approach is still of interest.
The potential we are using is, unfortunately, only a one loop
approximation, and hence is not
valid above
the Ginzburg temperature where higher loops dominate.
Our simulation therefore must run from the Ginzburg time
onwards, and so we can only investigate the stability of string
configurations to thermal fluctuations and not their formation.
We also only consider the case of a GUT phase transition, since we
expect one at a lower temperature, such as the Electro-weak phase
transition for instance, to be qualitatively the same, but slower due to
the larger value of $t_{C}$.
We take the coupling $\lambda$ to be between 0.01 and 1, and
$g\leq\lambda$.
Figure 2 shows the effect of the two couplings, $g$
and $\lambda$. The former controls the size of fluctuations, and it is
seen, in 2a and 2b,
that the larger $g$, the quicker the rise of the lower bound, so fluctuations
are, bizarrely, actually seen to help
stabilise configurations, on average, by damping oscillations in the
field.
They also cause the upper bound
to rise at a faster rate though this is not as pronounced, nor as
important to the stability of domain structures.
Fig.2c reveals that decreasing $\lambda$ decreases the frequency of
oscillations, and also the asymptotic value for $u$. The latter is
because $k_{C}\propto 1/\xi_{G}\propto\sqrt{\lambda}$ and we have
already noted that the value of $k_{C}$ affects the limiting value.
Finally, Fig.2d demonstrates that the effect of fluctuations decreases
dramatically with $\lambda$.
We see then that, since all curves move away from zero, any
topologically non-trivial configuration is stable from the Ginzburg
temperature onwards, though the fields may take a long time to
reach their equilibrium values. We also note that, in all cases
considered, the
oscillations occur on a much smaller timescale than the evolution
towards the equilibrium value; consistent with our earlier assumption.
\section{The Effect of Gauge Fields}
That the configurations formed in the above transition are stable
against thermal fluctuations is nothing new.
The Kibble mechanism for the global
case is already well accepted, since one can argue in favour of the
geodesic rule just by demanding that the path followed minimizes the
energy density. However, the presence of gauge fields may undermine
this, since their presence in the gradient energy,
$(D_{\mu}\phi)(D^{\mu}\phi)^{\dagger}$,
may make it equally favourable, energetically, to
follow a longer path.
Luckily, the method used for studying the global case works equally
well in the local one, the only drawback being a significant increase in the
amount of algebra that has to be done. We start by writing the
equations of motion in the form
\begin{eqnarray}
\partial_{\mu}\partial^{\mu}\rho
-e^{2}(q_{\mu}q^{\mu})\rho+
\frac{\partial V(\rho^{2})}{\partial\rho^{2}}\rho
& = & 0,\nonumber\\
\partial_{\mu}\partial^{\mu}q^{\nu}
+2e^{2}\rho^{2}q^{\nu}
+\frac{1}{e}\partial_{\mu}\partial^{\mu}\partial^{\nu}\alpha
& = & 0,\nonumber\\
\nonumber
\end{eqnarray}
where $q^{\nu}=A^{\nu}-\frac{1}{e}\partial^{\nu}\alpha$ (Note that
this is gauge invariant). Setting
$Q^{\mu}=\dot{q}^{\mu}$,
$R=\dot{\rho}$ and
$\Delta^{\nu}=\frac{1}{e}\partial_{\mu}\partial^{\mu}\partial^{\nu}\alpha$
for convenience, we find
\begin{equation}
\begin{array}{lcrcrcrcr}
R(t+\delta t) & = &
R(t) & - &
\delta tF_{2}(t)\rho(t) & + &
\delta R, & &\\
\rho(t+\delta t) & = &
\rho(t) & + &
\delta tR(t) & + &
\delta\rho, & &\\
Q^{\mu}(t+\delta t) & = &
Q^{\mu}(t) & - &
\delta tG(t)q^{\mu}(t) & + &
\delta Q^{\mu} & - & \delta t\Delta^{\mu},\\
q^{\mu}(t+\delta t) & = &
q^{\mu}(t) & + &
\delta t Q^{\mu}(t) & + &
\delta q^{\mu}, & &
\end{array}
\end{equation}
where
\begin{eqnarray}
F_{2} & = &
\left(
k^{2}
-\frac{\lambda}{2}\eta^{2}
+\frac{\tilde{\lambda}}{2}T^{2}
+\frac{\lambda}{2}\langle\rho^{2}\rangle
-e^{2}\langle q^{2}\rangle
\right),\nonumber\\
G & = &
\left(k^{2}
+2e^{2}\langle\rho^{2}\rangle\right),\nonumber\\
\nonumber
\end{eqnarray}
and $\delta R$, $\delta\rho$, $\delta Q^{\mu}$, $\delta q^{\mu}$ are
the thermal fluctuations.
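The structure of these update equations can be made concrete with a minimal Python sketch of the $(\rho, R)$ pair, with the fluctuation terms and the $\Delta^{\mu}$ source switched off and $F_{2}$ frozen at an illustrative constant (an assumption made purely for the check, since $F_{2}$ evolves in the full scheme).

```python
import math

# One explicit update step for the (rho, R) pair of the scheme above,
# with fluctuations and the Delta^mu source switched off, and F2
# frozen at an illustrative constant value.
def step(rho, R, F2, dt):
    # Both right-hand sides use time-t values (forward Euler), as in
    # the update equations above.
    return rho + dt * R, R - dt * F2 * rho

# A single hand-checkable step.
rho1, R1 = step(1.0, 0.0, F2=4.0, dt=0.1)

# Many small steps: with constant F2 the pair is a harmonic oscillator
# of frequency sqrt(F2); after a quarter period rho should be near zero.
rho, R = 1.0, 0.0
dt = 1e-4
for _ in range(int(round((math.pi / 4.0) / dt))):
    rho, R = step(rho, R, F2=4.0, dt=dt)
rho_after_quarter = rho
```

With $F_{2}=4$ the pair oscillates at frequency $\sqrt{F_{2}}=2$, and after a quarter period $\rho$ passes through zero, up to the $O(\delta t)$ error of the explicit scheme.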
Proceeding in exactly the same manner as before,
we arrive, after
some very unpleasant algebra, at the equation for
$P(\rho,R,q^{\mu},Q^{\mu},t)$,
\begin{eqnarray}
\frac{\partial P}{\partial t} & = &
-\frac{\partial}{\partial\rho}(RP)
+\frac{\partial}{\partial R}(F_{2}\rho P)
-\frac{\partial}{\partial q_{\mu}}(Q_{\mu}P)
+\frac{\partial}{\partial Q_{\mu}}(Gq_{\mu}P)
+\Delta_{\mu}\frac{\partial P}{\partial Q_{\mu}}
\nonumber\\
& &
+\frac{1}{2}
\frac{\langle\delta\rho^{2}\rangle}{\delta t}
\frac{\partial^{2}P}{\partial\rho^{2}}
+\frac{1}{2}
\frac{\langle\delta R^{2}\rangle}{\delta t}
\frac{\partial^{2}P}{\partial R^{2}}
\nonumber\\
& &
+\frac{1}{2}
\frac{\langle\delta q_{\mu}\delta q^{\mu}\rangle}{\delta t}
\frac{\partial^{2}P}{\partial q_{\mu}\partial q^{\mu}}
+\frac{1}{2}
\frac{\langle\delta Q_{\mu}\delta Q^{\mu}\rangle}{\delta t}
\frac{\partial^{2}P}{\partial Q_{\mu}\partial Q^{\mu}}.
\nonumber\\
\nonumber
\end{eqnarray}
It is important to note that the sum over indices in the last two
terms is over all four indices, not the usual two.
Once more we see the violation of Liouville's theorem via the coupling
to the heat bath.
From this it is straightforward to obtain the equations governing the
quadratic moments.
Following the global method and defining $\tau$,
$u$, $v$, $w$ as before, plus
\begin{equation}
\begin{array}{ccccc}
x=\langle q^{2}\rangle/\eta^{2} & , &
y=\langle q^{\mu}Q_{\mu}\rangle/\eta^{3} & , &
z=\langle Q^{2}\rangle/\eta^{4}\\
\end{array}
\end{equation}
we arrive at
\begin{equation}
\begin{array}{cclcccl}
\dot{u} & = & 2v+\delta_{1}, & &
\dot{x} & = & 2y+\delta_{3}, \\
\dot{v} & = &
- f_{2}u - \frac{\lambda}{2}u^{2} + e^{2}xu + w, & &
\dot{y} & = &
- \left(\frac{k^{2}}{\eta^{2}}\right)x
- 2e^{2}ux + z,\\
\dot{w} & = &
- 2f_{2}v - \lambda uv + 2e^{2}xv
+ gu\left(\frac{k^{2}}{\eta^{2}}\right)\frac{T}{\eta}+\delta_{2}, & &
\dot{z} & = &
- 2\left(\frac{k^{2}}{\eta^{2}}\right)y
- 4e^{2}uy+\delta_{4},\\
\end{array}
\end{equation}
where
\begin{eqnarray}
f_{2} & = & \left(\frac{k^{2}}{\eta^{2}}
-\frac{\lambda}{2}
+\frac{\tilde{\lambda}}{2}\frac{T^{2}}{\eta^{2}}
\right),\\
\nonumber
\end{eqnarray}
and $\delta_{1}$, $\delta_{2}$, $\delta_{3}$ and $\delta_{4}$ are
the thermal fluctuation terms.
Note that the terms involving $\Delta_{\mu}$ have integrated to zero.
Clearly we are not going to get too far with an analytic approach this
time, so we restrict ourselves to a numerical analysis.
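A small regression check is still possible before resorting to the full numerics. The Python sketch below (illustrative parameter values) integrates the six-moment system with the gauge coupling and fluctuation terms switched off, in which case it must reduce to the global case: the combinations $w+f_{2}u+\lambda u^{2}/4$ and $z+(k/\eta)^{2}x$ are then conserved along the flow.

```python
# The six-moment system with the gauge coupling and fluctuation terms
# switched off (e = 0, all deltas = 0), in which case it must reduce to
# the global system: w + f2*u + lam*u^2/4 and z + (k/eta)^2*x are then
# conserved and make useful regression checks.  Values illustrative.
lam, f2, e2, k2 = 1.0, -0.5, 0.0, 0.09   # k2 = (k/eta)^2

def rhs(s):
    u, v, w, x, y, z = s
    return (2.0 * v,
            -f2 * u - 0.5 * lam * u**2 + e2 * x * u + w,
            -2.0 * f2 * v - lam * u * v + 2.0 * e2 * x * v,  # g = 0
            2.0 * y,
            -k2 * x - 2.0 * e2 * u * x + z,
            -2.0 * k2 * y - 4.0 * e2 * u * y)

def rk4(s, dt):
    k1 = rhs(s)
    q2 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
    q3 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, q2)))
    q4 = rhs(tuple(si + dt*ki for si, ki in zip(s, q3)))
    return tuple(si + dt*(a + 2*b + 2*c + d)/6.0
                 for si, a, b, c, d in zip(s, k1, q2, q3, q4))

s = (0.5, 0.0, 0.0, 0.2, 0.0, 0.0)       # (u, v, w, x, y, z) at tau = 0
C0 = s[2] + f2 * s[0] + 0.25 * lam * s[0]**2
D0 = s[5] + k2 * s[3]
for _ in range(20000):                   # integrate to tau = 20
    s = rk4(s, 1e-3)
C1 = s[2] + f2 * s[0] + 0.25 * lam * s[0]**2
D1 = s[5] + k2 * s[3]
```

Conservation of both combinations over many oscillation periods gives some confidence in the integrator before the gauge coupling and fluctuations are restored.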
As before,
our first preparation is to calculate the fluctuation terms. Since
the coupling of $\phi$ to $\psi$ is independent of gauge fields, the
two new
fluctuation terms, $\delta_{3}$ and $\delta_{4}$, must be zero. Hence,
the only non-zero fluctuation term is $\delta_{2}$, which is unchanged.
We consider once more an initial domain structure of length scale
$\xi_{G}$, and take $e^{2}=40\lambda/3$, since we expect the gauge
coupling to dominate. Figure 3 shows the effect of varying $\lambda$
and $g$, the results being very similar to those observed in the
global symmetry case. Figures 3a and 3b once again demonstrate the
effect of increasing the size of the fluctuations; an increase in the
rate of growth of the lower bound
and a damping of oscillations, whilst Fig.3c reveals how
decreasing the size of the self-coupling decreases the frequency of
oscillations, the effect of fluctuations decreasing in a similar
manner (Fig.3d).
The most important feature however is that, as in the global case,
any non-trivial
domain structure present at $t_{G}$ is seen to be stable against
fluctuations at greater times. This reinforces the work by
Brandenberger and Davis\cite{rac}, and, similarly, the arguments in
favour of the Kibble mechanism.
\section{Including the Expansion of the Universe}
Until now we have ignored the expansion of the Universe,
which we would expect to damp the amplitude of any oscillations present.
To make the analysis more realistic it is necessary to include this
expansion.
\subsection{Global Symmetry}
Taking first the case with a global symmetry, such a modification is
straightforward, and leads to an extra term in the equation of motion
for $\rho$ proportional to the Hubble parameter $H$,
\begin{equation}
\partial_{\mu}\partial^{\mu}\rho-
\partial_{\mu}\alpha\partial^{\mu}\alpha \rho+
\frac{dV(\rho^{2})}{d\rho^{2}} \rho
=-3H\frac{\partial\rho}{\partial t},
\end{equation}
as expected, a damping term.
The only effect this has on the equations for the quadratic moments
$u$, $v$ and $w$, is to add $-3Hv$ to the right hand side of
equation (\ref{neq2}), and $-6Hw$ to that of
(\ref{neq3}), though $f_{o}$ is now written as
\begin{eqnarray}
f_{o} & = & \left[
\frac{k^{2}}{\eta^{2}}
\left(\frac{a_{0}}{a}\right)^{2}
-\frac{\lambda}{2}
+\frac{\tilde{\lambda}}{2}\frac{T^{2}}{\eta^{2}}
-\frac{(k_{\alpha}\alpha)^{2}}{\eta^{2}}
\left(\frac{a_{0}}{a}\right)^{2}\right]\\
\nonumber
\end{eqnarray}
where $a$ is the expansion parameter, and $a_{0}$ its value at
$T_{G}$\footnote{Since $a(t)$ is an unphysical quantity, we can
without loss of generality take $a_{0}=1$.}.
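The qualitative effect of the damping terms alone can be seen in a short Python sketch. Here $H=1/(2t)$ (radiation-dominated expansion, an assumption made purely for illustration), $f_{o}$ is frozen at a constant so that the redshifting of the gradient terms is set aside, and all parameter values are illustrative.

```python
import math

# The global moment equations with the Hubble damping terms -3Hv and
# -6Hw added, H = 1/(2t) (radiation-dominated expansion, an assumption
# made for illustration), and f_o frozen at a constant so the damping
# effect is isolated.  All parameter values are illustrative.
lam, f_o = 1.0, -0.5

def rhs(t, u, v, w):
    H = 0.5 / t
    return (2.0 * v,
            -f_o * u - 0.5 * lam * u**2 + w - 3.0 * H * v,
            -2.0 * f_o * v - lam * u * v - 6.0 * H * w)

def rk4(t, u, v, w, dt):
    k1 = rhs(t, u, v, w)
    k2 = rhs(t + 0.5*dt, u + 0.5*dt*k1[0], v + 0.5*dt*k1[1], w + 0.5*dt*k1[2])
    k3 = rhs(t + 0.5*dt, u + 0.5*dt*k2[0], v + 0.5*dt*k2[1], w + 0.5*dt*k2[2])
    k4 = rhs(t + dt, u + dt*k3[0], v + dt*k3[1], w + dt*k3[2])
    return (u + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            v + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0,
            w + dt*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])/6.0)

t, u, v, w, dt = 1.0, 0.5, 0.0, 0.0, 1e-3
early = late = 0.0
while t < 61.0:
    u, v, w = rk4(t, u, v, w, dt)
    t += dt
    if t < 31.0:
        early = max(early, abs(v))
    else:
        late = max(late, abs(v))
```

The oscillation amplitude in the second half of the run is well below that in the first half, and $u$ settles toward its late-time minimum $u=1$ (i.e., $\langle\rho^{2}\rangle\rightarrow\eta^{2}$), illustrating the damping described above.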
Figure 4 shows the results for a small selection of values
to illustrate the effects of varying the different parameters. Once
more we consider an initial domain structure of length scale
$\xi_{G}$.
All four diagrams are seen to display the rapid damping due to the expansion of
the Universe.
Figs. 4a and 4b demonstrate the effect of fluctuations. In 4b, where
the fluctuations are suppressed, the lower bound on $u$ rises much
more slowly than in the unsuppressed case, Fig.4a, which actually
overshoots its asymptotic value of one before reconverging.
Hence, it is seen that fluctuations
actually make it less likely that a configuration will be erased,
agreeing with our non-expanding simulations. The
upper bound varies very little between the two.
Fig. 4c shows the effects of reducing the self-coupling; a longer
period of oscillation, a less dramatic initial growth and a much gentler
approach toward its asymptotic value. Fig. 4d demonstrates how for small
values of $\lambda$ the fluctuations have very little effect on the
evolution of the fields.
In summary, topologically non-trivial configurations are stable to
thermal fluctuations.
We also note that due to the scale factor now present in the equations of
motion, the effect of $k_{C}$ and $k_{\alpha}$ is rapidly damped out
so that in all expanding cases considered, $u$ converges on one,
corresponding to $\langle \rho^{2} \rangle$ tending to $\eta^{2}$, the
long-time minimum of the effective potential.
\subsection{Local Symmetry}
Now including gauge fields once more,
the equation of motion for $A_{\mu}$ is, predictably, modified in a very
similar way to that for $\rho$ when we include expansion, the new
version being
\begin{eqnarray}
\partial_{\mu}\partial^{\mu}A^{\nu}+
2e^{2}\rho^{2}A^{\nu}-2e\rho^{2}\partial^{\nu}\alpha & = &
-3H\partial_{0}A^{\nu}.\\
\nonumber
\end{eqnarray}
In addition, the equations for the
quadratic moments $x$, $y$ and $z$ acquire nearly
identical terms to those already acquired by those for $u$, $v$ and
$w$, the only difference being an extra factor of two in the former case.
Once more, we illustrate four different values of the parameters, for a
configuration varying on a correlation length scale.
As in the non-expanding case, we take
$e^{2}=40\lambda/3$ throughout.
Figures 5a and 5b demonstrate the effect of fluctuations. In Fig.5a, where
the fluctuation coupling is comparable to the self-coupling, the lower
bound on $u$ is seen to rise much more quickly (as was the case
for a global symmetry) than that in Fig.5b where the fluctuations are
suppressed. So much so in fact that it overshoots its expected limit,
though long
time studies show that it gradually bends back and converges to one.
In Fig.5c we see the effect of decreasing the self-coupling; a less
dramatic growth of the lower bound, whilst Fig.5d reveals, once more,
that the effect of fluctuations decreases dramatically with $\lambda$,
much the same as in the previous three cases.
Comparing figures 4 and 5, we notice that the presence of
gauge fields heavily
damps the initial growth leading to a lower upper bound and
consequently smaller oscillations.
\section{Conclusions}
By studying the Langevin equations for the classical fields we have
verified that for a U(1) model with broken global symmetry (whether
expanding or not), string
configurations formed during a second order phase transition, are
stable to thermal fluctuations below the Ginzburg temperature. We have
also shown this to be true in the case of a system with a local
symmetry, reinforcing earlier work\cite{rac} on the subject, and
lending further support to the Kibble mechanism for the formation of
topological defects.
The same method has also been used to study how the fields
evolve at late times, with the
scalar field gradually tending to its equilibrium value;
a process accelerated by the damping produced by an expanding
universe.
Indeed, our method tracks the evolution of the field {\em throughout}
the phase transition. Other methods are only able to consider early or
late times.
In addition, we have seen that thermal fluctuations
actually accelerate
the early evolution of the field, and damp the amplitude of
oscillations in the field as it tends to its asymptotic value,
making it even less likely that a fluctuation will destroy a
configuration.
This work is still an approximation however, since we have had to
assume $\dot{\alpha}=0$, leading to an arbitrariness in the asymptotic
value of the field in the non-expanding models. However, this should
not affect the stability of configurations, and in the expanding case
this problem is smoothed out anyway as the model scales with time.
Another approximation we have made is in neglecting the dissipation term
necessary when a source of fluctuations is
present\cite{diss1}\cite{diss2}. Since any dissipation term would have a
damping effect, it would only increase the stability of a
non-trivial domain structure, and so including it, a further
avenue of research, should only strengthen our results. This and the
quantum field theoretical treatment are in progress\cite{ray}.
Finally, the flexibility of the Langevin equation
approach\cite{bfmcg}
may make this
method suitable for a number of other applications, such as the study
of the evolution of seed magnetic fields following the breaking of a
non-Abelian symmetry\cite{vachy}, and the stability of defects to fluctuations
in
condensed matter systems such as $^{4}$He\cite{zurek}. Work on these subjects
is
in progress\cite{new}.
We are indebted to Robert Brandenberger and Ray Rivers for discussions
and suggestions.
This work is supported in part by P.P.A.R.C. and E.P.S.R.C.
\section{INTRODUCTION}
Despite numerous studies of gravitational instabilities in protoplanetary disks (e.g., Boss 1997, 1998, 2001, 2002, 2005; Pickett et al.~1998, 2000b, 2003; Balbus \& Papaloizou 1999; Nelson et al.~2000; Gammie 2001; Johnson \& Gammie 2003; Rice et al.~2003, 2004, 2005; Mayer et al.~2004, 2007; Lodato \& Rice 2004; Mej\'ia et al.~2005; Rafikov 2005; Durisen et al.~2005; Cai et al.~2006, Boley et al.~2006), researchers still disagree on several key issues (see Durisen et al.~2007 for a review). Specifically, recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks with gravitational instabilities (GIs) show very different disk evolution, where the differences involve the importance of convection in disks, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation (Boss 2004, 2005; Cai et al.~2006; Boley et al.~2006; Mayer et al.~2007).
As we demonstrate herein, disk evolution is sensitive to the details of radiative physics algorithms. Therefore, before disk evolution can be addressed by any radiative hydrodynamics code, the code's implementation of radiative physics must be compared to analytic test cases so the limitations of the simulation are understood. In this paper we present such test cases, and we challenge all researchers who use radiative hydrodynamics to study disk evolution to run their algorithms through these tests. We use this test suite to evaluate the accuracy of the radiative routine used by Boley et al.~(2006, hereafter Paper I), who employ the routine developed by Mej\'ia (2005, hereafter the M2004 scheme), and we recompute part of the disk simulation in Paper I with our new radiative scheme (hereafter the BDNL scheme), which uses vertical rays coupled with flux-limited diffusion in the radial and azimuthal directions.
This paper is laid out as follows. We describe the hydrodynamics code and our new radiative cooling algorithm in \S 2 and 3, respectively. In \S 4 we present the radiative test suite and apply it to the radiative algorithms used in Paper I and in Cai et al.~(2006) and to the new algorithm. We compare the BDNL simulation with the Paper I simulation in \S 5, and we discuss the role convection plays in disk cooling in \S 6. The results are summarized in \S 7.
\section{HYDRODYNAMICS}
The three-dimensional hydrodynamics code with self-gravity is the same as that used by Pickett et al.~(1995), Mej\'ia et al.~(2005), Cai et al.~(2006), and Paper I, and it is described in detail in Pickett (1995), Pickett et al.~(1998, 2000a), and Mej\'ia (2004). The code solves the equations of hydrodynamics in conservative form (see Yang 1992) on a uniform cylindrical grid ($r$,
$\phi$, $z$), and assumes an ideal gas equation of state with a constant ratio of specific heats $\gamma=5/3$. The potential due to the mass on the grid is found by using a direct Poisson's equation solver (see Tohline 1980), and the potential on the boundary needed for this solver is calculated using spherical harmonics up to $l=m=10$ (see Pickett et al.~1996). Pickett et al.~(2003) found that when a blob was loaded onto the grid, summing up to $l=m=10$ was sufficient to describe the boundary potential and that the error in the solution was dominated by grid resolution. The code computes the source and flux terms (Norman \& Winkler 1986) separately in
an explicit, second-order time integration (van Albada et al.~1982; Christodoulou 1989; Yang 1992), where the advective terms are calculated with a van Leer scheme (Norman \& Winkler 1986). The outermost radial and vertical grid cells are outflow boundaries. The innermost radial grid cells are also outflow boundaries where the mass flowing through this boundary is added to the central mass. We impose a point source gravitational field due to the star, but we keep the star fixed at the center. We choose to hold it fixed because moving the star to a position that explicitly keeps the center of mass at the grid center (Boss 1998) may also cause spurious dynamics. Improvements to the code that allow for proper star-disk interactions are being developed.
We include the effects of shock heating by artificial bulk viscosity through a second-order von Neumann \& Richtmyer scheme (see Pickett 1995). This artificial viscosity ensures that the jump conditions are satisfied
by adding the correct amount of entropy to the gas. For more details on the implemented AV scheme, we refer the reader to Pickett (1995) and to Pickett et al.~(2000a).
\section{RADIATIVE ROUTINES}
\subsection{The M2004 and C2006 Schemes}
The M2004 scheme is described in detail in Paper I, but we summarize the numerical method here. In their scheme, flux-limited diffusion is used in the $r$, $\phi$, and $z$ directions on the cylindrical grid everywhere that the vertically integrated Rosseland optical depth $\tau > 2/3$, which defines the disk's interior. For mass at lower optical depths, which defines the disk's atmosphere, the gas is allowed to radiate as much as its emissivity allows, with the Planck mean opacity used instead of the Rosseland mean opacity. The disk interior and atmosphere are coupled with an Eddington-like boundary condition over one cell. This boundary condition defines the flux leaving the interior, which can be partly absorbed by the overlaying atmosphere. Likewise, feedback from the atmosphere is explicitly used when solving for the boundary flux. However, cell-to-cell radiative coupling is not explicitly modeled in the disk's atmosphere. This method allows for a self-consistent boundary condition that can evolve with the rest of the disk. Cai et al.~(2006) improved the stability of the routine, as described below, by extending the interior/atmosphere fit over two cells (hereafter the C2006 scheme).
A problem with the routines employed by Mej\'ia (2004), Cai et al.~(2006), and Paper I is a sudden drop in the temperature profile where $\tau=2/3$. The drop is due to the omission of complete cell-to-cell coupling in the optically thin regime $(\tau < 2/3)$. However, as shown in Paper I, the boundary does permit the correct flux through the disk's interior. Because the flux through the disk is correct, the temperature drop is mainly a dynamic concern inasmuch as it might seed convection (see Paper I). In order to obtain the correct flux and temperature profiles, a method for calculating fluxes that takes into account the long-range effects of radiative transfer is required.
\subsection{The BDNL Scheme}
Consider some column in a disk with fixed $r$ and $\phi$. Take that column out of context, and imagine that it is part of a plane-parallel atmosphere. In this case, we can easily describe the heating and cooling by radiation with the method of discrete ordinates (see, e.g., Chandrasekhar 1960; Mihalas \& Weibel-Mihalas 1986). This method uses discrete angles that best sample the solid angle, as determined by Gaussian quadrature. In a plane-parallel atmosphere, a single ray can provide decent accuracy if the cosine of the angle measured downward from the vertical to the ray is $
\mu =1/\sqrt 3$. We use this approach to approximate radiative transfer in the vertical direction, and we include flux-limited diffusion (Bodenheimer et al.~1990) in the $r$ and $\phi$ directions everywhere that $\tau \ge 1/\sqrt 3$. Naturally, this is only a crude approximation when one places the column back into context. However, we believe it represents the best implementation of radiative physics for simulating protoplanetary disks with three-dimensional hydrodynamics thus far, because it captures the long-range effects of radiative transfer that are excluded in pure flux-limited diffusion routines. As we demonstrate later, such coupling can affect disk evolution.
Consider now some incoming intensity $I_-$ and some outgoing intensity $I_+$. In the context of the approximation outlined above, the vertical flux at any cell face can be evaluated by computing the outgoing and incoming rays for a given column and by relating them to the flux with
\begin{equation}
F=2\pi\mu\left(I_+-I_-\right).\label{eq1}
\end{equation}
Once we have vertical fluxes at cell faces, we can compute the vertical component of the divergence of the flux at the cell center by differencing fluxes at cell faces.
We compute the outgoing ray by
\begin{equation}
I_+ = I_+(t_d)\exp (-\Delta t) + \int_{t_u}^{t_d} S(t') \exp(t_u-t')d t'\rm,
\end{equation}
where $\Delta t=t_d-t_u$, $t_d$ is the optical depth at the base of the cell measured {\it along the ray}, $t_u$ is the optical depth at the top of the cell, and $I_+(t_d)$ is the upward intensity at the base of the cell. Because we are assuming that each column in the disk is part of a plane-parallel atmosphere, the optical depth along the ray can be computed by $t=\tau/\mu$.
Similar to $I_+$, we define the incoming ray solution across one cell as
\begin{equation}
I_- = I_-(t_u)\exp (-\Delta t ) + \int_{t_u}^{t_d} S(t') \exp(t'-t_d)d t'\rm,
\end{equation}
where $I_-(t_u)$ is the incoming intensity at the top of the cell.
The 0th approximation for $S(t)$ is that it is constant over the entire cell. This approximation leads to
\begin{eqnarray}
I_+=I_+(t_d)\exp\left(-\Delta t\right)+ S_0(1-\exp(-\Delta t))\\
I_-=I_-(t_u)\exp\left(-\Delta t\right)+ S_0(1-\exp(-\Delta t))\rm,
\end{eqnarray}
and $S_0=\sigma T_0^4/\pi$, where $T_0$ is the temperature at the cell center.
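The constant-source solutions can be exercised directly in a toy column. The Python sketch below (illustrative grid values; $\sigma=1$ in code units) marches $I_{-}$ down and $I_{+}$ up through an isothermal column with reflection symmetry at the midplane and no incident radiation, and recovers the analytic emergent intensity and flux for a uniform slab.

```python
import math

# Discrete-ordinate sketch for one vertical column: a single ray at
# mu = 1/sqrt(3), piecewise-constant source (the constant-S solutions
# above), reflection symmetry at the midplane, and no incident
# radiation at the top.  Isothermal column, sigma = 1 in code units;
# grid values are illustrative.
mu = 1.0 / math.sqrt(3.0)
sigma = 1.0
N, dtau = 64, 0.25            # cells midplane->surface, dtau per cell
dts = [dtau / mu] * N         # optical depth per cell along the ray
S = [sigma * 1.0**4 / math.pi] * N   # S = sigma*T^4/pi with T = 1
t_tot = N * dtau / mu

# March I_- downward from the surface (index N-1) to the midplane (0).
I_minus = [0.0] * N
I_in = 0.0                    # no incident intensity at t = 0
for i in range(N - 1, -1, -1):
    I_in = I_in * math.exp(-dts[i]) + S[i] * (1.0 - math.exp(-dts[i]))
    I_minus[i] = I_in

# Reflection symmetry: the downward intensity at the midplane feeds
# the upward ray; then march I_+ back to the surface.
I_out = I_minus[0]
for i in range(N):
    I_out = I_out * math.exp(-dts[i]) + S[i] * (1.0 - math.exp(-dts[i]))

# Emergent flux at the top face (I_- there is the incident value, 0).
F_top = 2.0 * math.pi * mu * (I_out - 0.0)
```

For a uniform slab the emergent intensity is $S(1-e^{-2t_{\rm tot}})$ after reflection at the midplane, so in the optically thick isothermal limit the emergent flux tends to $2\mu\sigma T^{4}$.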
Because the source function typically changes over a cell, additional complexity is necessary. Consider a source function that may be represented by the quadratic
\begin{equation}
S(t)=c + bt + at^2.
\end{equation}
To find the constants $c$, $b$, and $a$, we Taylor expand the source function about the optical depth defined at the cell center $t_0$:
\begin{equation}
S(t)\approx \bigg\{ S_0-\frac{dS}{d t}\bigg |_{t_0}t_0+\frac{1}{2}\frac{d^2S}{dt^2}\bigg |_{t_0}t_0^2 \bigg \}
+ \bigg \{ \frac{dS}{dt}\bigg |_{t_0}-\frac{d^2S}{dt^2}\bigg |_{t_0}t_0 \bigg \}t
+ \bigg \{\frac{1}{2}\frac{d^2S}{dt^2}\bigg |_{t_0} \bigg \}t^2.
\end{equation}
The first term in curly brackets in equation (7) is $c$, the second is $b$, and the third is $a$. Using equation (7), we can find solutions for equations (2) and (3) across any given cell (see also Heinemann et al.~2006).
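Because $S$ is quadratic, the per-cell integral against an exponential attenuation kernel has a simple closed form. The Python sketch below works it through for the kernel $\exp(t'-t_d)$ (the kernel $\exp(t_u-t')$ is handled the same way) and verifies it against brute-force quadrature; the values of $S_0$ and its derivatives are illustrative, not taken from a simulation.

```python
import math

# Closed form for the per-cell source integral with the quadratic S of
# equation (6), here against the attenuation kernel exp(t'-t_d); the
# kernel exp(t_u-t') is handled the same way.  Numbers illustrative.
S0, dS, d2S = 1.0, 0.4, -0.3      # S and its derivatives at t_0
t_u, t_d = 2.0, 2.8
t0 = 0.5 * (t_u + t_d)

# Coefficients from the Taylor expansion, equation (7).
c = S0 - dS * t0 + 0.5 * d2S * t0**2
b = dS - d2S * t0
a = 0.5 * d2S

def S(t):
    return c + b * t + a * t**2

def closed_form():
    # Antiderivative of p(t)*exp(t) is (p - p' + p'')*exp(t) for
    # quadratic p, so the integral telescopes.
    def q(t):
        return S(t) - (b + 2.0 * a * t) + 2.0 * a
    return q(t_d) - q(t_u) * math.exp(t_u - t_d)

def simpson(n=2000):
    # Brute-force check of the same integral by composite Simpson.
    h = (t_d - t_u) / n
    total = 0.0
    for i in range(n + 1):
        t = t_u + i * h
        wgt = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += wgt * S(t) * math.exp(t - t_d)
    return total * h / 3.0
```

The closed form and the quadrature agree to machine precision, and the reconstructed quadratic reproduces $S_0$ at the cell centre.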
However, in order to use equation (7), the source function's derivatives must be evaluated:
\begin{eqnarray}
\frac{dS}{dt}\bigg |_{t_0}=S'_0&=&2\mu \sigma T_0^3\frac{(T_{-1}-T_{+1})}{\pi\rho_0\kappa_0\Delta z}{\rm,~and}\\
\frac{dS'}{dt}\bigg |_{t_0}&=&\mu\frac{(S'_{-1}-S'_{+1})}{2\rho_0\kappa_0\Delta z}.
\end{eqnarray}
Here, the 0 denotes the {\it center} of the cell of interest, the -1 denotes the cell center below the cell of interest, and the +1 denotes the cell center above the cell of interest. This difference scheme is used unless the following conditions are met: (A) If the +1 cell's density is below the cutoff value, i.e., the minimum density at which we still compute radiative physics, or the -1 cell's density is below the cutoff value, the derivatives are set to zero, which reduces the solutions for $I_+$ and $I_-$ to equations (4) and (5). (B) If cell 0 is the midplane cell, i.e., the first cell in the upper plane, a five-point centered derivative is used for $S'$, i.e.,
\begin{equation}
\frac{dS}{dt}\bigg |_{t_0}=\mu\sigma T_0^3\frac{8T_0-7T_{+1}-T_{+2}}{3\pi\rho_0\kappa_0\Delta z}\rm,
\end{equation}
unless exception (A) is met. The simple form of equation (10) is due to the reflection symmetry about the midplane that is built into the grid, which means that the -1 cell's values are equal to the midplane cell's values and that the -2 cell's values are equal to the +1 cell's values. In addition, the second derivative of the source function at the midplane is taken to be the average of the three-point centered difference method and a forward difference method, i.e., equation (9) is used as one would normally use it to compute the second derivative but that answer is averaged with the derivative obtained by differencing $S'_0$ and $S'_{+1}$. Various differencing schemes have been tested, and this differencing scheme yields the best results for the widest range of optical depths and cell resolution.
Now that we have a solution for the source function integral, the incoming and outgoing intensities can be computed. The incoming ray is computed first by summing the solutions to the source function integral as one moves down into the disk along the ray with the previous sum serving as $I_-(t_u)$. If desired, an incident intensity at $t=0$, as in Cai et al.~(2006), can be added to the solution by extincting the intensity according to the optical depth. Because reflection symmetry is assumed about the midplane, the incoming intensity solution at the midplane serves as the $I_+(t_d)$ for the outgoing intensity at the midplane.
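The two-pass sweep just described might look like the following sketch. The per-cell solutions of the source-function integral are assumed precomputed from equation (7); the indexing convention (0 at the top of the column) and all names are illustrative assumptions, not the production implementation.

```python
import numpy as np

def sweep_column(dI_minus, dI_plus, tau_faces, I_inc=0.0):
    """Schematic two-pass intensity sweep for one vertical column.

    dI_minus / dI_plus : per-cell solutions of the source-function
        integral for the downward / upward ray (e.g., from eq. [7]).
    tau_faces : optical depth at cell faces, increasing downward;
        index 0 is the top of the atmosphere, -1 is the midplane.
    I_inc : optional incident intensity at tau = 0 (cf. Cai et al. 2006),
        extincted according to the optical depth.
    """
    n = len(dI_minus)
    I_minus = np.empty(n)
    run = I_inc * np.exp(-tau_faces[0])           # extincted incident intensity
    for k in range(n):                            # march down into the disk
        run = run * np.exp(-(tau_faces[k+1] - tau_faces[k])) + dI_minus[k]
        I_minus[k] = run
    I_plus = np.empty(n)
    run = I_minus[-1]                             # midplane reflection symmetry
    for k in range(n - 1, -1, -1):                # march back up
        run = run * np.exp(-(tau_faces[k+1] - tau_faces[k])) + dI_plus[k]
        I_plus[k] = run
    return I_minus, I_plus
```

With all source contributions set to zero, the sweep reduces to pure extinction of the incident intensity, which is a convenient sanity check.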
For the $r$ and $\phi$ directions, the flux-limited diffusion scheme described by Paper I is employed when the following conditions are met: (A) The vertical Rosseland mean optical depth at the center of the cell of interest is greater than or equal to $1/\sqrt 3$. This condition ensures that we only compute flux-limited diffusion where photons moving vertically have less than about a 50\% chance of escaping. (B) The cells neighboring the cell of interest also have $\tau \ge 1/\sqrt3$. This should ensure that the code only calculates temperature gradients between relevant cells; the flux at this cell face is accounted for in the total energy loss (gain) of the system. If a neighboring cell has $\tau < 1/\sqrt3$, then the flux at that face is taken to be the vertical flux through the first cell in the column of interest that lies below the $\tau = 1/\sqrt3$ depth. These conditions are similar to those employed by Mej\'ia (2004), Cai et al.~(2006), and Paper I. Once fluxes have been determined for all cell faces, the divergence of the flux can be calculated with
\begin{equation}\nabla\cdot {\bf F}=\frac{\partial \left(r F_r\right)}{r\partial r} + \frac{\partial F_{\phi}}{r \partial \phi} +
\frac{\partial F_z}{\partial z}.\end{equation}
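In discrete form, equation (11) can be evaluated from face-centered fluxes; the sketch below assumes a uniform cylindrical grid and is meant only to illustrate the divergence formula (shapes and names are ours, not the production differencing).

```python
import numpy as np

def div_flux(Fr, Fp, Fz, r, dr, dphi, dz):
    """Divergence of F (equation [11]) from face-centered fluxes.

    Shapes: Fr (nr+1, nphi, nz), Fp (nr, nphi+1, nz), Fz (nr, nphi, nz+1);
    r holds the nr cell-center radii of a uniform grid.
    """
    r_c = r[:, None, None]
    r_lo = (r - 0.5 * dr)[:, None, None]     # inner face radii
    r_hi = (r + 0.5 * dr)[:, None, None]     # outer face radii
    term_r = (r_hi * Fr[1:] - r_lo * Fr[:-1]) / (r_c * dr)
    term_p = (Fp[:, 1:] - Fp[:, :-1]) / (r_c * dphi)
    term_z = (Fz[:, :, 1:] - Fz[:, :, :-1]) / dz
    return term_r + term_p + term_z
```

A quick consistency check: a radial flux falling off as $1/r$ has $\partial(rF_r)/\partial r = 0$, so the radial term vanishes identically in this discretization.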
Because the radiative time scale can be much shorter than the hydrodynamic time scale, we employ a radiative cooling (heating) limiter with magnitude
\begin{equation}\nabla\cdot{\bf F}_{\rm limiter} = \frac{\epsilon}{ 0.1\ {\rm orp}},\end{equation}
where 1 orp $\approx 253$ yr is the initial outer rotation period of the disk near 33 AU. In a similar fashion, we define some maximum heating due to artificial bulk viscosity
\begin{equation}\Gamma_{\rm limiter} = \frac{\epsilon}{ 0.1\ {\rm orp}}\end{equation}
because numerical instabilities can arise without this and/or the time step can drastically decrease due to extremely high temperatures in typically uninteresting parts of the calculation. We monitor the number of cells affected by these limiters during the calculation, and during the asymptotic phase, less than a few percent of the relevant AV heated cells are limited and less than a percent of the relevant radiatively cooling cells are limited.
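In sketch form, both limiters of equations (12) and (13) amount to clamping the per-cell rate at $\epsilon$ divided by the limiting time (0.1 orp in the text); the function below is an illustrative rendering, not the production code.

```python
import numpy as np

def limit_rate(rate, eps, t_lim):
    """Clamp a heating/cooling rate so that no cell can gain or lose
    more than its internal energy density eps over the limiting time
    t_lim (0.1 orp in the text); cf. equations (12) and (13)."""
    cap = eps / t_lim
    return np.clip(rate, -cap, cap)
```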
Finally, we note that we only use Rosseland optical depths, with the opacity evaluated at the cell's local temperature. This is a step backwards from the M2004 scheme, which employs Planck means for regions where the Rosseland $\tau\lesssim 2/3$. Simply switching opacities for different regions of the disk can lead to erroneous physics when tracing rays in our scheme, e.g., changing the location of the photosphere of the disk. Regardless, as demonstrated in \S 4, the BDNL scheme performs better overall. A method that can smoothly transition between the two mean opacities, such as some weighted average based on the midplane Rosseland optical depth, would improve treatment of the opacities by the BDNL scheme, but has not been attempted here.
\section{RADIATIVE TESTS}
An analytic solution to a relevant test problem must be found in order to evaluate the accuracy of radiative transport algorithms in disks. In this section, we propose a toy problem based on Hubeny (1990) that can be used to test the accuracy of a radiative routine.
Consider a plane-parallel slab with constant vertical gravitational acceleration $g$ but with a midplane about which reflection symmetry is assumed. This situation is not meant to be realistic, but realism is not required for the following tests. Assume there is some heating mechanism that produces a known distribution of astrophysical flux once the system reaches hydrostatic and radiative equilibrium; in equilibrium, energy transport is only vertical. Make the ansatz that the vertical astrophysical flux has the form
\begin{equation} F_z \left( \tau \right) = F_0\left( 1-\frac{\tau}{\tau_m}\right), \label{eq1}\end{equation}
where $F_0=\sigma T_e^4/\pi$ is the effective astrophysical flux from the atmosphere with effective temperature $T_e$, $\tau$ is the Rosseland mean optical depth measured vertically downward, and $\tau_m$ is the optical depth at the midplane. This function ensures that the flux goes to zero at the midplane and that $F_z=F_0$ at $\tau=0$. The heating term required to achieve this flux distribution is then
\begin{equation} \Gamma = -\pi \frac{\partial F\left(\tau\right)}{\partial z}
=\pi F_0 \frac{\rho\kappa}{\tau_m}, \label{eq2}\end{equation}
where $\rho$ is the density at the point of interest and $\kappa$ is the Rosseland mean mass absorption coefficient.
If $\tau_m \gtrsim 10$, the temperature structure may be derived from the flux by using the standard Eddington approximation, which relates the mean intensity $J$ to the astrophysical flux by
\begin{equation}\frac{4}{3}\frac{d J\left(\tau\right)}{d\tau} = F_z\left(\tau\right). \label{eq3}\end{equation}
Integrating equation (16) yields
\begin{equation} T^4=\frac{3}{4}T_e^4\left(\tau\left[1-\frac{\tau}{2\tau_m}\right] + q\right).\label{eq4}
\end{equation}
The constant $q$ can be determined by considering the low optical depth limit. In that limit, the atmosphere reduces to a standard stellar atmosphere; therefore $q=1/\sqrt3$ (Mihalas \& Weibel-Mihalas 1986).
In the limit that $\tau_m \lesssim 0.5$, the atmosphere approaches an isothermal structure. Because the source function becomes constant, the observed flux can be found from $F=2\pi \mu I_+$, which gives us the temperature of the isothermal atmosphere:
\begin{equation} T^4=\frac{T_e^4}{2\mu \left(1-\exp\left[-2\tau_m/\mu\right]\right)}\rm ,\end{equation}
where the factor of 2 in the exponential accounts for both sides of the atmosphere, and we explicitly include $\mu$ because we are using the vertically integrated $\tau$.
An additional assumption about the form of the opacity law permits analytic evaluation of the hydrostatic structure of the disk and allows us to control whether the atmosphere will be convective. For example, we assume that $\kappa$ is constant throughout the disk for two of the tests presented below. This assumption should make the disk convectively stable (Lin \& Papaloizou 1980; Ruden \& Pollack 1991). It also makes the relation between pressure and optical depth simple:
\begin{equation}p=\frac{g}{\kappa}\tau. \label{eq5}\end{equation}
Moreover, the midplane pressure is related to the full disk surface density $\Sigma$ by
\begin{equation}p_m=g\frac{\Sigma}{2}.\end{equation}
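The analytic relations of equations (17)--(20) are straightforward to evaluate directly; the following sketch (function names are ours; constant $g$ and $\kappa$ assumed) is useful when comparing numerical relaxation results against the analytic profiles.

```python
import numpy as np

MU = 1.0 / np.sqrt(3.0)

def T_thick(tau, tau_m, Te, q=1.0/np.sqrt(3.0)):
    """Equation (17): Eddington-based T(tau), valid for tau_m >~ 10."""
    return Te * (0.75 * (tau * (1.0 - tau / (2.0 * tau_m)) + q))**0.25

def T_thin(tau_m, Te, mu=MU):
    """Equation (18): isothermal atmosphere temperature for tau_m <~ 0.5."""
    return Te * (2.0 * mu * (1.0 - np.exp(-2.0 * tau_m / mu)))**(-0.25)

def pressure(tau, g, kappa):
    """Equation (19): p(tau) for constant opacity."""
    return g * tau / kappa

def midplane_pressure(g, Sigma):
    """Equation (20): midplane pressure from the full surface density."""
    return 0.5 * g * Sigma
```

Two qualitative features of the tests fall out immediately: the optically thick temperature rises monotonically toward the midplane, and a lower $\tau_m$ yields a hotter isothermal atmosphere because cooling is less efficient.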
\subsection{Relaxation Test}
For the relaxation test, we heat the gas slab in accordance with equation (15) to test whether the radiative routine in question allows it to relax to the correct hydrostatic and radiative equilibrium configuration. By selecting different midplane optical depths, the effects of resolution on the routine can be tested.
Here, we test the radiative routines used by M2004, C2006, and BDNL. The initial condition for the atmosphere is a constant density structure with a $\tau_m = 0.05$, 0.5, 5, 10, and 100. The target effective temperature of the atmosphere, which controls the magnitude of the heating term, is set to $T_e=100$ K, and $\kappa=\kappa_0=\rm~constant$. For the discussion that follows, we take the high optical depth regime to indicate where $\tau\ge 1/\sqrt 3$ and the low optical depth regime to indicate where $\tau < 1/\sqrt 3$. Note that this boundary is slightly different from the M2004 and C2006 schemes; these schemes use $\tau = 2/3 $ to set the boundary between the low and high optical depth regimes, as is done in the standard Eddington solution.
Figure 1 compares the M2004, C2006, and BDNL solutions to the analytic temperature profile and to the flux profile for $\tau_m=100$. The BDNL routine matches the analytic curves very well. The M2004 routine does well in matching the temperature curve for most of the high optical depth regime, which is resolved by 12 cells, but the low optical depth regime has a sudden temperature drop. As reported by Paper I, this temperature drop is an artifact of the lack of complete cell-to-cell coupling in the optically thin region. Despite the temperature drop, the boundary condition between the two regimes typically yields the correct boundary flux, i.e., the flux leaving the high optical depth regime. An unfortunate result of this boundary condition is that it produces oscillations in the flux profile and in the temperature profile when the low/high optical depth boundary is between two cells or when the entire disk height is only resolved by a few vertical cells. These numerical oscillations produce artificial heating, and we believe that this behavior contributes to keeping the inner 7 AU of the disk presented in Paper I hot. Although this is problematic, even when oscillations occur, the time-averaged cooling time is within 10\% of the expected value for the test shown in Figure 1. The C2006 improvement lessens this problem.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f1a.eps}\includegraphics[width=7cm]{f1b.eps}
\includegraphics[width=7cm] {f1c.eps}\includegraphics[width=7cm]{f1d.eps}
\caption{Results of the relaxation test for the M2004, C2006, and BDNL schemes with $\tau_m=100$. Top: The left panel shows the relaxed temperature profile for the M2004 and C2006 routines, while the right panel shows the relaxed temperature profile for the BDNL routine. In both panels, the analytic curve is represented by the dashed curve. The first sudden drop in the M2004 and C2006 schemes is due to the lack of complete cell-to-cell coupling required by radiative transfer. The second drop, which is also in the BDNL profile, is where the density drops to background and where we stop following radiative physics. Bottom: Similar to the top panels but for the flux profile. The undulations in M2004's flux profile are believed to be due to the low/high optical depth boundary lying near a cell face intersection. The C2006 modification helps avoid this problem. Finally, the sudden drop in the M2004 and C2006 flux profile occurs because these schemes only explicitly track the flux through the optically thick disk. The right panels are the same as Figure 1 in Cai et al.~(2007, in prep.).}
\label{fig1}
\end{figure}
Figure 2 compares the results of the M2004 and BDNL routines with each other for $\tau_m=10$. The high optical depth regime is now resolved by only six cells. Both methods compute the correct flux through the slab. The temperature profiles are both skewed more than in the $\tau_m=100$ case mainly because the solution deviates from the Eddington approximation, which is used to derive the analytic temperature profile, as $\tau_m$ becomes small. Figure 3 shows the same comparison as in Figures 1 and 2 but for $\tau_m=5$. The high optical depth regime is now resolved by only four cells. Again, both methods allow for the correct flux through the slab, and the BDNL temperature distribution is close to the analytic value, with slight departures again due mainly to the inaccuracy of the analytic curve.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f2a.eps}\includegraphics[width=7cm]{f2b.eps}
\includegraphics[width=7cm] {f2c.eps}\includegraphics[width=7cm]{f2d.eps}
\caption{Same as Figure 1, but with $\tau_m=10$ and only for the M2004 and BDNL schemes. The undulations in the M2004 flux profile are no longer present. The slight departure of the BDNL solution from the analytic temperature curve is mainly due to the breakdown in the Eddington approximation, which is assumed in the analytic temperature profile, as $\tau_m$ becomes small. }
\label{fig2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f3a.eps}\includegraphics[width=7cm]{f3b.eps}
\includegraphics[width=7cm] {f3c.eps}\includegraphics[width=7cm]{f3d.eps}
\caption{The same as Figure 2, but with $\tau_m=5$. Once again, the departure of the BDNL temperature solution is mainly due to the breakdown of the approximations used to calculate the analytic curve.}
\label{fig3}
\end{figure}
To demonstrate the accuracy of each algorithm in the low optical depth limit, we show the temperature profiles in Figure 4 for the M2004 routine and the BDNL routine for $\tau_m=0.05$ and 0.5, and we compare each curve with the temperature estimate calculated from equation (18). The M2004 routine yields temperature profiles that are colder than the expected temperature profiles, and the departure is more severe as $\tau_m$ increases. This is a result of the lack of complete cell-to-cell coupling in the atmosphere. The BDNL routine is in excellent agreement with the predicted temperature. The departure from the analytic estimate observed in the $\tau_m=0.5$ case is a result of the inaccurate assumption that the source function is constant for $\tau\sim1$.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm] {f4.eps} \caption{Temperature curves for the low optical depth limit. The red, dashed curve indicates estimated isothermal temperatures from equation (18), the solid, dark curve indicates the results of the BDNL routine, and the dashed, dark curve indicates the results of the M2004 routine. The curves at the lower temperature correspond to $\tau_m=0.5$, and the curves at the higher temperature correspond to $\tau_m=0.05$. The lower optical depth corresponds to the higher temperature because cooling is less efficient. The M2004 routine yields temperatures that are too cold, because it always uses the free-streaming approximation when $\tau<2/3$. However, the M2004 routine converges to the correct solution as $\tau_m\rightarrow 0$. The BDNL routine is consistent with the estimated temperature for the $\tau_m=0.05$ case, and it is roughly consistent with the $\tau_m=0.5$ case. The inconsistency seen in the $\tau_m=0.5$ case is a result of the small $\tau_m$ assumption, which is used to derive equation (18).}
\label{fig4}
\end{figure}
\subsection{Contraction Test}
In order to study how accurately the radiative algorithms work with the hydrodynamics routines and to study the effects of resolution, we allow the atmosphere to cool and follow the contraction. If one assumes a constant opacity law and a large $\tau_m$, then the contraction becomes homologous, and a relationship between the midplane temperature and the cooling time is easily attainable. Consider the cooling time
\begin{equation}t_{\rm cool}=\frac{U}{\sigma T_e^4}\sim \frac{p_m H \tau_m}{T_m^4},\end{equation}
where $H=\Sigma/\rho_m$ is the scale height of the atmosphere and $U$ is the internal energy per unit area. If one assumes an ideal gas law, then one expects that
\begin{equation}t_{\rm cool} \sim \frac{1}{T_m^3}.\end{equation}
Finally, because $U\sim p_m H\sim T_m$,
\begin{equation}t_{\rm cool}\sim \frac{1}{U^3}.\end{equation}
For this test, we take the relaxed atmospheres shown in Figure 1 with $\tau_m=100$ and turn the heating term off. The contraction is followed until the scheme breaks down. Figure 5 indicates that both cases follow the expected contraction closely until the optically thick atmosphere is resolved by five (BDNL) or six (M2004) cells.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f5.eps}
\caption{Contracting slabs as shown in the $t_{\rm cool}$ - $U^{-3}$ plane for the same cases as shown in Figure 1; all curves should be linear if the slabs contract as expected. Both schemes break down when the high optical depth regime is contained within 5 cells ($U^{-3}\approx0.75$), but the M2004 routine (light curve) starts to deviate from the expected solution once the slab is resolved by 6 cells ($U^{-3}\approx0.43$). The sudden decreases in cooling times for the M2004 routine are where the optically thin/thick boundary transitions into another cell. The line is a by-eye fit to show the expected behavior. }
\label{fig5}
\end{figure}
\subsection{Convection Test}
The last test we describe demonstrates whether the radiative scheme permits convection when it should occur. Lin \& Papaloizou (1980) and Ruden \& Pollack (1991) show that convection is expected in a disk-like atmosphere when the Rosseland mean optical depths are large and when $\beta > \beta_{\rm crit}$ for $\kappa\sim T^{\beta}$. For a $\gamma=5/3$ gas, the critical exponent $\beta_{\rm crit} \approx 1.5$, above which the vertical entropy gradient is driven negative. So we present two cases in which nearly identical atmospheres are allowed to relax to equilibrium. In one case, $\beta=1$, which should make the atmosphere convectively stable; the other has $\beta=2$, which should make it unstable. As shown in the top panels of Figure 6, we find that the BDNL routine produces convection when it should and does not when it should not. Likewise, the bottom panels of Figure 6 demonstrate that the M2004 routine also permits or inhibits convection correctly. However, M2004 does seed an artificial superadiabatic gradient at the boundary between the optically thin and thick regions. Nevertheless, convection does not occur for $\beta=1$, and the superadiabatic gradients are an order of magnitude smaller for $\beta=1$ than for $\beta=2$ at that boundary.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f6a.eps}\includegraphics[width=7cm]{f6b.eps}
\includegraphics[width=7cm]{f6c.eps}\includegraphics[width=7cm]{f6d.eps}
\caption{Convection test. The heavy black contour indicates the same density contour for each panel. Arrows show relative velocities in the $r$ and $z$ directions. The blue contours indicate superadiabatic regions. The motions in the left panels are a few orders of magnitude smaller than the motions in the right panels. Top: BDNL. The left panel shows the case $\beta=1$; convection and superadiabatic gradients are absent, and the velocities represent low level noise. The right panel shows the case $\beta=2$; convective cells and superadiabatic gradients are present. Bottom: Same as the top panel but for the M2004 scheme. The superadiabatic regions near the top density contour are due to the artificial, sudden drop in temperature at the optically thin/thick interface. The superadiabatic gradients in the left panel are about an order of magnitude smaller than those in the right panel. }
\label{fig6}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{f7.eps}
\caption{Average flux through the atmosphere with $\beta=2$ and with the BDNL scheme. Convection carries about 30\% of the flux at maximum, but almost all the flux is radiative in the photospheric region near cell 15. }
\label{fig7}
\end{figure}
Finally, we measure the flux that is carried by convection in the $\beta=2$ case for the BDNL scheme. As indicated in Figure 7, convection carries about 30\% of the flux. Figure 7 also acts as a reminder that the energy must ultimately be radiated away. Paper I and Rafikov (2006) argue that because the convective flux is controlled by the radiative flux leaving the photosphere of the disk, convection should not be expected to lead to rapid cooling and fragmentation in protoplanetary disks, as claimed by Boss (2004) and Mayer et al.~(2007).
\section{SIMULATION}
In order to verify the results of Paper I and to test the dependence of disk evolution on the details of the treatment of radiative physics, we restarted the Paper I simulation at 6.6 orp\footnote{For these simulations, one outer rotation period (orp) corresponds to about 253 yr.}, but we evolved the simulation with the BDNL radiative algorithm. This time corresponds to just after the {\it burst phase}, which is characterized by a rapid and violent onset of global nonaxisymmetry in a gravitationally unstable disk. We chose to restart the simulation at this time because, for this experiment, we are primarily interested in the behavior of the disk during the {\it asymptotic phase}, i.e., the phase of evolution during which heating roughly balances radiative cooling. Moreover, beginning the simulation just after the burst is a compromise between lowering the computational cost of the simulation and avoiding possible transients in the asymptotic phase that would be caused by abruptly changing the cooling algorithm.
The initial model of Paper I is the same as that used by Mej\'ia et al.~(2005), and we refer the reader to Mej\'ia et al.~for a full description. The model is a 40 AU radius, 0.07 $M_{\odot}$ disk surrounding a $0.5$ $M_{\odot}$ star. The initial disk has an $r^{-1/2}$ surface density profile and an $r^{-1}$ temperature profile, but these are significantly altered during the burst. We use a constant ratio of specific heats $\gamma=5/3$ and the opacity tables of D'Alessio et al.~(2001), with a maximum grain size of 1 $\mu$m. To control the numerical experiment, we use the same resolution that was used for the Paper I simulation, but the vertical direction is expanded to account for the more extended atmosphere the BDNL solution produces; $(r,~\phi ,~z)=(512,~128,~64)$ cells. In addition, the same erroneous mean molecular weight table, which typically yields $\mu=2.7$, was used (see Paper I for details).
\subsection{Comparison between Disk Structures}
Although not extreme, qualitative differences between the simulations are discernible (Fig.~8). The Paper I simulation has more pronounced spiral structure than the BDNL simulation throughout most of the disk. Also, the BDNL simulation is more extended in the vertical direction because the disk's atmosphere is hotter, as expected from the tests. In order to quantify the structural differences, we compute the global Fourier amplitude spectrum (Imamura et al.~2000) for each simulation, where the sum over the amplitudes is a measure of the nonaxisymmetric structure in the disk and the spectrum is indicative of the dominant modes in the disk. We compute the time-averaged Fourier component $\left<A_m\right>$ for $m$-arm structure by
\begin{equation}
A_m = \frac{\int\rho_m rdrdz}{\int \rho_0 rdrdz},
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f8.eps}
\caption{Midplane and meridional logarithmic density images for the Paper I simulation (top) and for the BDNL simulation (bottom). The BDNL simulation begins from the Paper I simulation at 6.6 orp. Although the differences are not extreme, they are readily noticeable. The BDNL simulation shows a more washed out structure overall. It is also more extended vertically than the Paper I simulation. Each axis is in AU, and the logarithmic grayscale indicates density in code units. The number in the upper right of each image is the time in orp.}
\label{fig8}
\end{figure}
\noindent where $\rho_0$ is the axisymmetric density component and $\rho_m$ is the total Fourier amplitude of the $\cos(m\phi)$ and $\sin(m\phi)$ density component. The time-average is calculated by finding $A_m$ for a large number of snapshots over the last two orps. The summed global Fourier amplitude $\left<A_+\right>=\sum_{m=2}^{63}\left<A_m\right>=1.4$ for the Paper I disk, while $\left<A_+\right>=1.1$ for the BDNL disk, which indicates that the Paper I disk is more nonaxisymmetric. We exclude $m=1$ from the summation because we keep the star fixed. The difference between the sums is also depicted by the Fourier spectrum (Fig.~9). The Paper I disk has larger amplitudes everywhere except for $m=2$, which is consistent with the qualitative differences portrayed in Figure 8. Including long-range transport has resulted in weaker GIs. This is also consistent with the results of Cai et al.~(2007, in preparation), who find that when mild envelope irradiation is included, GIs are weakened as a whole, but the $m=2$ mode remains strong. As described in Paper I, a curve with the functional form $A_m\sim\left(m^2+m_0^2\right)^{-n}$ can be fit to the data, where $n\approx1.6$. The BDNL disk is also consistent with this functional form, and both disks are roughly consistent with by-eye fits of $n\approx 1.5$. The similar slopes at large $m$ may be indicative of gravitoturbulence (Gammie 2001). However, whether this slope is caused by nonlinear mode coupling or a turbulent cascade is a topic for future discussion.
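Equation (24) can be evaluated from gridded density with an azimuthal FFT; the following sketch is our own construction (not the analysis code used here) and assumes a uniform $(r,\phi,z)$ grid with the $m$-th amplitude taken as the combined $\cos(m\phi)+\sin(m\phi)$ Fourier amplitude.

```python
import numpy as np

def fourier_amplitudes(rho, r, dr, dz, m_max=63):
    """A_m of equation (24): m-th azimuthal amplitude over the
    axisymmetric part, volume-weighted by r dr dz.

    rho has shape (nr, nphi, nz) on a uniform grid.
    """
    nphi = rho.shape[1]
    rho_hat = np.fft.rfft(rho, axis=1) / nphi
    weight = r[:, None] * dr * dz                       # r dr dz element
    denom = np.sum(np.abs(rho_hat[:, 0, :]) * weight)   # axisymmetric rho_0
    A = {}
    for m in range(2, min(m_max, nphi // 2) + 1):
        amp = 2.0 * np.abs(rho_hat[:, m, :])            # cos + sin amplitude
        A[m] = np.sum(amp * weight) / denom
    return A
```

For a density field with a pure 10\% $m=2$ perturbation, this returns $A_2 \approx 0.1$ and negligible power in the other modes, as expected.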
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f9.eps}
\caption{Fourier amplitude spectrum for the Paper I (black) disk and the BDNL (red [gray]) disk, time averaged over the last two orp of each simulation. The bars represent typical fluctuations over the two-orp period. The Paper I disk has larger amplitudes everywhere except for $m=2$, which is consistent with the Paper I disk being more nonaxisymmetric. Both spectra can be fit with a functional form of $A_m\sim(m^2+m_0^2)^{-n}$, where $n\approx 1.5$. This behavior at large $m$ may be indicative of gravitoturbulence (Gammie 2001).}
\label{fig9}
\end{figure}
The surface density profiles (not shown) for the end of each simulation are comparable and follow a Gaussian profile. A ring forms in both simulations near 7 AU. However, this ring appears to be caused by poor vertical resolution. For $r\ge10$ AU, the disk is resolved well vertically, and the radial mass concentrations that form due to multiple, tightly wrapped spiral arms are reliable. These mass concentrations can work to concentrate solids (Haghighipour \& Boss 2003; Rice et al.~2004), which may act to accelerate the core accretion plus gas capture mechanism of gas giant planet formation (Durisen et al.~2005).
\subsection{Comparison between Disk Energetics}
Figure 10 shows the evolution of the internal energy for each disk. There is a precipitous drop in energy over about an orp after switching the radiative transport schemes. However, the two curves roughly track each other after the drop, with the BDNL profile having a shallower slope than the Paper I profile.
The effective temperature profiles (Fig.~11) are also similar. Each profile can be fit by an exponential, and as discussed in Paper I, we believe that the deviation from the observationally expected $r^{-0.5}$ profile probably results from our exclusion of stellar irradiation in these simulations.
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f10.eps}
\caption{Internal energy normalized to the initial value for BDNL (heavy curve) and Paper I (light curve). The precipitous drop is a result of suddenly switching radiative schemes. Between about 7 and 10.5 orp the curves approximately track each other.}
\label{fig10}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f11.eps}
\caption{Effective temperature profiles for BDNL (heavy curve, time-averaged over about the last 5 orp) and Paper I (light curve, time-averaged over the last 6 orp). Both follow an exponential profile, and are reasonably consistent. Their departure from the observed $r^{-1/2}$ effective temperature profiles is likely due to our exclusion of stellar irradiation.}
\label{fig11}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f12.eps}
\caption{Cooling time curves scaled by the local angular speed for BDNL (heavy curve) and Paper I (light curve) for the time-averaged periods over about the last 5 and 6 orp, respectively. Both curves are relatively consistent for $r\gtrsim 35$ AU, but they depart at inner radii. This is likely due to a combination of the free-streaming approximation, employed by the Paper I simulation, in regions where long-range coupling matters and the different opacities used by the two routines.}
\label{fig12}
\end{figure}
For each simulation, we calculate the time-averaged cooling time $t_{cool} = \int \epsilon\, dV/\int \nabla \cdot F\, dV$
for each annulus on the grid, where $\epsilon$ is the internal energy density of the gas and $\nabla \cdot F $ is the total radiative cooling. The temporal average is taken over about the last 6 orp of evolution for Paper I and the last 5 orp for BDNL. In Figure 12, we compare $t_{cool}\Omega$ curves for each disk, where $\Omega$ is the angular speed of the gas. The cooling time is much longer for $r \lesssim 35$ AU in the BDNL disk than it is for the Paper I disk. However, the curves converge outside that radius. The longer cooling times are consistent with the washed out structure in the BDNL simulation. For both disks, the cooling times are well above the fragmentation criterion $t_{cool}\Omega \lesssim 6 $ for a $\gamma=5/3$ gas (Gammie 2001; Rice et al.~2005), so we expect neither disk to fragment. Regardless, we ran the BDNL simulation at 512 azimuthal divisions between 10 and 11 orp to test for fragmentation and found none, as one expects from the long cooling times.
\subsection{Comparison between Angular Momentum Transport}
In this section, we compare the angular momentum transport in each disk by analyzing the gravitational torque on the inner disk due to the outer disk and by measuring the effective Shakura \& Sunyaev (1973) $\alpha$. As discussed in Paper I, the gravitational torque is calculated by
\begin{equation} C = -\int \rho {\bf x} \times {\bf \nabla}\Phi~dV, \end{equation}
where $\Phi$ is the gravitational potential, ${\bf x}$ is the position vector, and the integral is over the entire volume. For our analysis, we are concerned with the vertical component $C_z$. The time-averaged torque, averaged over the last 6 orp for Paper I and about the last 5 orp for BDNL, is shown for each simulation in Figure 13. The solid curves represent the torque profiles, with the heavy curve indicating the BDNL disk. The dashed curves show the mass flux for each disk with arbitrary but consistent scaling; the peak mass flux $\dot{M}=$ few$\times10^{-7}~M_{\odot}~\rm yr^{-1}$ for each disk. The torques are of the same magnitude, but the torque profiles are noticeably different. Based on the $\left<A_m\right>$ plots and the visual differences in disk structures, the Paper I disk has a more complex morphology and stronger modes. The multiple peaks in the Paper I torque profile are another indication of this complex morphology and competing global, dominant modes. In contrast, the BDNL disk has a more washed out structure, and so its torque profile has only a single maximum.
Both disks have complicated mass flux profiles. The principal inflow/outflow boundary in the Paper I simulation is at about $r\approx 26$ AU. The BDNL disk, by contrast, has two main inflow/outflow boundaries. The $r\approx15$ AU boundary corresponds to the peak torque in the BDNL disk, and we refer to this as the principal inflow/outflow boundary because the mass fluxes are the highest near it. Roughly, the peak in each torque profile aligns with the principal inflow/outflow boundary. The agreement is imprecise because the mass flux average is based on differencing mass cylinders at different times, which yields a time average based on the second-order mass flux integration. The torques are derived in post-analysis calculations, and so the temporal sampling is much sparser. Moreover, the mass fluxes are highly variable with time, and averages over slightly different time periods can result in different mass flux profiles. However, major inflow/outflow transitions are usually near torque profile maxima. The mass fluxes for $r\gtrsim 40$ AU are complicated by pulsations that begin just before the disk bursts and continue throughout the evolution.
The gravitational torque can also be used to derive an effective $\alpha$.
We use this torque to calculate the vertically integrated gravitational stress $T$, where $T=C_z/2\pi r^2$ (Paper I). This stress can be used to calculate an effective $\alpha$ (Gammie 2001)
\begin{equation} \alpha = \Bigl\lvert\frac{d\ln \Omega}{d\ln r}\Bigr\rvert^{-1}\frac{T}{\left< c^2 \Sigma \right>},\end{equation}
where the brackets indicate an azimuthally averaged quantity, $c$ is the midplane sound speed, and $\Sigma$ is the surface density. We use the adiabatic sound speed for consistency with Paper I even though the isothermal sound speed may be more appropriate (e.g., Balbus \& Papaloizou 1999; Gammie 2001). The comparison between the effective $\alpha$ for BDNL and Paper I is shown in Figure 14. The profiles are of similar magnitude everywhere, but BDNL is significantly lower between about 20 and 36 AU.
We also show in Figure 14 the $\alpha$ one would expect for an $\alpha$ disk (see Gammie 2001)
\begin{equation}
\alpha = \left(\Bigl\lvert\frac{d\ln\Omega}{d\ln r}\Bigr\rvert^2 \gamma'\left[\gamma'-1\right]t_{cool}\Omega\right)^{-1},
\end{equation}
where $\gamma'$ is the two-dimensional adiabatic index. For $\gamma=5/3$, $\gamma'\approx 1.8$ in a strongly self-gravitating disk and 1.5 in a non-self-gravitating disk (Gammie 2001). We use the $t_{cool}\Omega$ profiles (Fig.~12) in equation (27) to plot the anticipated $\alpha$ for a local model in the non-self-gravitating limit. For most radii, both disks are roughly consistent with this $\alpha$ prescription, and $\alpha$ is roughly constant between 20 and 35 AU. The main difference between the two simulations is the lower $\alpha$ in the BDNL simulation, which is consistent with the longer cooling times. These different cooling times are probably due to a combination of the different degrees of cell-to-cell coupling and the use of different opacities.
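As a quick numerical illustration (not part of the simulation code), the $\alpha$ prescription of equation (27) can be evaluated for a given dimensionless cooling time $t_{cool}\Omega$; the Keplerian shear value $|d\ln\Omega/d\ln r| = 3/2$, the default $\gamma'=1.5$, and the function name are our assumptions.

```python
def predicted_alpha(t_cool_omega, gamma_2d=1.5, shear=1.5):
    """Predicted alpha for a local alpha-disk model (cf. equation 27):
    alpha = (|dlnOmega/dlnr|^2 * gamma' * (gamma' - 1) * t_cool*Omega)^-1.
    Assumes Keplerian rotation (shear = 3/2) and the non-self-gravitating
    two-dimensional adiabatic index gamma' = 1.5."""
    return 1.0 / (shear ** 2 * gamma_2d * (gamma_2d - 1.0) * t_cool_omega)

# longer cooling times imply smaller effective alpha
alphas = [predicted_alpha(t) for t in (5.0, 10.0, 50.0)]
```

For $t_{cool}\Omega\sim10$, this gives $\alpha\sim0.06$, of the order of the values in Figure 14.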
Although the overall evolutions are in rough agreement, they demonstrate sensitivity to the details of radiative transfer. By including the long-range effects of radiative transfer and by using different opacities for different optical depth regimes, the BDNL disk shows less structure, is more flared, and has a lower effective $\alpha$ that deviates slightly more from equation (27). These differences demonstrate the need for a radiative algorithm that includes the long-range effects of radiative transfer in all three dimensions, effects that are missed by diffusion approximations and that our schemes miss in the $r$ and $\phi$ directions. To illustrate the importance of radiative transport in all three directions, we show in Figure 15 the effective $\alpha$ profile for a disk that was evolved with the BDNL radiative routine, but with the radial and azimuthal diffusive radiative transport turned off. According to this plot, without any $r$ and $\phi$ transport, we would underestimate the cooling times in the optically thick regime and surmise that the disk deviates strongly from the predicted effective $\alpha$.
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f13.eps}
\caption{Gravitational torque (solid curves) and mass flux (dashed curves) profiles for BDNL (heavy curves), time-averaged over the last 5 orp, and Paper I (light curves), time-averaged over the last 6 orp. BDNL's torque profile shows one strong peak and several minor peaks, while the Paper I torque profile has two very strong peaks. The mass fluxes for each disk (arbitrary but consistent scaling) are consistent in magnitude.}
\label{fig13}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f14.eps}
\caption{Effective $\alpha$ profiles for BDNL (heavy curves) and Paper I (light curves). The solid curves indicate the effective $\alpha$ derived from the torque profiles, and the dashed curves indicate the predicted $\alpha$ based on an $\alpha$ disk prescription, for which the predicted $\alpha$ is derived from the $t_{cool} \Omega$ profiles (Gammie 2001) in Figure 12 with the assumption of negligible self-gravity (see text). Both disks roughly follow the predicted $\alpha$ over a large range of radii.}
\label{fig14}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f15.eps}
\caption{Same as Figure 14, but the BDNL simulation has only vertical radiative transport. }
\label{fig15}
\end{figure}
\section{CONVECTION}
Boss (2001) and Mayer et al.~(2007) report very short cooling times in their disks, and they attribute these fast cooling times to convection (Boss 2004). This is contrary, however, to what Rafikov (2006) predicts analytically and to what is reported in Paper I, where convection does not lead to cooling times short enough for fragmentation. Paper I and Rafikov (2006) argue that convection should not lead to fast cooling times in protoplanetary disks because the energy transported by convection must ultimately be radiated away near the photosphere of the disk. The BDNL disk is stable to convection in the high-optical depth regions, and the superadiabatic regions of the Paper I disk\footnote{Although Paper I found some superadiabatic regions in their disk during the asymptotic phase, convection was not observed because the spiral waves dominated the dynamics and the superadiabatic regions were in regions where $\tau\sim1$.} are mostly related to the artificial temperature drop at the photosphere that is characteristic of the M2004 scheme. In order to make relevant comments regarding convection in this paper, we briefly discuss a disk model with parameters tuned to induce convection.
The model that we explore is a 10 AU radius, optically thick, approximately 0.1 M$_{\odot}$ disk around a 1 M$_{\odot}$ star. We set $\gamma=1.4$ and adopt the opacity law $\kappa = (T/150~{\rm K})^3$. This opacity power law, along with the chosen $\gamma$, ensures that the disk should be convective (see \S 4.3) as long as no other dynamics are present. The model is moderately stable to GIs, with $Q\approx 1.8$ for most radii.
Figure 16 shows that convection appears to be very active in this model. There are large pockets of negative entropy gradients at mid-disk altitudes. Crude measurements of energy transport by vertical
gas motions $F_c=\rho c_p v_z \Delta T$ indicate that more than half of the energy can be carried by these motions at low to mid-disk altitudes. Here, $c_p$ is the specific heat at constant pressure, $v_z$ is the vertical gas velocity, and $\Delta T$ is the deviation of the temperature at the cell of interest from the mean temperature for a given annulus and height. Even though a large fraction of the energy in the low to middle regions of the disk can be transported by vertical gas motion, $t_{cool}\Omega\approx1000$ near 5 AU! The energy must ultimately be radiated away near the photosphere, and so the cooling times remain long because the cooling time at the photosphere regulates convection. This result is consistent with Rafikov's (2006) analytic predictions, with our numerical tests and simulations, and with the numerical simulations of Nelson et al.~(2000) and Nelson (2000). We also point out that Nelson et al.~(2000) assume a vertically isentropic density profile when calculating the cooling times for their 2D SPH calculations, which is similar to assuming efficient convection. We suspect that Boss (2004) and Mayer et al.~(2007) see fast cooling due to convection because of the different treatment of the photospheric layers in their simulations. As discussed by Nelson (2006) and in this paper, proper treatment of radiative physics, especially near the photosphere, is crucial for estimating proper cooling times. In fact, in order to lower the cooling times for the calculation in this section to those expected to lead to fragmentation, $t_{cool}\Omega\lesssim12$ for a $\gamma=7/5$ gas (Rice et al.~2005), the effective temperature would need to be approximately equal to the midplane temperature, which is about three times the actual effective temperature at $r=5$ AU.
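The crude convective-flux estimate $F_c=\rho c_p v_z \Delta T$ used above can be sketched as follows; the array layout (annulus/height $\times$ azimuth) and names are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def convective_flux(rho, cp, vz, T, axis=-1):
    """Crude convective flux F_c = rho * c_p * v_z * dT, where dT is the
    deviation of T from its mean over the chosen axis (e.g., azimuth,
    at fixed annulus and height)."""
    dT = T - T.mean(axis=axis, keepdims=True)
    return rho * cp * vz * dT

# hot rising and cool sinking parcels both carry energy upward
T = np.array([[310.0, 290.0]])   # one annulus/height, two azimuthal cells
vz = np.array([[1.0, -1.0]])
F = convective_flux(rho=1.0, cp=1.0, vz=vz, T=T)
```

Note that hot upflows and cool downflows both contribute a positive (upward) flux, as expected for convection.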
\begin{figure}[ht]
\centering
\includegraphics[width=6.5in]{f16.eps}
\caption{Convection-like motions in an optically thick protoplanetary disk model (see text in \S 6). The heavy curve roughly indicates the disk's photosphere, and the arrows, which are scaled to each axis and to the midplane density for each column, indicate the momentum density. Typical Mach numbers for the gas range between a few hundredths to a few tenths. Convective-like eddies are present throughout most of the disk, and in the 5 AU region, vertical motions can carry most of the flux to upper disk altitudes. However, cooling times remain long in this disk, because ultimately, the energy must be radiated away.}
\label{fig16}
\end{figure}
\section{SUMMARY}
To help evaluate the accuracy of radiative transport schemes in protoplanetary disk simulations, this paper presented a test suite that assesses an algorithm's accuracy (1) in matching analytic temperature and vertical flux profiles for a simple geometry with a dissipation-like heating source, (2) in following the expected contraction sequence for a slab undergoing quasi-static gravitational contraction, and (3) in permitting and inhibiting convection under the appropriate conditions. We used this suite to test the M2004 and the BDNL radiative transfer algorithms and presented the results. We recognize that even if an algorithm passes all of our tests, it does not guarantee that the algorithm will always be accurate. However, if an algorithm cannot pass our tests, simulations using that algorithm are probably untrustworthy.
The BDNL scheme is an improvement over the M2004 scheme because it includes cell-to-cell coupling in the vertical direction. This difference leads to the correct temperature structure for the disk, according to our tests, in both the optically thick and thin regimes, while the temperature structure for the M2004 scheme is too cool in the optically thin regime.
To investigate possible consequences for disk evolution when employing the BDNL radiative scheme and to verify the results of the Paper I simulation, we evolved the Paper I disk with the BDNL algorithm. Even though the two schemes have many similarities, the BDNL simulation shows less structure overall. This indicates that radiative schemes that employ pure flux-limited diffusion, which excludes long-distance coupling, will likely behave differently from ray-based schemes. This also indicates that there is much room for improving the BDNL scheme. A fully three-dimensional ray method (e.g., Heinemann et al.~2006) would be more realistic and should wash out the GIs even more than in the BDNL simulation. The mass transport in the disk is roughly consistent with the effective $\alpha$ predicted by Gammie (2001). Even so, we note that the picture of mass slowly diffusing through the disk is misleading. As indicated in Figure 9 and in \S 5.3, mass transport is dominated by global modes, with large fluctuations in the mass fluxes at any given radius.
Overall, we verify the basic results of Paper I. Cooling times are long, and the disks are stable against fragmentation. GIs are efficient at transporting angular momentum, with effective $\alpha\sim10^{-2}$. In addition, our simulations agree with analytic predictions that convection should not lead to rapid cooling and fragmentation.
Why do researchers disagree on key properties of disk evolution? Boley et al.~(2007) suggest that the treatment for the internal energy of H$_2$ may be a contributing factor. The results of the comparisons presented here strongly suggest that radiative physics is another likely cause. In fact, the sensitivity of disk evolution to radiative transfer details herein reported indicates that the radiative cooling algorithms may be the {\it primus inter pares} of causes.
\acknowledgments{We would like to thank A.~Boss, K.~Cai, C.~Gammie, L.~Mayer, S.~Michael, M.~Pickett, T.~Quinn, D.~Stamatellos, and A.~Whitworth for useful discussions and comments during the preparation of this manuscript. A.C.B.'s contribution was supported by a NASA Graduate Student Researchers Program fellowship. Contributions by R.H.D., \AA.N., and J.L.~ were supported by NASA grant NNG05GN11G, by the Danish Agency for Science, Technology and Innovation and the Danish Center for Scientific Computing, and by the National Science Foundation through grant AST-0452975 (astronomy REU program to Indiana University), respectively. This work was also supported in part by systems obtained by Indiana University by Shared University Research grants through IBM, Inc.~to Indiana University.}
\clearpage
\section{Introduction}
\subsection{Motivation from evolutionary game theory and random polynomial theory}
Large random systems, in particular random polynomials and systems of random polynomials, arise naturally in a variety of applications in physics (such as in quantum chaotic dynamics \cite{Bogomolny1992}), biology (such as in theoretical ecology \cite{May1972}, evolutionary game theory and population dynamics \cite{GT10}), computer science (such as in the theory of computational complexity \cite{Shub1993}) and in social sciences (such as in social/complex networks \cite{Newman2003}). They are indispensable in the modelling and analysis of complex systems in which very limited information is available or where the environment changes so rapidly and frequently that one cannot describe the payoffs of their inhabitants’ interactions \cite{may2001stability,fudenberg1992evolutionary,GT10,gross2009generalized,Galla2013}. The study of statistics of equilibria in large random systems provides important insight into the understanding of the underlying physical, biological and social system such as the complexity-stability relationship in ecosystems \cite{May1972,gross2009generalized,Pimm1984,FyoKho2016}, bio-diversity and maintenance of polymorphism in multi-player multi-strategy games \cite{GT10}, and the learning dynamics \cite{Galla2013}. A key challenge in such study is due to the large (but finite) size of the underlying system (such as the population in an ecological system, the number of players and strategies in an evolutionary game and the number of nodes and connections in a social network). Understanding the behaviour of the system at finite size or characterizing its asymptotic behaviour when the size tends to infinity are of both theoretical and practical interest, see for instance \cite{Pereda2019, PENA2018}.
In this paper we are interested in the number of internal equilibria in $(n+1)$-player two-strategy random evolutionary games as in \cite{DH15,DuongHanJMB2016,DuongTranHanDGA, DuongTranHanJMB}. We consider an infinitely large population that consists of individuals using two strategies, A and B. We denote by $y$, $0 \leq y\leq 1$, the frequency of strategy A in the population. The frequency of strategy B is thus $(1-y)$. The interaction of the individuals in the population is in randomly selected groups of $(n+1)$ participants, that is, they interact and obtain their fitness from $(n+1)$-player games. In this paper, we consider symmetric games where the payoffs do not depend on the ordering of the players. Suppose that $a_i$ (respectively, $b_i$) is the payoff that an A-strategist (respectively, a B-strategist) achieves when interacting with a group of $n$ other players consisting of $i$ ($0\leq i\leq n$) A-strategists and $(n-i)$ B-strategists. In other words, the payoff matrix is given by
\begin{equation*}
\begin{blockarray}{ccccccc}\hline
\text{Opposing A players} &0 & 1&\ldots & i & \ldots & n \\ \hline
\begin{block}{ccccccc}
\text{A} & a_0 & a_1 & \ldots & a_i&\ldots & a_n \\
\text{B} & b_0 & b_1 & \ldots & b_i &\ldots & b_n\\
\end{block}
\hline
\end{blockarray}
\end{equation*}
The average payoffs (fitnesses) of strategies A and B are respectively given by
\begin{equation*}
\pi_A= \sum\limits_{i=0}^{n}a_i\begin{pmatrix}
n\\
i
\end{pmatrix}y^i (1-y)^{n-i} \quad\text{and}\quad
\pi_B = \sum\limits_{i=0}^{n}b_i\begin{pmatrix}
n\\
i
\end{pmatrix}y^i(1-y)^{n-i}.
\end{equation*}
Internal equilibria in $(n+1)$-player two-strategy games can be derived using the replicator dynamics approach~\cite{GT10} or the definition of an evolutionarily stable strategy, see e.g., \cite{broom:1997aa}. They are those points $0<y<1$ (note that $y=0$ and $y=1$ are trivial equilibria of the replicator dynamics) such that the fitnesses of the two strategies are equal, $\pi_A=\pi_B$, that is
\begin{equation*}
\sum\limits_{i=0}^{n}\xi_i \begin{pmatrix}
n\\
i
\end{pmatrix}y^i (1-y)^{n-i}=0\quad\text{where}\quad \xi_i=a_i-b_i.
\end{equation*}
In the literature, the sequence of payoff differences $\{\xi_i\}_{i}$ is called the gain sequence \cite{Bach2006, Pena2014}. Dividing the above equation by $(1-y)^{n}$ and using the transformation $x=\frac{y}{1-y}$, we obtain the following polynomial equation for $x$ ($x>0$)
\begin{equation}
\label{eq: P}
P_n(x):=\sum\limits_{i=0}^{n}\xi_i\begin{pmatrix}
n\\
i
\end{pmatrix}x^i=0.
\end{equation}
In random games, the payoff entries $\{a_i\}_i$ and $\{b_i\}_i$ are random variables, and thus so is the gain sequence $\{\xi_i\}_i$. Therefore, the expected number of internal equilibria in an $(n+1)$-player two-strategy random game is the same as the expected number of positive roots of the random polynomial $P_n$, which is half of the expected number of real roots of $P_n$ due to the symmetry of the distributions. This connection between evolutionary game theory and random polynomial theory has been revealed and exploited in a recent series of papers \cite{DH15,DuongHanJMB2016,DuongTranHanDGA, DuongTranHanJMB}. It has been shown that, if $\{\xi_i\}_i$ are i.i.d. normal (Gaussian) random variables, then \cite{DH15,DuongHanJMB2016}
\begin{equation}
\label{eq: finite1}
\frac{2n}{\pi\sqrt{2n-3}}\leq \mathbb{E} N_n\leq \frac{2\sqrt{n}}{\pi}\Big(1+\ln 2+\frac{1}{2}\ln (n)\Big)\quad \forall n,
\end{equation}
where $N_n$ is the number of real roots of $P_n$. We emphasize that \eqref{eq: finite1} holds for all finite group sizes $n$, which is useful for practical purposes, for instance when doing simulations. A direct consequence of this estimate is the following asymptotic limit
\begin{equation}
\label{eq: limitn}
\lim\limits_{n\rightarrow\infty}\frac{\ln \mathbb{E} N_n}{\ln n}=\frac{1}{2}.
\end{equation}
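The bounds \eqref{eq: finite1} are easy to probe numerically. The following hedged sketch (sample size and seed are arbitrary choices, and the names are ours) Monte Carlo estimates $\mathbb{E} N_n$ by sampling Gaussian gain sequences and counting the real roots of $P_n$:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def expected_real_roots(n, trials=200):
    """Monte Carlo estimate of E[N_n] for P_n(x) = sum_i xi_i C(n,i) x^i,
    with xi_i i.i.d. standard normal."""
    w = np.array([comb(n, i) for i in range(n + 1)], dtype=float)
    total = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(n + 1) * w   # ascending powers
        total += np.sum(np.roots(coeffs[::-1]).imag == 0)
    return total / trials

est = expected_real_roots(20)
```

For $n=20$, \eqref{eq: finite1} gives $2.09\lesssim \mathbb{E}N_{20}\lesssim 9.08$, and the estimate falls within these bounds.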
On the other hand, the expected number of real roots of random polynomials has been a topic of intensive research over the last hundred years. The three most well-known classes studied in the literature are
\begin{enumerate}[(i)]
\item Kac polynomials: $\sum_{i=0}^n \xi_i x^i$,
\item Weyl (or flat) polynomials: $\sum_{i=0}^n \frac{1}{i!}\xi_i x^i$,
\item Elliptic (or binomial) polynomials: $\sum_{i=0}^n\sqrt{\begin{pmatrix}
n\\
i
\end{pmatrix}} \xi_i x^i$.
\end{enumerate}
When $\{\xi_i\}$ are Gaussian distributions, it has been proved, see for instance \cite{EK95}, that
\begin{equation}
\label{eq: others}
\mathbb{E} N_n=\begin{cases}
\frac{2}{\pi}\ln (n)+C_1+\frac{2}{n\pi}+O(1/n^2)\quad&\text{for Kac polynomials},\\
\sqrt{n}\quad&\text{for elliptic polynomials},\\
\sqrt{n}\Big(\frac{2}{\pi}+o(1)\Big)\quad&\text{for Weyl polynomials}.
\end{cases}
\end{equation}
We refer the reader to standard monographs \cite{BS86,farahmand1998} for a detailed account and \cite{NNV2016,Do2018} for recent developments of the topic. The asymptotic formulas \eqref{eq: others} are much stronger than the limit \eqref{eq: limitn} because they provide precisely the leading order of the quantity $\mathbb{E}N_n$. A natural question arises: \textit{can one obtain an asymptotic formula akin to \eqref{eq: others} for the random polynomial from random multi-player evolutionary games?} It has been conjectured, in a study on computational complexity, by Emiris and Galligo \cite{Emiris:2010} and formally shown in \cite{DuongTranHanDGA} that
\begin{equation}
\label{eq: limitn2}
\mathbb{E} N_n\sim \sqrt{2n}+O(1).
\end{equation}
In this paper, we rigorously prove generalizations of the asymptotic formula \eqref{eq: limitn2} and of the finite group size estimates \eqref{eq: finite1} for two more general classes of random polynomials
\begin{equation}
\label{eq: generalP}
P_n^{(\gamma)}(x)=\sum_{i=0}^n \xi_i \begin{pmatrix}
n\\i
\end{pmatrix}^\gamma x^i\quad\text{and}\quad P^{(\alpha,\beta)}_n(x)=\sum_{i=0}^n \begin{pmatrix}
n+\alpha\\ n-i
\end{pmatrix}^\frac{1}{2}\begin{pmatrix}
n+\beta\\ i
\end{pmatrix}^\frac{1}{2} \xi_i x^i.
\end{equation}
Here $\gamma>0, \alpha, \beta>-1$ are given real numbers, and $\{\xi_i\}_{i=0,\ldots,n}$ are standard normal i.i.d. random variables. The class of random polynomials $P_n$ arising from evolutionary game theory is a special case of both $P_n^{(\gamma)}$ (when $\gamma=1$) and $P_n^{(\alpha, \beta)}$ (when $\alpha=\beta=0$). For general values of $\alpha, \beta$ and $\gamma$, $P_n^{(\gamma)}$ and $P_n^{(\alpha, \beta)}$ are related to more complex models in evolutionary game theory where the gain sequence $\{\xi_i\}_i$ depends not only on $i$ but also on the group size $n$. An example of such a scenario is a public goods game in which the benefits from cooperation are shared among all group members rather than accruing to each individual \cite{Hauert2006, Pacheco2009, PENA2018}. From a mathematical point of view, the class $P^{(\gamma)}_n$ is a natural extension of $P_n$ and covers both Kac polynomials and elliptic polynomials as special cases (corresponding to $\gamma=0$ and $\gamma=\frac{1}{2}$, respectively). In addition, as previously shown in \cite{DuongHanJMB2016}, $P_n$ is connected to Legendre polynomials. As will be shown in Section \ref{sec: finite estimate}, the class $P_n^{(\alpha,\beta)}$ is intrinsically related to Jacobi polynomials, which contain Legendre polynomials as special cases. The link between $P_n$ and Legendre polynomials in \cite{DuongHanJMB2016} is thus extended to one between $P_n^{(\alpha,\beta)}$ and Jacobi polynomials in the present paper.
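For concreteness, the coefficient weights of the two classes in \eqref{eq: generalP} can be generated as in the sketch below, where the generalized binomial coefficient for real upper argument is expressed through the Gamma function; the helper names are ours. With $\gamma=1$ and $\alpha=\beta=0$, both reduce to the binomial weights of $P_n$.

```python
from math import comb, gamma

def binom_real(a, k):
    """Generalized binomial coefficient C(a, k) for real a > -1, integer k >= 0."""
    return gamma(a + 1) / (gamma(k + 1) * gamma(a - k + 1))

def weights_gamma_class(n, g):
    """Weights C(n, i)^gamma of P_n^{(gamma)}."""
    return [comb(n, i) ** g for i in range(n + 1)]

def weights_jacobi_class(n, alpha, beta):
    """Weights (C(n+alpha, n-i) C(n+beta, i))^{1/2} of P_n^{(alpha,beta)}."""
    return [(binom_real(n + alpha, n - i) * binom_real(n + beta, i)) ** 0.5
            for i in range(n + 1)]
```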
\subsection{Main results}
Throughout this paper, we suppose that $\{\xi_i\}$ are i.i.d. standard normal random variables. We denote by $\mathbb{E} N_n^{(\gamma)}$ and $\mathbb{E} N_n^{(\alpha,\beta)}$ the expected number of real roots of $P_n^{(\gamma)}$ and $P_n^{(\alpha,\beta)}$, respectively. The main results of the present paper are the following theorems.
\begin{theorem}[Estimates of $\mathbb{E} N_n^{(\alpha,\beta)}$ for any $n$]
\label{thm: finite n estimates}
Suppose that $\alpha, \beta>-1$.
\begin{enumerate}[(1)]
\item (estimates in terms of roots of Jacobi polynomials) Let $0<s_{n,max}<1$ be the maximum root of the Jacobi polynomial of degree $n$ as defined in \eqref{eq: Jacobi}. Then
\begin{equation}
\sqrt{n}\frac{1-s_{n,max}}{1+s_{n,max}}\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \sqrt{n}\frac{1+s_{n,max}}{1-s_{n,max}}.
\end{equation}
\item (explicit estimates for finite $n$) For all $\alpha=\beta>-1$, it holds that
\begin{equation}
\frac{2}{\pi}\sqrt{\frac{n(n+2\alpha)}{2n+2\alpha-1}}\leq \mathbb{E} N_n^{(\alpha,\alpha)}\leq \frac{2\sqrt{n}}{\pi}\Big(1+\ln(2)+\frac{1}{2}\ln\frac{n+\alpha}{1+\alpha}\Big).
\end{equation}
\end{enumerate}
\end{theorem}
Theorem \ref{thm: finite n estimates}, which combines Theorems \ref{thm: Jacobi-estimate} and \ref{thm: ultraspherical}, provides lower and upper bounds for $\mathbb{E} N_n^{(\alpha,\beta)}$ in terms of the group size $n$. It applies only to the class $P_n^{(\alpha,\beta)}$, since our proof makes use of a connection between $P_n^{(\alpha,\beta)}$ and Jacobi polynomials. In addition, in the second part, we use a symmetry condition on the coefficients of the polynomial $P_n^{(\alpha,\beta)}$, which requires $\alpha=\beta$. The next result characterizes the asymptotic limits, as the group size $n$ tends to infinity, of both $\mathbb{E} N_n^{(\gamma)}$ and $\mathbb{E} N_n^{(\alpha, \beta)}$.
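Part (1) of Theorem \ref{thm: finite n estimates} is straightforward to evaluate numerically; the sketch below (parameter values arbitrary, names ours) computes $s_{n,max}$ via SciPy's Jacobi quadrature nodes and checks that the resulting, rather loose, bounds bracket the conjectured $\sqrt{2n}$ scaling.

```python
import numpy as np
from scipy.special import roots_jacobi

def theorem_bounds(n, alpha, beta):
    """Lower/upper bounds of Theorem 1.1(1) from the largest Jacobi root."""
    s_max = roots_jacobi(n, alpha, beta)[0].max()   # nodes are the Jacobi roots
    ratio = (1.0 - s_max) / (1.0 + s_max)
    return np.sqrt(n) * ratio, np.sqrt(n) / ratio

lower, upper = theorem_bounds(50, 0.5, -0.5)
```

Since $1-s_{n,max}=O(n^{-2})$, the bounds separate widely as $n$ grows, which is why the sharper asymptotics of Theorem \ref{thm: asymptotic} require a different argument.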
\begin{theorem}[Asymptotic behaviour as $n\rightarrow +\infty$]
\label{thm: asymptotic}
We have
\begin{equation}
\label{eq: main asymptotic behaviour}
\mathbb{E} N_n^{(\gamma)}\sim \sqrt{2\gamma n}(1+o(1))\quad\text{and}\quad \mathbb{E} N_n^{(\alpha,\beta)}\sim \sqrt{2 n}(1+o(1))\quad \text{as}~~n\rightarrow \infty.
\end{equation}
As a consequence, there is a phase transition (discontinuity) in the expected number of real roots $\mathbb{E} N_n^{(\gamma)}$, viewed as a function of $\gamma$, as $n\rightarrow \infty$
\begin{equation}
\label{eq: phase transition}
\mathbb{E} N_n^{(\gamma)}\sim \begin{cases}
\frac{2}{\pi} \ln (n)\quad \text{for}~~\gamma=0,\\
\sqrt{2\gamma n}\quad \text{for}~~ \gamma>0.
\end{cases}
\end{equation}
\end{theorem}
Our study of the expected number of real roots of $P_n^{(\gamma)}$ and $P_n^{(\alpha,\beta)}$ contributes to both evolutionary game theory and random polynomial theory. From an evolutionary game theory point of view, our results show, perhaps surprisingly, that in random multiplayer evolutionary games one expects far fewer equilibria, a number proportional to the square root of the group size, than in deterministic games (recalling that the expected number of internal equilibria is the same as the expected number of positive roots, which is half of the expected number of real roots). In addition, since for a polynomial equation the number of stable equilibria is half the number of equilibria, our results also apply to stable equilibria. From a random polynomial theory point of view, the present paper introduces two meaningful classes of random polynomials that have not been studied in the literature. In particular, the fact that the asymptotic behaviour of $\mathbb{E} N_n^{(\alpha,\beta)}$ is independent of $\alpha$ and $\beta$ is rather unexpected and is interesting in its own right. In addition, the phase transition phenomenon \eqref{eq: phase transition} is, to the best of our knowledge, shown here for the first time.
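The $\sqrt{2\gamma n}$ scaling of Theorem \ref{thm: asymptotic} can be probed by the Monte Carlo device already used for $P_n$; a hedged sketch with arbitrary $n$ and sample sizes:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def mean_real_roots_gamma(n, g, trials=200):
    """Average number of real roots of P_n^{(gamma)} over Gaussian draws."""
    w = np.array([comb(n, i) ** g for i in range(n + 1)], dtype=float)
    total = 0
    for _ in range(trials):
        total += np.sum(np.roots((rng.standard_normal(n + 1) * w)[::-1]).imag == 0)
    return total / trials

n = 20
counts = {g: mean_real_roots_gamma(n, g) for g in (0.5, 1.0, 2.0)}
# counts should grow roughly like sqrt(2 * g * n) as gamma increases
```

The ordering of the three counts already reflects the monotone dependence on $\gamma$ at a modest group size.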
\subsection{Organization of the paper}
The rest of the paper is organized as follows. In Section \ref{sec: Kac-Rice} we recall the Kac-Rice formula for computing the expected number of real roots of a random polynomial. In Section \ref{sec:finite and asymptotic of ENalpha}, we establish connections between $P_n^{(\alpha,\beta)}$ and Jacobi polynomials and prove Theorem \ref{thm: finite n estimates}. The proof of Theorem \ref{thm: asymptotic} is presented in Sections \ref{sec: asymptotic results} and \ref{sec: asymptotic of ENalpha}. In Section \ref{sec: summary} we provide further discussion and outlook. Finally, detailed proofs of technical lemmas are given in Appendix \ref{sec: Appendix}.
\section{Kac-Rice formula}
\label{sec: Kac-Rice}
In this section, we recall the celebrated Kac-Rice formula for computing the expected number of real roots of a random polynomial, which is the starting point of our analysis. Consider a general random polynomial
$$
p_n(x)=\sum_{i=0}^n a_i \xi_i x^i.
$$
Suppose that $\{\xi_i\}$ are i.i.d. standard normal random variables. Let $\mathbb{E} N_n(a,b)$ be the expected number of real roots of $p_n$ in the interval $(a,b)$. Then the Kac-Rice formula is given by (see for instance \cite{EK95})
\begin{equation}
\label{eq: Kac-Rice general}
\mathbb{E}N_n(a,b)=\frac{1}{\pi}\int_a^b \frac{\sqrt{A_n(x)M_n(x)-B_n^2(x)}}{M_n(x)}\,dx
\end{equation}
where
$$
M_n(x)=\mathrm{var}(p_n(x)),\quad A_n(x)=\mathrm{var}(p_n'(x)),\quad B_n(x)=\mathrm{cov}(p_n(x),p_n'(x)).
$$
We can find $M_n, A_n$ and $B_n$ explicitly in terms of the coefficients $\{a_i\}$ of $p_n$ as follows. Since $\{\xi_i\}$ are standard i.i.d. random variables, we have
\begin{align*}
&p_n'(x)=\sum_{i=0}^n a_i i \xi_i x^{i-1},\quad
p_n(x)^2=\sum_{i,j=0}^n a_ia_j \xi_i\xi_j x^{i+j},\quad p_n(x)p_n'(x)=\sum_{i,j=0}^n a_ia_j i\xi_i\xi_j x^{i+j-1},\\
&\mathbb{E}(p_n(x))=\sum_{i=0}^n a_ix^i\mathbb{E}(\xi_i)=0, \quad \mathbb{E}(p_n'(x))=0
\\&M_n(x)=\mathrm{var}(p_n(x))=\mathbb{E}(p_n^2(x))-(\mathbb{E}(p_n(x)))^2=\sum_{i,j=0}^n a_ia_jx^{i+j}\mathbb{E}(\xi_i\xi_j)=\sum_{i=0}^n a_i^2 x^{2i},
\\&A_n(x)=\mathrm{var}(p_n'(x))=\mathbb{E}((p_n'(x))^2)-(\mathbb{E}(p_n'(x)))^2=\sum_{i,j=0}^n a_i a_j ij x^{i+j-2}\mathbb{E}(\xi_i\xi_j)=\sum_{i=0}^n a_i^2 i^2 x^{2(i-1)},
\\& B_n(x)=\mathrm{cov}(p_n(x),p_n'(x))=\mathbb{E}(p_n(x)p_n'(x))=\sum_{i,j=0}^n i\, a_i a_j x^{i+j-1}\mathbb{E}(\xi_i\xi_j)=\sum_{i=0}^n i\, a_i^2 x^{2i-1}.
\end{align*}
In conclusion, we have
\begin{equation}
\label{eq: A, B, M}
M_n(x)=\sum_{i=0}^n a_i^2 x^{2i},\quad A_n(x)= \sum_{i=0}^n a_i^2 i^2 x^{2(i-1)},\quad B_n(x)=\sum_{i=0}^n i\, a_i^2 x^{2i-1}.
\end{equation}
Furthermore, the following relations between $M_n, A_n$ and $B_n$, which follow directly from the above formulas, will also be used in the subsequent sections:
\begin{align*}
B_n(x)&=\frac{1}{2}M_n'(x),\quad A_n(x)=\frac{1}{4x}\Big(xM_n'(x)\Big)',
\\ \frac{A_n(x)M_n(x)-B_n^2(x)}{M_n^2(x)}&=\frac{1}{4}\Big(\frac{M_n''(x)}{M_n(x)}+\frac{1}{x}\frac{M_n'(x)}{M_n(x)}-\Big(\frac{M_n'(x)}{M_n(x)}\Big)^2\Big)
\\&=\frac{1}{4}\Big(\frac{1}{x}\frac{M_n'(x)}{M_n(x)}+\Big(\frac{M_n'(x)}{M_n(x)}\Big)'\Big)=\frac{1}{4x}\Big(x\frac{M_n'(x)}{M_n(x)}\Big)',
\end{align*}
where the prime $'$ notation denotes a derivative with respect to the variable $x$.
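These identities make the Kac-Rice integrand straightforward to evaluate numerically. The sketch below integrates \eqref{eq: Kac-Rice general} directly from the sums in \eqref{eq: A, B, M}; for palindromic weights ($a_i=a_{n-i}$, as in the elliptic case and in $P_n^{(\alpha,\alpha)}$), the substitution $x\mapsto 1/x$ preserves the law of the roots, so $\mathbb{E}N_n$ equals twice the integral over $[-1,1]$ and no truncation of the real line is needed. Grid size is an arbitrary choice.

```python
import numpy as np
from math import comb

def kac_rice_real_roots(weights, num=20001):
    """E[number of real roots] of p(x) = sum_i a_i xi_i x^i via Kac-Rice,
    assuming palindromic weights a_i = a_{n-i}, so that
    E N = 2 * integral of the density over [-1, 1]."""
    a2 = np.asarray(weights, dtype=float) ** 2
    k = np.arange(1, a2.size)
    x = np.linspace(-1.0, 1.0, num)[:, None]
    M = (a2 * x ** (2 * np.arange(a2.size))).sum(axis=1)
    A = (k ** 2 * a2[1:] * x ** (2 * k - 2)).sum(axis=1)
    B = (k * a2[1:] * x ** (2 * k - 1)).sum(axis=1)
    dens = np.sqrt(np.maximum(A * M - B ** 2, 0.0)) / (np.pi * M)
    h = x[1, 0] - x[0, 0]                          # trapezoidal rule
    return 2.0 * (0.5 * (dens[0] + dens[-1]) + dens[1:-1].sum()) * h

n = 10
en_elliptic = kac_rice_real_roots([comb(n, i) ** 0.5 for i in range(n + 1)])
# for elliptic weights the density is sqrt(n)/(pi (1 + x^2)), so E N = sqrt(n)
```

The elliptic case provides a convenient correctness check, since its expected number of real roots is exactly $\sqrt{n}$.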
Let $\mathbb{E} N_n^{(\gamma)}(a,b)$ and $\mathbb{E} N_n^{(\alpha,\beta)}(a,b)$ be respectively the expected number of real roots of $P_n^{(\gamma)}$ and of $P_n^{(\alpha,\beta)}$ in a given interval $[a,b]$. Applying \eqref{eq: Kac-Rice general}-\eqref{eq: A, B, M} to $P_n^{(\gamma)}$ and to $P_n^{(\alpha,\beta)}$, we obtain the following common formula for $\mathbb{E} N_n^{(\gamma)}(a,b)$ and $\mathbb{E} N_n^{(\alpha,\beta)}(a,b)$, but with different triples $\{A_n, B_n, M_n\}$
\begin{equation}
\label{eq: formula EN}
\mathbb{E} N_n^{(*)}(a,b)=\frac{1}{\pi}\int_{a}^b \frac{\sqrt{A_n(x) M_n(x)-B_n^2(x)}}{M_n(x)}\,dx,
\end{equation}
where $(*)\in\{(\gamma), (\alpha,\beta)\}$.
For $\mathbb{E} N_n^{(\gamma)}(a,b)$:
\begin{equation}
\label{eq: MAB}
M_n(x)=\sum_{k=0}^{n}\begin{pmatrix}
n\\k
\end{pmatrix}^{2\gamma} x^{2k},
\; A_n(x)=\sum_{k=0}^{n}k^2\begin{pmatrix}
n\\k
\end{pmatrix}^{2\gamma}x^{2(k-1)},
\; B_n(x)=\sum_{k=0}^n k \begin{pmatrix}
n\\k
\end{pmatrix}^{2\gamma} x^{2k-1}.
\end{equation}
For $\mathbb{E} N_n^{(\alpha,\beta)}(a,b)$:
\begin{align}
\label{eq: MAB2}
&M_n(x)=\sum_{k=0}^{n}\begin{pmatrix}
n+\alpha\\ n-k
\end{pmatrix}\begin{pmatrix}
n+\beta\\ k
\end{pmatrix} x^{2k},
\; A_n(x)=\sum_{k=0}^{n}k^2\begin{pmatrix}
n+\alpha\\ n-k
\end{pmatrix}\begin{pmatrix}
n+\beta\\ k
\end{pmatrix}x^{2(k-1)}, \notag\\
& B_n(x)=\sum_{k=0}^n k \begin{pmatrix}
n+\alpha\\ n-k
\end{pmatrix}\begin{pmatrix}
n+\beta\\ k
\end{pmatrix} x^{2k-1}.
\end{align}
By writing $\mathbb{E} N_n^{(\gamma)}$ or $\mathbb{E} N_n^{(\alpha,\beta)}$ it becomes clear which class of random polynomials is under consideration; therefore, for notational simplicity, we simply write $\{A_n, B_n, M_n\}$ without superscripts $(\gamma)$ or $(\alpha,\beta)$. The above Kac-Rice formulas are starting points for our analysis. The difficulty now is to analyze the integrand in \eqref{eq: formula EN} for each class of random polynomials.
\section{Finite group-size estimates}
\label{sec:finite and asymptotic of ENalpha}
In this section, we show a connection between the class $P_n^{(\alpha,\beta)}$ and Jacobi polynomials, which extends the connection between $P_n$ and Legendre polynomials established in \cite{DuongHanJMB2016}. Using this connection, we will prove Theorem \ref{thm: finite n estimates} on the estimates of $\mathbb{E} N_n^{(\alpha,\beta)}$ for finite $n$.
\subsection{Connections to Jacobi polynomials and finite estimates of $\mathbb{E} N_n^{(\alpha,\beta)}$}
\label{sec: finite estimate}
We recall that the Jacobi polynomial is given by
\begin{equation}
\label{eq: Jacobi}
J^{(\alpha,\beta)}_n(x)=\sum_{i=0}^n \begin{pmatrix}
n+\alpha\\n-i
\end{pmatrix}\begin{pmatrix}
n+\beta\\ i
\end{pmatrix}\Big(
\frac{x-1}{2}
\Big)^i\Big(
\frac{x+1}{2}
\Big)^{n-i}.
\end{equation}
If $\alpha=\beta$, the Jacobi polynomial $J_n^{(\alpha,\beta)}(x)$ is called an ultraspherical polynomial. The Legendre polynomial is a special case of the Jacobi polynomial when $\alpha=\beta=0$. It is well-known that the zeros of $J_n^{(\alpha,\beta)}$ are real, distinct and are located in the interior of the interval $[-1,1]$ \cite{szego1975book}. The following lemma links $M_n^{(\alpha,\beta)}$ to Jacobi polynomials. Its proof is given in Appendix \ref{sec: Appendix}.
\begin{lemma}
\label{lem: relation Mn vs Jn} It holds that
\begin{equation}
M^{(\alpha,\beta)}_n(x)=(1-x^2)^n J^{(\alpha,\beta)}_n\Big(\frac{1+x^2}{1-x^2}\Big).
\end{equation}
\end{lemma}
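As a quick numerical sanity check (not part of the proof), Lemma \ref{lem: relation Mn vs Jn} can be verified directly from the definitions, with the generalized binomial coefficients $\binom{n+\alpha}{k}$ evaluated through the Gamma function. The parameter values below are illustrative.

```python
import math

def gbinom(m, k):
    # generalized binomial coefficient C(m, k) for real m, integer k >= 0
    return math.gamma(m + 1) / (math.gamma(k + 1) * math.gamma(m - k + 1))

def M(n, alpha, beta, x):
    # M_n^{(alpha,beta)}(x) = sum_i C(n+alpha, n-i) C(n+beta, i) x^{2i}
    return sum(gbinom(n + alpha, n - i) * gbinom(n + beta, i) * x ** (2 * i)
               for i in range(n + 1))

def jacobi(n, alpha, beta, y):
    # explicit sum defining the Jacobi polynomial J_n^{(alpha,beta)}(y)
    return sum(gbinom(n + alpha, n - i) * gbinom(n + beta, i)
               * ((y - 1) / 2) ** i * ((y + 1) / 2) ** (n - i)
               for i in range(n + 1))

n, alpha, beta = 7, 1.5, 0.5
for x in (0.1, 0.3, 0.7):
    lhs = M(n, alpha, beta, x)
    rhs = (1 - x * x) ** n * jacobi(n, alpha, beta, (1 + x * x) / (1 - x * x))
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

The check is exact up to floating-point rounding, since both sides are finite sums of positive terms.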
The following theorem provides estimates of $\mathbb{E} N_n^{(\alpha,\beta)}$ in terms of the maximum root of the Jacobi polynomial.
\begin{theorem}
\label{thm: Jacobi-estimate}
Let $0<s_{n,max}<1$ be the maximum root of the Jacobi polynomial of degree $n$. Then the expected number of real roots, $\mathbb{E} N_n^{(\alpha,\beta)}$, of $P^{(\alpha,\beta)}_n$ satisfies
\begin{equation}
\sqrt{n}\frac{1-s_{n,max}}{1+s_{n,max}}\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \sqrt{n}\frac{1+s_{n,max}}{1-s_{n,max}}.
\end{equation}
\end{theorem}
\begin{proof}
Let $\{-1<s_1<s_2<\ldots<s_ n<1\}$ be the zeros of the Jacobi polynomial of degree $n$. Note that $s_k=-s_{n+1-k}<0$ for $k=1,\ldots, \lfloor\frac{n}{2}\rfloor$. We deduce from Lemma \ref{lem: relation Mn vs Jn} that $M_n$ has $2n$ distinct zeros given by $\{\pm i\sqrt{\frac{1-s_k}{1+s_k}},~~1\leq k\leq n\}$ which are purely imaginary. Thus $M_n$ can be written as
\begin{equation}
\label{eq: representation of Mn}
M_n(x)=m_n \prod_{k=1}^n (x^2+r_k),
\end{equation}
where $m_n$ is the leading coefficient and for $1\leq k\leq n$
\begin{equation}
\label{eq: roots of M vs root of Jacobi}
r_k=\frac{1-s_k}{1+s_k}>0.
\end{equation}
It follows from the properties of $\{s_k\}$ that
$r_1>r_2>\ldots>r_n>0$ and $r_{k}r_{n+1-k}=1$ for $k=1,\ldots, \lfloor\frac{n}{2}\rfloor$.
Using the representation \eqref{eq: representation of Mn} of $M_n$ we have
\begin{align*}
M_n'(x)=2x m_n\sum_{k=1}^n\prod_{j\neq k} (x^2+r_j), \quad \frac{M_n'(x)}{M_n(x)}=\sum_{k=1}^n\frac{2x}{x^2+r_k},\quad
\Big(x\frac{M_n'(x)}{M_n(x)}\Big)'=\sum_{k=1}^n\frac{4 x r_k}{(x^2+r_k)^2}.
\end{align*}
Hence the density function can be represented as
\begin{align}
\label{eq: f in terms of roots of Mn}
f_n(x)^2=\frac{1}{4x}\Big(x\frac{M_n'(x)}{M_n(x)}\Big)'=\sum_{k=1}^n\frac{r_k}{(x^2+r_k)^2}.
\end{align}
Since $0<r_n<\ldots<r_1$, we deduce that
\begin{equation}
n \frac{r_n}{(x^2+r_1)^2}\leq f_n(x)^2= \sum_{k=1}^n\frac{r_k}{(x^2+r_k)^2}\leq n\frac{r_1}{(x^2+r_n)^2},
\end{equation}
that is
$$
\sqrt{n}\frac{\sqrt{r_n}}{x^2+r_1}\leq f_n(x)\leq \sqrt{n}\frac{\sqrt{r_1}}{x^2+r_n}.
$$
Since
$$
\mathbb{E} N_n^{(\alpha,\beta)}=\frac{1}{\pi}\int_{-\infty}^\infty f_n(x)\,dx
$$
we have
\begin{equation*}
\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sqrt{n r_n}}{x^2+r_1}\,dx\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sqrt{n r_1}}{x^2+r_n}\,dx,
\end{equation*}
that is, since $\int_{-\infty}^\infty\frac{1}{x^2+a}\,dx=\frac{\pi}{\sqrt{a}}$ for $a>0$,
\begin{equation*}
\sqrt{n}\sqrt{\frac{r_n}{r_1}}\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \sqrt{n}\sqrt{\frac{r_1}{r_n}}.
\end{equation*}
Since $r_1r_n=1$, the above expression can be written as
$$
\sqrt{n}r_n\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \sqrt{n}r_1.
$$
From \eqref{eq: roots of M vs root of Jacobi}, we obtain the following estimate for $\mathbb{E} N_n^{(\alpha,\beta)}$ in terms of roots of Jacobi's polynomials
\begin{equation*}
\sqrt{n}\frac{1-s_n}{1+s_n}=\sqrt{n}\frac{1+s_1}{1-s_1}\leq \mathbb{E} N_n^{(\alpha,\beta)}\leq \sqrt{n}\frac{1-s_1}{1+s_1}=\sqrt{n}\frac{1+s_n}{1-s_n}.
\end{equation*}
This completes the proof of the theorem.
\end{proof}
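A byproduct of the computation in the proof is the exact identity $f_n(x)^2=\frac{1}{4x}\big(x M_n'(x)/M_n(x)\big)'$, valid for any positive coefficients in $M_n(x)=\sum_k a_k x^{2k}$ (note that $M_n'=2B_n$). As a sanity check, the sketch below compares the defining ratio $(A_nM_n-B_n^2)/M_n^2$ with a central finite difference of $x M_n'/M_n$; the ultraspherical parameters (integer $\alpha=\beta$) are illustrative, chosen so that integer binomial coefficients suffice.

```python
import math

def abm(n, alpha, x):
    # A_n, B_n, M_n for the ultraspherical case beta = alpha (integer alpha)
    a = [math.comb(n + alpha, n - i) * math.comb(n + alpha, i) for i in range(n + 1)]
    M = sum(a[i] * x ** (2 * i) for i in range(n + 1))
    A = sum(i * i * a[i] * x ** (2 * i - 2) for i in range(1, n + 1))
    B = sum(i * a[i] * x ** (2 * i - 1) for i in range(1, n + 1))
    return A, B, M

def f_sq(n, alpha, x):
    # (A_n M_n - B_n^2) / M_n^2, the square of the Kac-Rice density
    A, B, M = abm(n, alpha, x)
    return (A * M - B * B) / (M * M)

def g(n, alpha, x):
    # x M_n'(x) / M_n(x), using M_n' = 2 B_n
    A, B, M = abm(n, alpha, x)
    return 2.0 * x * B / M

n, alpha, h = 12, 2, 1e-6
for x in (0.3, 0.8, 1.5):
    fd = (g(n, alpha, x + h) - g(n, alpha, x - h)) / (2 * h)  # approximates (x M_n'/M_n)'
    assert abs(f_sq(n, alpha, x) - fd / (4 * x)) < 1e-4 * (1 + f_sq(n, alpha, x))
```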
The following theorem provides an explicit finite estimate for $\mathbb{E} N_n^{(\alpha,\beta)}$ in the ultraspherical case. It generalizes a previous result for $\alpha=0$ (see \eqref{eq: finite1}) obtained in \cite{DuongHanJMB2016}.
\begin{theorem}
\label{thm: ultraspherical}
Consider the ultraspherical case (i.e., $\alpha=\beta$). We have
\begin{equation}
\frac{2}{\pi}\sqrt{\frac{n(n+2\alpha)}{2n+2\alpha-1}}\leq \mathbb{E} N_n^{(\alpha,\alpha)}\leq \frac{2\sqrt{n}}{\pi}\Big(1+\ln(2)+\frac{1}{2}\ln\frac{n+\alpha}{1+\alpha}\Big).
\end{equation}
As a consequence,
\begin{equation}
\lim\limits_{n\rightarrow+\infty}\frac{\ln(\mathbb{E} N_n^{(\alpha,\alpha)})}{\ln(n)}=\frac{1}{2}.
\end{equation}
\end{theorem}
\begin{proof}
Since $\alpha=\beta$, changing $x$ to $1/x$ and $x$ to $-x$ leaves the distribution of the coefficients of $P_n^{(\alpha,\alpha)}(x)$ invariant. Thus we obtain that
$$
\mathbb{E} N_n^{(\alpha,\alpha)}=4\mathbb{E} N_n^{(\alpha,\alpha)}(-\infty,-1)=4\mathbb{E} N_n^{(\alpha,\alpha)}(-1,0)=4\mathbb{E} N_n^{(\alpha,\alpha)}(0,1)=4\mathbb{E} N_n^{(\alpha,\alpha)}(1,\infty).
$$
It follows from \eqref{eq: f in terms of roots of Mn} that $f_n(x)$ is decreasing on $(0,+\infty)$. Thus for any $x\in[0,1]$, we have
\begin{equation}
\label{eq: estimate1}
f_n(0)=\sqrt{\frac{n(n+\alpha)}{1+\alpha}}\geq f_n(x)\geq f_n(1)=\frac{1}{2}\sqrt{\frac{n(n+2\alpha)}{2n+2\alpha-1}}.
\end{equation}
In addition, since $(x^2+r_k)^2\geq 4r_k x^2$ for all $x>0$, we also deduce from \eqref{eq: f in terms of roots of Mn} that
$$
f_n(x)^2\leq \frac{n}{4 x^2} \quad\text{for}~~x>0,
$$
that is
\begin{equation}
\label{eq: estimate 2}
f_n(x)\leq \frac{\sqrt{n}}{2x}\quad\text{for}~~x>0.
\end{equation}
Using the second inequality in \eqref{eq: estimate1} we obtain the lower bound for $\mathbb{E} N_n^{(\alpha,\alpha)}$ as follows
$$
\mathbb{E} N_n^{(\alpha,\alpha)}=\frac{4}{\pi}\int_{0}^{1}f_{n}(x)\,dx\geq \frac{4}{\pi}\int_0^1 f_n(1)\,dx=\frac{4f_n(1)}{\pi}=\frac{2}{\pi}\sqrt{\frac{n(n+2\alpha)}{2n+2\alpha-1}}.
$$
Using the first inequality in \eqref{eq: estimate1} and \eqref{eq: estimate 2} we obtain the following upper bound for $\mathbb{E} N_n^{(\alpha,\alpha)}$ for any $0<\gamma<1$
\begin{align*}
\mathbb{E} N_n^{(\alpha,\alpha)}&=\frac{4}{\pi}\int_{0}^{1}f_{n}(x)\,dx=\frac{4}{\pi}\Big(\int_{0}^{\gamma}f_{n}(x)\,dx+\int_{\gamma}^{1}f_{n}(x)\,dx\Big)
\\&\leq \frac{4}{\pi}\Big(\int_{0}^{\gamma}f_{n}(0)\,dx+\int_{\gamma}^{1}\frac{\sqrt{n}}{2x}\,dx\Big)
\\&=\frac{4}{\pi}\Big(\gamma\sqrt{\frac{n(n+\alpha)}{1+\alpha}}-\frac{\sqrt{n}}{2}\ln(\gamma)\Big).
\end{align*}
We choose $\gamma\in(0,1)$ that minimizes the right-hand side of the above expression. That is
$$
\gamma=\frac{1}{2}\sqrt{\frac{1+\alpha}{n+\alpha}},
$$
which gives
$$
\mathbb{E} N_n^{(\alpha,\alpha)}\leq \frac{2\sqrt{n}}{\pi}\Big(1+\ln(2)+\frac{1}{2}\ln\frac{n+\alpha}{1+\alpha}\Big).
$$
This completes the proof of the theorem.
\end{proof}
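Theorem \ref{thm: ultraspherical} can be sanity-checked numerically (again, not part of the proof) by evaluating the Kac-Rice integral $\mathbb{E} N_n^{(\alpha,\alpha)}=\frac{4}{\pi}\int_0^1 f_n(x)\,dx$ with a simple midpoint rule and comparing it with the two bounds; the values of $n$ and $\alpha$ below are illustrative.

```python
import math

def density(n, alpha, x):
    # Kac-Rice density f_n(x) = sqrt(A_n M_n - B_n^2)/M_n, ultraspherical case
    a = [math.comb(n + alpha, n - i) * math.comb(n + alpha, i) for i in range(n + 1)]
    M = sum(a[i] * x ** (2 * i) for i in range(n + 1))
    A = sum(i * i * a[i] * x ** (2 * i - 2) for i in range(1, n + 1))
    B = sum(i * a[i] * x ** (2 * i - 1) for i in range(1, n + 1))
    return math.sqrt(max(A * M - B * B, 0.0)) / M

def expected_roots(n, alpha, grid=4000):
    # E N_n^{(alpha,alpha)} = (4/pi) * int_0^1 f_n(x) dx, midpoint rule
    h = 1.0 / grid
    return 4.0 / math.pi * h * sum(density(n, alpha, (k + 0.5) * h) for k in range(grid))

n, alpha = 40, 1
en = expected_roots(n, alpha)
lower = 2.0 / math.pi * math.sqrt(n * (n + 2 * alpha) / (2 * n + 2 * alpha - 1))
upper = 2.0 * math.sqrt(n) / math.pi * (1 + math.log(2) + 0.5 * math.log((n + alpha) / (1 + alpha)))
assert lower <= en <= upper
```

The computed value also sits near the asymptotic prediction $\sqrt{2n}$ of Theorem \ref{thm: asymptotic}, although the finite bounds above are much cruder.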
\section{Asymptotic behaviour of $\mathbb{E} N_n^{(\gamma)}$}
\label{sec: asymptotic results}
In this section, we prove Theorem \ref{thm: asymptotic} obtaining asymptotic formulas for $\mathbb{E} N_n^{(\gamma)}$.
\noindent\textbf{Strategy of the proof}. Let us first explain the main idea of the proof, since it requires a rather delicate analysis. The first observation is that, similarly to the proof of Theorem \ref{thm: ultraspherical}, since changing $x$ to $1/x$ and $x$ to $-x$ leaves the distribution of the coefficients of $P_n^{(\gamma)}(x)$ invariant, we have
$$
\mathbb{E} N_n^{(\gamma)}(-\infty,-1)=\mathbb{E} N_n^{(\gamma)}(-1,0)=\mathbb{E} N_n^{(\gamma)}(0,1)=\mathbb{E} N_n^{(\gamma)}(1,\infty).
$$
Thus $\mathbb{E} N_n^{(\gamma)}=4\mathbb{E} N_n^{(\gamma)}(0,1)$ and it suffices to analyze $\mathbb{E} N_n^{(\gamma)}(0,1)$. We then split the interval $(0,1)$ into two smaller intervals $(0,\eta)$ and $(\eta, 1)$, so that $\mathbb{E} N_n^{(\gamma)}(0,1)=\mathbb{E} N_n^{(\gamma)}(0,\eta)+\mathbb{E} N_n^{(\gamma)}(\eta,1)$, for a carefully chosen $0<\eta<1$ (which may depend on $n$) such that $\mathbb{E} N_n^{(\gamma)}(0,\eta)$ is negligible. To select a suitable $\eta$, we use Jensen's inequality (see Lemma \ref{lem: Jensen}), which provides an upper bound on the number of roots of an analytic function (including polynomials) in an open ball. This leads to the choice $\eta=n^{-3\gamma/4}$, and we write
$$
\mathbb{E} N_n^{(\gamma)}(0,1)=\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4})+\mathbb{E} N_n^{(\gamma)}(n^{-3\gamma/4},1).
$$
In fact, as will be shown, $\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4})$ is of order $o(\sqrt{n})$, hence negligible (see Proposition \ref{prop: fisrt interval}). The next step is to obtain precisely the leading order of $\mathbb{E} N_n^{(\gamma)}(n^{-3\gamma/4},1)$. We recall that by the Kac-Rice formula (see Section \ref{sec: Kac-Rice}) we have
\begin{equation}
\label{eq: temp1}
\mathbb{E} N_n^{(\gamma)}(n^{-3\gamma/4},1)=\int_{n^{-3\gamma/4}}^1\frac{\sqrt{A_n(x)M_n(x)-B^2_n(x)}}{M_n(x)}\,dx
\end{equation}
where $A_n, B_n$ and $M_n$ are given in \eqref{eq: MAB}. Therefore, we need to understand thoroughly the asymptotic behaviour of $A_n(x)M_n(x)-B_n^2(x)$ and of $M_n(x)$ in the interval $[n^{-3\gamma/4},1]$. This will be the content of Proposition \ref{prop: M}. Its proof requires a series of technical lemmas and will be presented in Appendix \ref{sec: Appendix}.
We now follow the strategy, starting with Jensen's inequality.
\begin{lemma}[Jensen's inequality]
\label{lem: Jensen}
Let $f$ be an entire function and let $R>r>0$. The number of roots of $f$ in $\mathbb{B}(r) =\{z \in \mathbb{C} : |z| \leq r \}$, denoted by $N_f(r)$, satisfies
\ben{ \label{nfr}
N_f(r) \leq \frac{\log \tfrac{M_R}{M_r}}{ \log \tfrac{R^2+r^2}{2Rr}},
}
where $M_t=\max_{|z| \leq t} |f(z)|$ for $t>0$.
\end{lemma}
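The following small experiment (illustrative only) shows Jensen's inequality in action on a concrete polynomial: the maxima $M_t$ are approximated by sampling the circle $|z|=t$, which suffices by the maximum principle, and the resulting bound dominates the true number of roots in $\mathbb{B}(r)$.

```python
import cmath, math

def f(z):
    # illustrative polynomial: roots at -0.1, 0.1 (inside B(0.5)) and 5 (outside)
    return (z - 0.1) * (z + 0.1) * (z - 5)

def max_on_circle(t, samples=2000):
    # by the maximum principle, the maximum over |z| <= t is attained on |z| = t
    return max(abs(f(t * cmath.exp(2j * math.pi * k / samples))) for k in range(samples))

r, R = 0.5, 2.0
bound = math.log(max_on_circle(R) / max_on_circle(r)) / math.log((R * R + r * r) / (2 * R * r))
assert 2 <= bound  # f has exactly two roots in B(r)
```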
An elementary proof of Jensen's inequality can be found in \cite[Section 15.5]{nguyen2017roots}. We now show, as an application of Jensen's inequality, that $\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4})$ is negligible.
\begin{proposition}
\label{prop: fisrt interval}
We have
$$\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4})=o(\sqrt{n}).$$
\end{proposition}
\begin{proof}
We aim to apply \eqref{nfr} to $P_n^{(\gamma)}(z)$, which is indeed an entire function. Let $r=n^{-3\gamma/4}$ and $R=n^{-2\gamma/3}$. Then
\ben{
\log \frac{R^2+r^2}{2Rr} \asymp \log n.
}
Moreover,
\bean{
M_r = \max_{|z| \leq r} |P_n^{(\gamma)}(z)| \geq |P_n^{(\gamma)}(0)| = |\xi_0|,
}
and
\bean{ \label{MR}
&M_R = \max_{|z| \leq R} |P_n^{(\gamma)}(z)| & \displaystyle \leq \sum_{i=0}^n |\xi_i| R^i \binom{n}{i}^{\gamma} \leq \max_{0\leq i \leq n} |\xi_i| \times (n+1)\left( \sum_{i=0}^n R^{i/\gamma} \binom{n}{i}\right)^{\gamma} \notag\\
&&\leq (n+1) \max_{0\leq i \leq n} |\xi_i| \times (1+R^{1/\gamma})^{n\gamma}\notag\\
&& \leq (n+1)\max_{0\leq i \leq n} |\xi_i| \times \exp (\gamma O(n^{1/3})),
}
where the second inequality uses $R^i\binom{n}{i}^\gamma=\big(R^{i/\gamma}\binom{n}{i}\big)^\gamma\leq \big(\sum_{j=0}^n R^{j/\gamma}\binom{n}{j}\big)^\gamma$ for each $i$.
We define the event
\[\mathcal{E} = \big \{ \max_{1\leq i \leq n} |\xi_i| \leq n \big \} \cap \{n^{-1} \leq |\xi_0| \leq n \}. \]
Since $\{\xi_i\}_{i=0,\ldots,n}$ are standard normal i.i.d. random variables,
\ben{ \label{pne}
\mathbb{P}(\mathcal{E}) \geq 1- O(1/n).
}
By combining \eqref{nfr}--\eqref{MR}, we obtain
\ben{ \label{ene}
N_n^{(\gamma)}(r) \mathbb{1}(\mathcal{E}) \leq \frac{C n^{1/3}}{\log n},
}
for some positive constant $C$, where $N_n^{(\gamma)}(r)$ is the number of roots of $P_n^{(\gamma)}$ in the ball $\mathbb{B}(r)$ defined above. We notice also that $N_n^{(\gamma)}(r) \leq n$. Therefore, by \eqref{pne} and \eqref{ene}
\bea{
\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4}) \leq \mathbb{E} N_n^{(\gamma)}(r) &=& \mathbb{E}[N_n^{(\gamma)}(r) \mathbb{1}(\mathcal{E})] + \mathbb{E}[N_n^{(\gamma)}(r) \mathbb{1}(\mathcal{E}^c)] \notag \\
& \leq & \frac{Cn^{1/3}}{\log n} + n \mathbb{P}(\mathcal{E}^c) \leq C n^{1/3}.
}
As a consequence, we obtain
\ben{
\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4}) = o(\sqrt{n}).
}
\end{proof}
As already mentioned, the following proposition characterizes precisely the asymptotic behaviour of $A_nM_n-B_n^2$ and of $M_n$, the two quantities appearing in \eqref{eq: temp1}. The proof of this proposition is presented in Appendix \ref{sec: Appendix}.
\begin{proposition}
\label{prop: M}
If $1\geq x \geq \frac{(\log n)^{4\gamma}}{n^{\gamma}}$ then
\begin{equation}\label{termM}
\displaystyle M_n(x)=\sum_{i=0}^n \binom{n}{i}^{2\gamma}x^{2i}=\binom{n}{i_{\gamma,x}}^{2\gamma}x^{2i_{\gamma,x}} \times(\sqrt{ \pi} +o(1)) \sqrt{\frac{n x^{1/\gamma}}{\gamma (1+x^{1/\gamma})^2 }},
\end{equation}
and
\begin{equation}\label{termABM}
A_n(x)M_n(x) -B^2_n(x) =\binom{n}{i_{\gamma,x}}^{2\gamma}x^{4i_{\gamma,x}-2} \times \left(\frac{ \pi}{2} +o(1)\right) \left(\frac{n x^{1/\gamma}}{\gamma (1+x^{1/\gamma})^2 }\right)^2,
\end{equation}
where $i_{\gamma,x} =[nt_{\gamma,x}]$ with $t_{\gamma,x}= \tfrac{x^{1/\gamma}}{1+x^{1/\gamma}}$.
\end{proposition}
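Formula \eqref{termM} can be sanity-checked numerically. Since $n x^{1/\gamma}/(\gamma(1+x^{1/\gamma})^2)=n t_{\gamma,x}(1-t_{\gamma,x})/\gamma$, the sketch below compares the normalized sum $M_n(x)\big/\big[\binom{n}{i_{\gamma,x}}^{2\gamma}x^{2i_{\gamma,x}}\big]$, computed in log-space via \texttt{lgamma} to avoid overflow, with $\sqrt{\pi n t_{\gamma,x}(1-t_{\gamma,x})/\gamma}$. The agreement improves as $n$ grows; the values of $n$, $\gamma$ and $x$ are illustrative.

```python
import math

def ratio_M(n, gamma, x):
    # M_n(x) normalized by its peak term C(n, i_x)^{2 gamma} x^{2 i_x}, in log-space
    def logterm(i):
        lb = math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
        return 2 * gamma * lb + 2 * i * math.log(x)
    t = x ** (1 / gamma) / (1 + x ** (1 / gamma))
    ix = int(n * t)
    top = logterm(ix)
    return sum(math.exp(logterm(i) - top) for i in range(n + 1))

n, gamma, x = 4000, 1.0, 0.5
t = x ** (1 / gamma) / (1 + x ** (1 / gamma))
# right-hand side of (termM): sqrt(pi) * sqrt(n t (1 - t) / gamma)
predicted = math.sqrt(math.pi * n * t * (1 - t) / gamma)
assert abs(ratio_M(n, gamma, x) / predicted - 1) < 0.05
```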
We are now ready to prove the asymptotic behaviour of $\mathbb{E} N_n^{(\gamma)}$ (the first part of \eqref{eq: main asymptotic behaviour} in Theorem \ref{thm: asymptotic}).
\begin{proof}[Proof of asymptotic formula of $\mathbb{E} N_n^{(\gamma)}$]
From Proposition \ref{prop: fisrt interval}, Proposition \ref{prop: M} and Kac-Rice formula, we get
\begin{align*}
\mathbb{E} N_n^{(\gamma)}(0,1) & \displaystyle =\mathbb{E} N_n^{(\gamma)}(0,n^{-3\gamma/4}) +\mathbb{E} N_n^{(\gamma)}(n^{-3\gamma/4},1) = \frac{1}{\pi} \int_{n^{-3\gamma/4}}^1 \frac{\sqrt{A_n(x)M_n(x) -B^2_n(x)}}{M_n(x)} dx +o(\sqrt{n})\\
& \displaystyle = \frac{\sqrt{n}}{\sqrt{2}\pi} \int_0^1 \frac{x^{\frac{1}{2\gamma}-1}}{\sqrt{\gamma} \left(1+x^{1/\gamma}\right)} dx +o(\sqrt{n}) = \frac{\sqrt{n}}{\sqrt{2}\pi} \times \frac{2\sqrt{\gamma}\pi}{4}+o(\sqrt{n}) = \frac{\sqrt{2\gamma n}}{4}+o(\sqrt{n}),
\end{align*}
where the last line follows from the change of variable $u=x^{1/(2\gamma)}$ and the equality $\displaystyle \int_0^1 \frac{du}{1+u^2}=\frac{\pi}{4}$. Hence
$$
\mathbb{E} N_n^{(\gamma)}=4\mathbb{E} N_n^{(\gamma)}(0,1)=\sqrt{2\gamma n}+o(\sqrt{n}).
$$
The proof is complete.
\end{proof}
\label{sec:asymptotic of ENgamma}
\section{Asymptotic behaviour of $\mathbb{E} N_n^{(\alpha,\beta)}$}
\label{sec: asymptotic of ENalpha}
This section deals with the asymptotic formula of $\mathbb{E} N_n^{(\alpha,\beta)}$ (the second part of \eqref{eq: main asymptotic behaviour} in Theorem \ref{thm: asymptotic}). The strategy of the proof is as follows. We will first relate the asymptotic behaviour of $
\mathbb{E} N_n^{(\alpha,\beta)}$ for general $(\alpha,\beta)$ to that of $\mathbb{E} N_n^{(0,0)}$ for $\alpha=\beta=0$. We then exploit the relation that $\mathbb{E} N_n^{(0,0)}=\mathbb{E} N_n^{(1)}$ and use the result from the previous section. \\
\textit{The negligible interval $[0,n^{-3/4}]$}. We use the same argument as in Proposition \ref{prop: fisrt interval}. The estimate for $M_R$ can be replaced by
\bea{
M_R = \max_{|z| \leq R} |P_n^{(\alpha,\beta)}(z)| & \leq &\sum_{i=0}^n |\xi_i| R^i \binom{n+\alpha}{n-i}^{\tfrac{1}{2}} \binom{n+\beta}{i}^{\tfrac{1}{2}}\\
&\leq& \max_{0\leq i \leq n} |\xi_i| (n+|\alpha|)^{|\alpha|} (n+|\beta|)^{|\beta|} \times \displaystyle \sum_{i=0}^n R^{i} \binom{n}{i} \notag\\
& \leq& \max_{0\leq i \leq n} |\xi_i| \times \exp (O(n^{1/3})),
}
where for the second line we used the inequality $\binom{n+\alpha}{k} \leq \binom{n}{k} (n+|\alpha|)^{2|\alpha|}$. By repeating the same argument as in Proposition \ref{prop: fisrt interval}, we can show that
\ben{ \label{nabn}
\mathbb{E}[N_n^{(\alpha,\beta)}(0,n^{-3/4})] = o(\sqrt{n}).
}
\textit{The main interval $[n^{-3/4},1]$}. We first study the coefficients. It follows from Stirling's formula that as $i\wedge (n-i) \rightarrow \infty$,
\bean{
a_i^{(\alpha, \beta)} &=& \binom{n+\alpha}{n-i} \binom{n+\beta}{i} \notag\\
&=& (1+o(1)) \sqrt{\frac{(n+\alpha)(n+\beta)}{4 \pi^2 i(i+\alpha) (n-i) (n+\beta-i)} } \exp \left( (n+\alpha) I(\tfrac{\alpha +i}{n+\alpha}) + (n+\beta) I(\tfrac{i}{n+\beta}) \right) \notag \\
&=&(1+o(1)) \frac{n}{2 \pi i (n-i) } \exp \left( (n+\alpha) I(\tfrac{\alpha +i}{n+\alpha}) + (n+\beta) I(\tfrac{i}{n+\beta}) \right),\notag
}
where $I(t)=-t\log t + (t-1) \log (1-t)$. By Taylor expansion,
\bea{
I(\tfrac{i+\alpha}{n+\alpha}) &=& I(\tfrac{i}{n}) + I'(\tfrac{i}{n}) \left( \frac{i+\alpha}{n+\alpha} -\frac{i}{n} \right) + O(I''(i/n)) n^{-2}\\
&=& I(\tfrac{i}{n}) + I'(\tfrac{i}{n}) \frac{\alpha (n-i)}{n^2} +O(\tfrac{1}{i(n-i)}).\notag
}
Note that $I''(t)=-t^{-1}(1-t)^{-1}$. Therefore, as $i\wedge (n-i) \rightarrow \infty$,
\bean{
(n+\alpha)I(\tfrac{i+\alpha}{n+\alpha}) - n I(\tfrac{i}{n}) = (1+o(1)) \left( \alpha I(\tfrac{i}{n}) + \alpha I'(\tfrac{i}{n}) \frac{ (n-i)}{n} \right).
}
Similarly,
\bean{
(n+\beta)I(\tfrac{i}{n+\beta}) - n I(\tfrac{i}{n}) = (1+o(1)) \left( \beta I(\tfrac{i}{n}) - \beta I'(\tfrac{i}{n}) \frac{ i}{n} \right).\notag
}
Hence,
\bean{
a_i^{(\alpha, \beta)} = (1+o(1)) \frac{n}{2 \pi i (n-i) } \exp \left( (\alpha+\beta)I(\tfrac{i}{n}) +I'(\tfrac{i}{n}) \frac{\alpha (n-i)-\beta i}{n} \right) \exp(2 n I(\tfrac{i}{n})),\notag
}
so that
\bea{
a_i^{(\alpha, \beta)}&=&(1+o(1)) \exp \left( (\alpha+\beta)I(\tfrac{i}{n}) +I'(\tfrac{i}{n}) \frac{\alpha (n-i)-\beta i}{n} \right) a_i^{(0, 0)}\\
&=& (1+o(1)) h_{(\alpha,\beta)}(\tfrac{i}{n}) a_i^{(0,0)},
}
where for $t \in (0,1)$
\[ h_{(\alpha,\beta)}(t)=\exp\Big( (\alpha+\beta) I(t) + I'(t) \big(\alpha (1-t) -\beta t\big) \Big). \]
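The multiplicative correction derived above, $a_i^{(\alpha,\beta)}/a_i^{(0,0)}\approx \exp\big((\alpha+\beta)I(\tfrac{i}{n})+I'(\tfrac{i}{n})\tfrac{\alpha(n-i)-\beta i}{n}\big)$, can be checked numerically in log-space; the parameters below are illustrative, and the tolerance reflects the $O(1/(i\wedge(n-i)))$ error terms.

```python
import math

def log_gbinom(m, k):
    # log of the generalized binomial coefficient C(m, k), real m, integer k
    return math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)

def h(alpha, beta, t):
    # exp( (alpha+beta) I(t) + I'(t) (alpha (1-t) - beta t) )
    I = -t * math.log(t) - (1 - t) * math.log(1 - t)
    dI = math.log((1 - t) / t)
    return math.exp((alpha + beta) * I + dI * (alpha * (1 - t) - beta * t))

n, alpha, beta = 20000, 1.5, 0.5
for t in (0.2, 0.5, 0.7):
    i = int(n * t)
    log_ratio = (log_gbinom(n + alpha, n - i) + log_gbinom(n + beta, i)
                 - 2 * log_gbinom(n, i))
    assert abs(log_ratio - math.log(h(alpha, beta, i / n))) < 0.01
```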
Suppose that $x \in [n^{-3/4},1]$. In the case $(\alpha, \beta) =(0,0)$, or equivalently the case $\gamma =1$, we show in Lemma \ref{lem: lem1} below that the terms $a_i^{(0,0)} x^{2i}$ attain their maximum in the window $|i-i_x|\leq i_x^{3/4}$ with $i_x = [nx/(x+1)]$, and that the other terms are negligible. The asymptotic behaviour of $a_i^{(\alpha,\beta)}$ differs from that of $a_i^{(0,0)}$ only by the factor $h_{(\alpha,\beta)}(\tfrac{i}{n})$, which varies slowly compared with $a_i^{(0,0)}$. Hence, by exactly the same analysis as in Lemma \ref{lem: lem1}, the terms $a_i^{(\alpha,\beta)} x^{2i}$ with $|i-i_x| \geq i_x^{3/4}$ are also negligible. Therefore,
\bea{
M_n^{(\alpha,\beta)}(x) &=& (1+o(1)) \sum_{i:|i-i_x| \leq i_x^{3/4}} a_i^{(\alpha,\beta)}x^{2i} = (1+o(1)) \sum_{i:|i-i_x| \leq i_x^{3/4}} h_{(\alpha,\beta)}(\tfrac{i}{n}) a_i^{(0,0)}x^{2i} \\
&=&(1+o(1)) h_{(\alpha,\beta)}( \tfrac{x}{x+1}) \sum_{i:|i-i_x| \leq i_x^{3/4}} a_i^{(0,0)} x^{2i} = (1+o(1)) h_{(\alpha,\beta)}( \tfrac{x}{x+1})M_n^{(0,0)}(x),
}
since when $|i-i_x| \leq i_x^{3/4}$, we have $h_{(\alpha,\beta)}(\tfrac{i}{n}) =(1+o(1))h_{(\alpha,\beta)}(\tfrac{x}{x+1})$. Similarly,
\bea{
A_n^{(\alpha,\beta)} (x)= (1+o(1))h_{(\alpha,\beta)}( \tfrac{x}{x+1})A_n^{(0,0)}(x), \quad B_n^{(\alpha,\beta)}(x) = (1+o(1))h_{(\alpha,\beta)}( \tfrac{x}{x+1})B_n^{(0,0)}(x).
}
Thus for $x \in [n^{-3/4},1]$,
\be{
f_n^{(\alpha,\beta)}(x) = (1+o(1))f_n^{(0,0)}(x),
}
and hence
\ben{ \label{nabm}
\mathbb{E} N_n^{(\alpha,\beta)}(n^{-3/4},1) =(1+o(1)) \mathbb{E} N_n^{(0,0)}(n^{-3/4},1) = (1+o(1)) \frac{\sqrt{2n}}{4}.
}
Combining \eqref{nabn} and \eqref{nabm}, we obtain that $\mathbb{E} N_n^{(\alpha,\beta)}(0,1) =(1+o(1))\tfrac{\sqrt{2n}}{4}$, and hence
\be{
\mathbb{E} N_n^{(\alpha,\beta)} =4 \mathbb{E} N_n^{(\alpha,\beta)}(0,1) = (1+o(1)) \sqrt{2n}.
}
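The key intermediate conclusion $f_n^{(\alpha,\beta)}(x)=(1+o(1))f_n^{(0,0)}(x)$ on the main interval can also be observed numerically. The sketch below evaluates both Kac-Rice densities exactly (no asymptotics), rescaling all sums in log-space by the peak coefficient to avoid overflow; the parameters are illustrative.

```python
import math

def density(n, alpha, beta, x):
    # Kac-Rice density f_n(x) for P_n^{(alpha,beta)}; the common scale factor
    # exp(top) cancels in sqrt(A M - B^2)/M, so rescaled sums suffice
    def lgb(m, k):
        return math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
    la = [lgb(n + alpha, n - i) + lgb(n + beta, i) + 2 * i * math.log(x)
          for i in range(n + 1)]
    top = max(la)
    w = [math.exp(v - top) for v in la]
    M = sum(w)
    A = sum(i * i * w[i] for i in range(n + 1)) / (x * x)
    B = sum(i * w[i] for i in range(n + 1)) / x
    return math.sqrt(max(A * M - B * B, 0.0)) / M

n = 2000
for x in (0.3, 0.6, 0.9):
    ratio = density(n, 1.5, 0.5, x) / density(n, 0.0, 0.0, x)
    assert abs(ratio - 1) < 0.05
```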
\section{Summary and outlook}
\label{sec: summary}
In this paper, we have proved asymptotic formulas for the expected number of real roots of two general classes of random polynomials. As a consequence, we have obtained an asymptotic formula for the expected number of internal equilibria in multi-player two-strategy random evolutionary games. Our results deepen the connection between evolutionary game theory and random polynomial theory that was discovered previously in \cite{DuongHanJMB2016,DuongTranHanJMB}. Below we discuss some important directions for future research.
\textit{Extensions to other models in EGT}. The class of random polynomials that we studied in this paper arises from the replicator dynamics. It would be interesting to generalize our results to more complex models in evolutionary game theory and population dynamics. The most natural model to study next is the replicator-mutator dynamics where mutation is present. Equilibria for the replicator-mutator dynamics are positive roots of a much more complicated class of random polynomials, which depend on the mutation. Studying the effect of mutation on the equilibrium properties, in particular on the expected number of internal equilibria, is a challenging problem, see \cite{DuongHanDGA2020} for an initial attempt. One can also ask whether our results can be extended to multi-player multi-strategy evolutionary games whose equilibria are positive roots of a system of random polynomials. In this case, the assumption that the gain sequence is independent is not realistic from evolutionary game theory's point of view (see \cite[Remark 4]{DH15} for a detailed explanation). Therefore, one needs to deal with a dependent system of random polynomials, which is very challenging.
\textit{Universality and other statistical properties}. The assumption that the random variables $\{\xi_i\}_{i=0}^n$ are Gaussian is crucial in the present paper: it allowed us to employ the fundamental tool of the Kac-Rice formula in Section \ref{sec: Kac-Rice}. What happens if the $\{\xi_i\}$ are not Gaussian? Very recently, it has been shown that \textit{universality phenomena} hold for three classes of random polynomials: Kac polynomials, elliptic polynomials, and Weyl polynomials (recall the Introduction for their explicit expressions) \cite{TaoVu15, NNV2016, Do2018}. These universality results state that the expected number of real roots of these classes of random polynomials depends only on the mean and variance of the coefficients $\{\xi_i\}$, but not on the type of their distributions. It would be very interesting to obtain such a universality theorem for the class of random polynomials arising from evolutionary game theory studied in this paper. The distributions of the roots in the different classes are different, and the methods to study them need to be tailored to each class. It remains elusive to us whether the techniques in \cite{TaoVu15, NNV2016, Do2018} can be applied to the class of random polynomials in this paper. Furthermore, studying other statistical properties, such as central limit theorems and the distribution of the number of equilibria, also demands future investigation; see for instance \cite{CanDuongPham2019} for a characterization of the probability that a multi-player random evolutionary game has no internal equilibria.
\section{Appendix}
\label{sec: Appendix}
In this appendix, we present detailed computations and proofs of technical results used in previous sections.
\subsection{Proof of Lemma \ref{lem: relation Mn vs Jn} and detailed computations of $f_n(0)$ and $f_n(1)$}
In this section, we prove Lemma \ref{lem: relation Mn vs Jn} and compute $f_n(0)$ and $f_n(1)$.
\begin{proof}[Proof of Lemma \ref{lem: relation Mn vs Jn}]
It follows from the definition of Jacobi polynomial \eqref{eq: Jacobi} that for any $q\in\mathbb{R}$
\begin{align*}
J_n^{(\alpha,\beta)}\left(\frac{1+q}{1-q}\right)&=\frac{1}{2^n}\sum_{i=0}^n\begin{pmatrix}
n+\alpha\\
n-i
\end{pmatrix}\begin{pmatrix}
n+\beta\\
i
\end{pmatrix}\left(\frac{1+q}{1-q}-1\right)^i\left(\frac{1+q}{1-q}+1\right)^{n-i}
\\&=\frac{1}{2^n}\sum_{i=0}^n\begin{pmatrix}
n+\alpha\\
n-i
\end{pmatrix}\begin{pmatrix}
n+\beta\\
i
\end{pmatrix}\left(\frac{2q}{1-q}\right)^{i}\left(\frac{2}{1-q}\right)^{n-i}
\\&=\frac{1}{(1-q)^n}\sum_{i=0}^n\begin{pmatrix}
n+\alpha\\
n-i
\end{pmatrix}\begin{pmatrix}
n+\beta\\
i
\end{pmatrix} q^{i}.
\end{align*}
Taking $q=x^2$ yields the statement of Lemma \ref{lem: relation Mn vs Jn}.
\end{proof}
Next we compute $f_n(0)$ and $f_n(1)$. We have
\begin{equation}
M_n(0)=\begin{pmatrix}
n+\alpha\\n
\end{pmatrix},\quad A_n(0)=\begin{pmatrix}
n+\alpha\\n-1
\end{pmatrix}\begin{pmatrix}
n+\beta\\ 1
\end{pmatrix}=(n+\beta)\begin{pmatrix}
n+\alpha\\n-1
\end{pmatrix},\quad B_n(0)=0.
\end{equation}
Thus
$$
f_n(0)^2=\frac{A_n(0)M_n(0)-B_n(0)^2}{M_n(0)^2}=\frac{A_n(0)}{M_n(0)}=\frac{n(n+\beta)}{\alpha+1},
$$
that is
$$
f_n(0)=\sqrt{\frac{n(n+\beta)}{1+\alpha}}.
$$
Next we compute $f_n(1)$. We have
\begin{equation*}
M_n(1)=\sum_{i=0}^n\begin{pmatrix}
n+\alpha\\n-i
\end{pmatrix}\begin{pmatrix}
n+\beta\\i
\end{pmatrix}=\begin{pmatrix}
2n+\alpha+\beta\\n
\end{pmatrix}
\end{equation*}
Using the following formula for the derivative of Jacobi polynomials \cite[Section 4.5]{szego1975book}
\begin{align*}
&(2n+\alpha+\beta)(1-x^2)\frac{d}{dx}J_n^{(\alpha,\beta)}(x)
\\&=-n\Big((2n+\alpha+\beta)x+\beta-\alpha\Big)J_n^{(\alpha,\beta)}(x)+2(n+\alpha)(n+\beta)J_{n-1}^{(\alpha,\beta)}(x)
\end{align*}
and Lemma \ref{lem: relation Mn vs Jn}, we obtain the following formula for the derivative of $M_n^{(\alpha,\beta)}(x)$.
\begin{equation}
\label{eq: derivative}
x(2n+\alpha+\beta)M_n'(x)=\Big(n(2n+\alpha+\beta)+\beta-\alpha\Big)M^{(\alpha,\beta)}_n(x)-2(1-x^2)(n+\alpha)(n+\beta)M^{(\alpha,\beta)}_{n-1}(x).
\end{equation}
Applying \eqref{eq: derivative} for $x=1$, we obtain
\begin{align*}
B_n(1)=\frac{1}{2}M_n'(1)=\frac{\big(n(2n+\alpha+\beta)+\beta-\alpha\big)M_n(1)}{2(2n+\alpha+\beta)}=\frac{1}{2}\Big(n+\frac{\beta-\alpha}{2n+\alpha+\beta}\Big)\begin{pmatrix}
2n+\alpha+\beta\\n
\end{pmatrix}.
\end{align*}
Taking derivative of \eqref{eq: derivative} we have
\begin{align*}
(2n+\alpha+\beta)(M_n'(x)+xM_n''(x))&=(n(2n+\alpha+\beta)+\beta-\alpha)M_n'(x)
\\&\qquad-2(n+\alpha)(n+\beta)\Big[-2xM_{n-1}(x)+(1-x^2)M_{n-1}'(x)\Big].
\end{align*}
It follows that
\begin{align*}
A_n(1)&=\frac{1}{4}(M_n'(1)+M_n''(1))
\\&=\frac{1}{4(2n+\alpha+\beta)}\Big[(n(2n+\alpha+\beta)+\beta-\alpha)M_n'(1)+4(n+\alpha)(n+\beta)M_{n-1}(1)\Big]
\\&=\frac{1}{4}\Big(n+\frac{\beta-\alpha}{2n+\alpha+\beta}\Big)^2\begin{pmatrix}
2n+\alpha+\beta\\
n
\end{pmatrix}+\frac{(n+\alpha)(n+\beta)}{2n+\alpha+\beta}\begin{pmatrix}
2n+\alpha+\beta-2\\
n-1
\end{pmatrix}.
\end{align*}
Hence
\begin{align*}
f_n(1)&=\frac{\sqrt{A_n(1)M_n(1)-B_n(1)^2}}{M_n(1)}
\\&=\sqrt{\frac{(n+\alpha)(n+\beta)}{2n+\alpha+\beta}}\sqrt{\frac{\begin{pmatrix}
2n+\alpha+\beta-2\\
n-1
\end{pmatrix}}{\begin{pmatrix}
2n+\alpha+\beta\\
n
\end{pmatrix}}}
\\&=\frac{1}{2n+\alpha+\beta}\sqrt{\frac{n(n+\alpha)(n+\beta)(n+\alpha+\beta)}{2n+\alpha+\beta-1}}.
\end{align*}
In particular, when $\alpha=\beta$,
$$
f_n(1)=\frac{1}{2}\sqrt{\frac{n(n+2\alpha)}{2n+2\alpha-1}}.
$$
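The closed forms for $f_n(0)$ and $f_n(1)$ obtained above can be confirmed by exact rational arithmetic, evaluating $A_n$, $B_n$, $M_n$ directly from their series at $x=0$ and $x=1$; the integer values of $\alpha$ and $\beta$ below are illustrative.

```python
from fractions import Fraction
from math import comb

def f0_sq(n, alpha, beta):
    # f_n(0)^2 = A_n(0)/M_n(0) from the series; only the k = 1 term of A_n survives
    M0 = comb(n + alpha, n)
    A0 = comb(n + alpha, n - 1) * comb(n + beta, 1)
    return Fraction(A0, M0)

def f1_sq(n, alpha, beta):
    # f_n(1)^2 = (A_n M_n - B_n^2)/M_n^2 at x = 1, from the series
    a = [comb(n + alpha, n - i) * comb(n + beta, i) for i in range(n + 1)]
    M = sum(a)
    A = sum(i * i * a[i] for i in range(n + 1))
    B = sum(i * a[i] for i in range(n + 1))
    return Fraction(A * M - B * B, M * M)

n, alpha, beta = 9, 3, 2
assert f0_sq(n, alpha, beta) == Fraction(n * (n + beta), 1 + alpha)
s = 2 * n + alpha + beta
assert f1_sq(n, alpha, beta) == Fraction(n * (n + alpha) * (n + beta) * (n + alpha + beta), s * s * (s - 1))
```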
\subsection{Proof of Proposition \ref{prop: M}}
In this section we prove Proposition \ref{prop: M}. The proof will be established after a series of technical lemmas.
We start with the following lemma that provides an estimate for a power of the binomial coefficient, which is a key factor appearing in the expressions of $A_n, B_n$ and $M_n$.
\begin{lemma}
\label{lem: lem0}
For $0<t<1$ and $x>0$, we define $I(t):= t\log\frac{1}{t} +(1-t)\log\frac{1}{1-t}$ and $J_{\gamma ,x}(t):= \gamma I(t) + t\log x$. Then
\begin{equation} \label{stirl}
\begin{pmatrix}
n\\i
\end{pmatrix}^\gamma x^i = \left(\frac{n}{2\pi i (n-i)}\right)^{\gamma/2} \left( 1+O\left( \frac{1}{i}+\frac{1}{n-i}\right)\right)^{\gamma} e^{n J_{\gamma ,x}\left( \frac{i}{n}\right)}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: lem0}]
It follows from Stirling formula
\begin{eqnarray*}
i! = \sqrt{2\pi i}(1+O(i^{-1})) \left(\frac{i}{e}\right)^i
\end{eqnarray*}
that
$$
\begin{pmatrix}
n\\i
\end{pmatrix} = \sqrt{\frac{n}{2\pi i (n-i)}} \left( 1+O\left( \frac{1}{i}+\frac{1}{n-i}\right)\right) e^{n I\left( \frac{i}{n}\right)},
$$
where
$$
I(t)=t\log\frac{1}{t}+(1-t)\log\frac{1}{1-t}.
$$
Therefore,
$$
\begin{pmatrix}
n\\i
\end{pmatrix}^\gamma x^i = \left(\frac{n}{2\pi i (n-i)}\right)^{\gamma/2} \left( 1+O\left( \frac{1}{i}+\frac{1}{n-i}\right)\right)^{\gamma} e^{n J_{\gamma ,x}\left( \frac{i}{n}\right)},
$$
which is the statement of the lemma.
\end{proof}
\begin{lemma}
\label{lem: lem1}
We define
\begin{equation}
\label{chisoi}
t_{\gamma ,x}:=\frac{x^{1/\gamma}}{1+x^{1/\gamma}}\quad \text{and}\quad i_{\gamma ,x}:= \lfloor n t_{\gamma ,x}\rfloor.
\end{equation}
We note that $t_{\gamma ,x}$ is the unique solution of the equation $J'_{\gamma ,x}(t)=0$, where
\begin{equation} \label{hamj}
J'_{\gamma ,x}(t)= \gamma \log \frac{1-t}{t}+\log x \; \; \mbox{and} \; \; J''_{\gamma ,x}(t)= \frac{-\gamma}{t(1-t)}.
\end{equation}
Assume that $1\geq x \geq (\log n)^{4\gamma}/n^\gamma$.
\begin{itemize}
\item[a.] If $|i-i_{\gamma ,x}|\geq i_{\gamma ,x}^{3/4}$ then
$$\displaystyle \frac{\binom{n}{i}^\gamma x^i}{\binom{n}{i_{\gamma ,x}}^\gamma x^{i_{\gamma ,x}}} \leq \frac{1}{n^{10}}.$$
\item[b.] If $|i-i_{\gamma ,x}|< i_{\gamma ,x}^{3/4}$ then
$$\displaystyle \frac{\binom{n}{i}^\gamma x^i}{\binom{n}{i_{\gamma ,x}}^\gamma x^{i_{\gamma ,x}}} = \left(1+O\left(\frac{1}{\log n}\right)\right) \exp \left\{ \left[ J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{(n t_{\gamma ,x})^{3/4}}{nt_{\gamma ,x}^2}\right) \right]\frac{(i-i_{\gamma ,x})^2}{2n} \right\}.$$
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: lem1}]
a. Since $\displaystyle \binom{n}{i} \leq \exp(nI(i/n))$, using (\ref{stirl}) we have
\begin{align*}
\displaystyle \frac{\binom{n}{i}^\gamma x^i}{\binom{n}{i_{\gamma ,x}}^\gamma x^{i_{\gamma ,x}}} & \leq \displaystyle (2\pi i_{\gamma ,x})^{\gamma/2} \exp \left( n \left[ J_{\gamma ,x}(i/n) -J_{\gamma ,x}(i_{\gamma ,x}/n) \right]\right)\\
& \leq \displaystyle (2\pi i_{\gamma ,x})^{\gamma/2} \exp \left( n \left[ J_{\gamma ,x}\left( \frac{i_{\gamma ,x} \pm i_{\gamma ,x}^{3/4}}{n}\right) -J_{\gamma ,x}(i_{\gamma ,x}/n) \right]\right),
\end{align*}
where the second line follows from the fact that the function $J_{\gamma ,x}(t)$ is concave and attains its maximum at $t_{\gamma ,x}$.
By Taylor expansion, there exists $\theta \in \left(\frac{i_{\gamma ,x} - i_{\gamma ,x}^{3/4}}{n} , \frac{i_{\gamma ,x} + i_{\gamma ,x}^{3/4}}{n} \right)$ such that
$$J_{\gamma ,x}\left( \frac{i_{\gamma ,x} \pm i_{\gamma ,x}^{3/4}}{n}\right) -J_{\gamma ,x}\left( \frac{i_{\gamma ,x}}{n}\right)=\frac{ \pm i_{\gamma ,x}^{3/4}}{n} J'_{\gamma ,x}\left( \frac{i_{\gamma ,x}}{n}\right)+ J''_{\gamma ,x}(\theta)\frac{i_{\gamma ,x}^{3/2}}{2n^2}.$$
Notice that
\begin{eqnarray}
\Big |J'_{\gamma ,x}\left(\frac{i_{\gamma ,x}}{n}\right) \Big |= \Big |J'_{\gamma ,x}\left(\frac{i_{\gamma ,x}}{n}\right)-J'_{\gamma ,x}\left(t_{\gamma ,x}\right) \Big | &\leq& \sup \limits_{y \in (\tfrac{i_{\gamma ,x}}{n}, t_{\gamma ,x})} |J''_{\gamma ,x}(y)| \Big |\frac{i_{\gamma ,x}}{n}-t_{\gamma ,x} \Big | \notag \\
&\leq& \frac{1}{n}\sup \limits_{y \in (\tfrac{i_{\gamma ,x}}{n}, t_{\gamma ,x})} \Big | \frac{-\gamma}{y(1-y)}\Big | \leq \frac{C}{nx^{1/\gamma}}. \label{jpx}
\end{eqnarray}
Combining this with the fact that
\begin{eqnarray*}
J_{\gamma ,x}''(\theta)=\frac{-\gamma}{\theta(1-\theta)} \leq \frac{-\gamma}{\theta} \leq \frac{-n\gamma}{i_{\gamma ,x}+i_{\gamma ,x}^{3/4}} \leq \frac{-cn}{i_{\gamma ,x}},
\end{eqnarray*}
we get that for $n$ large enough,
\begin{eqnarray*}
\displaystyle \frac{\binom{n}{i}^\gamma x^i}{\binom{n}{i_{\gamma ,x}}^\gamma x^{i_{\gamma ,x}}} &\leq (2\pi i_{\gamma ,x})^{\gamma/2} \exp \left( \frac{C i_{\gamma ,x}^{3/4}}{nx^{1/\gamma}} - ci_{\gamma ,x}^{1/2}\right)\\
& \displaystyle \leq (2\pi n)^{\gamma/2} \exp \left( - c' (\log n)^2\right) \leq \frac{1}{n^{10}},
\end{eqnarray*}
where in the last line we use the estimate $i_{\gamma ,x} \asymp n x^{1/\gamma} \geq (\log n)^4$. This ends the proof of Part a.
b. Suppose that $|i-i_{\gamma ,x}|< i_{\gamma ,x}^{3/4}$. By using \eqref{stirl} and Taylor expansion,
\begin{eqnarray*}
&& \frac{\binom{n}{i}^{\gamma} x^{i}}{\binom{n}{i_{\gamma ,x}}^{\gamma} x^{i_{\gamma ,x}}} = \left[(1+O(i_{\gamma ,x}^{-1})) \frac{i_{\gamma ,x}(n-i_{\gamma ,x})}{i(n-i)} \right]^{\gamma/2} \exp \left( n \left[ J_{\gamma ,x}\left(\frac{i}{n}\right) -J_{\gamma ,x}\left(\frac{i_{\gamma ,x}}{n}\right) \right] \right)\\
&=& \left(1+O\left(\frac{1}{\log n}\right)\right) \exp \left( n \left[ J_{\gamma ,x}'\left(\frac{i_{\gamma ,x}}{n}\right) \frac{(i-i_{\gamma ,x})}{n}+ J_{\gamma ,x}''\left(\frac{i_{\gamma ,x}}{n}\right) \frac{(i-i_{\gamma ,x})^2}{2n^2} + J_{\gamma ,x}'''\left(\theta\right) \frac{(i-i_{\gamma ,x})^3}{6n^3} \right] \right),
\end{eqnarray*}
for some $\theta \in (\tfrac{i_{\gamma ,x}}{n}, \tfrac{i}{n})$.
Observe that
\begin{itemize}
\item $\displaystyle \Big |J'_{\gamma ,x}\left(\frac{i_{\gamma ,x}}{n}\right) \Big | \leq \frac{C'}{nx^{1/\gamma}}\leq \frac{C}{i_{\gamma ,x}} $ as in the proof of Part a.
\item $J_{\gamma ,x}'''(y) = O(y^{-2})$ for all $y$ bounded away from $1$. Therefore $J_{\gamma ,x}'''(\theta) = O(t_{\gamma ,x}^{-2})$.
\item $\displaystyle J_{\gamma ,x}''\left(\frac{i_{\gamma ,x}}{n}\right)=J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{1}{t_{\gamma ,x}^2}\right) \left(\frac{i_{\gamma ,x}}{n}-t_{\gamma ,x}\right) =J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{1}{nt_{\gamma ,x}^2}\right).$
\end{itemize}
Combining these estimates,
\begin{eqnarray*}
n \left[ J_{\gamma ,x}\left(\frac{i}{n}\right) -J_{\gamma ,x}\left(\frac{i_{\gamma ,x}}{n}\right) \right]
& =& \displaystyle O\left(\frac{i_{\gamma ,x}^{3/4}}{i_{\gamma ,x}}\right)+ \left[ J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{1}{nt_{\gamma ,x}^2}\right) + O\left(\frac{1}{t_{\gamma ,x}^2}\right)\frac{i-i_{\gamma ,x}}{n} \right]\frac{(i-i_{\gamma ,x})^2}{2n} \\
& = &\displaystyle O\left(i_{\gamma ,x}^{-1/4}\right)+ \left[ J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{(n t_{\gamma ,x})^{3/4}}{nt_{\gamma ,x}^2}\right) \right]\frac{(i-i_{\gamma ,x})^2}{2n}.
\end{eqnarray*}
Then Part b. follows.
\end{proof}
\begin{lemma}
\label{lem: lem2}
We have
\begin{equation}
A_n(x) M_n(x) - B_n(x)^2=\frac{1}{2} \sum_{i,j=0}^n (i-j)^2 \binom{n}{i}^{2\gamma} \binom{n}{j}^{2\gamma} x^{2(i+j-1)}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: lem2}]
Using formulas of $M_n, A_n$ and $B_n$ given in \eqref{eq: MAB}, we obtain
\begin{align*}
A_n(x) M_n(x) - B_n(x)^2 &=\sum_{i,j=0}^n i^2 \binom{n}{i}^{2\gamma} x^{2(i-1)} \binom{n}{j}^{2\gamma} x^{2 j} - \sum_{i,j=0}^n i \binom{n}{i}^{2\gamma} x^{2i-1}\, j \binom{n}{j}^{2\gamma} x^{2j-1} \notag \\& = \frac{1}{2} \sum_{i,j=0}^n [i^2 + j^2 -2 ij] \binom{n}{i}^{2\gamma} \binom{n}{j}^{2\gamma} x^{2(i+j-1)} \notag \\
&= \frac{1}{2} \sum_{i,j=0}^n (i-j)^2 \binom{n}{i}^{2\gamma} \binom{n}{j}^{2\gamma} x^{2(i+j-1)}.
\end{align*}
The last line is the desired equality.
\end{proof}
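Because the identity in Lemma \ref{lem: lem2} is purely algebraic, it can be verified numerically for small $n$. The following Python sketch (function names are ours) checks it in exact rational arithmetic for $\gamma = 1$, i.e., $\binom{n}{i}^{2\gamma} = \binom{n}{i}^2$.

```python
from fractions import Fraction
from math import comb

def moments(n, two_gamma, x):
    # M_n, A_n, B_n as in Eq. (MAB), with binomial exponent two_gamma = 2*gamma
    M = sum(comb(n, i) ** two_gamma * x ** (2 * i) for i in range(n + 1))
    A = sum(i * i * comb(n, i) ** two_gamma * x ** (2 * i - 2) for i in range(1, n + 1))
    B = sum(i * comb(n, i) ** two_gamma * x ** (2 * i - 1) for i in range(1, n + 1))
    return M, A, B

def lemma2_rhs(n, two_gamma, x):
    # (1/2) sum_{i,j} (i-j)^2 C(n,i)^{2g} C(n,j)^{2g} x^{2(i+j-1)}
    return sum((i - j) ** 2 * comb(n, i) ** two_gamma * comb(n, j) ** two_gamma
               * x ** (2 * (i + j - 1))
               for i in range(n + 1) for j in range(n + 1)) / 2

n, x = 7, Fraction(3, 5)
M, A, B = moments(n, 2, x)      # gamma = 1
assert A * M - B * B == lemma2_rhs(n, 2, x)
```

The assertion holds exactly for any rational $x \neq 0$, since both sides are identical polynomials in $x$.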
The following lemma will be used to obtain asymptotic behaviour of $A_nM_n-B_n^2$ later on.
\begin{lemma}
\label{lem: lem3}
Let $f(x,y)$ be a bivariate function such that $f(x,y)=O(x^2+y^2)$. Consider two sequences $(a_n)$ and $(b_n)$ such that $a_n \rightarrow 0$ and $\frac{b_n(\log n)^4}{a_n} \rightarrow 0$. Then for $k< i_{\gamma,x}-a_n^{-1}$ and $l> i_{\gamma,x}+a_n^{-1}$,
\begin{align*}
& \displaystyle \sum_{i,j=k}^l f(i,j) \exp \left( -(a_n+b_n) \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right)\right)\\
= & \displaystyle \left(1+O\left(\frac{1}{\log n}\right)\right) \sum_{{\substack{|i-i_{\gamma,x}| < (a_nb_n)^{-1/4}\\ |j-i_{\gamma,x}| < (a_nb_n)^{-1/4}}}} f(i,j) \exp \left( -a_n \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right)\right) +O(1).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: lem3}]
Set $\theta = (a_nb_n)^{-1/4}$. If $|i-i_{\gamma,x}|\geq \theta$ or $|j-i_{\gamma,x}|\geq \theta$, then
$$ -(a_n+b_n) \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right) \leq -a_n \theta^2 \leq -c(\log n)^2.$$
Therefore
$$\Bigg|\sum_{{\substack{|i-i_{\gamma,x}| \geq (a_nb_n)^{-1/4} \; \textrm{or}\\ |j-i_{\gamma,x}| \geq (a_nb_n)^{-1/4}}}} f(i,j) \exp \left( -(a_n+b_n) \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right)\right)\Bigg| \leq Cn^4 e^{-c(\log n)^2} =O(1).$$
Consider $|i-i_{\gamma,x}|< \theta$ and $|j-i_{\gamma,x}|< \theta$. Then
$$b_n \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right) = O (b_n \theta^2) =O \left( \frac{1}{(\log n)^2}\right).$$
It implies that
$$\exp \left( -(a_n+b_n) \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right)\right)= \left(1+O\left(\frac{1}{\log n}\right)\right) \exp \left( -a_n \left( (i-i_{\gamma,x})^2+(j-i_{\gamma,x})^2\right)\right). $$
Then the result follows.
\end{proof}
\begin{lemma}
\label{lem: lem4}
Let $g:\; \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function such that
$$\int_{\mathbb{R}}(|g(x)|+|g'(x)|)(x^2+|x|+1)dx < \infty.$$
Then for any $K,l,m$ such that $K,\frac{l}{\sqrt{K}},\frac{m}{\sqrt{K}} \rightarrow \infty$, we have
\begin{equation}
\label{firstinter}
\sum_{i=-l}^m g\left( \frac{i}{\sqrt{K}}\right)= (1+o(1)) \sqrt{K} \int_{\mathbb{R}} g(x)dx,
\end{equation}
and
\begin{equation}
\label{secondinter}
\sum_{i,j=-l}^m (i-j)^2 g\left( \frac{i}{\sqrt{K}}\right)g\left( \frac{j}{\sqrt{K}}\right)= (1+o(1)) (\sqrt{K})^4 \int_{\mathbb{R}^2} (x-y)^2 g(x)g(y)dxdy.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: lem4}]
We can rewrite (\ref{firstinter}) as
$$\underset{K\rightarrow \infty}{\lim} \frac{1}{\sqrt{K}}\sum_{i=-l}^m g\left( \frac{i}{\sqrt{K}}\right)= \int_{\mathbb{R}} g(x)dx < \infty.$$
Indeed, for any $\epsilon >0$ there exists $N_{\epsilon} > \epsilon^{-1}$ such that
$$\int_{\mathbb{R}\setminus [-N_{\epsilon},N_{\epsilon}]} |g(x)|dx\leq \epsilon. $$
It is clear that
$$\underset{K\rightarrow \infty}{\lim} \frac{1}{\sqrt{K}}\sum_{-\sqrt{K}N_{\epsilon} \leq i \leq \sqrt{K}N_{\epsilon} } g\left( \frac{i}{\sqrt{K}}\right)= \int_{-N_{\epsilon}}^{N_{\epsilon}} g(x)dx < \infty,$$
so there exists $K_{\epsilon}>0$ such that for any $K>K_{\epsilon}$,
$$\left| \frac{1}{\sqrt{K}}\sum_{-\sqrt{K}N_{\epsilon} \leq i \leq \sqrt{K}N_{\epsilon} } g\left( \frac{i}{\sqrt{K}}\right)- \int_{-N_{\epsilon}}^{N_{\epsilon}} g(x)dx\right| \leq \epsilon.$$
For the remainder term, using the fact that there exists $M>0$ such that $|g(x)|x^4 \leq M,\; \forall x\in \mathbb{R}$,
\begin{align*}
&\displaystyle\left| \frac{1}{\sqrt{K}}\sum_{i\in [-l,m]\setminus [-\sqrt{K}N_{\epsilon},\sqrt{K}N_{\epsilon}] } g\left( \frac{i}{\sqrt{K}}\right) \right| \leq \frac{2}{\sqrt{K}}\sum_{i>\sqrt{K}N_{\epsilon}} \frac{M}{(i/\sqrt{K})^4} \\
\leq &\displaystyle 2MK^{3/2} \int_{\sqrt{K}N_{\epsilon}}^{\infty} x^{-4}dx =\frac{2MK^{3/2}}{3 (\sqrt{K}N_{\epsilon})^3} = \frac{2M}{3N_{\epsilon}^3}\leq \epsilon.
\end{align*}
In conclusion, for any $\epsilon >0$ there exists $K_{\epsilon}>0$ such that for any $K>K_{\epsilon}$,
$$\left| \frac{1}{\sqrt{K}}\sum_{i=-l}^mg\left( \frac{i}{\sqrt{K}}\right)- \int_{\mathbb{R}} g(x)dx\right| \leq 3\epsilon.$$
It implies (\ref{firstinter}). By the same argument, we can prove (\ref{secondinter}).
\end{proof}
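Numerically, \eqref{firstinter} is the familiar Riemann-sum approximation. A quick illustration (the script is ours) with $g(x) = e^{-x^2}$, for which $\int_{\mathbb{R}} g(x)dx = \sqrt{\pi}$:

```python
import math

def lhs_sum(g, K, l, m):
    # left-hand side of (firstinter): sum_{i=-l}^{m} g(i / sqrt(K))
    s = math.sqrt(K)
    return sum(g(i / s) for i in range(-l, m + 1))

g = lambda x: math.exp(-x * x)              # integral over R equals sqrt(pi)
K = 10_000
l = m = 10 * int(math.sqrt(K))              # l/sqrt(K), m/sqrt(K) -> infinity
approx = lhs_sum(g, K, l, m)
exact = math.sqrt(K) * math.sqrt(math.pi)   # sqrt(K) * int_R g
assert abs(approx / exact - 1) < 1e-3
```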
Finally, we now bring all previous technical lemmas to prove Proposition \ref{prop: M}.
\begin{proof}[Proof of Proposition \ref{prop: M}]
We have
$$M_n(x)= \sum_{|i-i_{\gamma,x}| \geq i_{\gamma,x}^{3/4}} \binom{n}{i}^{2\gamma}x^{2i}+\sum_{|i-i_{\gamma,x}| < i_{\gamma,x}^{3/4}}\binom{n}{i}^{2\gamma}x^{2i}=:M_{1,n}+M_{2,n}.$$
By Lemma \ref{lem: lem1} Part a.,
$$\frac{M_{1,n}}{\binom{n}{i_{\gamma,x}}^{2\gamma}x^{2i_{\gamma,x}}} \leq \frac{n+1}{n^{10}} \leq \frac{2}{n^{9}};$$
and by Lemma \ref{lem: lem1} Part b., Lemma \ref{lem: lem3} and Lemma \ref{lem: lem4},
\bea{
\frac{M_{2,n}}{\binom{n}{i_{\gamma,x}}^{2\gamma}x^{2i_{\gamma,x}}} &=& (1+o(1)) \sum_{|i-i_{\gamma, x}| < i_{\gamma, x}^{3/4} } \exp \left( \left[ J_{\gamma ,x}''\left(t_{\gamma ,x}\right) + O\left(\frac{(n t_{\gamma ,x})^{3/4}}{nt_{\gamma ,x}^2}\right) \right]\frac{(i-i_{\gamma ,x})^2}{n} \right) \\
&=& (1+o(1)) \int_{\mathbb{R}}e^{-x^2}dx \times \sqrt{n/|J_{\gamma ,x}''(t_{\gamma,x})|} \\
&=& (\sqrt{ \pi} +o(1)) \sqrt{\frac{n x^{1/\gamma}}{\gamma (1+x^{1/\gamma})^2 }}.
}
Now according to Lemma \ref{lem: lem2}, we have
$$ A_n(x)M_n(x) -B^2_n(x)=\frac{1}{2}\sum_{i,j=0}^n (i-j)^2 \binom{n}{i}^{2\gamma}\binom{n}{j}^{2\gamma}x^{2(i+j-1)}.$$
Then by the same argument as above, we get
\bea{
\frac{A_n(x)M_n(x) -B^2_n(x)}{\binom{n}{i_{\gamma,x}}^{4\gamma}x^{4i_{\gamma,x}-2}}&=&\frac{1}{2}(1+o(1))\int_{\mathbb{R}^2}(x-y)^2 e^{-(x^2+y^2)}dxdy \times \left(n/|J_{\gamma ,x}''(t_{\gamma,x})|\right)^2 \\
&=& \left(\frac{ \pi}{2} +o(1)\right) \left(\frac{n x^{1/\gamma}}{\gamma (1+x^{1/\gamma})^2 }\right)^2,
}
which completes the proof.
\end{proof}
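As an independent sanity check of the leading constant, for $\gamma = 1$ the ratio $M_n(x)/\max_i \binom{n}{i}^2 x^{2i}$ can be compared with $\sqrt{\pi n x/(1+x)^2}$ at moderate $n$. The short script below is ours; it uses exact rational arithmetic up to the final comparison.

```python
import math
from fractions import Fraction

def normalized_Mn(n, x):
    # gamma = 1: M_n(x) divided by its largest term, via exact rationals
    terms = [Fraction(math.comb(n, i)) ** 2 * x ** (2 * i) for i in range(n + 1)]
    return float(sum(terms) / max(terms))

n, x = 1000, Fraction(1, 2)
xf = float(x)
# sqrt(pi) * sqrt(n x^{1/gamma} / (gamma (1 + x^{1/gamma})^2)) with gamma = 1
predicted = math.sqrt(math.pi * n * xf / (1 + xf) ** 2)
assert abs(normalized_Mn(n, x) / predicted - 1) < 0.05
```

For $x = 1$ the ratio is $\binom{2n}{n}/\binom{n}{n/2}^2 \sim \sqrt{\pi n}/2$, which agrees with the formula above.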
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Contacts of surfaces at the atomic length scale are crucial in many modern applications, from experimental techniques such as nanoindentation~\cite{landman_atomistic_1990, oliver_measurement_2004, fischer-cripps_nanoindentation_2004} or atomic/friction force microscopy (AFM/FFM)~\cite{binnig_atomic_1986, meyer_scanning_2004, bennewitz_friction_2005} to nanotechnologies applied, for example, in nano-/microelectromechanical-systems (NEMS/MEMS)~\cite{komvopoulos_surface_1996, spearing_materials_2000, maboudian_surface_2004, kim_nanotribology_2007, bhushan_nanotribology_2008}. The reliability, performance, and lifetime of such systems, for example, depend sensitively on the interactions between contacting materials. Furthermore, detailed insights into such contacts are of fundamental interest for better comprehension of tribological processes, such as nanoscale wear~\cite{bhushan_nanotribolgy_1995, gnecco_abrasive_2002, bhushan_nanotribology_2005, gnecco_fundamentals_2007, gotsmann_atomistic_2008, bhaskaran_ultralow_2010, jacobs_nanoscale_2013, mishina_wear_2013}, for which there is still a lack of understanding due to its highly complex nature~\cite{kim_nano-scale_2012}.
Metal-ceramic interfaces~\cite{howe_bonding_1993} are of fundamental and technological interest because they exhibit advantages of both types of materials such as valuable mechanical properties, high thermal stability, and degradation resistance~\cite{johansson_electronic_1995}. Hence, such interfaces are important in numerous applications such as communication devices and nanoelectronics~\cite{ruhle_preface_1992}. In this paper the interface between the metal Al and the transition-metal nitride TiN is investigated. This interface consists of a soft material and a hard material, which simplifies wear processes because the softer material is primarily affected.
Since the 1980s classical molecular dynamics (MD) simulations have commonly been applied to nanotribological problems (see, e.g., Refs.~\onlinecite{thompson_simulations_1989, landman_structural_1989, bhushan_computer_2000, mulliah_molecular_2004, kenny_molecular_2005, david_schall_molecular_2007, szlufarska_recent_2008, vernes_three-term_2012, eder_derjaguin_2013, eder_methods_2014, eder_analysis_2014}) and still constitute a standard tool in numerical atomistic simulations. Nevertheless, during the last decade density functional theory (DFT) calculations have been increasingly used in this field (see, e.g., Refs.~\onlinecite{zhong_first-principles_1990, dag_atomic_2004, ciraci_ab-initio_2007, zilibotti_ab_2009, garvey_shear_2011, zilibotti_ab_2011, cahangirov_frictional_2012, kwon_enhanced_2012, wang_theoretical_2012, wang_atomic-scale_2012, garvey_pressure_2013, wolloch_ab-initio_2014}) and should be seen as an extension to the more common computational tools in tribology. DFT allows for parameter-free calculations and an accurate description of quantum-mechanical systems and does not depend on empirical potentials. However, due to computational challenges DFT calculations are currently limited to rather small systems of typically a few hundred atoms. Since DFT has proven to yield reliable results for this class of systems~\cite{finnis_theory_1996, lundqvist_density-functional_2001, sinnott_ceramic/metal_2003}, it is also employed in this study to analyze the electronic and atomic structure of the Al/TiN interfaces and to determine properties such as adhesion energies. Results obtained with DFT, such as potential-energy curves, can be used as an input for, e.g., large-scale classical MD simulations~\cite{ercolessi_interatomic_1994, jaramillo-botero_general_2014}. 
Furthermore, the combination of approaches such as DFT and MD as well as the continuously increasing available computer power and advances in software tools promise the possibility to investigate even larger and more realistic systems in the near future.
Al/TiN and similar interfaces have already been investigated by various researchers with experimental~\cite{avinun_nucleation_1998, chun_interfacial_2001-1} and theoretical~\cite{liu_adhesion_2003, liu_first-principles_2004, liu_first-principles_2005, song_adhesion_2006, zhang_first-principles_2007, zhang_effects_2007, song_mechanism_2008, yadav_first-principles_2012, yadav_first-principles_2014} methods. Here, however, the emphasis lies on a realistic way to simulate the separation of the interfaces as well as on a comprehensive discussion of interfaces between Al and TiN low-index surfaces. To assess this problem, the effects of various configurations at the interface as well as approach and subsequent separation of Al and TiN slabs are analyzed. Various tests on, e.g., the effect of adjusted lattice parameters, the simulation cell size, and various approximations for the exchange correlation functional in DFT are carried out.
\section{Computational Details}\label{sec:comp_methods}
\subsection{Density Functional Theory Calculations}
To study the interfacial properties of Al and TiN slabs upon approach and subsequent separation, we performed first-principles calculations within the framework of DFT employing the Vienna Ab initio Simulation Package (\textsc{VASP})~\cite{kresse_ab_1993, kresse_ab_1994, kresse_efficient_1996, kresse_efficiency_1996}. \textsc{VASP} utilizes a plane-wave basis and periodic boundary conditions. Projector augmented-wave (PAW) pseudopotentials~\cite{blochl_projector_1994,kresse_ultrasoft_1999} were used to model the potential between the ionic core and the valence electrons. Unless explicitly mentioned otherwise, the generalized gradient approximation (GGA) in the Perdew, Burke, and Ernzerhof (PBE) parametrization~\cite{perdew_generalized_1996} was applied to describe the exchange and correlation functional. Since GGAs often underestimate binding and adhesion energies~\cite{stampfl_density-functional_1999}, the local-density approximation (LDA)~\cite{perdew_self-interaction_1981}, which usually overestimates these quantities~\cite{van_de_walle_correcting_1999}, was also employed for comparison. Additionally, the van der Waals (vdW) density functional (DF) optB86b~\cite{klimes_chemical_2010,klimes_van_2011} was used, which includes a nonlocal correlation term approximating vdW interactions. vdW-DFs have been applied to a wide range of materials (e.g., see Refs.~\onlinecite{chakarova-kack_application_2006, sony_importance_2007, carrasco_wet_2011, mittendorfer_graphene_2011, antlanger_pt_3zr0001:_2012, graziano_improved_2012, bedolla_effects_2014, choi_growth_2014, bedolla_density_2014}) and have proven to be of good accuracy. Although vdW interactions should not play a major role in the investigated systems, the calculations are included for comparison and clarification. The calculation parameters were carefully chosen to obtain accurate total energies. An energy cutoff of \unit[800]{eV} was used for the plane-wave basis. 
Unless noted otherwise, the Brillouin zone sampling was performed using a \(\Gamma\)-centered \(15\times15\times1\) Monkhorst-Pack mesh~\cite{monkhorst_special_1976}. Both settings allow for total energies accurate to \unit[1]{meV/atom}. While the tetrahedron method with Bl\"ochl corrections~\cite{blochl_improved_1994} was utilized for static calculations, for relaxations a smearing of \unit[0.11]{eV} using the first-order method of Methfessel and Paxton~\cite{methfessel_high-precision_1989} was selected. In order to relax the structures a damped molecular dynamics (MD) algorithm was employed, allowing for atomic movements until an energy convergence criterion of \unit[\(10^{-5}\)]{eV} was fulfilled. This damped MD scheme was chosen instead of the widely used quasi-Newton or conjugate-gradient algorithms, because these caused convergence problems as well as the tendency to remain stuck in local minima. Each converged relaxation was followed up by a static calculation to obtain more accurate total energies. For electronic self-consistency cycles a convergence criterion of \unit[\(10^{-6}\)]{eV} was used. All simulations were performed at \unit[0]{K}.
\subsection{Simulation Model}\label{subsec:sim-model}
To model our systems we built simulation cells from a fcc Al slab at the bottom and a rock salt TiN slab above (see Fig.~\ref{fig:Al-TiN-initial}). Such cells were constructed for the low-index surface orientations (001), (011), and (111) of both slabs. Only configurations with slabs of the same surface orientations at the interface and without any relative rotations were considered. The two slabs were separated by a gap which is given by the vertical distance between the top Al and bottom TiN layers and will be referred to as the ``interface distance''. The vertical distance between the bottom Al and top TiN layers, which is the sum of the interface distance and the heights of the two slabs, is called ``slab height''. In the case of (111) slabs this height is measured up to the top Ti and N layer for Ti and N termination, respectively. Unless otherwise stated 1\(\times\)1 surface cells were used, which represent an infinitely extended surface due to the periodic boundary conditions. The Al slab consisted of at least seven layers, and the TiN slab consisted of a minimum of six Ti and six N layers. These thicknesses were found to be sufficient to converge the surface energies and to mimic bulklike features in the center of the respective slab. These system dimensions are in good agreement with other published work~\cite{marlo_density-functional_2000, liu_first-principles_2004, zhang_first-principles_2007, yadav_first-principles_2014}.
\begin{figure}[hbt]
\centering
\includegraphics[width=.25\linewidth]{fig1.pdf}
\caption{Side view of a (111) Al/TiN interface (TiN: Ti terminated). The simulation interface cell is indicated by the solid black lines. During relaxations the orange Al, cyan N and purple Ti atoms were kept rigid, while the red Al, green N and blue Ti ones were allowed to relax.}
\label{fig:Al-TiN-initial}
\end{figure}
The (111)~TiN slab can be terminated with either Ti or N atoms. To investigate the stability of these terminations a thermodynamic analysis~\cite{wang_hematite_1998, reuter_composition_2001} was performed to calculate the surface Gibbs free energy for the off-stoichiometric slabs~\cite{lee_stoichiometry_2011}. The surface Gibbs free energy \(\Omega\) for surface termination \(i\) without vibrational contributions is given by
\begin{equation}
\Omega^i = \frac{1}{2} \left( E^i_{slab} - N^i_{Ti} E^{bulk}_{TiN} \right) - \Gamma^i_{Ti,N} E_N - \Gamma^i_{Ti,N} \Delta\mu_N,
\label{equ:surf-energy}
\end{equation}
where \(E^i_{slab}\) is the total energy of the slab with termination \(i\), \( N^i_{Ti}\) is the number of Ti atoms in the slab, \(E^{bulk}_{TiN}\) is the total energy of bulk TiN, and \(E_N\) is the total energy of a nitrogen atom. The latter two terms in Eq.~\eqref{equ:surf-energy} are necessary to calculate the surface energy of off-stoichiometric slabs. The number of off-stoichiometric atoms \(\Gamma^i_{Ti,N}\) is defined as
\begin{equation}
\Gamma^i_{Ti,N} = \frac{1}{2} \left( N^i_N - N^i_{Ti} \frac{N^{bulk}_N}{N^{bulk}_{Ti}} \right),
\label{equ:gamma}
\end{equation}
where \(N^i_j\) and \(N^{bulk}_j\) are the number of atoms of type j in the slab and in bulk, respectively. For rock-salt bulk TiN the fraction \(N^{bulk}_N/N^{bulk}_{Ti}\) in Eq.~\eqref{equ:gamma} is equal to 1. \(\Delta\mu_N\) is the deviation of the nitrogen chemical potential \(\mu_N\) from the molecular reference \(\frac{1}{2} E_{N_2}\),
\begin{equation}
\Delta\mu_N = \mu_N - \frac{1}{2} E_{N_2}.
\label{equ:delta-mu}
\end{equation}
In Figure~\ref{fig:TiN-term} the calculated surface Gibbs free energy is plotted for the N- and Ti-terminated TiN (111) slabs in the stability range of nitrogen in TiN obtained from the heat of formation of bulk TiN~\cite{chase_nist-janaf_1998} at \unit[0]{K}, \(\Delta H^0_{f,(TiN)} = \unit[-3.461]{eV}\), and the chemical potential of gas-phase nitrogen, i.e., \( \Delta H^0_f \le \Delta\mu_N \le 0\). Fig.~\ref{fig:TiN-term} shows that the favorable termination of a (111) TiN slab depends on the chemical potential of nitrogen, in agreement with Refs.~\onlinecite{liu_first-principles_2004, wang_surface_2010}. Since both cases are found in reasonable nitrogen concentration ranges, both terminations are investigated.
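Equations \eqref{equ:surf-energy}--\eqref{equ:delta-mu} translate into a few lines of code. In the sketch below all function names and total energies are hypothetical placeholders chosen for illustration only (energies in eV); it also verifies that for a stoichiometric slab ($\Gamma = 0$) the surface energy is independent of $\Delta\mu_N$.

```python
def off_stoichiometry(n_N, n_Ti, n_N_bulk=1, n_Ti_bulk=1):
    # Gamma^i_{Ti,N}, Eq. (2); for rock-salt TiN the bulk ratio N_N/N_Ti is 1
    return 0.5 * (n_N - n_Ti * n_N_bulk / n_Ti_bulk)

def surface_gibbs_energy(e_slab, n_Ti, n_N, e_tin_bulk, e_N, delta_mu_N):
    # Omega^i, Eq. (1), without vibrational contributions; the factor 1/2
    # accounts for the two equivalent surfaces of the slab
    gamma = off_stoichiometry(n_N, n_Ti)
    return 0.5 * (e_slab - n_Ti * e_tin_bulk) - gamma * e_N - gamma * delta_mu_N

# stoichiometric slab: Gamma = 0, so Omega does not depend on delta_mu_N
w1 = surface_gibbs_energy(-100.0, 6, 6, -19.0, -3.1, 0.0)
w2 = surface_gibbs_energy(-100.0, 6, 6, -19.0, -3.1, -3.461)
assert w1 == w2
```

For off-stoichiometric terminations (e.g., one extra N layer per 1\(\times\)1 cell), $\Omega$ becomes a linear function of $\Delta\mu_N$ with slope $-\Gamma$, which is what produces the crossing of the N- and Ti-terminated lines in Fig.~\ref{fig:TiN-term}.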
Dipole corrections~\cite{neugebauer_adsorbate-substrate_1992} perpendicular to the interface (z direction) were tested for the systems but were found to be negligible. Atop the TiN slab a vacuum spacing of at least \unit[10]{\AA} was included to decouple periodically repeated cells in the z direction. The lattice parameters of the single slabs, \unit[4.04]{\AA} and \unit[4.254]{\AA} for Al and TiN, respectively, were obtained from bulk calculations. These values are in very good agreement with the experimental lattice constants of \unit[4.05]{\AA} and \unit[4.265]{\AA} for Al and TiN, respectively~\cite{wyckoff_crystal_1963}. The relative error between calculated and experimental values is below 0.5\%. For the simulation cells combining Al and TiN slabs, unless otherwise stated, an intermediate lattice parameter of \unit[4.144]{\AA} was used for the lateral xy lattice vectors to equalize the relative error of about 2.6\% for both materials. For the z direction the material-specific values were kept assuming a pseudomorphic interface. In reality such a combination of stretching and compression of thick slabs does not usually occur, but dislocations at the interface or an incommensurate contact are possible. Thus, some of the atoms on both sides of the interface would not be aligned perfectly, but rather sample slightly different local environments. For computational reasons, here these different local arrangements are assessed by considering various orientations at the interface as limiting cases.
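The quoted intermediate lateral lattice parameter is consistent with choosing the value that equalizes the relative strain of the two slabs, which amounts to the harmonic mean of the two bulk constants; the following sketch (ours) reproduces the quoted numbers under this assumption.

```python
def equal_strain_lattice(a_small, a_large):
    # (a - a_small)/a_small = (a_large - a)/a_large  =>  harmonic mean
    return 2.0 * a_small * a_large / (a_small + a_large)

a_al, a_tin = 4.04, 4.254                   # calculated bulk values (Angstrom)
a_int = equal_strain_lattice(a_al, a_tin)   # ~4.144 Angstrom
strain_al = (a_int - a_al) / a_al           # tensile strain of the Al slab
strain_tin = (a_tin - a_int) / a_tin        # compressive strain of the TiN slab
assert abs(a_int - 4.144) < 1e-3
assert abs(strain_al - strain_tin) < 1e-12  # both ~2.6%
```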
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\linewidth]{fig2.pdf}
\caption{Surface phase diagram for TiN. The surface Gibbs free energy \(\Omega\) [see Eq.~\eqref{equ:surf-energy}] referenced to a 1\(\times\)1 surface cell of the (111) orientation is plotted vs the deviation \(\Delta\mu_{N}\) of the nitrogen chemical potential from its molecular reference [see Eq.~\eqref{equ:delta-mu}] for N- and Ti-terminated (111) TiN slabs (solid lines) as well as for (001) and (011) orientations (dashed lines).}
\label{fig:TiN-term}
\end{figure}
The approach of the two slabs was simulated by moving the upper slab in discrete steps along the negative z direction and allowing for electronic and atomic relaxations after each step. Alternatively, moving the bottom slab toward the upper slab or both toward each other would not affect the results. For the atomic relaxations the top TiN (three Ti and three N) and the bottom three Al layers were kept fixed at bulklike distances, whereas the intermediate ``free'' ones were allowed to fully relax. This is depicted in Fig.~\ref{fig:Al-TiN-initial} for the Ti-terminated (111) surface orientation. For the approaching movement a step size of \unit[0.2]{\AA} was used for all configurations. Before the slabs were brought into contact, the free layers were allowed to relax in order to simulate surfaces in their equilibrium for the chosen lattice parameters. The separation of the slabs was initiated from the equilibrium, i.e., the structure with the lowest energy determined during the approach. To simulate a realistic separation of the slabs only the topmost, rigid TiN layers were moved in discrete steps in the positive z direction, again allowing for electronic and atomic relaxations after each step. The choice of the step size is crucial for the separation process. Separation velocities allowing for an adiabatic behavior of the system were assumed, meaning that the system continuously fully adjusts during the separation at each step. However, this assumption should also be valid for velocities up to several hundred meters per second as long as these are still considerably lower than the material-specific speed of sound, which is above \unit[6000]{m/s} for Al and TiN~\cite{kundu_ultrasonic_2012}. It is evident that the step size has to be small enough to mimic the adiabatic relaxation, but on the other hand, a smaller step size leads to increased computational costs. 
For the investigated systems a step size of \unit[0.1]{\AA} was found to be a practical trade-off because calculations showed this value to be necessary to converge the final results of the simulated separation processes. Smaller step sizes down to \unit[0.01]{\AA} were also considered for approach and separation but did not yield qualitatively different results. Clearly, quantities such as the slab height corresponding to the initial material transfer can be determined more accurately.
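The stepwise separation protocol reduces to a short driver loop. In the sketch below (ours) the DFT relaxation is abstracted into a caller-supplied \texttt{energy\_of} function and the geometry into a list of $z$ coordinates plus a set of rigid top-layer indices; this is an illustration of the bookkeeping, not of the actual \textsc{VASP} workflow.

```python
def quasistatic_separation(z, rigid_top, energy_of, step=0.1, n_steps=5):
    """Shift only the rigid top layers by +step along z after each step,
    mimicking the adiabatic separation protocol; energy_of stands in for the
    relaxed DFT total energy of the resulting geometry."""
    z = list(z)
    trajectory = []
    for _ in range(n_steps):
        z = [zi + step if i in rigid_top else zi for i, zi in enumerate(z)]
        slab_height = max(z) - min(z)   # bottom Al layer to top TiN layer
        trajectory.append((slab_height, energy_of(z)))
    return trajectory

# toy geometry: three fixed layers at the bottom, three rigid layers on top
traj = quasistatic_separation([0.0, 2.0, 4.0, 10.0, 12.0, 14.0],
                              rigid_top={3, 4, 5},
                              energy_of=lambda z: 0.0)
assert abs(traj[-1][0] - 14.5) < 1e-9   # top moved by 5 * 0.1 Angstrom
```

In the real calculations the free intermediate layers are re-relaxed after every displacement, so \texttt{energy\_of} would itself trigger a constrained ionic relaxation.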
In order to study the effects of different alignments of the slabs at the interface, the upper slab was also laterally placed on various sites with respect to the surface of the lower slab. The definitions of the configurations are depicted in Fig.~\ref{fig:bond-sites} by marking the high-symmetry points on the low index TiN surfaces where the next Al atom can be placed. In this context the interaction energy \(E_I(z)\) is an important quantity, which is defined as the difference of the total energy of the interacting slabs \(E_{(Al/TiN)}(z)\) at slab height \(z\) and the reference energies of the independent slabs, \(E_{(Al)}\) and \(E_{(TiN)}\),
\begin{equation}
E_I(z) = E_{(Al/TiN)}(z) - E_{(Al)} - E_{(TiN)}.
\label{equ:interaction-energy}
\end{equation}
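Given the total energies along an approach curve, Eq.~\eqref{equ:interaction-energy} and the adhesion energy (the minimum of $E_I$ over the slab height) follow directly. The helper below (ours) demonstrates this on a synthetic Lennard-Jones-shaped curve rather than on actual DFT data.

```python
def interaction_energy(e_interface, e_al, e_tin):
    # E_I(z), Eq. (4): interface total energy minus the isolated-slab references
    return e_interface - e_al - e_tin

def adhesion(curve, e_al, e_tin):
    # curve: list of (slab height z, E_(Al/TiN)(z)) pairs
    # returns (equilibrium height, adhesion energy = min of E_I)
    z_eq, e_min = min(curve, key=lambda p: p[1])
    return z_eq, interaction_energy(e_min, e_al, e_tin)

# synthetic example: LJ-like well of depth eps on top of the slab references
e_al, e_tin, eps, sigma = -50.0, -80.0, 0.5, 2.0
curve = [(z, e_al + e_tin + 4 * eps * ((sigma / z) ** 12 - (sigma / z) ** 6))
         for z in [1.9 + 0.05 * k for k in range(40)]]
z_eq, e_adh = adhesion(curve, e_al, e_tin)
assert -eps <= e_adh < -0.9 * eps                 # adhesion ~ well depth
assert abs(z_eq - 2 ** (1 / 6) * sigma) < 0.06    # equilibrium near 2^(1/6) sigma
```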
\begin{figure}[hbt]
\centering
\includegraphics[width=1.\linewidth]{fig3.pdf}
\caption{Top view of (001), (011), and Ti-terminated (111) TiN surfaces. For each orientation the 1\(\times\)1 surface cell is presented. Filled circles indicate atoms in the top surface layer (Ti and N are given by large blue and small green circles, respectively), while empty circles label atoms below the top surface layer. To obtain a N-terminated (111) TiN surface the Ti and N atoms of the Ti-terminated surface have to be exchanged. High-symmetry points are highlighted. For the (011) TiN surface the ``Ti plane'' and ``N plane'' are marked by dashed lines.}
\label{fig:bond-sites}
\end{figure}
\section{Results and Discussion}\label{sec:results}
\subsection{Removal of Layers from an Al Slab}\label{subsec:remove-al}
As a first step, the energy cost for removing layers from an Al slab was examined for all three low-index surface orientations. The removal of the layers was simulated by placing the layers at a large distance from the slab, which does not allow for interactions between the slab and layers. The TiN slab is not investigated here because the Al slab is assumed to be mainly affected by deformations or material transfer within an Al/TiN interface, since TiN forms a much more rigid lattice. The energies for the removal of the top Al layer are given in Table~\ref{tab:removal}. These removal energies are calculated for simulation cells using the bulk lateral lattice parameters as well as the modified ones used for the Al/TiN simulation cell. For the modified lattice parameters the removal energies are typically overestimated by about 5\%--10\%, meaning that it is actually easier to remove layers from the equilibrium structure. The removal energy for the modified Al slab is increased because the lateral stretching causes a vertical compression of the slab if relaxations are allowed. This compression occurs to minimize the volume change and locally strengthens the bonding of the surface layers. This effect is strongest for the top surface layer, which moves about \unit[0.24]{\AA} towards the rigid part and becomes weaker for the subsurface layers; for example, the fourth layer is only shifted by about \unit[0.08]{\AA}.
The influence of compressive and tensile stress on the removal energies of the top Al layer is illustrated in Fig.~\ref{fig:Al-xy-removal} for the three low-index surface orientations. The data points for the (001) and (111) surfaces follow a similar trend, whereas the behavior of the (011) surface clearly deviates. This difference probably arises from the openness of the (011) surface and the significant impact of relaxations. The influence of stress, found for all surfaces, supports the notion of stress-assisted wear~\cite{gotsmann_atomistic_2008,jacobs_nanoscale_2013}, which posits that stress can reduce the activation barriers for the detachment of atoms from a structure. Furthermore, different approximations for the exchange correlation functional were tested. As expected, LDA and the vdW-DF optB86b yield removal energies that are larger by about 15\%--20\%, with the LDA values typically a few percent above those of the vdW-DF (see Table~\ref{tab:removal}).
\begin{table}[hbt]
\caption{\label{tab:removal} Energy costs to remove the top layer from an Al slab for the (001), (011), and (111) surface orientations using PBE, LDA, and vdW-DF optB86b. The removal energies are given in \unit{eV} per 1\(\times\)1 surface cell. \(a_{Al}\) is the Al bulk lattice parameter, whereas \(a_{Al/TiN}\) corresponds to the modified Al/TiN interface.}
\begin{ruledtabular}
\begin{tabular}{lccc}
& (001) & (011) & (111) \\
PBE (\(a_{Al}\)) & 1.08 & 1.78 & 0.78 \\
PBE (\(a_{Al/TiN}\)) & 1.16 & 1.79 & 0.87 \\
LDA (\(a_{Al/TiN}\)) & 1.33 & 2.02 & 1.02 \\
vdW (\(a_{Al/TiN}\)) & 1.27 & 1.89 & 1.00
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.9\linewidth]{fig4.pdf}
\caption{PBE energy costs to remove the top Al layer for the (001), (011), and (111) surface orientations. The lateral effects of stretching and compression of the 1\(\times\)1 surface cell on the removal energies are shown. The Al bulk lattice constant is used as a reference value at 0\%. The vertical line indicates the intermediate Al/TiN interface lattice parameter, while the other solid lines are given to guide the eye.}
\label{fig:Al-xy-removal}
\end{figure}
\subsection{Lateral Alignments at the Al/TiN Interface}\label{subsec:alignments}
Effects of various lateral alignments of the slabs at the interface (see Fig.~\ref{fig:bond-sites}) were investigated for the different surface orientations. These studies revealed the strong dependence of equilibrium properties such as adhesion energies and the equilibrium distances on the chosen configuration. The calculated interaction energies~[Eq.~\eqref{equ:interaction-energy}] of relaxed interfaces are shown in Figs.~\ref{fig:bond-sites-pec}(a)--(d) for slab heights around the energy minima, which are equivalent to the adhesion energies for each alignment. In general, the top Al atoms prefer the proximity of N atoms over Ti. The bonding situation will be discussed in more detail in the following paragraphs. From an energetic point of view, material transfer between the slabs should be possible only if the energy cost to remove layers is compensated for. Thus, the energy gain due to adhesion has to be larger than the energy cost to remove one or more layers. This argument is sketched in Figs.~\ref{fig:bond-sites-pec}(a)--(d) by including a horizontal line at the negative value of the Al removal energy for each surface orientation. It has been observed experimentally that metal-ceramic interfaces with weak and strong interfacial adhesion break upon stress at the interface and in bulk areas, respectively~\cite{howe_bonding_1993-1, ernst_metal-oxide_1995}. We find that the four surfaces investigated exhibit markedly different behavior. The adhesion energies and the equilibrium distances, i.e., the interface distances at the minimum of each energy curve, depend strongly on the surface orientation as well as on the alignment at the interface. In the case of the (111) surfaces all configurations should lead to the removal of at least one Al layer. For the (011) surfaces this is the case only for three alignments (see Fig.~\ref{fig:bond-sites}), Al/N~(top), Al/TiN~(hollow), and Al/N~(bridge). 
In contrast, for the (001) surfaces no material transfer should occur since for all cases studied the energy to remove one Al layer is larger than the adhesion energy.
\begin{figure*}[hbt]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig5a.pdf}
\caption{(111) Ti-terminated}
\label{fig:111-Titerm-bond-site}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig5b.pdf}
\caption{(111) N-terminated}
\label{fig:111-Nterm-bond-site}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig5c.pdf}
\caption{(011)}
\label{fig:110-bond-site}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig5d.pdf}
\caption{(001)}
\label{fig:100-bond-site}
\end{subfigure}
\caption{Calculated PBE interaction energies of the relaxed Al/TiN interface for the (111) Ti-terminated, (111) N-terminated, (011), and (001) surface orientations. Various lateral alignments of the two slabs are considered (see Fig.~\ref{fig:bond-sites}). The horizontal, dashed orange lines give the energy costs to remove at least one layer from an Al slab of the corresponding surface orientation.}
\label{fig:bond-sites-pec}
\end{figure*}
As mentioned above, in reality, surfaces with a bulk lattice mismatch are usually not perfectly aligned at an interface. Consequently, not all atoms are placed on the same contact site; therefore, interfacial properties such as the adhesion energy are averages over the actually occupied sites. The configurations presented here, however, constitute limiting cases of perfectly aligned systems, such that the properties of real interfaces should lie within these boundaries.
Generally, relaxation effects have to be accounted for to obtain the correct equilibrium values of the adhesion energy and the interface distance as well as to predict the occurrence of material transfer. A comparison between the relaxed and static results is given in Fig.~\ref{fig:bond-sites-stc-rlx} for the (111) surfaces. For rather closed TiN surfaces, such as the (001) and Ti-terminated (111) orientations [see Fig.~\ref{fig:bond-sites-stc-rlx}(a)], relaxations typically cause only small changes in the equilibrium quantities of the interface. Hence, computationally ``cheap'' static calculations give good estimates, unless pronounced changes in the structure of the Al slab occur. This is, for example, the case for the Al/Ti~(hollow) alignment of the (111)~Al/TiN (Ti-terminated) interface, since the interfacial Al atom relaxes towards the energetically more favorable fcc contact site. In the case of the more open (011) surface, relaxations show more pronounced effects for all alignments and should be taken into account. Nevertheless, the energy hierarchy and the prediction of material transfer remain unaffected for all alignments except the Al/TiN~(hollow) case. Again, the Al/TiN~(hollow) interface behaves differently because the relaxed structure of the Al slab is modified by the approaching TiN slab. In more detail, the interfacial Al layer is shifted to the Al/N~(bridge) site, which is the most favorable alignment. This movement of about \unit[0.8]{\AA} occurs mainly in the lateral plane. The free subinterface layers are shifted to gradually compensate the change in stacking between the fixed layers at the bottom of the slab and the interfacial layer. These shifts range approximately between \unit[0.2]{} and \unit[0.6]{\AA}. For the cases discussed so far, except for the hollow alignments, relaxations thus have rather small effects on the equilibrium quantities.
In contrast, all alignments of the (111)~Al/TiN (N-terminated) interface are crucially affected by relaxations [see Fig.~\ref{fig:bond-sites-stc-rlx}(b)]. The adhesion energies are strongly increased, and the energetic hierarchy of the alignments is altered. Furthermore, while static calculations suggest the absence of material transfer, relaxations predict its occurrence for all tested alignments.
\begin{figure*}[hbt]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig6a.pdf}
\caption{(111) Ti-terminated}
\label{fig:111-Titerm-bond-site-stc-rlx}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig6b.pdf}
\caption{(111) N-terminated}
\label{fig:111-Nterm-bond-site-stc-rlx}
\end{subfigure}
\caption{Calculated PBE interaction energies of the Al/TiN interface for the (111) Ti- and N-terminated surfaces. Solid and dashed lines indicate results of relaxed and static calculations, respectively. The slab heights on the lower x axis are valid for static and relaxed calculations, whereas the interface distances on the upper x axis refer only to the static calculations. Various lateral alignments of the two slabs are considered (see Fig.~\ref{fig:bond-sites}). The horizontal solid orange lines give the energy costs to remove at least one layer from an Al slab.}
\label{fig:bond-sites-stc-rlx}
\end{figure*}
For a better understanding of the energetically preferred configurations at the interface, layer-projected densities of states (DOSs) and differences in charge densities are examined. Layer-projected DOSs are displayed in Fig.~\ref{fig:DOS-110} for the two alignments Al/N~(bridge) and Al/Ti~(top) as well as for the isolated slabs of the (011) surface orientation. This surface orientation has been chosen because it exhibits a large spread in adhesion energies for different alignments; additionally, the occurrence of material transfer should depend on the alignment. In Fig.~\ref{fig:DOS-110} ``interface (surface) layers'' denote the first layers of Al, Ti, and N immediately at the interface (surface), whereas ``subinterface (subsurface) layers'' denote the next layers of Al, Ti, and N moving deeper into both materials. Further layers are not presented because they exhibit only minor differences with respect to the subinterface layers. The DOSs of the two alignments display distinct features. For the Al/Ti~(top) case, where Ti is the nearest interfacial neighbor of the top Al atom, the Al DOS is hardly affected by the interface. Only a small accumulation of sp states just below the Fermi energy and a depletion of s states at the edges of the DOS are found for the interface layers with respect to the other layers. The N sp states are shifted closer to the Fermi energy for the interfacial layer, and in particular, the Ti d states exhibit more occupied states at the Fermi energy. These changes indicate a weakly covalent bonding between the Al sp states and the Ti d states. Furthermore, the DOS is very similar to that of the isolated Al and TiN slabs, which also reflects the weak interaction for the Al/Ti~(top) interface. On the other hand, for the Al/N~(bridge) configuration, where the uppermost Al atoms are closer to N across the interface, the Al DOS is changed in a more pronounced way.
The sp states in the interface layers are partially shifted to lower energies, resulting in a pronounced peak at about \unit[-8]{eV} and a few minor ones around \unit[-7]{eV}. The N p states around \unit[-5]{eV} are broadened in the interfacial layer, resulting in common peaks with Al states roughly between \unit[-6]{eV} and \unit[-8]{eV}. These effects at the interface indicate a hybridization of Al and N sp states and explain the stronger adhesion due to covalent interaction. The interfacial Ti states are only slightly affected, exhibiting a few more occupied states at the Fermi level.
\begin{figure*}[hbt]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7a.pdf}
\caption{Al \& TiN~(isolated): surface layers}
\label{fig:DOS-al-tin-iso-l1}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7b.pdf}
\caption{Al \& TiN~(isolated): sub-surface layers}
\label{fig:DOS-al-tin-iso-l2}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7c.pdf}
\caption{Al/Ti~(top): interface layers}
\label{fig:DOS-al-t-ti-2.8-l1}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7d.pdf}
\caption{Al/Ti~(top): sub-interface layers}
\label{fig:DOS-al-t-ti-2.8-l2}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7e.pdf}
\caption{Al/N~(bridge): interface layers}
\label{fig:DOS-al-b-n-1.8-l1}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig7f.pdf}
\caption{Al/N~(bridge): sub-interface layers}
\label{fig:DOS-al-b-n-1.8-l2}
\end{subfigure}
\caption{Layer-projected DOSs from PBE calculations of the isolated Al and TiN slabs as well as of the (011) Al/TiN interface for the Al/Ti~(top) and Al/N~(bridge) alignments. The Fermi energy is shifted to \unit[0]{eV}.}
\label{fig:DOS-110}
\end{figure*}
In addition to the DOS, the charge densities at the interfaces are investigated and presented for the same alignments of the (011) surface. For better visualization, the differences \(\rho_{diff}\) between the charge density of the Al/TiN interface and those of the isolated, independent Al and TiN slabs are presented in Fig.~\ref{fig:chg-diff-011}. The charge-density difference \(\rho_{diff}\) is defined as
\begin{equation}
\centering
\rho_{diff}=\rho_{Al/TiN} - (\rho_{Al} + \rho_{TiN}),
\label{equ:charge-diff}
\end{equation}
where \(\rho_{Al/TiN}\) is the charge density of the interface, while \(\rho_{Al}\) and \(\rho_{TiN}\) represent the charge densities of the isolated slabs. Both displayed alignments result in a rather continuous charge accumulation between Al and Ti at the interface, suggesting a bonding interaction [see Figs.~\ref{fig:chg-diff-011}(a) and~\ref{fig:chg-diff-011}(c)]. For the Al/N~(bridge) configuration an additional charge buildup occurs between the interfacial Al and N atoms; its more localized and directional character indicates covalent contributions to the bonding [see Fig.~\ref{fig:chg-diff-011}(d)]. These findings support the DOS analysis of the previous paragraph.
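Equation~\eqref{equ:charge-diff} is a pointwise subtraction of densities sampled on a common real-space grid. A minimal NumPy sketch (array sizes and values are toy data, not from the calculations):

```python
import numpy as np

def charge_density_difference(rho_interface, rho_al, rho_tin):
    """rho_diff = rho_Al/TiN - (rho_Al + rho_TiN).

    All three densities must be given on the same real-space grid and
    for the same cell geometry; otherwise the pointwise subtraction
    is meaningless.
    """
    rho_interface = np.asarray(rho_interface, dtype=float)
    rho_al = np.asarray(rho_al, dtype=float)
    rho_tin = np.asarray(rho_tin, dtype=float)
    assert rho_interface.shape == rho_al.shape == rho_tin.shape
    return rho_interface - (rho_al + rho_tin)

# Toy 2x2x2 grid: a positive result marks charge accumulation,
# a negative one charge deficit.
rho_if = np.full((2, 2, 2), 1.0)
rho_al = np.full((2, 2, 2), 0.4)
rho_tin = np.full((2, 2, 2), 0.5)
print(charge_density_difference(rho_if, rho_al, rho_tin).mean())
```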
\begin{figure}[hbt]
\centering
\begin{subfigure}[b]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig8a.png}
\caption{\\ Al/Ti (top): Ti-plane}
\label{fig:chg-al-t-ti-x100}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig8b.png}
\caption{\\ Al/Ti (top): N-plane}
\label{fig:chg-al-t-ti-x50}
\end{subfigure}
\hspace{0.05\linewidth}
\begin{subfigure}[b]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig8c.png}
\caption{\\ Al/N (bridge): Ti-plane}
\label{fig:chg-al-b-n-x100}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig8d.png}
\caption{\\ Al/N (bridge): N-plane}
\label{fig:chg-al-b-n-x50}
\end{subfigure}
\caption{Charge-density differences \(\rho_{diff}\) [see Eq.~\eqref{equ:charge-diff}] of the (011) Al/TiN interface. \(\rho_{diff}\) was obtained from PBE calculations for the relaxed equilibrium configurations of (a) and (b) the Al/Ti~(top) alignment and (c) and (d) the Al/N~(bridge) alignment. The charge-density difference of each alignment is plotted for the Ti plane and the N plane (recall Fig.~\ref{fig:bond-sites}) for values from -0.2 (solid blue, deficit) to 0.2 (solid red, accumulation) electrons/\unit[]{\AA\(^3\)}. Color code: Al, orange; Ti, violet; N, cyan.}
\label{fig:chg-diff-011}
\end{figure}
\subsection{Approach and Separation of Al and TiN Slabs}\label{subsec:interface-loop}
The energetic argument on material transfer presented above can be tested by ``slowly'', i.e., in small discrete steps, approaching and subsequently separating the slabs. The energetic results of such loops are depicted in Figs.~\ref{fig:pecs}(a)--(d) for different configurations; the respective energies are presented in Table~\ref{tab:loop-data}. The green curves, with data points indicated by pluses in Figs.~\ref{fig:pecs}(a)--(d), give static potential-energy curves, where all atoms were kept rigid for each selected interface distance. For large interface distances this curve shows the limiting case of separated, independent slabs. For ever-shorter distances the effect of relaxation becomes important, and the actual energies deviate from the green curves. The blue curves, with data points marked by crosses, show the interaction energies of the approaching slabs including atomic relaxations after each discrete step. For some of these cases we find rather large jumps, which are due either to material transfer between the slabs, namely, from Al to TiN, or to the Al slab expanding into the space between the slabs. Finally, the red curves, with data points displayed by circles, indicate the interaction energies of the subsequent separation of the slabs, again including atomic relaxations after each step. These curves are also not completely smooth but display some kinks or smaller jumps, mainly due to the breaking apart of the Al/TiN slab into two separated ones. When material transfer takes place, these curves, of course, do not approach the green ones, even at large slab separations.
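The loop protocol can be summarized in a few lines of Python; here `relax` and `energy` are stand-ins for the DFT relaxation and total-energy evaluation, and the whole function is a schematic sketch rather than the actual workflow:

```python
def approach_separation_loop(structure, heights, relax, energy):
    """Quasi-static approach/separation loop over discrete slab heights.

    heights: decreasing sequence of slab heights for the approach; the
    separation retraces them in reverse, starting from the structure
    relaxed at the smallest height. Hysteresis between the two branches
    signals irreversible events such as material transfer or the
    expansion of the Al slab.
    """
    approach, separation = [], []
    for h in heights:                    # approach in small discrete steps
        structure = relax(structure, h)  # interim relaxation at fixed height
        approach.append((h, energy(structure)))
    for h in reversed(heights):          # subsequent separation
        structure = relax(structure, h)
        separation.append((h, energy(structure)))
    return approach, separation

# Toy model: the "structure" is just the current height, and the energy
# is a parabola with its minimum at a height of 3.0.
relax = lambda s, h: h
energy = lambda s: (s - 3.0) ** 2
app, sep = approach_separation_loop(0.0, [5.0, 4.0, 3.0], relax, energy)
print(min(app, key=lambda p: p[1])[0])  # -> 3.0
```

In this toy model the loop is trivially reversible; in the DFT calculations the interesting physics lies precisely in the cases where it is not.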
\begin{figure*}[hbt]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig9a.pdf}
\caption{(111) Ti-terminated}
\label{fig:111-pec}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig9b.pdf}
\caption{(111) N-terminated}
\label{fig:111-nterm-pec}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig9c.pdf}
\caption{(011)}
\label{fig:110-pec}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig9d.pdf}
\caption{(001)}
\label{fig:100-pec}
\end{subfigure}
\caption{Calculated PBE interaction energies [see Eq.~\eqref{equ:interaction-energy}] for the approach and subsequent separation of Al and TiN slabs for (111) Ti-terminated, (111) N-terminated, (011), and (001) surface orientations. The alignments follow the definitions in Fig.~\ref{fig:bond-sites}.}
\label{fig:pecs}
\end{figure*}
\begin{table*}[hbt]
\caption{\label{tab:loop-data} Equilibrium interface distances, adhesion energies, energy costs to remove layers from the Al slab, and number of transferred Al layers for various interface configurations. For the (111) orientation Al/Ti and Al/N denote the Ti- and N-terminated surfaces, respectively.}
\begin{ruledtabular}
\begin{tabular}{lcccc}
& Equilibrium & Adhesion energy & Removal energies & Material transfer \\
& interface distance [\unit{\AA}] & [\unit{eV}/interface cell] & [\unit{eV}/interface cell] & [Al layers] \\
(001) Al/N~(top) & 2.06 & -0.61 & 1.16 & 0 \\
(011) Al/N~(bridge) & 1.39 & -2.09 & 1.35 & 2 \\
(011) Al/Ti~(top) & 2.77 & -0.73 & 1.35 & 0 \\
(111) Al/Ti~(hcp) & 2.22 & -1.78 & 0.80 & 2 \\
(111) Al/Ti~(top) & 2.67 & -0.94 & 0.80 & 1 \\
(111) Al/N~(hcp) & 1.04 & -1.90 & 0.80 & 2 \\
(111) Al/N~(top) & 1.87 & -1.38 & 0.80 & 1
\end{tabular}
\end{ruledtabular}
\end{table*}
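The qualitative prediction of Table~\ref{tab:loop-data}, i.e., whether transfer occurs at all, follows from comparing the magnitude of the adhesion energy with the removal energy; the number of transferred layers does not. This consistency can be checked with a few lines of Python (numbers transcribed from the table):

```python
# (configuration, adhesion energy, removal energy, transferred Al layers);
# energies in eV per interface cell, transcribed from Table I.
rows = [
    ("(001) Al/N (top)",    -0.61, 1.16, 0),
    ("(011) Al/N (bridge)", -2.09, 1.35, 2),
    ("(011) Al/Ti (top)",   -0.73, 1.35, 0),
    ("(111) Al/Ti (hcp)",   -1.78, 0.80, 2),
    ("(111) Al/Ti (top)",   -0.94, 0.80, 1),
    ("(111) Al/N (hcp)",    -1.90, 0.80, 2),
    ("(111) Al/N (top)",    -1.38, 0.80, 1),
]

for name, e_adh, e_rem, n_layers in rows:
    predicted = abs(e_adh) > e_rem  # adhesion gain beats removal cost
    observed = n_layers > 0
    assert predicted == observed, name
print("energetic criterion consistent with all rows")
```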
For the Ti-terminated (111) surface orientation potential-energy curves are presented in Fig.~\ref{fig:pecs}(a) for the two extremal alignments, Al/Ti~(hcp) and Al/Ti~(top), which show the highest and lowest adhesion energies. As expected from the energetics, material transfer occurs during separation, and both systems end up in an energetically more favorable configuration compared to the initial setup. In particular, one Al layer is transferred for Al/Ti~(top) and two for Al/Ti~(hcp). This discrepancy in the number of transferred layers cannot be explained from the energetics but could stem from the different equilibrium interface distances: compared to that of the hcp alignment, the distance for the top configuration is larger by almost 20\%, hindering the interaction between TiN and the subinterface Al layer. For the Al/Ti~(hcp) configuration snapshots of the structures during approach and separation are presented in Fig.~\ref{fig:111-app-sep}. During the approach, at a slab height of about \unit[33.6]{\AA}, a large drop in interaction energy occurs for the Al/Ti~(hcp) alignment [see Fig.~\ref{fig:pecs}(a)] due to the transfer of the topmost Al layer to the TiN slab [see Fig.~\ref{fig:111-app-sep}(b)]. This is not the ground state, since a transfer of two layers would yield an even lower total energy. At this distance, the transfer of the second Al layer is hindered by an energy barrier of about \(E_{b2} \approx\)~\unit[324]{meV}, which is significantly larger than that for the first layer alone, \(E_{b1} \approx\)~\unit[122]{meV}. Upon further approach, at a slab height of about \unit[32.8]{\AA}, a slight kink occurs [see Fig.~\ref{fig:pecs}(a)] because the Al slab expands into the space between the slabs [see Fig.~\ref{fig:111-app-sep}(c)]. Approaching further, the interaction energy follows an essentially parabolic curve until the minimum energy is reached [see Figs.~\ref{fig:pecs}(a) and~\ref{fig:111-app-sep}(d)].
The subsequent separation is started from the equilibrium structure at the interaction-energy minimum. At first the red interaction-energy curve lies on top of the blue one [see Fig.~\ref{fig:pecs}(a)], meaning that the Al slab becomes extended again [see Fig.~\ref{fig:111-app-sep}(f)]. At a slab height of about \unit[33.1]{\AA} the two curves for approach and separation start to deviate [see Fig.~\ref{fig:pecs}(a)] when the Al/TiN compound separates [see Fig.~\ref{fig:111-app-sep}(g)]. Two Al layers stick to the TiN slab and form a stable configuration. This behavior during the complete loop is typical for all cases exhibiting material transfer. While the Al slab is strongly affected by the approach of the TiN slab, almost no changes in the TiN structure are observed. The more pronounced impact on the Al slab is not surprising considering that TiN forms a much more rigid lattice than Al. This claim is not entirely valid for the N-terminated (111) TiN slab, which will be discussed in the following paragraph. Using the final stable state (TiN plus two Al layers) as a starting configuration for a new loop of approach and separation against an Al slab yields a reversible cycle. This should be kept in mind when interpreting, for example, AFM experiments. Upon the first contact between the tip and a particular spot on a surface, material transfer might occur, which in turn changes the contact properties and forces between the tip and the surface. Further encounters on the same spot, however, should then lie within the reversible cycle and lead to the same response.
\begin{figure}[hbt]
\centering
\begin{subfigure}[b]{0.167\linewidth}
\includegraphics[width=\linewidth]{fig10a.png}
\caption{\\ \unit[33.8]{\AA}}
\label{fig:111-in1}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10b.png}
\caption{\unit[33.6]{\AA}}
\label{fig:111-in2}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10c.png}
\caption{\unit[32.8]{\AA}}
\label{fig:111-in3}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10d.png}
\caption{\unit[30.8]{\AA}}
\label{fig:111-in4}
\end{subfigure}
\vspace{0.5cm}
\begin{subfigure}[b]{0.167\linewidth}
\includegraphics[width=\linewidth]{fig10e.png}
\caption{\\ \unit[30.8]{\AA}}
\label{fig:111-out1}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10f.png}
\caption{\unit[33.0]{\AA}}
\label{fig:111-out2}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10g.png}
\caption{\unit[33.1]{\AA}}
\label{fig:111-out3}
\end{subfigure}
\hspace{0.02\linewidth}
\begin{subfigure}[b]{0.11\linewidth}
\includegraphics[width=\linewidth]{fig10h.png}
\caption{\unit[34.6]{\AA}}
\label{fig:111-out4}
\end{subfigure}
\caption{(a)--(d) Approach and (e)--(h) separation of (111) Al and (111) TiN slabs. Al, Ti, and N are colored red, blue, and green, respectively. (d) and (e) show the structure at the relaxed equilibrium distance. The steps are defined by the slab height.}
\label{fig:111-app-sep}
\end{figure}
The N-terminated (111) orientation is, in some respects, very similar to the Ti-terminated one. As predicted, both terminations yield material transfer for all tested alignments (see Fig.~\ref{fig:pecs}). However, as explained above, static calculations completely fail to describe the equilibrium quantities of the N-terminated case, whereas they give good estimates for the other orientations. This discrepancy is due to the behavior of the interfacial N layer of N-terminated (111) TiN. In the absence of the counter Al slab, the surface N layer is closely bound to the next Ti layer at a distance of \unit[0.84]{\AA}, while in contact with an Al slab this distance grows to \unit[1.47]{\AA} in the equilibrium configuration. This behavior is crucial for the energetics and can be captured only when relaxations are included. The interfacial N layer is actually closer to the next Al layer, at a distance of \unit[1.11]{\AA}, than to the next Ti one. Motivated by this result, the possibility of a diffusion of the interfacial N layer into the Al slab was investigated. For all alignments with the exception of Al/N (top), no energetically favorable configurations were found. For the Al/N (top) alignment, however, exchanging the interfacial Al and N layers and subsequently relaxing the system yields a state that is more favorable by about \unit[683]{meV}, which is also about \unit[235]{meV} lower than the previously found minimum for the Al/N (hcp) alignment. In this favorable configuration an Al-N-Al trilayer is formed, showing the wurtzite structure that is typically observed in aluminum nitride crystals. From the thermodynamic point of view, diffusion thus seems to be possible; of course, for the full picture, reaction paths and energy barriers have to be considered.
The (011) surface orientation also presents an interesting case because, according to the energetic results [see Fig.~\ref{fig:bond-sites-pec}(c)], material transfer is expected only for the alignments Al/N~(bridge), Al/N~(top), and Al/TiN~(hollow). As an example, one can see from the loops given in Fig.~\ref{fig:pecs}(c) that Al/N~(bridge) shows a favorable configuration after separation corresponding to the transfer of two Al layers, whereas the Al/Ti~(top) case is reversible upon approach and separation without any material transfer.
Finally, for the (001) surface orientation material transfer is not expected for any of the alignments. Among all cases Al/N~(top) has the largest adhesion energy; if material transfer were to occur at all, it would therefore happen for this alignment. However, since the energy cost for the removal of an Al layer exceeds the adhesion energy, no material transfer is observed [Fig.~\ref{fig:pecs}(d)]. The deviation of the curves for approach and separation around a slab height of \unit[26.5]{\AA} occurs due to the expansion of the Al slab upon separation until the interface breaks apart and relaxes into the initial Al and TiN slabs.
In the literature some publications on tensile-test simulations of Al/TiN interfaces can be found, where the separation is achieved by increasing the size of the whole simulation cell in one direction in discrete steps including interim relaxations. Liu et al.~\cite{liu_first-principles_2005} and Zhang et al.~\cite{zhang_first-principles_2007} investigated the Al/TiN (111) and (001) interfaces, respectively. Liu et al. obtained similar results with respect to material transfer for the hcp alignment at the (111) surface for both terminations but did not examine any further alignments of the Al and TiN slabs at the interface. Zhang et al. studied the Al/N~(top) configuration of the (001) interface and, in contrast to our work, found a material transfer of the top Al layer. This discrepancy could stem from the different simulation approaches and computational details. However, we repeated these calculations using a setup for the separation of the slabs similar to that of Zhang et al. and did not find any material transfer. Additional simulations testing the influence of varying step sizes, for both the setup of Zhang et al. and the one used in the present investigation, also did not lead to material transfer.
\subsection{Comparison of Surface Energies}\label{subsec:surf-energy}
The behavior of the different surface orientations can also be discussed from the point of view of surface energies. The surface energies for the Al and TiN slabs are presented in Table~\ref{tab:surf-energy}. It has to be noted that for (111) TiN the surface energy depends on the termination and on the chemical potential of nitrogen. Here the lowest possible value for the surface energy is used, which is achieved by the N-terminated surface at \(\Delta\mu_N = 0\) (see Fig.~\ref{fig:TiN-term}). The value of the surface energy for the Ti-terminated surface at \(\Delta\mu_N = 0\) is about three times larger, but its minimum is comparable to that of the N-terminated case. For the (001) and (011) orientations the surface energy is independent of the chemical potential~\cite{wang_surface_2010}. As shown in Table~\ref{tab:surf-energy}, the Al surfaces always exhibit a smaller surface energy than the TiN ones. The differences between Al and TiN are pronounced for the (011) and (111) orientations; for (001), however, the surface energies are rather comparable. Material transfer can be seen as a means of surface-energy minimization: a new, energetically ``cheap'' surface is created, and an ``expensive'' one is covered. This argument provides a hint about which surface orientations may favor material transfer. For the full picture, however, other contributions, such as the interaction energy, which additionally depends on the alignment of the slabs, have to be considered. For example, the surface-energy argument alone would suggest the possibility of material transfer for (001) and cannot explain why only some (011) configurations exhibit this feature.
\begin{table}[hbt]
\caption{\label{tab:surf-energy} Surface energies (in \unit{eV/\AA\(^2\)}) of the Al and TiN slabs for the (001), (011), and (111) surface orientations. In the case of the (111) TiN surface the N-terminated one at \(\Delta\mu_N = 0\) is given here because it exhibits the lowest surface energy of all (111) TiN surfaces (see Fig.~\ref{fig:TiN-term}).}
\begin{ruledtabular}
\begin{tabular}{lccc}
& (001) & (011) & (111) \\
Al & 0.058 & 0.064 & 0.052 \\
TiN & 0.087 & 0.174 & 0.094
\end{tabular}
\end{ruledtabular}
\end{table}
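The surface-energy bookkeeping behind this argument can be made explicit with the values of Table~\ref{tab:surf-energy}; the short Python sketch below (ours, for illustration) computes the energy gained per unit area when an exposed TiN surface is covered and a new Al surface is created instead:

```python
# Surface energies from Table II in eV/Angstrom^2.
gamma_al = {"(001)": 0.058, "(011)": 0.064, "(111)": 0.052}
gamma_tin = {"(001)": 0.087, "(011)": 0.174, "(111)": 0.094}

for orientation in gamma_al:
    # Replacing an exposed TiN surface by an Al one gains the difference.
    gain = gamma_tin[orientation] - gamma_al[orientation]
    print(f"{orientation}: gain {gain:.3f} eV/A^2")
```

The gain is largest for (011) and smallest for (001), in line with the discussion above; the fact that it is positive even for (001), where no transfer occurs, illustrates why the surface-energy argument alone is insufficient.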
\subsection{Assessment of Computed Results}\label{subsec:add-tests}
To validate the results presented above, various additional tests were performed; the outcomes are reported for the Ti-terminated (111) Al/Ti~(hcp) configuration. First of all, finite-size effects are a major concern. Thus, the size of the simulation cell was increased laterally up to a 3\(\times\)3 surface cell and vertically up to a 19-layer Al slab; intermediate Al slab thicknesses were examined as well. The TiN slab was not extended vertically because it is hardly affected by the approach of the Al slab. For laterally magnified simulation cells the number of \textbf{k} points was decreased accordingly, e.g., a 5\(\times\)5\(\times\)1 mesh was used for a 3\(\times\)3 surface cell. For all tested systems the equilibrium interface distances, adhesion energies, and energy costs to remove Al layers, referenced to 1\(\times\)1 surface cells, were found within about 2\% of the values given above. In particular, the results on material transfer were not affected, i.e., the number of transferred Al layers was not altered. For the 3\(\times\)3 surface cell the effect of fluctuations at the surface was tested by moving one atom out of the surface plane at several interface distances before material transfer occurs. These tests also resulted in the transfer of entire layers, since the shifted atom either relaxed back into its originating slab or was transferred together with the rest of the layer.
Furthermore, the effect of the chosen lattice parameters was investigated. The simulations were repeated using the lattice constants of pure Al and of pure TiN for the lateral lattice parameters of the simulation cell. The equilibrium interface distance again changed by only about 2\%. Although the adhesion energies were altered by about 4\%, the removal energies were affected in a similar way, resulting in the same material transfer. Moreover, the influence of other approximations for the exchange-correlation functional was tested, as already discussed for the removal energies above. The results are presented in Fig.~\ref{fig:111-vgl-xc}. The adhesion energies were enhanced similarly to the removal energies, again yielding the same results for material transfer. With the vdW functional the interaction between the slabs sets in at larger interface distances; this behavior is expected because of the nonlocal correction added in the vdW functional.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.9\linewidth]{fig11.pdf}
\caption{Comparison of calculated interaction energies for the Ti-terminated (111) Al/Ti~(hcp) configuration and various exchange-correlation functionals, namely, PBE, LDA, and optB86b. The relaxed energies represent the separation of the slabs starting from the equilibrium configuration. The differences between static and relaxed curves at large heights occur due to material transfer.}
\label{fig:111-vgl-xc}
\end{figure}
\section{Conclusion and Outlook}\label{sec:conclusion}
Al/TiN interfaces were examined in detail by investigating the contact between Al and TiN slabs with low-index surface orientations within the framework of density functional theory. These contacts were established for various lateral alignments of the slabs at the interface. It was shown that interfacial properties such as the adhesion energy and the equilibrium structure depend sensitively on the given configuration. This behavior can be explained qualitatively by comparing the densities of states and the charge densities of different configurations, which reveal distinct bonding situations at the interface. Furthermore, the approach and subsequent separation of Al and TiN slabs were simulated to study the effect on the slabs, especially the possibility of material transfer. The transfer of material from an Al toward a TiN slab was observed for interfacial configurations that exhibited an adhesion energy larger than the energy cost to remove layers from the Al slab. This is in agreement with the observation that metal-ceramic interfaces break at the interface or in bulk areas according to their interfacial adhesion~\cite{howe_bonding_1993-1, ernst_metal-oxide_1995}. The removal energy for Al layers was found to depend on tensile or compressive stress. In all systems showing material transfer, one or two layers of Al stick to the TiN slab after the separation and form an energetically favorable compound with respect to the initial configuration. The differences in surface energies between the slabs are not sufficient to explain the occurrence of material transfer, because the given alignment at the interface has to be considered as well. All results were tested for various computational setups, such as different sizes of the investigated system and several approximations for the exchange-correlation functional.
While properties such as the removal and adhesion energies depend on these settings to some degree, the results for material transfer are not affected.
The method used in this work can, in principle, be applied to any pair of materials. However, complex materials or pairs with an unfavorable bulk lattice mismatch may require very large simulation cells, and thus high computational demands, in order to preserve the translational symmetry and to keep the distortions at an acceptable level. Larger cells, in turn, also allow the inclusion of additional features: the distortions due to the lattice mismatch can be minimized, dislocations as well as quasi-incommensurate contacts can be modeled, and even roughness could be included to some degree, e.g., by using stepped surfaces or a regular grid of small asperities.
\section*{Acknowledgments}\label{sec:acknow}
The authors thank G. Vorlaufer for fruitful discussions. G.F., M.W., P.O.B., P.M., and J.R. acknowledge the support by the Austrian Science Fund (FWF): F4109 SFB ViCoM. Part of this work was funded by the Austrian COMET-Program (project K2 XTribology, No. 824187 and No. 849109) via the Austrian Research Promotion Agency (FFG) and the Provinces of Nieder\"osterreich, Vorarlberg, and Wien. This work has been carried out within the ``Excellence Center of Tribology'' (AC2T research GmbH) and at the Vienna University of Technology. Part of this work was supported by the European Cooperation in Science and
Technology (COST; Action MP1303). The authors also appreciate the ample support of computer resources by the Vienna Scientific Cluster (VSC). Figs.~\ref{fig:Al-TiN-initial} and \ref{fig:bond-sites} were created employing \textsc{VESTA}~\cite{momma_vesta_2011}, Fig.~\ref{fig:chg-diff-011} utilizing \textsc{VisIt}~\cite{childs_visit:_2012} and Fig.~\ref{fig:111-app-sep} using \textsc{VMD}~\cite{humphrey_vmd:_1996}.
\section{Introduction}
\label{sec:introduction}
\input{sections/introduction}
\section{Related Work}
\label{sec:related_work}
\input{sections/related_work}
\section{The Fishbowl Dataset}
\label{sec:dataset}
\input{sections/dataset}
\section{A multi-stage approach for unsupervised scene modelling}
\label{sec:method}
\input{sections/method}
\section{Experiments}
\label{sec:experiments}
\input{sections/experiments}
\section{Discussion}
\label{sec:discussion}
\input{sections/discussion}
\ificlrfinal
\section*{Acknowledgements}
The authors thank Martin Engelcke and Ingmar Posner for help with training the GENESIS-v2 model.
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): Germany’s Excellence Strategy – EXC 2064/1 – 390727645 and SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms, TP3/TP4, project number: 276693517.
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Matthias Tangemann.
\fi
\subsubsection*{Stage 1: motion segmentation---obtaining candidate objects from videos}
As a first step, we use unsupervised motion segmentation to obtain candidate segmentations of the input videos.
We build on the minimum cost multicut method by \cite{keuper2015multicuts}, which tracks a subset of the pixels through the video using optical flow and then, inspired by the {\em Common Fate Principle} mentioned earlier, clusters the trajectories based on pairwise motion affinities.
We use the original implementation of the authors, but replace the postprocessing required to obtain a dense segmentation with a simpler and faster non-parametric watershed algorithm~\citep{beucher1993morphological} followed by computing spatiotemporal connected components~\citep{silversmith2021cc3d}.
The quality of the motion segmentation critically depends on the quality of the optical flow estimation, so we explore different models for that step. ARFlow~\citep{liu2020analogy} is a current state-of-the-art self-supervised optical flow method that combines a common warping-based objective with self-supervision using various augmentations. We use the published pretrained models as well as a variant trained on the Fishbowl dataset (see supplement for details). Augmentations similar to those used by ARFlow can alternatively be used to synthesize training data for supervised methods, as done for generating the FlyingChairs and FlyingThings datasets~\citep{dosovitskiy2015flownet,mayer2016large}. We experiment with FlowNet 2.0~\citep{ilg2017flownet2} and the more recent RAFT~\citep{teed2020raft} trained on those two datasets.
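The postprocessing step can be sketched as follows. This is an illustrative stand-in rather than the exact implementation: unlabeled pixels are filled with the nearest trajectory label via a distance transform (instead of the watershed), and \texttt{scipy.ndimage.label} replaces the cc3d library for the spatiotemporal connected components.

\begin{lstlisting}[language=Python]
import numpy as np
from scipy import ndimage

def densify_and_link(sparse_labels):
    """sparse_labels: (T, H, W) int array of trajectory labels, 0 = unlabeled.

    Per frame, every unlabeled pixel takes the label of the nearest
    labeled pixel (a simple stand-in for the watershed fill); the dense
    per-frame masks are then linked into spatiotemporal objects via
    3D connected components.
    """
    dense = np.empty_like(sparse_labels)
    for t, frame in enumerate(sparse_labels):
        # indices of the nearest labeled (non-zero) pixel for each pixel
        _, (iy, ix) = ndimage.distance_transform_edt(
            frame == 0, return_indices=True)
        dense[t] = frame[iy, ix]
    objects = np.zeros_like(dense)
    next_id = 0
    for lab in np.unique(dense):
        if lab == 0:
            continue
        # pixels with the same label, adjacent in space or time
        comp, n = ndimage.label(dense == lab)
        objects[comp > 0] = comp[comp > 0] + next_id
        next_id += n
    return objects
\end{lstlisting}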
To obtain background masks for training the background model, it is not necessary to differentiate between multiple object instances.
We aim for a low rate of foreground pixels being mistaken for background pixels, while background pixels mistaken for foreground are of less concern. Hence, we use an ensemble of different background-foreground segmentation models from the bgslibrary \citep{bgslibrary}. Based on early experiments, we use PAWKS \citep{stcharles2016pawks}, LOBSTER \citep{stcharles2014lobster}, $\Sigma$-$\Delta$ estimation \citep{manzanera2007sigmadelta} and static frame differences, and label every pixel detected as foreground by any of the methods as a foreground pixel.
We found that this rather simple model can faithfully remove the foreground objects in most of the cases.
We provide additional details in the appendix.
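The union rule of the ensemble amounts to a logical OR over the individual foreground masks; a minimal sketch (the function name is illustrative):

\begin{lstlisting}[language=Python]
import numpy as np

def ensemble_foreground(masks):
    """masks: list of (H, W) boolean foreground masks from the
    individual detectors. A pixel counts as foreground if any detector
    flags it, trading false positives for a low foreground miss rate."""
    combined = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        combined |= m
    return combined
\end{lstlisting}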
\subsubsection*{Stage 2A: object model---learning to generate unoccluded, masked objects}
\textbf{Object extraction.}
We use the bounding boxes of the candidate segmentation to extract object crops from the original videos and rescale them to a common size of $128\times64\text{px}$.
We filter out degenerate masks by ignoring all masks with an area smaller than 64 pixels and only considering bounding boxes with a minimum distance of 16px to the frame boundary.
Accordingly, we extract the candidate \textit{segmentation masks} $\mb_0, \dots, \mb_K$ for each crop.
\looseness-1 For notational convenience, we take $\mb_0$ and $\mb_1$ to correspond to the background and the object of interest (i.e., that whose bounding box was used to create the crop), respectively, so that $\mb_k$ with $k\geq2$ correspond to the masks of other objects.
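The filtering described above (minimum area of 64 pixels, minimum distance of 16px to the frame boundary) can be sketched as follows; the helper and its return format are illustrative:

\begin{lstlisting}[language=Python]
import numpy as np

def valid_candidates(masks, frame_shape, min_area=64, margin=16):
    """masks: list of (H, W) boolean instance masks. Keeps a mask only
    if its area is at least min_area pixels and its bounding box stays
    at least margin px away from the frame border; returns the kept
    masks with their inclusive bounding boxes (top, bottom, left, right)."""
    H, W = frame_shape
    kept = []
    for m in masks:
        if m.sum() < min_area:
            continue
        ys, xs = np.where(m)
        top, bot = ys.min(), ys.max()
        left, right = xs.min(), xs.max()
        if (top < margin or left < margin
                or bot >= H - margin or right >= W - margin):
            continue
        kept.append((m, (top, bot, left, right)))
    return kept
\end{lstlisting}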
\textbf{Task.} We use the segmented object crops for training a $\beta$-VAE-based generative object model \citep{higgins2017betavae}. The input to the model is the object crop without the segmentation; the output is the reconstructed object appearance including the binary object mask. We train the model with the standard $\beta$-VAE loss with an adapted reconstruction term covering both the appearance and the mask. For an input batch, let $\cb$ and $\mb_{0:K}$ be the ground truth crops with candidate segmentations, and $\cbh$ and $\mbh$ the reconstructed object appearances (RGB values for each pixel) and shapes (foreground probability for each pixel). The reconstruction loss $\Lcal_R$ for these objects is then the weighted sum of the pixel-wise MSE for the appearance and the pixel-wise binary cross entropy for the mask:
\begin{align*}
\label{eq:objective_object_model}
\textstyle \Lcal_{R,\text{appear.}} &=
\sum_i\Big(\sum_{u,v} \mb^{(i)}_1(u,v) \norm{\cb^{(i)}(u,v) - \cbh^{(i)}(u,v)}_2^2
/ \sum_{u,v} \mb^{(i)}_1(u,v)\Big),\\
\Lcal_{R,\text{mask}} &=
\sum_i\Big(\sum_{u,v} [\mb_0^{(i)} + \mb_1^{(i)}](u,v) \cdot \mathrm{BCE} \big[ \mb_1^{(i)}(u,v), \mbh^{(i)}(u,v)\big]
/ \sum_{u,v} [\mb_0^{(i)} + \mb_1^{(i)}](u,v)\Big).
\end{align*}
As the task for the object model is to represent only the central object in each crop, we restrict the appearance loss to the candidate mask of the object~($\mb_1$) and the mask loss to the union of the candidate masks of the object and the background~($\mb_0+\mb_1$).
Importantly, the reconstruction loss is not evaluated for pixels belonging to other objects according to the candidate masks.
Therefore, the object model is not penalized for completing object parts that are occluded by another object.
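A per-crop NumPy sketch of the two masked reconstruction terms; this mirrors the equations above but is not the training implementation (which operates on batches):

\begin{lstlisting}[language=Python]
import numpy as np

def masked_recon_losses(crop, crop_hat, m_obj, m_bg, mask_hat, eps=1e-8):
    """crop, crop_hat: (H, W, 3) input and reconstructed appearance;
    m_obj, m_bg: (H, W) candidate masks of the object and background;
    mask_hat: (H, W) predicted foreground probability. Pixels of other
    objects are in neither m_obj nor m_bg and contribute to neither
    term, so occluded object parts may be completed freely."""
    # appearance: squared error, restricted to the object pixels
    se = ((crop - crop_hat) ** 2).sum(axis=-1)
    l_app = (m_obj * se).sum() / (m_obj.sum() + eps)
    # mask: binary cross entropy over object + background pixels
    p = np.clip(mask_hat, eps, 1 - eps)
    bce = -(m_obj * np.log(p) + (1 - m_obj) * np.log(1 - p))
    w = m_obj + m_bg
    l_mask = (w * bce).sum() / (w.sum() + eps)
    return l_app, l_mask
\end{lstlisting}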
\looseness-1\textbf{Learning object completion via artificial occlusions.}
To encourage the model to correctly complete partial objects, we use artificial occlusions as an augmentation
during training.
Similar to a denoising autoencoder~\citep{vincent2008extracting}, we compute the reconstruction loss using the unaugmented object crop.
We consider two types of artificial occlusions: first, we use a \textit{cutout} augmentation \citep{devries2017cutout} placing a variable number of grey rectangles on the input image. As an alternative, we use the candidate segmentation to place another, randomly shifted object from the same input batch onto each crop.
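A minimal sketch of the cutout variant; the number and size ranges of the rectangles are assumptions, as the text only specifies a variable number of grey rectangles:

\begin{lstlisting}[language=Python]
import numpy as np

def cutout(crop, rng, max_rects=3, grey=0.5):
    """crop: (H, W, C) float image in [0, 1]. Paints 1..max_rects grey
    rectangles onto a copy of the crop; the reconstruction target stays
    the unaugmented crop, so the model learns to inpaint occlusions."""
    H, W = crop.shape[:2]
    out = crop.copy()
    for _ in range(rng.integers(1, max_rects + 1)):
        h = rng.integers(H // 8, H // 2)
        w = rng.integers(W // 8, W // 2)
        y = rng.integers(0, H - h)
        x = rng.integers(0, W - w)
        out[y:y + h, x:x + w] = grey
    return out
\end{lstlisting}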
\textbf{Model.}
We use a $\beta$-VAE with 128 latent dimensions.
The encoder is a ten layer CNN, the appearance decoder is a corresponding CNN using transposed convolutions~\citep{dumoulin2018guide} and one additional convolutional decoding layer.
We use a second decoder with the same architecture but only a single output channel to decode the object masks.
During each epoch, we use crops from two random frames from every object.
We train our model for 60 epochs using Adam~\citep{kingma2015adam} with a learning rate of $10^{-4}$, which we decrease by a factor of 10 after 40 epochs.
We chose the optimal hyperparameters for this architecture using grid searches.
\looseness-1 More details regarding the model architecture and the hyperparameters are provided in the supplement.
\subsubsection*{Stage 2B: background model---learning to generate unoccluded backgrounds
}
\looseness-1 \textbf{Task.}
We use an ensemble of background extraction techniques outlined above to estimate background scenes for each frame. We train a $\beta$-VAE on these backgrounds using the appearance loss $\Lcal_{R,\text{appear.}}$ with the inferred background mask, \looseness-1 without any additional cutout or object augmentation.
\textbf{Architecture.}
The $\beta$-VAE has the same architecture as the object model, but only uses a single decoder for the background appearance.
We do not focus on a detailed reconstruction of background samples and limit the resolution to $96\times64\text{px}$. When sampling scenes, the outputs are upsampled to the original resolution of $480\times320\text{px}$ using bilinear interpolation.
\subsubsection*{Stage 3: scene model---learning to generate coherent scenes}
\begin{table}[tb]
\begin{minipage}[t]{0.34\textwidth}
\vspace{-2em}
\centering
\newcommand{3.5em}{3.5em}
\newcommand{1.5em}{1.5em}
\newcommand{\nodesize}{2.5em}
\resizebox{0.75 \textwidth}{!}{%
\begin{tikzpicture}[baseline=(current bounding box.north)]
\centering
\node (background_latent) [latent, minimum size=\nodesize] {$\zb^\bg$};
\node (background_image) [det, below=of background_latent, xshift=1.5*3.5em, minimum size=\nodesize, yshift=1.5em] {$\xb_0$};
\node (object_latent) [latent, below=of background_latent, xshift=-1.5*3.5em, minimum size=\nodesize, yshift=1.5em] {$\zb^\text{app}_k$};
\node (object_image) [det, below=of object_latent, minimum size=\nodesize, yshift=1.5em] {$\ob_k$};
%
\node (object_position) [latent, below=of background_latent, xshift=-0.5*3.5em, minimum size=\nodesize, yshift=1.5em] {$\zb_k^\text{pos}$};
%
\node (object_scale) [latent, below=of background_latent, xshift=0.5*3.5em, minimum size=\nodesize, yshift=1.5em] {$\zb_k^\text{scale}$};
\node (scene) [det, below=of background_image, minimum size=\nodesize, yshift=1.5em] {$\xb$};
\edge{background_latent}{background_image, object_latent, object_position, object_scale};
\edge{object_latent}{object_image};
\edge{object_image, background_image, object_position, object_scale}{scene};
\plate[inner sep=0.1em,
yshift=0.1em] {objects}{(object_latent) (object_position) (object_scale) (object_image)}{};
%
\node (plate_caption) [const, yshift=-9em, xshift=0.5em] {$k=1,...,K$};
\end{tikzpicture}
}
\vspace{-0.25cm}
\captionof{figure}{\label{fig:causal_graph_scene_model}
\small
Causal graph for our scene model; circles and diamonds denote random and deterministic quantities.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.65\textwidth}
\vspace{-1.5em}
\footnotesize
\centering
\captionof{table}{\small \looseness-1 Segmentation performance of the unsupervised motion segmentation~\citep{keuper2015multicuts} for different optical flow estimators.
}
\label{tbl:motion-segmentation}
\vspace{-0.5em}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{llcccc}
\toprule
\multicolumn{2}{c}{\textbf{Optical flow}} & \multicolumn{2}{c}{\textbf{IoU}} & \multicolumn{2}{c}{\textbf{Recall}} \\
\cmidrule(r){1-2} \cmidrule(r){3-4} \cmidrule(r){5-6}
Estimator & Training data & Background & Objects & @0.0 & @0.5 \\
\midrule
ARFlow & KITTI & 0.874 & 0.246 & 0.828 & 0.199 \\
ARFlow & Sintel & 0.890 & 0.243 & 0.809 & 0.213 \\
ARFlow & Fishbowl & 0.873 & 0.248 & 0.842 & 0.204 \\
RAFT & FlyingThings & 0.930 & 0.318 & 0.663 & 0.351 \\
FlowNet 2 & FlyingThings & \textbf{0.934} & \textbf{0.365} & \textbf{0.674} & \textbf{0.416} \\
\bottomrule
\end{tabular}
}
%
\end{minipage}
\vspace{-1.5em}
\end{table}
In the final stage, we combine the object and background model into a scene model that allows sampling novel scenes.
As the scene model can reuse the decoders from the previous stages, its main task is to model the parameters defining the scene composition such as object counts, locations and dependencies between the background and the object latents.
Compared to an end-to-end approach, the complexity of the learning problem is greatly reduced in this setting.
It is straightforward to generalize the scene model beyond the training distribution: for example, it is easy to sample more objects than observed in the input scenes.
We use a scene model following the causal graph depicted in~\cref{fig:causal_graph_scene_model}:
First, we sample a background latent $\zb^\bg$ which describes global properties of the scene such as its composition and illumination; $\zb^\bg$ is then decoded by the background model into a background image $\xb_0$.
Conditioned on the background latent, we sequentially sample $K$ tuples $(\zb^\text{app}_k, \zb^\text{pos}_k,\zb^\text{scale}_k)$ of latents encoding appearance, position, and scale of object $k$, respectively; the number of objects $K$ is sampled conditional on $\zb^\bg$ as well.
Each appearance latent $\zb^\text{app}_k$ is decoded by the object model into a masked object $\ob_k=(\mb_k,\xb_k)$, which is subsequently re-scaled by $\zb^\text{scale}_k$ and placed in the scene at position $\zb^\text{pos}_k$ according to a dead-leaves model (i.e., occluding previously visible pixels at the same location).
Due to the formulation of the model, we are flexible in specifying the conditional and prior distributions needed to generate samples.
A particularly simple special case is to sample all latents (indicated as circles in Fig.~\ref{fig:causal_graph_scene_model}) independently. This can be done using informed prior distributions, or by leveraging the training dataset.
In the former case, we sample $\zb^\bg$ and all $\zb^\text{app}_k$ from the standard normal prior of the $\beta$-VAE, but reject objects for which the binary entropy of the mask (averaged across all pixels) exceeds a threshold (for figures in the main paper, 100 bits).
We found empirically that this entropy threshold can be used to trade sample diversity for sample quality (cf. supplement).
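The rejection rule can be sketched as follows; we read the 100-bit threshold as the entropy summed over pixels, since the per-pixel average is bounded by 1 bit:

\begin{lstlisting}[language=Python]
import numpy as np

def mask_entropy_bits(mask_prob, eps=1e-8):
    """Total binary entropy (in bits) of the predicted mask
    probabilities; low entropy indicates a confident, crisp mask."""
    p = np.clip(mask_prob, eps, 1 - eps)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return h.sum()

def accept_object(mask_prob, threshold_bits=100.0):
    # reject blurry, uncertain masks; lower threshold = higher quality
    return mask_entropy_bits(mask_prob) <= threshold_bits
\end{lstlisting}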
For the coordinates, a uniform prior within the image yields reasonable samples, and scales can be sampled from a uniform distribution between $64\times 32$px and $192\times 96$px at a fixed $2:1$ ratio.
Alternatively, all distributions can be fit based on values obtained from the motion segmentation (object and background latents, distribution of sizes, distribution of coordinates). We provide a detailed analysis in the supplement.
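The dead-leaves placement step of this generative process can be sketched as follows, assuming soft masks in $[0,1]$, top-left positions, and objects already rescaled by $\zb^\text{scale}_k$:

\begin{lstlisting}[language=Python]
import numpy as np

def compose_scene(background, objects):
    """background: (H, W, 3); objects: list of (appearance, mask, (y, x))
    with appearance (h, w, 3), mask (h, w) in [0, 1] and (y, x) the
    top-left position. Objects are pasted in order, each occluding
    whatever was visible below it (dead-leaves model)."""
    H, W, _ = background.shape
    scene = background.copy()
    for app, mask, (y, x) in objects:
        h, w = mask.shape
        # clip the paste region to the image bounds
        y0, x0 = max(y, 0), max(x, 0)
        y1, x1 = min(y + h, H), min(x + w, W)
        if y0 >= y1 or x0 >= x1:
            continue
        m = mask[y0 - y:y1 - y, x0 - x:x1 - x, None]
        a = app[y0 - y:y1 - y, x0 - x:x1 - x]
        scene[y0:y1, x0:x1] = m * a + (1 - m) * scene[y0:y1, x0:x1]
    return scene
\end{lstlisting}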
\subsection{GAN Baseline}
As a baseline method for generating novel scenes for the \textsc{Fishbowl} dataset, we consider the GAN used by~\cite{mescheder2018which} for the CelebA-HQ dataset~\citep{karras2018progressive}.
We used the official implementation available at \href{https://github.com/LMescheder/GAN_stability}{https://github.com/LMescheder/GAN\_stability}, and only changed the resolution of the generated images to $192\times128\text{px}$.
For training, we used every $16$th frame from each input video in the training set rescaled to the resolution of the GAN, resulting in $160$k training images.
We trained the model for 90 epochs.
\cref{fig:gan-baseline-input-and-samples} shows samples from the model in comparison to training frames.
Overall, the GAN is able to generate images of convincing quality resembling the training images well.
A particular strength of the GAN in comparison to our method is its ability to generate backgrounds with many details---which we explicitly left for future work.
Several fish generated by the GAN, however, look distorted, in agreement with previous works concluding that GANs are good at generating ``stuff'', but have difficulties generating objects~\citep{bau2019seeing}.
This becomes especially apparent when visualizing interpolations in the latent space between samples, as done in~\cref{fig:gan-interpolation}.
Many differences between the samples generated by the GAN and our method, respectively, stem from the GAN being able to learn the overall statistics of the scenes well, but not learning a notion of objects.
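For reference, such interpolation figures are produced by decoding convex combinations of latent vectors; a minimal sketch assuming plain linear interpolation:

\begin{lstlisting}[language=Python]
import numpy as np

def interpolate_latents(z0, z1, steps=8):
    """Decode z_t = (1 - t) * z0 + t * z1 for t in [0, 1] to obtain an
    interpolation sequence between two samples."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z0 + t * z1 for t in ts])
\end{lstlisting}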
Compared to the GAN baseline, our object-centric approach offers several conceptual advantages:
(1) The background and the objects in each scene are represented individually by our method, which makes it straightforward to intervene on the generated samples in a meaningful way (\cref{fig:scene-model-intervention}).
While directions in the latent space of the GAN that correspond to semantically meaningful changes in the image might well exist, the GAN framework does not offer a principled way to find those directions without supervision.
(2) While the GAN is able to generate novel scenes, it cannot be used to infer the composition of input scenes.
(3) An optimally trained GAN perfectly captures the statistics of the input scene---making it impossible to generate samples beyond the training distribution.
As our model explicitly represents scene parameters such as the number and position of fish, our model can be used for controlled out-of-distribution sampling (\cref{fig:scene-model-intervention}).
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/gan_baseline_input_and_samples.png}
\caption{%
\small
Samples from a baseline GAN~\citep{mescheder2018which} trained on frames from the training set of the \textsc{Fishbowl} dataset.
\textit{Top:} Input frames from the dataset.
\textit{Bottom:} Samples generated by the GAN.
}
\label{fig:gan-baseline-input-and-samples}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/gan_baseline_interpolation.png}
\caption{%
\small
Samples from the baseline GAN, obtained by interpolating the latent vectors between random samples.
}
\label{fig:gan-interpolation}
\end{figure}
\subsection{SPACE~\texorpdfstring{\citep{lin2020space}}{}}
SPACE~\citep{lin2020space} is an object-centric representation learning method following the Attend-Infer-Repeat (AIR) framework~\citep{eslami2016air, crawford2019spair}. In contrast to the AIR model, the detection of objects and the inference of their latent representations are fully parallelized to make the approach faster and more scalable. We trained SPACE on the Fishbowl dataset using the implementation provided by the authors (\href{https://github.com/zhixuan-lin/SPACE}{https://github.com/zhixuan-lin/SPACE}). We used the default hyperparameters and trained two variants using a 4x4 and 8x8 grid of object detectors, respectively. As for the GAN baseline in~\cref{sec:gan_baseline}, we used every 16th frame for training this model to keep the training time reasonable (160k frames in total). Despite the subsampling, this is still substantially more data than the model was trained on in the original paper (60k images). SPACE expects the input images to have equal width and height; we therefore used a central square crop from every frame.
Training the model on a single Nvidia RTX 2080ti GPU is relatively fast (14h for the 4x4 grid and 18h for the 8x8 grid), confirming the performance advantage of the parallel object detectors. As the results in~\cref{fig:space-on-fishbowl} show, the object reconstructions from the model overall look reasonable. Most structure in the background is missed by the model; however, we conjecture that this might be solvable by adapting the capacity of the background network. The visualization of the object detections, however, reveals a more fundamental failure mode due to the grid structure used by the object detector in SPACE: larger fish are split across several cells, even when using only 4x4 cells. As each cell can only handle at most one object, decreasing the cell count further is not expected to yield sensible results, as this limits the number of objects too much. \cite{lin2020space} mentioned this problem and introduced a boundary loss to address it; however, on the Fishbowl dataset this does not resolve the problem. We hypothesize that an approach based on a fixed grid, while adequate in some cases and having substantial performance advantages, does not work well with objects showing large scale variations as present in our newly proposed dataset. We believe this is a limitation of SPACE that cannot be resolved with further model tuning.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/space_on_fishbowl.png}
\caption{%
\small
Scenes from the Fishbowl dataset reconstructed by SPACE~\citep{lin2020space}. The model is trained using the official implementation provided by the authors with the default parameters. The two variants shown use grid sizes of 4x4 and 8x8, respectively.
}
\label{fig:space-on-fishbowl}
\end{figure}
\subsection{GENESIS-v2~\texorpdfstring{\citep{engelcke2021genesis}}{}}
We trained GENESIS-v2 on the Fishbowl dataset using the official implementation by the authors (\href{https://github.com/applied-ai-lab/genesis}{https://github.com/applied-ai-lab/genesis}) with default hyperparameters except for the image resolution, number of object slots and batch size. We modified the model code to work for rectangular images and used a resolution of 128x192 pixels. We trained GENESIS-v2 with 5 object slots on 4 Nvidia RTX 2080Ti GPUs using a batch size of 64. Initial experiments with 10 object slots led to the background being split up into multiple slots. As for the GAN baseline before, we used every 16th frame for training this model (160k frames in total).
In~\cref{fig:genesisv2-on-fishbowl} we show qualitative results from GENESIS-v2 on the Fishbowl dataset. The reconstructions of the model look somewhat blurry but capture the major structure in the input images well. Importantly, the visualization of the segmentation map and the individual slots reveals that the model succeeds in learning to decompose the scenes into background and objects. Sampling objects and scenes however fails with this model configuration. Most likely this happens due to the GECO objective~\citep{rezende2018taming}, which decreases the weight of the KL term in the loss function as long as the log likelihood of the input sample is below the target value. Training GENESIS-v2 with the original VAE objective instead of GECO leads to better scene samples; the decomposition of the scene, however, fails with this objective (\cref{fig:genesisv2-nogeco-on-fishbowl}).
As a comparison to the end-to-end training within GENESIS-v2, we trained a variant of our object model using the architecture of the GENESIS-v2 component decoder. As encoder, we use a CNN constructed symmetrically to the decoder, using regular convolutions instead of the transposed convolutions. We trained the model with the cutout augmentation and using the same loss and training schedule as for the object model described in the main paper.
The results in~\cref{tbl:genesisv2-object-model} and~\cref{fig:genesisv2-object-model} show that the model generally performs well, but worse than our original object model. This can most likely be explained by the larger capacity of our object model (10 vs 5 layers, 128 vs 64 latent dimensions), which seems to be necessary to model the visually more complex objects in our setting. Also when trained separately, the object model has difficulties with sampling novel fish, which can be addressed at the cost of worse reconstructions (\cref{fig:genesisv2-object-model-beta0.001}).
Overall we conclude that our modular object learning approach scales much better to visually more complex scenes than end-to-end training as done by GENESIS-v2.
Even when using the GENESIS-v2 component decoder within our framework, the necessary trade-off between reconstruction and generation capabilities seems to be more favorable when using our modular approach as opposed to the end-to-end training.
We remark that this comparison comes with a grain of salt: we adapted neither the hyperparameters of GENESIS-v2 nor those of our method, and the same decoder is also used for the background within GENESIS-v2.
The strong qualitative difference in sample quality however makes it unlikely that this explains all of the difference.
Moreover, our approach only addresses learning generative object models, whereas GENESIS-v2 is also capable of inferring the decomposition of static input scenes.
For the future, we therefore see much potential in combining the respective strengths of both methods.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/genesis2_on_fishbowl.png}
\caption{%
\small
Qualitative results of GENESIS-v2 applied on the \textsc{Fishbowl} dataset. The reconstructions in the second row look somewhat blurry, but capture all major structure in the input images shown in the first row. The visualization of the reconstructed segmentation shows that the model succeeds in decomposing the input image into the background and the different objects. Sampling from the model however fails in this setting.
}
\label{fig:genesisv2-on-fishbowl}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/genesis2_nogeco_on_fishbowl.png}
\caption{%
\small
Qualitative results of GENESIS-v2 trained on the Fishbowl dataset using the default VAE objective instead of the GECO objective. Sampling from the model works much better in this setting; the model, however, fails to segment the scene into the individual objects.
}
\label{fig:genesisv2-nogeco-on-fishbowl}
\end{figure}
\begin{table}[H]
\footnotesize
\centering
\caption{\small Comparison of the reconstructions from the object model based on the GENESIS-v2 component decoder with our original object model using the same metrics as in~\cref{tbl:results-object-model}. Both models are trained using cutout augmentation.}
\label{tbl:genesisv2-object-model}
\vspace{0.5em}
\begin{tabular}{llllll}
\toprule
\textbf{Training data} & \textbf{Architecture} & \textbf{IoU} $\uparrow$ & \textbf{MAE} $\downarrow$ & \textbf{IoU@0.5} $\uparrow$ & \textbf{MAE@0.5} $\downarrow$ \\
\midrule
\multirow{2}{2cm}{Motion Segmentation} & GENESIS-v2 & 0.779$\pm$0.002 & 15.3$\pm$0.063 & 0.661$\pm$0.003 & 25.7$\pm$0.114 \\
& Ours & 0.822$\pm$0.001 & 13.0$\pm$0.032 & 0.677$\pm$0.002 & 24.0$\pm$0.017 \\
\midrule
\multirow{2}{2cm}{Ground Truth Segmentation} & GENESIS-v2 & 0.844$\pm$0.001 & 13.7$\pm$0.109 & 0.722$\pm$0.001 & 19.8$\pm$0.062 \\
& Ours & 0.887$\pm$0.001 & 12.3$\pm$0.035 & 0.743$\pm$0.003 & 17.3$\pm$0.087 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/genesis2_object_model.png}
\caption{\small Qualitative results when using the GENESIS-v2 object decoder as object model within our modular training approach using the same loss and training schedule. Reconstructions and samples look worse than with the original object model, hinting at the larger capacity of our object model being necessary for our dataset.}
\label{fig:genesisv2-object-model}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/genesis2_object_model_beta0.001.png}
\caption{\small Qualitative results when using the GENESIS-v2 object decoder as object model within our modular training approach using a larger weight of the KL divergence in the VAE training loss. At the price of worse reconstructions, the samples from the model can be substantially improved this way.}
\label{fig:genesisv2-object-model-beta0.001}
\end{figure}
\subsection{Comparison on the RealTraffic dataset}
To evaluate how well our method transfers to other settings, we trained our model on the RealTraffic dataset~\citep{ehrhardt2020relate}. As the resolution of the images is smaller in the RealTraffic dataset, we reduced the object size threshold to 64px and the minimal distance to the boundary to 8px for the object extraction. We trained the object model with the same architecture as used for the Fishbowl dataset. Due to the smaller dataset size, we trained the model for 600 epochs and reduced the learning rate only after 400 epochs. As the videos in the dataset were all recorded from the same stationary traffic camera, we did not train a background model but used the mean-filtered background directly. In contrast to the aquarium dataset, the object positions and sizes are not distributed uniformly for this dataset. For the scene model, we therefore sample directly from the empirical joint distribution of object positions and sizes (extracted from the object masks obtained by the motion segmentation).
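Sampling positions and sizes jointly from the empirical distribution can be sketched as follows; the box format and helper names are illustrative:

\begin{lstlisting}[language=Python]
import numpy as np

def fit_empirical(boxes):
    """boxes: iterable of (x, y, w, h) tuples extracted from the motion
    segmentation. Returns a sampler that draws position/size pairs
    jointly, so that e.g. cars only appear on the street and sizes stay
    consistent with positions (unlike independent uniform priors)."""
    boxes = np.asarray(boxes, dtype=float)

    def sample(rng, n=1):
        idx = rng.integers(0, len(boxes), size=n)
        return boxes[idx]

    return sample
\end{lstlisting}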
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{supplement_figures/comparison_on_realtraffic.png}
\caption{%
\small
Results from our model trained on the RealTraffic dataset compared to results from other models trained on this dataset: GENESIS~\citep{engelcke2020genesis}, RELATE~\citep{ehrhardt2020relate}, BlockGAN~\citep{nguyen2020blockgan} and SPACE~\citep{lin2020space}.
}
\label{fig:comparison-on-realtraffic}
\end{figure}
In \cref{fig:comparison-on-realtraffic}, we compare samples from our model to other models for which results on the RealTraffic dataset have been reported by \cite{ehrhardt2020relate}\footnote{http://geometry.cs.ucl.ac.uk/projects/2020/relate/}. Overall, our model transfers well to this real-world setting, given that it is used largely unchanged. In comparison to the GAN-based RELATE and BlockGAN, the samples from our model look slightly more blurry. However, the results from our method clearly improve over the VAE-based GENESIS model.
The adaptation of the scene statistics as described above works well for the object positions, as the cars are correctly positioned on the street only. We consider this straightforward adaptation to novel scene statistics a nice advantage of our object-centric modeling approach. In the future, this could be improved further by, e.g., learning to sample latent variables conditioned on the object positions.
\subsection{Training ARFlow on Fishbowl}
We build on the official implementation of ARFlow provided in \href{https://github.com/lliuz/ARFlow}{https://github.com/lliuz/ARFlow}. To train the model on the Fishbowl dataset, we make the following adaptations to the original configuration used for training on Sintel:
\begin{itemize}
\item The internal resolution of the model is set to 320x448 pixels.
\item For training, we use the first 200 videos of the Fishbowl dataset. This amounts to 25.4K frame pairs, which is substantially more than the Sintel dataset (pretraining: 14,570, main training: 1041), the largest dataset on which ARFlow was originally trained. Initial experiments using the first 1000 videos did not lead to an improvement in the training objective compared to only using 200 videos.
\item We train the model using a batch size of 24 and use 300 random batches per epoch. We perform both the pretraining and the main training stage, using the same data for both. The pretraining stage is shortened to 100 epochs, after which the training loss did not improve further.
\end{itemize}
We selected the above parameters in pilot experiments using the final training loss as criterion.
All other hyperparameters, in particular regarding the losses, the augmentations and the model architecture, are used unchanged.
We remark that the hyperparameters were chosen differently for the two datasets used in the original paper, so we conjecture that the performance of this model would most likely improve when closely adapting the training scheme to our setting.
\subsection{Object model}
We loosely build on the $\beta$-VAE implementation by~\cite{Subramanian2020}.
We reimplemented the training script and modified architecture details like the latent dimensionality due to differences in the image size.
We use a CNN with 10 layers as an encoder. Each layer consists of a $3\times3$ convolution, followed by layer normalization~\citep{ba2016layer} and a leaky ReLU nonlinearity with a negative slope of $0.01$. The decoders are built symmetrically by using the reversed list of layer parameters and transposed convolutions. Both decoders use an additional convolutional decoding layer, without normalization and nonlinearity. The detailed specification of the default hyperparameters used by the object model is given in the following table:
\begin{table}[H]
\footnotesize
\centering
\caption{\small Default hyperparameters used for the object model.}
\label{tbl:object-model-parameters}
\begin{tabular}{ll}
\toprule
Parameter & Value \\
\midrule
sample size & $128\times64$ \\
hidden layers: channels & 32, 32, 64, 64, 128, 128, 256, 256, 512, 512 \\
hidden layers: strides & 2, 1, 2, 1, 2, 1, 2, 1, 2, 1 \\
latent dimensions & 128 \\
prior loss weight ($\beta$) & 0.0001 \\
mask loss weight ($\gamma$) & 0.1 \\
learning rate, epoch 1-40 & 0.0001 \\
learning rate, epoch 41-60 & 0.00001 \\
\bottomrule
\end{tabular}
\end{table}
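The layer layout in the table can be sanity-checked with a small pure-Python sketch. This is a hypothetical helper, not part of the released code; it assumes $3\times3$ convolutions with padding 1, so stride-1 layers preserve the spatial size:

```python
def conv_out(size, stride, kernel=3, padding=1):
    """Output size of a convolution along one spatial dimension."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_spec(channels=(32, 32, 64, 64, 128, 128, 256, 256, 512, 512),
                 strides=(2, 1, 2, 1, 2, 1, 2, 1, 2, 1),
                 in_hw=(128, 64)):
    """Walk through the 10 encoder layers (3x3 conv -> layer norm ->
    LeakyReLU(0.01)) and track channels and spatial size per layer."""
    h, w = in_hw
    spec = []
    for c, s in zip(channels, strides):
        h, w = conv_out(h, s), conv_out(w, s)
        spec.append({"channels": c, "stride": s, "out_hw": (h, w)})
    return spec
```

Under this padding assumption, the five stride-2 layers reduce a $128\times64$ input to a $4\times2$ feature map before the latent head.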
We chose the parameters regarding the architecture based on early experiments by qualitatively evaluating the sample quality. The learning rate and mask loss weight $\gamma$ were determined using grid searches with the IoU of the mask as selection criterion. However, we noticed a high degree of consistency between the rankings in terms of the mask IoU and appearance MAE. The best reconstruction quality was obtained when not regularizing the prior (i.e., $\beta=0$). We chose the final value of $\beta=0.0001$ as a compromise between reconstruction and sampling capabilities based on visual inspection of the results.
For the mask and appearance losses defined in the main paper, we use an implementation based on the following PyTorch-like pseudo code:
\begin{lstlisting}[language=Python]
def object_model_loss(image, mask_fg, mask_bg, gamma = 1., beta = 0.001):
    # image (b,c,h,w), mask_fg (b,h,w), mask_bg (b,h,w)
latents = encode(image)
mask_pred, img_pred = mask_decoder(latents), img_decoder(latents)
L_img = (mask_fg * mse(img_pred, image)).sum()
L_mask = ((mask_fg + mask_bg) * bce(mask_pred, mask_fg)).sum()
L_reg = kl_divergence(latents)
Z_img, Z_mask = mask_fg.sum(), (mask_fg + mask_bg).sum()
return L_img / Z_img + gamma * L_mask / Z_mask + beta * L_reg
\end{lstlisting}
\subsection{Background model}
\paragraph{Training details}
We use a similar $\beta$-VAE and training objective for the background model as for the object model.
In contrast to the object model, we do not predict the mask and instead only reconstruct a non-occluded background sample.
We sweep over learning rates in $\{1\cdot10^{-3}, 5\cdot 10^{-4}, 1\cdot10^{-4}\}$ and $\beta$ in $\{10^{-2}, 10^{-3}, 10^{-4}\}$ and select the model with $\beta = 10^{-3}$ and learning rate $10^{-4}$, which obtained the lowest reconstruction and prior loss. A higher value for $\beta$ caused the training to collapse (low prior loss, but high reconstruction loss).
As noted in the main paper, the background model's performance (and resolution) can very likely be improved by using larger models, more hyperparameter tuning, and decoders better suited to the high resolution. We omit these optimizations within the scope of this work and instead focus on the object and scene models.
\paragraph{Implementation}
The background model loss is similar to the reconstruction loss of the foreground model. We omit reconstructing the mask. In the input image for the background model, all foreground pixels are replaced by the average background RGB value to avoid distorting the input color distribution, refer to the example images in \cref{fig:background_inputs}.
The loss can be implemented as follows:
\begin{lstlisting}[language=Python]
def background_model_loss(image, mask_bg, beta = 0.001):
    # image (b,c,h,w), mask_bg (b,h,w)
latents = encode(image)
img_pred = img_decoder(latents)
L_img = (mask_bg * mse(img_pred, image)).sum()
L_reg = kl_divergence(latents)
Z_img = mask_bg.sum()
return L_img / Z_img + beta * L_reg
\end{lstlisting}
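The pre-processing mentioned above (foreground pixels replaced by the average background RGB value before encoding) can be sketched in plain Python; the nested-list image layout is purely illustrative:

```python
def mask_foreground(image, mask_bg):
    """Replace all foreground pixels by the mean background color.

    image:   list of rows of (r, g, b) tuples
    mask_bg: same-shape list of booleans, True = background pixel
    """
    bg_pixels = [px for row, mrow in zip(image, mask_bg)
                 for px, m in zip(row, mrow) if m]
    n = len(bg_pixels)
    mean_bg = tuple(sum(px[c] for px in bg_pixels) / n for c in range(3))
    # keep background pixels, overwrite foreground pixels with the mean color
    return [[px if m else mean_bg for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask_bg)]
```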
\subsection{Scene model}
\paragraph{Mask temperature}
By default, the mask is computed by passing the logits obtained from the object mask decoder through a sigmoid non-linearity for obtaining probabilities.
During scene generation, we added sharpness to the samples by varying the temperature $\tau$, yielding an object mask
\begin{equation}
\mb = \frac{1}{1 + \exp(-\xb / \tau)},
\end{equation}
where $\xb$ is the logit output by the mask decoder and the exponential is applied element-wise.
``Cooling'' the model to values around $\tau=0.1$ yields sharper masks and better perceived sample quality.
Note that the entropy threshold introduced in the next section depends on the value of $\tau$.
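Concretely, the tempered mask can be computed element-wise as follows (plain-Python sketch over a flat list of logits):

```python
import math

def tempered_mask(logits, tau=0.1):
    """Temperature-scaled sigmoid m = 1 / (1 + exp(-x / tau)),
    applied element-wise; tau < 1 'cools' the model and sharpens the mask."""
    return [1.0 / (1.0 + math.exp(-x / tau)) for x in logits]
```

For example, a logit of $0.5$ maps to roughly $0.62$ at $\tau=1$ but to about $0.99$ at $\tau=0.1$.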
\paragraph{Entropy filtering}
When sampling objects during scene generation, we filter the samples according to the model ``confidence''.
A simple form of rejection sampling is used by computing the mask for a given object latent, and then computing the entropy of that mask.
Given a mask $\mb(\tau)$, we adapt the sampling process considering the total entropy, summed over all pixels in the mask,
\begin{equation}
H_2(\mb) = - \sum_{u=1}^W\sum_{v=1}^H \left[ \mb(u,v) \log_2 \mb(u,v) + (1-\mb(u,v)) \log_2 (1-\mb(u,v)) \right]
\end{equation}
and reject samples where $H_2(\mb)$ exceeds a threshold.
Reasonable values for a mask temperature of $\tau=0.1$ are around 100 to 200 bits.
It is possible to trade sample quality and sharpness for an increased variability in the samples by increasing the threshold.
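This rejection step can be sketched as follows (`sample_mask` is a hypothetical callable returning a flat list of per-pixel mask probabilities; the entropy is summed over pixels, matching the 100 to 200 bit thresholds above):

```python
import math

def mask_entropy_bits(mask, eps=1e-12):
    """Binary entropy H_2 of a soft mask in bits, summed over all pixels."""
    h = 0.0
    for m in mask:
        m = min(max(m, eps), 1.0 - eps)  # clamp to avoid log(0)
        h -= m * math.log2(m) + (1.0 - m) * math.log2(1.0 - m)
    return h

def rejection_sample(sample_mask, threshold_bits=150.0, max_tries=1000):
    """Keep drawing candidate masks until one falls below the entropy
    threshold; returns None if every candidate is rejected."""
    for _ in range(max_tries):
        mask = sample_mask()
        if mask_entropy_bits(mask) <= threshold_bits:
            return mask
    return None
```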
\subsection{Objects extracted by the motion segmentation}
The following figure shows examples of objects extracted from the original videos using motion segmentation and the ground truth occluded masks, respectively.
The figure reveals typical failure modes, such as multiple fish that are segmented jointly, and parts of the background contained in the object mask.
Most of the fish, however, are segmented very accurately.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{supplement_figures/objects_moseg.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{supplement_figures/objects_groundtruth.png}
\end{subfigure}
\caption{%
\small
Objects extracted from the training videos that are used for training the object model. \textit{Left:} Objects extracted using the motion segmentation.
\textit{Right:} Objects extracted using the ground truth occluded segmentation masks.
}
\label{fig:moseg-extracted-objects}
\end{figure}
\clearpage
\subsection{Object model: additional reconstructions}
In~\cref{fig:reconstruction_by_occlusion_level} we show additional reconstructions from the object model trained on motion segmentation and ground truth occluded masks, respectively.
Moderate occlusion levels up to $30\%$ are handled well by both variants.
At higher occlusion levels, however, only the variant of the object model trained on the ground truth masks is able to correctly complete the partial objects.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/reconstruction_by_occlusion_level.png}
\caption{%
\small
Reconstructions from the object model for input crops with different occlusion levels ($0.0$ = no occlusion, $1.0$ = fully occluded). For each occlusion level, the input images and the respective reconstructions are shown. Both model variants are trained using another fish as artificial occlusion during training.
\textit{Left:} Object model trained using the motion segmentation masks.
\textit{Right:} Object model trained using the ground truth unoccluded masks.
}
\label{fig:reconstruction_by_occlusion_level}
\end{figure}
\clearpage
\subsection{Object model: additional samples}
The following figure shows additional samples from the object models trained using another object as artificial occlusion (the same models as used for~\cref{fig:results-object-model} in the main paper).
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{supplement_figures/object_model_moseg_samples.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{supplement_figures/object_model_groundtruth_samples.png}
\end{subfigure}
\caption{%
\small
Samples from the object model using another input object as augmentation during training. These are the same models as used for~\cref{fig:results-object-model} in the main paper.
\textit{Left:} Object model trained on the motion segmentation.
\textit{Right:} Object model trained on the ground truth occluded masks.
}
\label{fig:object-model-additional-samples}
\end{figure}
\subsection{Background Model: Inputs}
The background model is trained on images pre-processed by an ensembling of foreground-background algorithms from \cite{bgslibrary}. We use $\Sigma-\Delta$ \citep{manzanera2007sigmadelta}, static frame differences, LOBSTER \citep{stcharles2014lobster} and PAWKS \citep{stcharles2016pawks} with standard hyperparameters set in \cite{bgslibrary}.
The goal behind this choice is to detect as many objects as possible while minimizing the number of erroneously included foreground pixels; it might be possible to further improve this pre-processing step with different algorithms and hyperparameter tuning.
We give a qualitative impression of the background input samples in \cref{fig:background_inputs}. Note that all foreground pixels were replaced by the average color value obtained by averaging over background pixels to avoid inputting unrealistic colors into the $\beta$-VAE.
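The combination rule used to merge the four detectors' outputs is not specified above; as a clearly labeled assumption, one simple choice is a pixel-wise vote:

```python
def ensemble_foreground(masks, min_votes=2):
    """Combine per-algorithm boolean foreground masks (flat lists of equal
    length) by voting: a pixel counts as foreground if at least `min_votes`
    detectors flag it. NOTE: this vote rule is an assumption of the sketch,
    not necessarily the combination used for the background-model inputs."""
    return [sum(px) >= min_votes for px in zip(*masks)]
```

Lowering `min_votes` detects more objects at the cost of more spurious foreground pixels, mirroring the trade-off described above.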
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/background_overview.png}
\caption{%
Input samples to the background $\beta$-VAE. Objects were removed by applying an ensemble of foreground-background segmentation algorithms. Images are resized to $96\times 64$\,px, trading off sample quality for training speed.
}
\label{fig:background_inputs}
\end{figure}
\subsection{Additional samples from the scene model}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{supplement_figures/two_object_interventions.png}
\caption{Two object ``reconstructions''. Object latents are obtained from a reference sample. Our modular scene model makes it straightforward to vary the location of a single object, exchange single objects, or change the background without affecting the rest of the output sample (top to bottom).}
\label{fig:scene-model-manipulation}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{supplement_figures/additional_scene_samples_unconstrained.png}
\caption{Additional samples from the scene model.
Depicted samples use different (reconstructed) backgrounds; samples from the object model are drawn from the standard normal prior and filtered with a 150\,bit entropy threshold at $\tau=0.2$; sample sizes are constrained by a reference training sample; object positions are sampled independently from a uniform prior. Samples are not cherry-picked.}
\label{fig:scene-model-additional-samples}
\end{figure}
\clearpage
\subsection{Conditional sampling from the scene model}
We present conditional samples from the scene model.
As a simple baseline, we use a k-nearest neighbour approach for conditionally sampling object latents based on background latents.
First, we extract a paired dataset of background and foreground latents from the training dataset.
Second, we sample background latents from the standard Normal distribution (prior of the background model).
Third, we compute the 2\% nearest neighbouring videos based on the $L^2$ distance from the background latent.
Fourth, we randomly sample a subset of foreground latents extracted by the motion segmentation.
Finally, we reconstruct the scene using the background latents along with the chosen subset of foreground latents. We reject foreground latents with an entropy larger than 150 bit.
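The steps above can be sketched in plain Python (hypothetical data layout: `pairs` holds one `(background_latent, foreground_latents)` tuple per training video; the final entropy-based rejection is omitted here):

```python
import math
import random

def knn_conditional_sample(pairs, bg_latent, frac=0.02, k_objects=3):
    """Conditionally sample foreground latents given a background latent:
    rank videos by the L2 distance of their background latent, keep the
    `frac` nearest ones, and draw a random subset of their foreground
    latents."""
    def l2(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    ranked = sorted(pairs, key=lambda p: l2(p[0], bg_latent))
    neighbours = ranked[:max(1, int(frac * len(ranked)))]
    # pool the foreground latents of the neighbouring videos, pick a subset
    pool = [fg for _, fgs in neighbours for fg in fgs]
    return random.sample(pool, min(k_objects, len(pool)))
```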
Samples (non-cherry-picked) are depicted in \cref{fig:conditional_samples}.
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{supplement_figures/conditional_samples.png}
\caption{
Conditional samples from the model. Fish appearances (qualitatively, mainly the brightness) now vary according to the background sample.
}
\label{fig:conditional_samples}
\end{figure}
\clearpage
Similarly, we can constrain other latents like the x and y positions to a particular background.
In \cref{fig:conditional_samples_xy} we show examples of constraining object locations based on a reference sample for both the ground truth and motion segmentation object model.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{supplement_figures/conditional_xy.png}
\caption{%
Conditional sampling of object locations based on a reference scene.
For matching latents sampling, we extract the object latents and positions from the reference scene using the motion segmentation model.
Samples obtained by matched sampling are more similar to the reference scene than samples obtained by unconstrained sampling from the model priors.
}
\label{fig:conditional_samples_xy}
\end{figure}
\clearpage
\subsection{Entropy sampling}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{supplement_figures/entropy_sampling.png}
\caption{%
\small
Details of entropy filtering when using the object model.
(A1) Distribution of mask entropies for a given foreground model, estimated over 25,000 randomly sampled objects. The distribution typically peaks around 100 to 200 bits, making this a reasonable range for picking the cut-off.
(A2) The 128d latent vector of objects is reasonably correlated with the entropy ($R^2=0.45$), making it possible to alter the prior used for sampling to encourage low-entropy samples.
(B) 64 samples with the lowest entropy after drawing 1000 random samples from the object models and sorting according to entropy. While this strategy encourages some non-plausible, low entropy objects (row 4 and 6), it generally filters the dataset for samples with sharp boundaries and good visual quality.
(C) 64 samples with the highest entropy after drawing 1000 random samples from the object models and sorting according to entropy. With a few exceptions of plausible samples (e.g. in row 2), most of the samples should be rejected in the scene model.
}
\label{fig:entropy}
\end{figure}
\section{Introduction}\label{scn:intro}
The use of computational complexity theory to study the inherent difficulty of computational problems has proven remarkably fruitful over the last decades. For example, the theory of NP-completeness~\cite{C72,L73,K72} has helped classify the worst-case complexity of hundreds of computational problems which elude efficient classical algorithms. In the quantum setting, the study of a quantum analogue of NP, known as Quantum Merlin Arthur (QMA), was started in 1999 by the seminal ``quantum Cook-Levin theorem'' of Kitaev~\cite{KSV02}, which showed that estimating the ground state energy of a given $k$-local Hamiltonian is QMA-complete for $k\geq 5$. Since then, a number of physically motivated problems have been shown complete for QMA (see, e.g.,~\cite{Boo14} and~\cite{GHLS14} for surveys), a number of which focus on estimating ground state energies of local Hamiltonians.
In recent years, however, new directions in quantum complexity theory involving other physically motivated properties of local Hamiltonians have appeared. For example, Brown, Flammia and Schuch~\cite{BFS11} (see also Shi and Zhang~\cite{SZ}) introduced a quantum analogue of $\class{\#P}$, denoted $\class{\#BQP}$, and showed that computing the ground state degeneracy or density of states of local Hamiltonians is $\class{\#BQP}$-complete. Gharibian and Kempe~\cite{GK12} introduced the class cq-${\rm\Sigma_2}$, a quantum generalization of $\Sigma_2^p$, and showed that determining the smallest subset of interaction terms of a given local Hamiltonian which yields a frustrated ground space is cq-${\rm\Sigma_2}$-complete (and additionally, cq-${\rm\Sigma_2}$-hard to approximate). Gharibian and Sikora~\cite{GS14} showed that determining whether the ground space of a local Hamiltonian has an ``energy barrier'' is QCMA-complete, where QCMA~\cite{AN02} is Merlin-Arthur (MA) with a classical proof and quantum verifier. Finally, and most relevant to this work, Ambainis~\cite{A14} introduced $\class{P}^{\class{QMA}[\class{log}]}$, which is the class of decision problems decidable by a polynomial-time Turing machine with logarithmically many queries to a QMA oracle (i.e. a quantum analogue of $\class{P}^{\class{NP}[\class{log}]}$). He showed that $\class{P}^{\class{QMA}[\class{log}]}$ captures the complexity of a very natural physical problem: ``Simulating'' a local measurement against the ground state of a local Hamiltonian (more formally, computing the expectation value of a given local observable against the ground state).
It is worth noting here that, indeed, given a local Hamiltonian, often one is not necessarily interested in a description of the \emph{entire} ground state~\cite{GHLS14}. Rather, one may be interested in local quantities such as the evaluation of a local observable or of a correlation function. This makes $\class{P}^{\class{QMA}[\class{log}]}$ an arguably well-motivated complexity class, whose study we thus continue here.
\paragraph{Our results.} Our findings are summarized under three headings below.\\
\noindent \emph{1. $\class{P}^{\class{QMA}[\class{log}]}$-completeness for estimating local quantities.} We begin with the study of two physically motivated problems. The first (discussed above), \prob{APX-SIM}, was formalized by Ambainis~\cite{A14} and is roughly as follows (formal definitions given in Section~\ref{scn:preliminaries}): Given a $k$-local Hamiltonian $H$ and an $l$-local observable $A$, estimate $\langle A\rangle:=\bra{\psi}A\ket{\psi}$, where $\ket{\psi}$ is a ground state of $H$. The second problem, which we introduce here and denote \prob{APX-2-CORR}, is defined similarly to \prob{APX-SIM}, except now one is given two observables $A$ and $B$, and the goal is to estimate the \emph{two-point correlation function} $\langle A\otimes B\rangle -\langle A \rangle\langle B\rangle$.
In previous work, Ambainis~\cite{A14} showed that \prob{APX-SIM}\ is $\class{P}^{\class{QMA}[\class{log}]}$-complete for $O(\log n)$-local Hamiltonians and $O(\log n)$-local observables. From a physical standpoint, however, it is typically desirable to have $O(1)$-local Hamiltonians and observables, and whether $\class{P}^{\class{QMA}[\class{log}]}$-hardness holds in this regime was left as an open question. We thus first ask: \emph{Is \prob{APX-SIM}\ still hard for an $O(1)$-local Hamiltonian and $1$-local observables?}
\emph{A priori}, one might guess that simulating $1$-local measurements might not be so difficult --- for example, the ground state energy of a $1$-local Hamiltonian can trivially be estimated efficiently. Yet this intuition is easily seen to be incorrect. Since one can embed a $3$-SAT instance $\phi$ into a $3$-local Hamiltonian, the ability to repeatedly locally measure observable $Z$ against single qubits of the ground state allows one to determine a solution to $\phi$! Thus the $1$-local observable case is at least NP-hard. Indeed, here we show it is much harder, resolving Ambainis's open question in the process.
\begin{theorem}\label{thm:main1}
Given a $5$-local Hamiltonian $H$ on $n$ qubits and a $1$-local observable $A$, estimating $\langle A\rangle $ (i.e. \prob{APX-SIM}) is $\class{P}^{\class{QMA}[\class{log}]}$-complete.
\end{theorem}
\noindent Thus, measuring just a \emph{single} qubit of a local Hamiltonian $H$'s ground state with a fixed observable $A$ (in our construction, $A$ is independent of $H$) is harder than QMA (assuming $\class{QMA}\neq \class{P}^{\class{QMA}[\class{log}]}$, which is likely as otherwise $\class{QMA}=\class{co-QMA}$).
Using similar techniques, we also show \prob{APX-2-CORR}\ is $\class{P}^{\class{QMA}[\class{log}]}$-complete.
\begin{theorem}\label{thm:main2}
Given a $5$-local Hamiltonian $H$ on $n$ qubits and a pair of $1$-local observables $A$ and $B$, estimating $\langle A\otimes B\rangle -\langle A \rangle\langle B\rangle$ (i.e. \prob{APX-2-CORR}) is $\class{P}^{\class{QMA}[\class{log}]}$-complete.
\end{theorem}
\noindent\emph{2. An upper bound on the power of $\class{P}^{\class{QMA}[\class{log}]}$.} Since $\class{P}^{\class{QMA}[\class{log}]}$ captures the complexity of natural physical problems, and since it is thought of as ``slightly harder'' than QMA (and in particular, $\class{QMA}\subseteq \class{P}^{\class{QMA}[\class{log}]}$), we next ask the question: \emph{How much harder than QMA is $\class{P}^{\class{QMA}[\class{log}]}$?} Recall that $\class{QMA} \subseteq \class{PP}$~\cite{KW00,Vy03,MW05} (note \cite{Vy03} actually shows the stronger containment $\class{QMA}\subseteq \class{A}_0\class{PP}$). Here, PP is the set of promise problems solvable in probabilistic polynomial time with \emph{unbounded} error. Our next result shows that $\class{P}^{\class{QMA}[\class{log}]}$ is ``not too much harder'' than QMA in the following rigorous sense.
\begin{theorem}\label{thm:inPP}
$\class{P}^{\class{QMA}[\class{log}]}\subseteq\class{PP}$.
\end{theorem}
\noindent\emph{3. Estimating spectral gaps and oracles for promise problems.} A central theme in this work is the subtlety involved in the study of oracle classes in which the oracle solves a \emph{promise} problem (such as $\class{P}^{\class{QMA}[\class{log}]}$), as opposed to a decision problem (such as $\class{P}^{\class{NP}[\class{log}]}$, where $\class{P}^{\class{NP}[\class{log}]}$ is defined as $\class{P}^{\class{QMA}[\class{log}]}$ except with an NP oracle). As discussed further in ``Proof techniques and discussions below'', the issue here is that a P machine \emph{a priori} cannot in general determine if the query it makes to a QMA oracle satisfies the promise gap of the oracle. For queries which violate this promise, the oracle is allowed to give an arbitrary answer. We observe that this point appears to have been missed in~\cite{A14}, rendering a claimed proof that determining the spectral gap of a given $O(\log n)$-local Hamiltonian $H$ is $\class{P}^{\class{UQMA}[\class{log}]}$-hard incorrect. (Here, $\class{P}^{\class{UQMA}[\class{log}]}$ is defined as $\class{P}^{\class{QMA}[\class{log}]}$ except with a Unique QMA oracle.) Our last result both shows how to overcome this difficulty (at the expense of obtaining a ``slightly weaker'' hardness claim involving a Turing reduction, whereas~\cite{A14} claimed hardness under a mapping reduction), and improves the locality of $H$ to $O(1)$.
\begin{theorem}\label{thm:spgap}
Given a $4$-local Hamiltonian $H$, estimating its spectral gap (i.e. $\prob{SPECTRAL-GAP}$) is $\class{P}^{\class{UQMA}[\class{log}]}$-hard under polynomial time Turing reductions (i.e. Cook reductions).
\end{theorem}
\paragraph{Proof techniques and discussion.}~\\
\noindent \emph{1. $\class{P}^{\class{QMA}[\class{log}]}$-completeness for estimating local quantities.} The proofs of our first two $\class{P}^{\class{QMA}[\class{log}]}$-hardness results (Theorem~\ref{thm:main1} and Theorem~\ref{thm:main2}) are similar, so we focus on \prob{APX-SIM}\ here. Intuitively, our aim is simple: To design our local Hamiltonian $H$ so that its ground state encodes a so-called history state~\cite{KSV02} $\ket{\psi}$ for a given $\class{P}^{\class{QMA}[\class{log}]}$ instance, such that measuring observable $Z$ on the designated ``output qubit'' of $\ket{\psi}$ reveals the answer of the computation. At a high level, this is achieved by combining a variant of Kitaev's circuit-to-Hamiltonian construction~\cite{KSV02} (which forces the ground state to follow the P circuit) with Ambainis's ``query Hamiltonian''~\cite{A14} (which forces the ground state to correspond to correctly answered queries to the QMA oracle). Making this rigorous, however, requires developing a few ideas, including: A careful analysis of Ambainis's query Hamiltonian's ground space when queries violating the promise gap of the oracle are allowed (Lemma~\ref{l:amborig}; more on this below), a simple but useful corollary of Kempe, Kitaev, and Regev's Projection Lemma~\cite{KKR06} (Corollary~\ref{cor:kkr}, showing that any low energy state of $H$ must be close to a valid history state), and application of Kitaev's unary encoding trick\footnote{In~\cite{KSV02}, this trick was used to reduce the locality of the clock register.}~\cite{KSV02} to bring the locality of the Hamiltonian $H$ down to $O(1)$ (Lemma~\ref{l:amb}).
Next, to show containment of \prob{APX-2-CORR}\ in $\class{P}^{\class{QMA}[\class{log}]}$ (Theorem~\ref{thm:main2}), a natural approach would be to run Ambainis's $\class{P}^{\class{QMA}[\class{log}]}$ protocol for \prob{APX-SIM}\ independently for each term $\langle A\otimes B\rangle$, $\langle A\rangle$, and $\langle B\rangle$. However, if a cheating prover does not send the \emph{same} ground state $\ket{\psi}$ for each of these measurements, soundness of the protocol can be violated. To circumvent this, we exploit a trick of Chailloux and Sattath~\cite{CS11} from the setting of QMA(2): we observe that the correlation function requires only knowledge of the two-body reduced density matrices $\set{\rho_{ij}}$ of $\ket{\psi}$. Thus, a prover can send classical descriptions of the $\set{\rho_{ij}}$ along with a ``consistency proof'' for the QMA-complete Consistency problem~\cite{L06}.\\
\noindent\emph{2. An upper bound on the power of $\class{P}^{\class{QMA}[\class{log}]}$.} We now move to our third result, which is perhaps the most technically involved. To show $\class{P}^{\class{QMA}[\class{log}]}\subseteq\class{PP}$ (Theorem~\ref{thm:inPP}), we exploit the technique of \emph{hierarchical voting}, used by Beigel, Hemachandra, and Wechsung~\cite{BHW89} to show $\class{P}^{\class{NP}[\class{log}]}\subseteq \class{PP}$, in conjunction with the QMA strong amplification results of Marriott and Watrous~\cite{MW05}. The intuition is perhaps best understood in the context of $\class{P}^{\class{NP}[\class{log}]}$~\cite{BHW89}. There, the PP machine first attempts to \emph{guess} the answers to each NP query by picking random assignments to the SAT formula $\phi_i$ representing query $i$, in the hope of guessing a satisfying assignment for $\phi_i$. Since such a guess can succeed only if $\phi_i$ is satisfiable, it can be seen that the lexicographically \emph{largest} string $y^*$ attainable by this process must be the correct query string (i.e. string of query answers). The scheme then uses several rounds of ``hierarchical voting,'' in which lexicographically smaller query strings reduce their probability of being output to the point where $y^*$ is guaranteed to be the ``most likely'' query string output. While the quantum variant of this scheme which we develop is quite natural, its analysis is markedly more involved than the classical setting due to both the bounded-error nature of QMA and the possibility of ``invalid queries'' violating the QMA promise gap. (For example, it is no longer necessarily true that the lexicographically largest obtainable $y^*$ is a ``correct'' query string.)\\
\noindent\emph{3. Estimating spectral gaps and oracles for promise problems.} Finally, let us move to our fourth result and the theme of ``invalid queries''. Let us assume that all calls by the $\class{P}^{\class{QMA}[\class{log}]}$ machine to the QMA oracle $Q$ are for an instance $(H,a,b)$ of the Local Hamiltonian Problem (LH): Is the ground state energy of $H$ at most $a$ (YES case), or at least $b$ (NO case), for $b-a\geq1/\textup{poly}(n)$? Unfortunately, a P machine cannot in general tell whether the instance $(H,a,b)$ it feeds to $Q$ satisfies the promise conditions of LH (i.e.~the ground state energy may lie in the interval $(a,b)$). If the promise is violated, we call such a query \emph{invalid}, and in this case $Q$ is allowed to either accept or reject. This raises the issue of how to ensure a YES instance (or NO instance) of a $\class{P}^{\class{QMA}[\class{log}]}$ problem is well-defined. To do so, we stipulate (see, e.g., Definition 3 of Goldreich~\cite{G06}) that the P machine must output the \emph{same} answer regardless of how any invalid queries are answered by the oracle. As mentioned earlier, this point appears to have been missed in~\cite{A14}, where all queries were assumed to satisfy the LH promise. This results in the proofs of two key claims of~\cite{A14} being incorrect. The first claim was used in the proof of $\class{P}^{\class{QMA}[\class{log}]}$-completeness for \prob{APX-SIM}\ (Claim 1 in~\cite{A14}); we provide a corrected statement and proof in Lemma~\ref{l:amborig} (which suffices for the $\class{P}^{\class{QMA}[\class{log}]}$-hardness results in~\cite{A14} regarding \prob{APX-SIM}\ to hold).
The error in the second claim (Claim 2 of~\cite{A14}), wherein $\class{P}^{\class{UQMA}[\class{log}]}$-hardness of determining the spectral gap of a local Hamiltonian is shown, appears arguably more serious. The construction of~\cite{A14} requires a certain ``query Hamiltonian'' to have a spectral gap, which indeed holds if the $\class{P}^{\class{QMA}[\class{log}]}$ machine makes no invalid queries. However, if the machine makes invalid queries, this gap can close, and it is not clear how one can recover $\class{P}^{\class{QMA}[\class{log}]}$-hardness under mapping reductions. To overcome this, we introduce a technique of ``query validation'', which is perhaps reminiscent of property testing: Given a query to the QMA oracle, we would like to determine if the query is valid or ``far'' from valid. While it is not clear how a P machine alone can solve this ``property testing'' problem, we show how to use a SPECTRAL GAP oracle to do so, essentially allowing us to eliminate ``sufficiently invalid'' queries. Carefully combining this idea with Ambainis's original construction~\cite{A14}, coupled with application of Kitaev's unary encoding trick, we show Theorem~\ref{thm:spgap}, i.e. $\class{P}^{\class{UQMA}[\class{log}]}$-hardness for $\prob{SPECTRAL-GAP}$ for $O(1)$-local Hamiltonians. Since our query validation approach requires a polynomial number of calls to the $\prob{SPECTRAL-GAP}$ oracle, this result requires a polynomial-time \emph{Turing} reduction. Whether this can be improved to a mapping reduction is left as an open question.
\paragraph{Organization.} This paper is organized as follows: In Section~\ref{scn:preliminaries}, we give notation, formal definitions, and a corollary of the Projection Lemma. In Section~\ref{scn:lemmas}, we show various lemmas regarding Ambainis's query Hamiltonian. In Section~\ref{scn:1local} and Section~\ref{scn:corr}, we show Theorem~\ref{thm:main1} and Theorem~\ref{thm:main2}, respectively. Section~\ref{scn:PP} proves Theorem~\ref{thm:inPP}. Theorem~\ref{thm:spgap} is given in Section~\ref{scn:spectralGap}. We conclude in Section~\ref{scn:conclusions} and pose open questions.
\section{Preliminaries}\label{scn:preliminaries}
\paragraph{Notation.} For $x\in\set{0,1}^n$, $\ket{x}\in({\mathbb C}^2)^{\otimes n}$ denotes the computational basis state labeled by $x$. Let $\spa{X}$ be a complex Euclidean space. Then, $\lin{\spa{X}}$ and $\density{\spa{X}}$ denote the sets of linear and density operators acting on $\spa{X}$, respectively. For subspace $\spa{S}\subseteq\spa{X}$, $\spa{S}^\perp$ denotes the orthogonal complement of $\spa{S}$. For Hermitian operator $H$, $\lambda(H)$ and $\lambda(H|_{\spa{S}})$ denote the smallest eigenvalue of $H$ and the smallest eigenvalue of $H$ restricted to space $\spa{S}$, respectively. The spectral and trace norms are defined $\snorm{A} := \max\{\norm{A\ket{v}}_2 : \norm{\ket{v}}_2 = 1\}$ and $\trnorm{A}:=\tr{\sqrt{A^\dagger A}}$, respectively, where $:=$ denotes a definition. We set $[m]:=\set{1,\ldots,m}$.
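For instance, for $A=\mathrm{diag}(3,-4)$ we have $\sqrt{A^\dagger A}=\mathrm{diag}(3,4)$, so that $\snorm{A}=4$ and $\trnorm{A}=3+4=7$.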
\paragraph{Definitions and lemmas.} The class PP~\cite{G77} is the set of promise problems for which there exists a polynomial-time probabilistic Turing machine which accepts any YES instance with probability strictly greater than $1/2$, and accepts any NO instance with probability at most $1/2$.
The class $\class{P}^{\class{QMA}[\class{log}]}$, defined by Ambainis~\cite{A14}, is the set of decision problems decidable by a polynomial-time deterministic Turing machine with the ability to query an oracle for a QMA-complete problem (e.g. the $2$-local Hamiltonian problem ($2$-LH) \cite{KKR06}) $O(\log n)$ times, where $n$ is the size of the input. $2$-LH is defined as follows: Given a $2$-local Hamiltonian $H$ and inverse polynomially separated thresholds $a,b\in{\mathbb R}$, decide whether $\lambda(H)\leq a$ (YES-instance) or $\lambda(H)\geq b$ (NO-instance). Note that the P machine is allowed to make queries which violate the promise gap of $2$-LH, i.e. with $\lambda(H)\in(a,b)$; in this case, the oracle can output either YES or NO. The P machine is nevertheless required to output the same final answer (i.e. accept or reject) regardless of how such ``invalid'' queries are answered~\cite{G06}.
For any P machine $M$ making $m$ queries to a QMA oracle, we use the following terminology throughout this article. A \emph{valid} (\emph{invalid}) query satisfies (violates) the promise gap of the QMA oracle. A \emph{correct} query string $y\in\set{0,1}^m$ encodes a sequence of correct answers to all of the $m$ queries. Note that for any invalid query of $M$, any answer is considered ``correct'', yielding the possible existence of multiple correct query strings. An \emph{incorrect} query string is one which contains at least one incorrect query answer.
We now recall the definition of \prob{APX-SIM}.
\begin{definition}[$\prob{APX-SIM}(H,A,k,l,a,b,\delta)$ (Ambainis~\cite{A14})]
Given a $k$-local Hamiltonian $H$, an $l$-local observable $A$, and real numbers $a$, $b$, and $\delta$ such that $b-a\geq n^{-c}$ and $\delta\geq n^{-c'}$, for $n$ the number of qubits $H$ acts on and $c,c'>0$ some constants, decide:
\begin{itemize}
\item If $H$ has a ground state $\ket{\psi}$ satisfying $\bra{\psi}A\ket{\psi}\leq a$, output YES.
\item If for any $\ket{\psi}$ satisfying $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$, it holds that $\bra{\psi}A\ket{\psi}\geq b$, output NO.
\end{itemize}
\end{definition}
Next, we briefly review Kitaev's circuit-to-Hamiltonian construction \cite{KSV02}. Given a quantum circuit $U=U_L\cdots U_1$ consisting of $1$- and $2$-qubit gates $U_i$ and acting on registers $Q$ (proof register) and $W$ (workspace register),
this construction maps $U$ to a $5$-local Hamiltonian $H=H_{\rm in}+H_{\rm out}+H_{\rm prop}+H_{\rm stab}$. Here, we use two key properties of $H_{\rm in}+H_{\rm prop}+H_{\rm stab}$. First, the null space of $H_{\rm in}+H_{\rm prop}+H_{\rm stab}$ is spanned by \emph{history states}, which for any $\ket{\psi}$ have form
\begin{equation}\label{eqn:hist}
\ket{\psi_{\rm hist}}=\frac{1}{\sqrt{L+1}}\sum_{t=0}^LU_t\cdots U_1\ket{\psi}_Q\ket{0\cdots 0}_W\ket{t}_C,
\end{equation}
where $C$ is a clock register keeping track of time \cite{KSV02}. Second, we use the following lower bound\footnote{This bound is stated as $\Omega(\Delta/L^3)$ in \cite{GK12}; the constant $\pi^2/64$ can be derived from the analysis therein, though the exact value of the constant is not crucial in this work.} on the smallest non-zero eigenvalue of $H_{\rm in}+H_{\rm prop}+H_{\rm stab}$: \begin{lemma}[Lemma 3 (Gharibian, Kempe \cite{GK12})]\label{l:GKgap}
The smallest non-zero eigenvalue of $\Delta(H_{\rm in}+H_{\rm prop}+H_{\rm stab})$ is at least $\pi^2\Delta/(64L^3)\in\Omega(\Delta/L^3)$,~for $\Delta\in{\mathbb R}^+$ and $L\geq 1$.
\end{lemma}
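As an illustrative sanity check (ours, not part of the formal development), the following NumPy sketch verifies for a toy single-gate circuit that the corresponding history state lies in the null space of the propagation term, and that the term is positive semidefinite; all operator and state names here are our own.

```python
import numpy as np

# Toy check: L = 1 gate, one proof qubit, clock in C^2 (t in {0,1}).
# Tensor ordering throughout: (qubit) x (clock).
U1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the single gate (Hadamard)
I2 = np.eye(2)
T01 = np.array([[0.0, 1.0], [0.0, 0.0]])         # |0><1| on the clock

# Standard propagation term for L = 1:
# H_prop = (1/2)(I x I - U1 x |1><0| - U1^dag x |0><1|)
H_prop = 0.5 * (np.kron(I2, I2)
                - np.kron(U1, T01.T)
                - np.kron(U1.conj().T, T01))

psi = np.array([0.6, 0.8])                        # arbitrary unit "proof" state
hist = (np.kron(psi, [1.0, 0.0]) + np.kron(U1 @ psi, [0.0, 1.0])) / np.sqrt(2)

assert np.linalg.norm(H_prop @ hist) < 1e-12      # history state is a zero eigenvector
assert np.linalg.eigvalsh(H_prop).min() > -1e-12  # H_prop is positive semidefinite
```

The check works for any choice of $\ket{\psi}$, reflecting that the null space is spanned by history states over all proofs.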
A useful fact for arbitrary complex unit vectors $\ket{v}$ and $\ket{w}$ is (see, e.g., Equation~1.33 of~\cite{G13}):
\begin{equation}\label{eqn:enorm}
\trnorm{\ketbra{v}{v}-\ketbra{w}{w}}=2\sqrt{1-\abs{\brakett{v}{w}}^2}\leq 2\enorm{\ket{v}-\ket{w}}.
\end{equation}
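Both the equality and the inequality in Equation~(\ref{eqn:enorm}) are easily confirmed numerically; the following sketch (illustrative only, with names of our choosing) tests them on random pure states.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_norm(A):
    # ||A||_tr = sum of singular values
    return np.linalg.svd(A, compute_uv=False).sum()

def random_unit(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

for _ in range(200):
    v, w = random_unit(6), random_unit(6)
    lhs = trace_norm(np.outer(v, v.conj()) - np.outer(w, w.conj()))
    mid = 2 * np.sqrt(max(0.0, 1 - abs(np.vdot(v, w)) ** 2))
    rhs = 2 * np.linalg.norm(v - w)
    assert abs(lhs - mid) < 1e-9   # the equality
    assert mid <= rhs + 1e-9       # the inequality
```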
Next, two points on complexity classes: First, let $V$ denote a QMA verification circuit acting on $M$ proof qubits, and with completeness $c$ and soundness $s$. If one runs $V$ on ``proof'' $\rho=I/2^M$, then for a YES instance, $V$ accepts with probability at least $c/2^M$ (since $I/2^M$ can be viewed as ``guessing'' a correct proof with probability at least $1/2^M$), and in a NO instance, $V$ accepts with probability at most $s$ (see, e.g.,~\cite{MW05,W09_2}). Second, the class PQP is defined analogously to BQP, except in the YES case, the verifier accepts with probability strictly larger than $1/2$, and in the NO case, the verifier accepts with probability at most $1/2$.
For clarity, throughout this article a ``local'' Hamiltonian is with respect to the number of qubits each local interaction term acts on, not with respect to geometric locality.
\paragraph{A corollary of the Projection Lemma.} Finally, we show a simple but useful corollary of the Projection Lemma of Kempe, Kitaev, Regev~\cite{KKR06}.
\begin{lemma}[Kempe, Kitaev, Regev~\cite{KKR06}]\label{l:proj}
Let $H=H_1+H_2$ be the sum of two Hamiltonians operating on some Hilbert space $\spa{H}=\spa{S}\oplus\spa{S}^\perp$. The Hamiltonian $H_1$ is such that $\spa{S}$ is a zero eigenspace and the eigenvectors in $\spa{S}^\perp$ have eigenvalue at least $J>2\snorm{H_2}$. Then,
\[
\lambda(H_2|_{\spa{S}})-\frac{\snorm{H_2}^2}{J-2\snorm{H_2}}\leq \lambda(H)\leq \lambda(H_2|_{\spa{S}}).
\]
\end{lemma}
\begin{corollary}\label{cor:kkr}
Let $H$, $H_1$, $H_2$, $\spa{S}$ be as stated in Lemma~\ref{l:proj}, and define $K:=\snorm{H_2}$. Then, for any $\delta\geq0$ and vector $\ket{\psi}$ satisfying $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$, there exists a $\ket{\psi'}\in \spa{S}$ such that
\[
\abs{\brakett{\psi}{\psi'}}^2\geq {1-\left(\frac{K+\sqrt{K^2+\delta(J-2K)}}{J-2K}\right)^2}.
\]
\end{corollary}
\begin{proof}
Consider arbitrary $\ket{\psi}$ such that $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$. We can write $\ket{\psi}=\alpha_1\ket{\psi_1}+\alpha_2\ket{\psi_2}$ for $\ket{\psi_1}\in \spa{S}$, $\ket{\psi_2}\in \spa{S}^\perp$, $\alpha_1,\alpha_2\in{\mathbb R}$, $\alpha_1,\alpha_2\geq 0$, and $\alpha_1^2+\alpha_2^2=1$. The proof of Lemma~\ref{l:proj} yields
\begin{equation}\label{eqn:1}
\bra{\psi}H\ket{\psi}\geq \lambda(H_2|_{\spa{S}})+(J-2K)\alpha_2^2-2K\alpha_2.
\end{equation}
For completeness, we reproduce the steps from~\cite{KKR06} to derive this inequality as follows:
\begin{eqnarray*}
\bra{\psi}H\ket{\psi}&\geq& \bra{\psi}H_2\ket{\psi}+J\alpha_2^2\\
&=&(1-\alpha_2^2)\bra{\psi_1}H_2\ket{\psi_1}+2\alpha_1\alpha_2\operatorname{Re}\bra{\psi_1}H_2\ket{\psi_2}+\\
&&\alpha_2^2\bra{\psi_2}H_2\ket{\psi_2}+J\alpha_2^2\\
&\geq&\bra{\psi_1}H_2\ket{\psi_1}-K(\alpha_2^2+2\alpha_2+\alpha_2^2)+J\alpha_2^2\\
&=&\bra{\psi_1}H_2\ket{\psi_1}+(J-2K)\alpha_2^2-2K\alpha_2\\
&\geq&\lambda(H_2|_{\spa{S}})+(J-2K)\alpha_2^2-2K\alpha_2.
\end{eqnarray*}
Since by assumption $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$, Equation~(\ref{eqn:1}) implies
$
\lambda(H)+\delta\geq \lambda(H_2|_{\spa{S}})+(J-2K)\alpha_2^2-2K\alpha_2.
$
Combining this with Lemma~\ref{l:proj}, we have
\[
0\geq\lambda(H)- \lambda(H_2|_{\spa{S}})\geq (J-2K)\alpha_2^2-2K\alpha_2-\delta,
\]
which holds only if
$
\abs{\alpha_2}\leq \frac{K+\sqrt{K^2+\delta(J-2K)}}{J-2K}.
$
Thus, setting $\ket{\psi'}=\ket{\psi_1}$ yields the claim.
\end{proof}
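As a quick numerical illustration of Corollary~\ref{cor:kkr} (not used in any proof), the following sketch builds a toy $H_1$ with eigenvalue $J$ on $\spa{S}^\perp$, perturbs it by a random Hermitian $H_2$, and checks the overlap bound in the case $\delta=0$; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, dS = 10, 4                                   # total dimension, dim(S)
J = 100.0

P_S = np.diag([1.0] * dS + [0.0] * (d - dS))    # projector onto S
H1 = J * (np.eye(d) - P_S)                      # zero on S, eigenvalue J on S_perp
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H2 = (M + M.conj().T) / 4                       # small Hermitian perturbation
K = np.linalg.norm(H2, 2)
assert J > 2 * K                                # hypothesis of the lemma

H = H1 + H2
psi = np.linalg.eigh(H)[1][:, 0]                # a ground state (so delta = 0)
overlap_sq = np.linalg.norm(P_S @ psi) ** 2     # best |<psi|psi'>|^2 over psi' in S
bound = 1 - (2 * K / (J - 2 * K)) ** 2          # corollary with delta = 0
assert overlap_sq >= bound
```

Taking $J$ larger pushes the certified overlap toward $1$, which is exactly how the corollary is used below.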
\section{Ambainis's Query Hamiltonian}\label{scn:lemmas}
In this section, we show various results regarding Ambainis's ``query Hamiltonian''~\cite{A14}, which intuitively aims to have its ground space contain correct answers to a sequence of QMA queries. Let $U$ be a $\class{P}^{\class{QMA}[\class{log}]}$ computation, and let $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ be the $2$-local Hamiltonian corresponding to the $i$th query made by $U$ given that the answers to the previous $i-1$ queries are given by $y_{1}\cdots y_{i-1}$. (Without loss of generality, we may assume $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}\succeq 0$ by adding multiples of the identity and rescaling.) The oracle query made at step $i$ corresponds to an input $(H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}} , \epsilon, 3\epsilon)$ to $2$-LH, for $\epsilon>0$ a fixed inverse polynomial. Then, Ambainis's~\cite{A14} $O(\log(n))$-local query Hamiltonian $H$ acts on $\spa{X}\otimes\spa{Y}$, where $\spa{X}=\bigotimes_{i=1}^m\spa{X}_i=({\mathbb C}^{2})^{\otimes m}$ and $\spa{Y}=\bigotimes_{i=1}^m\spa{Y}_i$, such that $\spa{X}_i$ is intended to encode the answer to query $i$ with $\spa{Y}_i$ encoding the ground state of the corresponding query Hamiltonian $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$. Specifically,
\begin{eqnarray}
H &=& \sum_{i=1}^m\frac{1}{4^{i-1}}\sum_{y_1,\ldots,y_{i-1}}\bigotimes_{j=1}^{i-1}\ketbra{y_j}{y_j}_{\spa{X}_{j}}\otimes\left(2\epsilon \ketbra{0}{0}_{\spa{X}_{i}}\otimes I_{\spa{Y}_i} + \ketbra{1}{1}_{\spa{X}_{i}}\otimes H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}\right)\nonumber\\
&=:&\sum_{i=1}^m\frac{1}{4^{i-1}}\sum_{y_1,\ldots,y_{i-1}}M_{y_1 \cdots y_{i-1}}.\label{eqn:amb1}
\end{eqnarray}
Recall from Section~\ref{scn:preliminaries} that we call a sequence of query answers $y=y_1\cdots y_m\in\set{0,1}^m$ \emph{correct} if it corresponds to a possible execution of $U$. Since $U$ can make queries to its QMA oracle which violate the QMA promise gap, the set of correct $y$ is generally \emph{not} a singleton. However, we henceforth assume without loss of generality that $U$ makes at least one valid query (i.e. which satisfies the QMA promise gap). For if not, then a P machine can solve such an instance by simulating the $\class{P}^{\class{QMA}[\class{log}]}$ machine on all possible (polynomially many) query strings $y\in\set{0,1}^m$. If $U$ corresponds to a YES (NO) instance, then \emph{all} query strings lead to accept (reject), which the P machine can verify.
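This brute-force fallback is simple enough to sketch directly (illustrative code; the function name and the stand-in predicates replacing the simulated machine are ours):

```python
from itertools import product

def decide_by_enumeration(accepts, m):
    """If U makes no valid queries, every query string is 'correct', so a
    well-defined machine must give the same answer on all 2^m strings."""
    outcomes = {accepts(y) for y in product((0, 1), repeat=m)}
    assert len(outcomes) == 1      # sanity: answer is independent of query answers
    return outcomes.pop()

# Stand-in machines that ignore their (all-invalid) queries:
assert decide_by_enumeration(lambda y: True, m=4) is True    # YES instance
assert decide_by_enumeration(lambda y: False, m=4) is False  # NO instance
```

Since $m\in O(\log n)$, the loop over all $2^m\in\textup{poly}(n)$ strings runs in polynomial time.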
We now prove the following about $H$.
\begin{lemma}\label{l:amborig}
Define for any $x\in\set{0,1}^m$ the space
$
\spa{H}_{x_1\cdots x_m} := \bigotimes_{i=1}^m \ketbra{x_i}{x_i}\otimes \spa{Y}_i.
$
Then, there exists a correct query string $x\in\set{0,1}^m$ such that the ground state of $H$ lies in $\spa{H}_{x_1\cdots x_m}$. Moreover, suppose this space has minimum eigenvalue $\lambda$. Then, for any incorrect query string $y_1\cdots y_m$, any state in $\spa{H}_{y_1\cdots y_m}$ has energy at least $\lambda+\frac{\epsilon}{4^{m}}$.
\end{lemma}
\noindent As discussed in Section~\ref{scn:intro}, Claim 1 of~\cite{A14} proved a similar statement under the assumption that the correct query string $x$ is unique. In that setting,~\cite{A14} showed that the ground state of $H$ is in $\spa{H}_{x}$, and that for \emph{all} other query strings $y\neq x$, the space $\spa{H}_{y}$ has energy at least $\lambda+\frac{\epsilon}{4^{m-1}}$. However, in general invalid queries must be allowed, and in this setting this claim no longer holds --- two distinct correct query strings can have eigenvalues which are arbitrarily close if they contain queries violating the promise gap. The key observation we make here is that even in the setting of non-unique $x$, a spectral gap between the ground space and all \emph{incorrect} query strings can be shown, which suffices for our purposes. (In other words, note that Lemma~\ref{l:amborig} does not yield a spectral gap between $\lambda$ and the minimum eigenvalue in spaces $\spa{H}_{y_1\cdots y_m}$ for \emph{correct} query strings $y\neq x$.)
\begin{proof}[Proof of Lemma~\ref{l:amborig}]
Observe first that $H$ in Equation~(\ref{eqn:amb1}) is block-diagonal with respect to register $\spa{X}$, i.e. to understand the spectrum of $H$, it suffices to understand the eigenvalues in each of the blocks corresponding to fixing $\spa{X}_i$ to some string $y\in\set{0,1}^m$. Thus, we can restrict our attention to spaces $\spa{H}_{y}$ for $y\in\set{0,1}^m$. To begin, let $x\in\set{0,1}^m$ denote a correct query string which has lowest energy among all \emph{correct} query strings against $H$, i.e. the block corresponding to $x$ has the smallest eigenvalue among such blocks. (Note that $x$ is well-defined, though it may not be unique; in this latter case, any such $x$ will suffice for our proof.) For any $y\in\set{0,1}^m$, define $\lambda_y$ as the smallest eigenvalue in block $\spa{H}_y$. We show that for any \emph{incorrect} query string $y=y_1\cdots y_m$, $\lambda_y\geq\lambda_x+\epsilon/(4^m)$.
We use proof by contradiction, coupled with an exchange argument. Suppose there exists an incorrect query string $y=y_1\cdots y_m$ such that $\lambda_y <\lambda_x+\epsilon/(4^m)$. Since $y$ is an incorrect query string, there exists an $i\in[m]$ such that $y_i$ is the wrong answer to a valid query $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$. Let $i$ denote the first such position. Now, consider operator $M_{y_1 \cdots y_{i-1}}$, which recall is defined as
\[
M_{y_1 \cdots y_{i-1}}= \bigotimes_{j=1}^{i-1}\ketbra{y_j}{y_j}_{\spa{X}_{j}}\otimes\left(2\epsilon \ketbra{0}{0}_{\spa{X}_{i}}\otimes I_{\spa{Y}_i} + \ketbra{1}{1}_{\spa{X}_{i}}\otimes H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}\right)
\]
and let $\lambda_{y_1\cdots y_{i-1}\overline{y}_i}$ denote the smallest eigenvalue of $M_{y_1 \cdots y_{i-1}}$ restricted to space $\spa{H}_{y_1\cdots y_{i-1}\overline{y}_i}$, where string $y_1\cdots y_{i-1}\overline{y}_i$ is a correct query string with $\overline{y}_i$ the correct answer to query $i$. Then, any state $\ket{\phi}\in \spa{H}_{y_1\cdots y_i}$ satisfies \begin{equation}\label{eqn:bound1}
\bra{\phi}M_{y_1\cdots y_{i-1}}\ket{\phi}\geq \lambda_{y_1\cdots y_{i-1}\overline{y}_i}+\epsilon/4^{i-1}.
\end{equation}
This is because constrained to space $\spa{H}_{y_1\cdots y_{i-1}}$, $M_{y_1\cdots y_{i-1}}$ reduces to operator $M':=2\epsilon \ketbra{0}{0}_{\spa{X}_{i}}\otimes I_{\spa{Y}_i} + \ketbra{1}{1}_{\spa{X}_{i}}\otimes H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$. If query $i$ is a YES-instance, the smallest eigenvalue of $M'$ lies in the block corresponding to setting $\spa{X}_i$ to (the correct query answer) $\ket{1}$, and is at most $\epsilon$. On the other hand, the block with $\spa{X}_i$ set to $\ket{0}$ has all eigenvalues equalling $2\epsilon$. A similar argument shows that in the NO-case, the $\ket{0}$-block has eigenvalues equalling $2\epsilon$, and the $\ket{1}$-block has eigenvalues at least $3\epsilon$. Combining this with the $1/4^{i-1}$ factor in Equation~(\ref{eqn:amb1}) yields Equation~(\ref{eqn:bound1}). We conclude that flipping query bit $i$ to the correct query answer $\overline{y}_i$ allows us to choose an assignment from $\spa{H}_{y_1\cdots y_{i-1}\overline{y}_i}$ so that we ``save'' an energy penalty of $\epsilon/4^{i-1}$ against $M_{y_1,\ldots,y_{i-1}}$.
To complete the exchange argument, let $\widehat{M}_{y_1\cdots y_t}$ denote the set of terms from Equation~(\ref{eqn:amb1}) which are consistent with prefix $y_1\ldots y_t$ (e.g. $M_{y_1\ldots y_t}$, $M_{y_1\ldots y_t0}$, $M_{y_1\ldots y_t 1}$, etc.). Fix each of the bits $y_{i+1}\cdots y_{m}$ to a new tail of bits $y'_{i+1}\cdots y'_{m}$ so that $y':=y_1\cdots \overline{y}_i y'_{i+1}\cdots y'_m$ is a correct query string. Care is required here; the new query bits $y'_{i+1}\cdots y'_m$ may lead to different energy penalties than the previous string $y_{i+1}\cdots y_m$ against the Hamiltonian terms in set $\widehat{M}_{y_1\cdots \overline{y}_i}$. In other words, we must upper bound any possible energy penalty \emph{increase} when mapping $y_1\cdots \overline{y}_i y_{i+1}\cdots y_m$ to $y'$. To do so, recall that all Hamiltonian terms in Equation~(\ref{eqn:amb1}) are positive semidefinite. Thus, for any state $\ket{\psi}$ in space $\spa{H}_{y_1\cdots \overline{y}_i}$, the energy obtained by $\ket{\psi}$ against terms in $\widehat{M}_{y_1\cdots \overline{y}_i}$ is at least $0$. Conversely, in the worst case, since each term in $\widehat{M}_{y_1\cdots \overline{y}_i}$ has minimum eigenvalue at most $2\epsilon$, the eigenvector $\ket{\psi}$ of smallest eigenvalue in block $\spa{H}_{y'}$ incurs an additional penalty for queries $i+1$ through $m$ of at most
\[
2\epsilon\sum_{k=i}^\infty \frac{1}{4^k}=\frac{2\epsilon}{3\cdot4^{i-1}}.
\]
We conclude that
\[
\lambda_{y'}\leq \lambda_y-\frac{\epsilon}{4^{i-1}}+\frac{2\epsilon}{3\cdot4^{i-1}} < \left(\lambda_x+\frac{\epsilon}{4^m}\right)-\frac{\epsilon}{4^{i-1}}+\frac{2\epsilon}{3\cdot4^{i-1}}<\lambda_x
\]
where the first inequality follows by the assumption $\lambda_y<\lambda_x+{\epsilon}/{4^m}$. This is a contradiction.
\end{proof}
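The arithmetic underlying the exchange argument admits an exact check (illustrative sketch; the values of $\epsilon$ and $m$ are arbitrary):

```python
from fractions import Fraction

eps, m = Fraction(1, 100), 6
for i in range(1, m + 1):
    saving = eps / 4 ** (i - 1)                    # gained by fixing query i
    # Worst-case increase from re-fixing queries i+1..m:
    # 2*eps*sum_{k=i}^{inf} 4^{-k} = 2*eps/(3*4^{i-1})
    penalty = 2 * eps / (3 * 4 ** (i - 1))
    tail = sum(2 * eps * Fraction(1, 4 ** k) for k in range(i, i + 80))
    assert penalty - tail < Fraction(1, 10 ** 40)  # partial sums approach the bound
    assert saving - penalty > eps / 4 ** m         # net saving exceeds the assumed gap
```

The final assertion is exactly the strict inequality that produces the contradiction $\lambda_{y'}<\lambda_x$.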
Our next step is to convert $H$ from an $O(\log n)$-local Hamiltonian to an $O(1)$-local one.
\begin{lemma}\label{l:amb}
For any $x\in\set{0,1}^m$, let $\hat{x}$ denote its unary encoding. Then, for any $\class{P}^{\class{QMA}[\class{log}]}$ circuit $U$ acting on $n$ bits and making $m\geq 1$ queries to a QMA oracle, there exists a mapping to a $4$-local Hamiltonian $H'$ acting on space $({\mathbb C}^2)^{\otimes (2^m-1)}\otimes\spa{Y}$ such that there exists a correct query string $x=x_1\cdots x_m$ satisfying:
\begin{enumerate}
\item The ground state of $H'$ lies in subspace $\ketbra{\hat{x}}{\hat{x}}\otimes \spa{Y}$.
\item For any state $\ket{\psi}$ in subspace $\ketbra{\hat{x}'}{\hat{x}'}\otimes \spa{Y}$ where either $\hat{x}'$ is not a unary encoding of a binary string $x'$ or $x'$ is an incorrect query string, one has $\bra{\psi}H'\ket{\psi}\geq \lambda(H')+\epsilon/4^{m}$, for inverse polynomial $\epsilon$.
\item For all strings $x'\in\set{0,1}^m$, $H'$ acts invariantly on subspace $\ketbra{\hat{x}'}{\hat{x}'}\otimes \spa{Y}$.
\item The mapping can be computed in time polynomial in $n$ (recall $m\in O(\log n)$).
\end{enumerate}
\end{lemma}
\begin{proof}
We show how to improve the $O(\log(n))$-local construction $H$ of Lemma~\ref{l:amborig} to $4$-local $H'$. Specifically, recall that $H$ from Equation~(\ref{eqn:amb1}) acts on $(\spa{X}\otimes\spa{Y})$. Using a trick of Kitaev~\cite{KSV02}, we encode the $\spa{X}=\spa{X}_1\otimes\cdots\otimes \spa{X}_m$ register in unary. Specifically, we can write
\begin{eqnarray*}
M_{y_1\cdots y_{i-1}}&=&\sum_{y_{i+1},\ldots,y_{m}}2\epsilon\bigotimes_{j=1}^{i-1}\ketbra{y_j}{y_j}_{\spa{X}_{j}}\otimes\ketbra{0}{0}_{\spa{X}_{i}}\bigotimes_{k=i+1}^{m}\ketbra{y_k}{y_k}_{\spa{X}_{k}}\otimes I_{\spa{Y}} +\\ &&\mbox{\hspace{3mm}}\bigotimes_{j=1}^{i-1}\ketbra{y_j}{y_j}_{\spa{X}_{j}}\otimes\ketbra{1}{1}_{\spa{X}_{i}}\bigotimes_{k=i+1}^{m}\ketbra{y_k}{y_k}_{\spa{X}_{k}}\otimes H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}.
\end{eqnarray*}
We now replace register $\spa{X}_1\otimes\cdots\otimes \spa{X}_m$ with register $\spa{X}'=({\mathbb C}^2)^{\otimes (2^m-1)}$ and encode each binary string $x\in\set{0,1}^m$ as the unary string $\hat{x}=\ket{1}^{\otimes \abs{x}}\ket{0}^{\otimes (2^m-\abs{x}-1)}$, where $\abs{x}$ is the non-negative integer corresponding to string $x$. In other words, for $M_{x_1\cdots x_{i-1}}$, we replace each string $\ketbra{x}{x}_{\spa{X}_1\otimes\cdots\otimes\spa{X}_m}$ with $\ketbra{1}{1}_{\spa{X}_{1}\otimes\cdots\otimes\spa{X}_{\abs{x}}}\otimes\ketbra{0}{0}_{\spa{X}_{\abs{x}+1}\otimes\cdots\otimes\spa{X}_{2^m-1}}$. Denote the resulting Hamiltonian as $H_1$.
To ensure states in $\spa{X}'$ follow this encoding, add a weighted version of Kitaev's~\cite{KSV02} penalty Hamiltonian,
\[
H_{\rm stab}=3\epsilon\sum_{j=1}^{2^m-2}\ketbra{0}{0}_j\otimes\ketbra{1}{1}_{j+1},
\]
i.e., our final Hamiltonian is $H'=H_1+H_{\rm stab}$. To show that $H'$ satisfies the same properties as $H$ as stated in the claim, we follow the analysis of Kitaev~\cite{KSV02}. Namely, partition the space $\spa{X}'\otimes\spa{Y}$ into orthogonal spaces $\spa{S}$ and $\spa{S}^\perp$ corresponding to the space of valid and invalid unary encodings of $\spa{X}'$, respectively. Since $H_1$ and $H_{\rm stab}$ act invariantly on $\spa{S}$ and $\spa{S}^\perp$, we can consider $\spa{S}$ and $\spa{S}^\perp$ separately. In $\spa{S}$, $H'$ is identical to $H$, implying the claim. In $\spa{S}^\perp$, the smallest non-zero eigenvalue of $H_{\rm stab}$ is at least $3\epsilon$. Thus, since $H_1\succeq 0$, if we can show that the smallest eigenvalue of $H$ is at most $3\epsilon-\epsilon/4^m$, we have shown the claim (since, in particular, we will have satisfied statement 2 of our claim). To show this bound on the smallest eigenvalue, suppose $x$ is all zeroes, i.e. set register $\spa{X}_1\otimes\cdots\otimes\spa{X}_m$ for $H$ to all zeroes. Then, each term $M_{0_1\cdots 0_{i-1}}$ yields an energy penalty of exactly $2\epsilon$, yielding an upper bound on the smallest eigenvalue of $H$ of
$
2\epsilon\sum_{k=0}^{m-1}\frac{1}{4^k}\leq \frac{8}{3}\epsilon=3\epsilon-\epsilon/3.
$
\end{proof}
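Both the encoding and the two bounds used in this proof admit a direct check (illustrative sketch; `eps` and `m` are arbitrary, and the helper names are ours):

```python
from fractions import Fraction

def unary(n, m):
    """Encode integer n in [0, 2^m) as 1^n 0^(2^m - n - 1) over 2^m - 1 qubits."""
    return [1] * n + [0] * (2 ** m - n - 1)

def stab_penalties(u):
    # number of H_stab terms |0><0|_j x |1><1|_{j+1} the string violates
    return sum(1 for a, b in zip(u, u[1:]) if (a, b) == (0, 1))

m = 3
assert all(stab_penalties(unary(n, m)) == 0 for n in range(2 ** m))  # valid encodings
assert stab_penalties([1, 0, 1, 0, 0, 0, 0]) >= 1                    # invalid string

# The all-zeroes block upper-bounds lambda(H): 2*eps*sum_{k=0}^{m-1} 4^{-k} <= (8/3)*eps
eps = Fraction(1, 10)
energy = 2 * eps * sum(Fraction(1, 4 ** k) for k in range(m))
assert energy <= Fraction(8, 3) * eps < 3 * eps
```

The last line confirms the final estimate: the all-zeroes block lies strictly below the $3\epsilon$ penalty of $H_{\rm stab}$ on invalid encodings.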
\section{Measuring $1$-local observables}\label{scn:1local}
We now restate and prove Theorem~\ref{thm:main1}.
\begin{reptheorem}{thm:main1}
$\prob{APX-SIM}$~is $\class{P}^{\class{QMA}[\class{log}]}$-complete for $k=5$ and $l=1$, i.e., for $5$-local Hamiltonian $H$ and $1$-local observable $A$.
\end{reptheorem}
\begin{proof}
Containment in $\class{P}^{\class{QMA}[\class{log}]}$ was shown for $k,l\in O(\log n)$ in \cite{A14}; we show $\class{P}^{\class{QMA}[\class{log}]}$-hardness here. Let $U'$ be an arbitrary $\class{P}^{\class{QMA}[\class{log}]}$ circuit corresponding to instance $\Pi$, such that $U'$ acts on workspace register $W$ and query result register $Q$. Suppose $U'$ consists of $L'$ gates and makes $m=c\log(n)$ queries, for $c\in O(1)$ and $n$ the input size. Without loss of generality, $U'$ can be simulated with a similar unitary $U$ which treats $Q$ as a \emph{proof} register which it does not alter at any point: Namely, $U$ does not have access to a $\class{QMA}$ oracle, but rather reads bit $Q_i$ whenever it desires the answer to the $i$th query. Thus, if a correct query string $y_1\cdots y_m$ corresponding to an execution of $U'$ on input $x$ is provided in $Q$ as a ``proof'', then the output statistics of $U'$ and $U$ are identical. We can also assume without loss of generality that $Q$ is encoded not in binary, but in unary. Thus, $Q$ consists of $2^m-1\in\textup{poly}(n)$ bits. For simplicity in our discussion, however, we will continue to speak of $m$-bit query strings $y=y_1\cdots y_m$ in register $Q$.
Next, we map $U$ to a $5$-local Hamiltonian $H_1$ via a modification of the circuit-to-Hamiltonian construction of Kitaev~\cite{KSV02}, such that $H_1$ acts on registers $W$ (workspace register), $Q$ (proof register), and $C$ (clock register). Recall from Section~\ref{scn:preliminaries} that Kitaev's construction outputs Hamiltonian terms $H_{\rm in}+H_{\rm prop}+H_{\rm stab}+H_{\rm out}$. Set $H_1=\Delta(H_{\rm in}+H_{\rm prop}+H_{\rm stab})$ for $\Delta$ to be set as needed. It is crucial that $H_{\rm out}$ be omitted from $H_1$, as we require our final Hamiltonian $H$ to enforce a certain structure on the ground space \emph{regardless} of whether the computation should accept or reject. The job of ``checking the output'' is instead delegated to the observable $A$. More formally, note that $H_1$ has a non-trivial null space, which is its ground space, consisting of history states $\ket{\psi_{\rm hist}}$ (Equation~(\ref{eqn:hist})) which simulate $U$ on registers $W$ and $Q$. These history states correctly simulate $U'$ \emph{assuming that} $Q$ is initialized to a correct proof.
To thus enforce that $Q$ be initialized to a correct proof, let $H_2$ be our variant of Ambainis's query Hamiltonian from Lemma~\ref{l:amb}, such that $H_2$ acts on registers $Q$ and $Q'$, where for clarity $Q=({\mathbb C}^2)^{\otimes (2^m-1)}$ (recall $m\in O(\log n)$) and $Q'=\spa{Y}$ from Lemma~\ref{l:amb}.
Hence, our final Hamiltonian is $H=H_1+H_2$, which is $5$-local since $H_1$ is $5$-local. Suppose without loss of generality that $U$'s output qubit is $W_1$, which is set to $\ket{0}$ until the final time step, in which the correct output is copied to it. Then, set observable $A=(I+Z)/2$ such that $A$ acts on qubit $W_1$. Set $a=1-1/(L+1)$ and $b=1-1/(2L)$ for $L$ the number of gates in $U$. Fix $\eta\geq\max(\snorm{H_2},1)$ (such an $\eta$ can be efficiently computed by applying the triangle inequality and summing the spectral norms of each term of $H_2$ individually). Set $\Delta= L^3\eta\gamma$ for $\gamma$ a monotonically increasing polynomial function of $L$ to be set as needed. Finally, set $\delta=1/\Delta$. This completes the construction.
\paragraph{Correctness.} Suppose $\Pi$ is a YES instance. Then, by Lemma~\ref{l:amb}, the ground space of $H_2$ is the span of states of the form $\ket{\hat{x}}_Q\otimes\ket{\phi}_{Q'}$ where $\hat{x}$ is a correct query string encoded in unary. Fix an arbitrary such ground state $\ket{\hat{x}}_Q\otimes\ket{\phi}_{Q'}$. Note that setting $Q$ to $\hat{x}$ in this manner causes $U$ to accept with certainty. Consider the history state $\ket{\psi_{\rm hist}}$ on registers $W$, $C$, $Q$, and $Q'$ ($Q$ and $Q'$ together are the ``proof register'', and the contents of $Q'$ are not accessed by $U$), which lies in the ground space of $H_1$. Since $U$ can read but does not alter the contents of $Q$, the history state has the tensor product form $\ket{\psi_{\rm hist}'(x)}_{W,C}\otimes\ket{\hat{x}}_Q\otimes \ket{\phi}_{Q'}$ for some $\ket{\psi_{\rm hist}'(x)}_{W,C}$, i.e. the action of $H_2$ on the history state is unaffected. We conclude that $\ket{\psi_{\rm hist}'(x)}_{W,C}\otimes \ket{\hat{x}}_Q\otimes\ket{\phi}_{Q'}$ is in the ground space of $H$. Moreover, since $U$ accepts $\hat{x}$, the expectation of this state against $A$ is $1-1/(L+1)$.
Conversely, suppose we have a NO instance $\Pi$, and consider any $\ket{\psi}$ satisfying $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$. By Lemma~\ref{l:GKgap}, the smallest non-zero eigenvalue of $H_1$ is at least $J=\pi^2\Delta/(64L^3)= \pi^2 \eta\gamma/64$. Recalling that $\delta=1/\Delta$, apply Corollary~\ref{cor:kkr} to obtain that there exists a valid history state $\ket{\psi'}$ on $W$, $C$, $Q$, and $Q'$ such that $\abs{\brakett{\psi}{\psi'}}^2\geq 1-O(\gamma^{-2}L^{-6})$, which by Equation~(\ref{eqn:enorm}) implies
\begin{equation}\label{eqn:2}
\trnorm{\ketbra{\psi}{\psi}-\ketbra{\psi'}{\psi'}}\leq\frac{c}{\gamma L^3}
\end{equation}
for some constant $c>0$. By definition, such a history state $\ket{\psi'}$ simulates $U$ given ``quantum proof'' $\ket{\phi}_{Q,Q'}$ in registers $Q$ and $Q'$, i.e. $\ket{\psi'}=\frac{1}{\sqrt{L+1}}\sum_{t} U_t\cdots U_1 \ket{0\cdots 0}_W\ket{t}_C\ket{\phi}_{Q,Q'}$. By Equation~(\ref{eqn:2}) and the H\"{o}lder inequality,
\[
\abs{\tr(H\ketbra{\psi}{\psi})-\tr(H\ketbra{\psi'}{\psi'})}\leq \frac{c}{\gamma L^3}\snorm{H}=:\gamma'.
\]
Thus, $\bra{\psi'}H\ket{\psi'}\leq\lambda(H)+(\delta+\gamma')$.
We now analyze the structure of $\ket{\phi}_{Q,Q'}$. By Lemma~\ref{l:amb}, the ground space $\spa{G}$ of $H_2$ is contained in the span of states of the form $\ket{\hat{x}}_Q\otimes\ket{\phi'}_{Q'}$ where $\hat{x}$ is a correct query string encoded in unary. Since the ground spaces of $H_1$ and $H_2$ have non-empty intersection, i.e. history states acting on ``quantum proofs'' from $\spa{G}$ (which lie in the null space of $H_1$ and obtain energy $\lambda(H_2)$ against $H_2$), we know $\lambda(H)=\lambda(H_2)$. Thus, since $H_1\succeq 0$,
\begin{equation}\label{eqn:3}
\bra{\psi'}H_2\ket{\psi'}\leq \bra{\psi'}H\ket{\psi'}\leq\lambda(H_2)+(\delta+\gamma').
\end{equation}
Write $\ket{\phi}=\alpha\ket{\phi_1}+\beta\ket{\phi_2}$ for $\alpha ,\beta\in {\mathbb C} , \abs{\alpha}^2+\abs{\beta}^2=1$ and for unit vectors
\begin{eqnarray*}
\ket{\phi_1}&\in&\operatorname{Span}\set{\ket{\hat{x}}_Q\otimes\ket{\phi'}_{Q'}\mid \text{correct query string } x},\\ \ket{\phi_2}&\in&\operatorname{Span}\set{\ket{\hat{x}}_Q\otimes\ket{\phi'}_{Q'}\mid \text{incorrect query string } x}.
\end{eqnarray*}
Since any history state $\ket{\psi'}$, for any amplitudes $\alpha_{x}$ and unit vectors $\ket{\phi'_x}$, has the form
\begin{eqnarray*}
\ket{\psi'}=\frac{1}{\sqrt{L+1}}\sum_{t,x}\alpha_{x}U_t\cdots U_1 \ket{0\cdots 0}_W\ket{t}_C\ket{\hat{x}}_{Q}\ket{\phi'_x}_{Q'}=\sum_{x}\alpha_{x}\ket{\psi_{\rm hist}'(x)}_{W,C}\ket{\hat{x}}_{Q}\ket{\phi'_x}_{Q'}
\end{eqnarray*}
(i.e. for any fixed $x$, $\ket{\hat{x}}_Q$ is not altered), and since $H_2$ is block-diagonal with respect to strings in $Q$, by Equation~(\ref{eqn:3}) and Lemma~\ref{l:amb} we have
\begin{eqnarray*}
\lambda(H_2)+(\delta+\gamma')&\geq& \bra{\psi'}H_2\ket{\psi'}\\
&=& \abs{\alpha}^2\bra{\phi_1}H_2\ket{\phi_1}+\abs{\beta}^2\bra{\phi_2}H_2\ket{\phi_2}\\
&\geq&\abs{\alpha}^2\lambda(H_2)+\abs{\beta}^2\left(\lambda(H_2)+\frac{\epsilon}{4^m}\right),
\end{eqnarray*}
which implies $\abs{\beta}^2\leq 4^m(\delta+\gamma')/\epsilon$. Thus, defining $\ket{\psi''}$ as the history state for ``proof'' $\ket{\phi_1}_{Q,Q'}$, we have that $\trnorm{\ketbra{\psi}{\psi}-\ketbra{\psi''}{\psi''}}$ is at most
\begin{equation}
\trnorm{\ketbra{\psi}{\psi}-\ketbra{\psi'}{\psi'}}+\trnorm{\ketbra{\phi}{\phi}-\ketbra{\phi_1}{\phi_1}}
\leq\frac{c}{\gamma L^3}+2\sqrt{\frac{4^m(\delta+\gamma')}{\epsilon}},\label{eqn:4}
\end{equation}
which follows from the triangle inequality and the structure of the history state. Observe now that increasing $\gamma$ by a polynomial factor decreases $\delta+\gamma'$ by a polynomial factor. Thus, set $\gamma$ as a large enough polynomial in $L$ such that
\begin{equation}\label{eqn:5}
\frac{c}{\gamma L^3}+2\sqrt{\frac{4^m(\delta+\gamma')}{\epsilon}}\leq \frac{1}{2L}.
\end{equation}
Since $U$ rejects any correct query string (with certainty) in the NO case, and since $\ket{\psi''}$ is a valid history state whose $Q$ register is a superposition over correct query strings (all of which must lead to reject), we conclude that $\bra{\psi''}A\ket{\psi''}=1$. Moreover,
\[
\abs{\tr(A\ketbra{\psi}{\psi})-\tr(A\ketbra{\psi''}{\psi''})}\leq
\snorm{A}\trnorm{\ketbra{\psi}{\psi}-\ketbra{\psi''}{\psi''}}\leq \frac{1}{2L},
\]
where the first inequality follows from H\"{o}lder's inequality, and the second by Equations~(\ref{eqn:4}) and~(\ref{eqn:5}). We conclude that $\bra{\psi}A\ket{\psi}\geq 1-1/(2L)$, completing the proof.
\end{proof}
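The threshold choices in this proof can also be sanity-checked by elementary arithmetic (illustrative sketch; the value $L=20$ is arbitrary, and any $L>1$ works):

```python
from fractions import Fraction

L = 20
# YES case: the output qubit gives <A> = 1 at times 0..L-1 and 0 at time L,
# so the (normalized) history state gives <A> = L/(L+1) = 1 - 1/(L+1) = a.
a = Fraction(L, L + 1)
assert a == 1 - Fraction(1, L + 1)

# NO case threshold b = 1 - 1/(2L); the promise gap b - a is inverse polynomial:
b = 1 - Fraction(1, 2 * L)
assert b - a == Fraction(L - 1, 2 * L * (L + 1)) > 0
```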
\section{Estimating two-point correlation functions}\label{scn:corr}
We now define \prob{APX-2-CORR}\ and show that it is $\class{P}^{\class{QMA}[\class{log}]}$-complete using similar techniques to Section~\ref{scn:1local}. For brevity, define $f(\ket{\psi},A,B):= \bra{\psi}A\otimes B \ket{\psi}-\bra{\psi}A\ket{\psi}\bra{\psi}B\ket{\psi}$.
\begin{definition}[\prob{APX-2-CORR}$(H,A,B,k,l,a,b,\delta)$]
Given a $k$-local Hamiltonian $H$, $l$-local observables $A$ and $B$, and real numbers $a$, $b$, and $\delta$ such that $a-b\geq n^{-c}$ and $\delta\geq n^{-c'}$, for $n$ the number of qubits $H$ acts on and $c,c'>0$ some constants, decide:
\begin{itemize}
\item If $H$ has a ground state $\ket{\psi}$ satisfying $f(\ket{\psi},A, B)\geq a$, output YES.
\item If for any $\ket{\psi}$ satisfying $\bra{\psi}H\ket{\psi}\leq \lambda (H)+\delta$ it holds that $f(\ket{\psi}, A, B)\leq b$, output NO.
\end{itemize}
\end{definition}
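For intuition, $f$ vanishes on product states and attains $1$ on the Bell state $\ket{\phi^+}$ with $A=B=Z$; a toy NumPy check (illustrative only):

```python
import numpy as np

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def f(psi, A, B):
    """Two-point correlation f(psi, A, B) for a 2-qubit pure state psi."""
    exp = lambda O: np.vdot(psi, O @ psi).real
    return exp(np.kron(A, B)) - exp(np.kron(A, I2)) * exp(np.kron(I2, B))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.array([1.0, 0.0, 0.0, 0.0])                # |00>

assert abs(f(bell, Z, Z) - 1.0) < 1e-12   # maximal correlation
assert abs(f(prod, Z, Z)) < 1e-12         # no correlation for a product state
```

This is exactly the distinction the hardness construction below exploits: the $\ket{\phi^+}$ ancillas created in the accepting branch carry correlation, while the $\ket{00}$ ancillas in the rejecting branch carry none.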
\noindent We now prove Theorem~\ref{thm:main2} by showing $\class{P}^{\class{QMA}[\class{log}]}$-hardness in Lemma~\ref{lem:2-PHard} and containment in $\class{P}^{\class{QMA}[\class{log}]}$ in Lemma~\ref{lem:2-PIn}.
\begin{lemma} \label{lem:2-PHard}
\prob{APX-2-CORR}\ is $\class{P}^{\class{QMA}[\class{log}]}$-hard for $k=5$ and $l=1$, i.e., for $5$-local Hamiltonian $H$ and $1$-local observables $A$ and $B$.
\end{lemma}
\begin{proof}
For an arbitrary $\class{P}^{\class{QMA}[\class{log}]}$ circuit $U'$, define $U$ as in the proof of Theorem \ref{thm:main1}, consisting of $L$ one- and two-qubit gates. We modify $U$ as follows. Let $U$'s output qubit be denoted $W_1$. We add two ancilla qubits, $W_2$ and $W_3$, which are set to $\ket{00}$ throughout $U$'s computation. We then append to $U$ a sequence of six 2-qubit gates which, controlled on $W_1$, map $\ket{00}$ in $W_2W_3$ to $\ket{\phi^+}=(\ket{00}+\ket{11})/\sqrt{2}$, e.g. apply a controlled Hadamard gate and the 5-gate Toffoli construction from Figure~4.7 of~\cite{NC00}. Appending six identity gates on $W_1$, we obtain a circuit $V=V_{L+12}\cdots V_1$ which has $L+12$ gates. Finally, we construct $H=H_1+H_2$ as in the proof of Theorem \ref{thm:main1}, mapping $V$ to a 5-local Hamiltonian $H_1$ on registers $W$, $Q$, and $C$, and we set $A={Z}_{W_2}$ and $B={Z}_{W_3}$ for Pauli $Z$. Similar to the proof of Theorem~\ref{thm:main1}, set $\Delta= L^3\eta\gamma$ and $\delta=1/\Delta$, for $\gamma$ large enough so that
\begin{equation}\label{eqn:6}
\frac{c}{\gamma L^3}+2\sqrt{\frac{4^m(\delta+\gamma')}{\epsilon}}\leq \frac{1}{2(L+13)},
\end{equation}
for $\gamma '$ as defined in the proof of Theorem~\ref{thm:main1}. Set $a=3/(L+13)$ and $b=1/(L+13)$. This completes the construction.
To set up the correctness proof, consider history state $\ket{\psi_{\rm hist}}$ for $V$ given quantum proof $\ket{\phi}_{Q,Q'}$, and define for brevity $\ket{\phi_t}:=V_t\cdots V_1\ket{\phi}_{Q,Q'}\ket{0\cdots 0}_W\ket{00}_{W_2W_3}$. Then,
\begin{equation} \label{eq:firstTerm}
\bra{\psi_{\rm hist}}Z_{W_2}\otimes Z_{W_3}\ket{\psi_{\rm hist}} = \frac{1}{L+13} \sum_{t=0}^{L+12}\tr\left((\ketbra{\phi_t}{\phi_t}_{Q,Q',W}\otimes\ketbra{t}{t}_C) Z_{W_2}\otimes Z_{W_3}\right),
\end{equation}
since $Z_{W_2}\otimes Z_{W_3}$ acts invariantly on the clock register. Defining $\ket{v}:=\sum_{t=L+1}^{L+12}\ket{\phi_t}_{Q,Q',W}\ket{t}_C$, we have that since $W_2W_3$ is set to $\ket{00}$ for times $0\leq t\leq L$, Equation~(\ref{eq:firstTerm}) simplifies to
$
((L+1) + \bra{v}Z_{W_2}\otimes Z_{W_3}\ket{v})/(L+13).
$
Thus, via similar reasoning $f(\ket{\psi_{\rm hist}},Z_{W_2},Z_{W_3})$ equals
\begin{eqnarray} \label{eqn:reduced}
\frac{1}{L+13}\left[(L+1) + \bra{v}Z_{W_2}\otimes Z_{W_3}\ket{v}\right] - \frac{1}{(L+13)^2} \left[(L+1) + \bra{v}Z_{W_2}\ket{v}\right]\left[(L+1) + \bra{v}Z_{W_3}\ket{v}\right] .
\end{eqnarray}
Suppose now that $\Pi$ is a YES instance. Then there exists a history state $\ket{\psi_{\rm hist}}$ in the ground space of $H$ (i.e. with quantum proof $\ket{\phi}_{Q,Q'}=\ket{\hat{x}}_Q\otimes\ket{\phi'}_{Q'}$ for a correct query string $x$) for which $W_2W_3$ is set to $\ket{\phi ^+}$ in the final seven timesteps (since $U'$ is deterministic). Since $\bra{\phi^+}Z\otimes Z\ket{\phi^+}=1$ and $\bra{\phi^+}Z\otimes I\ket{\phi^+}=0$, we can lower bound Equation~(\ref{eqn:reduced}) by
\[
\frac{(L+1)-5+7}{L+13}-\frac{((L+1)+5)^2}{(L+13)^2} = \frac{1}{L+13}\left(4-\frac{49}{L+13}\right) ,
\]
where the $\pm5$ terms correspond to timesteps $t=L+1,\ldots,L+5$ and use the fact that $\snorm{Z}=1$.
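As a quick sanity check of the algebra above (an aside, not part of the proof; the function name is ours), the stated equality can be verified with exact rational arithmetic:

```python
from fractions import Fraction

def yes_lower_bound(L):
    # Left-hand side: ((L+1)-5+7)/(L+13) - ((L+1)+5)^2/(L+13)^2
    u = L + 13
    lhs = Fraction(L + 3, u) - Fraction((L + 6) ** 2, u ** 2)
    # Right-hand side: (1/(L+13)) * (4 - 49/(L+13))
    rhs = Fraction(1, u) * (4 - Fraction(49, u))
    return lhs, rhs

for L in [1, 10, 100, 1000]:
    lhs, rhs = yes_lower_bound(L)
    assert lhs == rhs  # the two expressions agree exactly
```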
Conversely, suppose $\Pi$ is a NO instance, and consider any $\ket{\psi}$ satisfying $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$. Then, as argued in the proof of Theorem~\ref{thm:main1}, there exists a history state $\ket{\psi''}$ on ``proof'' $\ket{\phi_1}_{Q,Q'}$ (consisting of a superposition of correct query strings) satisfying
$
\trnorm{\ketbra{\psi}{\psi}-\ketbra{\psi''}{\psi''}}
\leq (2(L+13))^{-1},
$
by Equations~(\ref{eqn:4}), (\ref{eqn:5}) and~(\ref{eqn:6}). Since the history state $\ket{\psi''}$ has $W_2W_3$ set to $\ket{00}$ in all time steps, using Equation~(\ref{eqn:reduced}) and applying the H\"{o}lder inequality to each term of $f(\ket{\psi},Z_{W_2},Z_{W_3})$ yields upper bound
\[
1-\left( 1-\frac{1}{2(L+13)}\right)^2 = \frac{1}{L+13}\left(1-\frac{1}{4(L+13)}\right) .
\]
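To confirm the NO-case identity and that the two bounds respect the thresholds $a=3/(L+13)$ and $b=1/(L+13)$ set in the construction, the following check (names ours; it implicitly assumes $L+13\geq 49$, i.e. $L$ sufficiently large, so that the YES bound clears $a$) uses exact arithmetic:

```python
from fractions import Fraction

def bounds(L):
    u = L + 13
    x = Fraction(1, 2 * u)                     # trace-distance bound 1/(2(L+13))
    no_upper = 1 - (1 - x) ** 2                # NO-case upper bound on f
    # Identity claimed in the text:
    assert no_upper == Fraction(1, u) * (1 - Fraction(1, 4 * u))
    yes_lower = Fraction(1, u) * (4 - Fraction(49, u))
    a, b = Fraction(3, u), Fraction(1, u)      # thresholds from the construction
    return yes_lower, no_upper, a, b

for L in [36, 100, 1000]:                      # need L + 13 >= 49
    yes_lower, no_upper, a, b = bounds(L)
    assert yes_lower >= a and no_upper <= b
```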
\end{proof}
\begin{lemma} \label{lem:2-PIn} \prob{APX-2-CORR}\ is in $\class{P}^{\class{QMA}[\class{log}]}$.
\end{lemma}
\begin{proof}
The proof combines ideas from Ambainis's original proof of $\prob{APX-SIM}\in\class{P}^{\class{QMA}[\class{log}]}$~\cite{A14} (see Theorem 6 therein) and a trick of Chailloux and Sattath~\cite{CS11} from the study of $\class{QMA(2)}$. We give a proof sketch here. Specifically, let $\Pi=(H,A,B,k,l,a,b,\delta)$ be an instance of \prob{APX-2-CORR}. Similar to~\cite{A14}, the $\class{P}^{\class{QMA}[\class{log}]}$ verification procedure proceeds, at a high level, as follows:
\begin{enumerate}
\item Use logarithmically many QMA oracle queries to perform a binary search to obtain an estimate $\gamma\in{\mathbb R}$ satisfying $\lambda(H)\in[\gamma ,\gamma +\frac{\delta}{2}]$.
\item Use a single QMA oracle query to verify the statement: ``There exists $\ket{\psi}$ satisfying (1) $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$ and (2) $f(\ket{\psi},A,B)\geq a$.''
\end{enumerate}
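The binary search in the first step can be sketched as follows; this is a schematic classical analogue in which a single abstract predicate (our name, not part of the protocol) stands in for a reliable QMA oracle answer:

```python
def estimate_ground_energy(qma_query, lo, hi, delta):
    """Binary search for gamma with lambda(H) in [gamma, gamma + delta/2].

    qma_query(t) abstracts one QMA oracle call answering the promise
    question "is lambda(H) <= t?" (assumed reliable in this sketch).
    Runs O(log((hi - lo)/delta)) oracle calls.
    """
    queries = 0
    while hi - lo > delta / 2:
        mid = (lo + hi) / 2
        queries += 1
        if qma_query(mid):       # lambda(H) <= mid
            hi = mid
        else:                    # lambda(H) > mid
            lo = mid
    return lo, queries

# Toy instance: lambda(H) = 0.3125, spectrum contained in [0, 1].
lam = 0.3125
gamma, q = estimate_ground_energy(lambda t: lam <= t, 0.0, 1.0, 0.25)
assert gamma <= lam <= gamma + 0.125  # lambda(H) in [gamma, gamma + delta/2]
```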
The first of these steps is performed identically to the proof of $\prob{APX-SIM}\in\class{P}^{\class{QMA}[\class{log}]}$~\cite{A14}; we do not elaborate further here. The second step, however, differs from~\cite{A14} for the following reason. Intuitively,~\cite{A14} designs a QMA protocol which takes in many copies of a proof $\ket{\psi}$, performs phase estimation on each copy, postselects to ``snap'' each copy of $\ket{\psi}$ into a low-energy state $\ket{\psi_i}$ of $H$, and subsequently uses the states $\set{\ket{\psi_i}}$ to estimate the expectation against an observable $A$. If the ground space of $H$ is degenerate, the states $\set{\ket{\psi_i}}$ may not all be identical. This does not pose a problem in~\cite{A14}, as soundness of the protocol is guaranteed there since \emph{all} low-energy states have high expectation against $A$. In our setting, however, if we use this protocol to individually estimate each of the terms $\bra{\psi}A\otimes B \ket{\psi}$, $\bra{\psi}A \ket{\psi}$, and $\bra{\psi} B \ket{\psi}$, soundness \emph{can} be violated if these three terms are not all estimated using the same state $\ket{\psi_i}$, since the promise gap of the input does not necessarily say anything about the value of each of these three terms individually.
To circumvent this, we observe that to evaluate $f(\ket{\psi},A,B)$, we do not need the ground state $\ket{\psi}$ itself, but only a classical description of its local reduced density matrices (a similar idea was used in~\cite{CS11} to verify the energy of a claimed product state proof against a local Hamiltonian in the setting of QMA(2)). Specifically, suppose $\Pi$ consists of a $k$-local Hamiltonian $H$ acting on $n$ qubits. Then, the prover sends classical descriptions of $k$-qubit density matrices $\set{\rho_S}$ for each subset $S\subseteq[n]$ of size $\abs{S}=k$, along with a QMA proof that the states $\set{\rho_S}$ are consistent with a global $n$-qubit pure state $\ket{\psi}$ (recall the problem of verifying consistency is QMA-complete~\cite{L06}). The verifier runs the QMA circuit for consistency, and assuming this check passes it uses the classical $\set{\rho_S}$ to classically verify that (1) $\bra{\psi}H\ket{\psi}\leq \lambda(H)+\delta$ and (2) $f(\ket{\psi},A,B)\geq a$ (since both of these depend only on the local states $\set{\rho_S}$).
\end{proof}
\section{$\class{P}^{\class{QMA}[\class{log}]}$ is in PP}\label{scn:PP}
We now restate and prove Theorem~\ref{thm:inPP}. Our approach is to develop a variant of the hierarchical voting scheme used in the proof of $\class{P}^{\class{NP}[\class{log}]}\subseteq \class{PP}$~\cite{BHW89} which uses the strong error reduction technique of Marriott and Watrous~\cite{MW05}. We also require a more involved analysis than present in~\cite{BHW89}, since QMA is a class of promise problems, not decision problems.
\begin{reptheorem}{thm:inPP}
$\class{P}^{\class{QMA}[\class{log}]}\subseteq\class{PP}$.
\end{reptheorem}
\begin{proof}
Let $\Pi$ be a P machine which makes $m=c\log n$ queries to an oracle for $2$-LH, for $c\in O(1)$ and $n$ the input size.
Without loss of generality, we assume all queries involve Hamiltonians on $M$ qubits (for $M$ some fixed polynomial in $n$). Define $q:= (M+2)m$. We give a PQP computation simulating $\Pi$; since $\class{PQP}=\class{PP}$~\cite{W09_2}, this suffices to show our claim. Let $V$ denote the verification circuit for $2$-LH. The PQP computation proceeds as follows (intuition to follow):
\begin{enumerate}
\item For $i$ from $1$ to $m$:
\begin{enumerate}
\item Prepare $\rho=I/2^M\in\density{({\mathbb C}^2)^{\otimes M}}$.
\item Run $V$ on the $i$th query Hamiltonian $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ (see Equation~(\ref{eqn:amb1})) and proof $\rho$, and measure the output qubit in the standard basis. Set bit $y_i$ to the result.
\end{enumerate}
\item Let $y=y_1\cdots y_m$ be the concatenation of bits set in Step 1(b).
\item For $i$ from $1$ to $n^c-1$:
\begin{enumerate}
\item If $\abs{y}< i$, then with probability $1-2^{-q}$, set $y=\#$, and with probability $2^{-q}$, leave $y$ unchanged.
\end{enumerate}
\item If $y=\#$, output a bit in $\set{0,1}$ uniformly at random. Else, run $\Pi$ on query string $y$ and output $\Pi$'s answer.
\end{enumerate}
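To make the voting scheme concrete, the following toy simulation (all names ours) computes the exact distribution of the surviving query string $Y$ in the idealized zero-error, valid-queries setting, taking $n^c=2^m$; it illustrates how Step 3 makes the correct string dominate all others combined:

```python
from fractions import Fraction
from itertools import product

def vote_distribution(ans, M):
    """Exact distribution of Y for the zero-error model of Steps 1-3.

    ans[i] is the correct answer to query i (all queries valid here).
    A 1-answer is 'guessed' with probability 1/2^M via the maximally
    mixed proof; a 0-answer never yields a false 1 (zero-error verifier).
    """
    m, q = len(ans), (M + 2) * len(ans)
    dist = {}
    for bits in product([0, 1], repeat=m):
        p = Fraction(1)
        for yi, ai in zip(bits, ans):
            guess = Fraction(1, 2 ** M) if ai == 1 else Fraction(0)
            p *= guess if yi == 1 else 1 - guess
        val = int(''.join(map(str, bits)), 2)
        # Step 3: survive demotion to '#' with prob (1/2^q)^((2^m - 1) - val)
        dist[bits] = p * Fraction(1, 2 ** q) ** ((2 ** m - 1) - val)
    return dist

ans = (1, 0, 1)                     # correct query string y* = 101
dist = vote_distribution(ans, M=2)
assert max(dist, key=dist.get) == ans
# y* outweighs all other non-'#' outcomes combined:
assert dist[ans] > sum(p for y, p in dist.items() if y != ans)
```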
\textbf{Intuition.} In Step 1, one tries to determine the correct answer to query $i$ by guessing a satisfying quantum proof for verifier $V$. Suppose for the moment that $V$ has zero error, i.e. has completeness $1$ and soundness $0$, and that $\Pi$ only makes valid queries. Then, if Step 1(b) returns $y_i=1$, one knows with certainty that the query answer should be $1$. And, if the correct answer to query $i$ is $0$, then Step 1(b) returns $y_i=0$ with certainty. Thus, analogous to the classical case of an NP oracle (as done in~\cite{BHW89}), it follows that the lexicographically \emph{largest} query string $y^*$ obtainable by this procedure must be the (unique) correct query string (note that $y^*$ need not equal $1^m$, since a $1$ in query $i$ is only possible if query $i$ is a YES instance of $2$-LH). Thus, ideally one wishes to obtain $y^*$, simulate $\Pi$ on $y^*$, and output the result. To this end, Step 3 ensures that among all values of $y\neq \#$, $y^*$ is more likely to occur than all other $y\neq y^*$ combined. We now make this intuition rigorous (including in particular the general case where $V$ is not zero-error and $\Pi$ makes invalid queries).\\
\noindent \textbf{Correctness.} To analyze correctness of our PQP computation, it will be helpful to refine our partition of the set of query strings $\set{0,1}^m$ into three sets:
\begin{itemize}
\item \textbf{(Correct query strings)} Let $A\subseteq\set{0,1}^m$ denote the set of query strings which correspond to correctly answering each of the $m$ queries. Note we may have $\abs{A}>1$ if invalid queries are made.
\item \textbf{(Incorrect query strings)} Let $B\subseteq\set{0,1}^m$ denote the set of query strings such that for any $y\in B$, any bit of $y$ encoding an incorrect query answer is set to $0$ (whereas the correct query answer would have been $1$, i.e. we failed to ``guess'' a good proof for this query in Step 1).
\item \textbf{(Strongly incorrect query strings)} Let $C=\set{0,1}^m\setminus (A\cup B)$ denote the set of query strings such that for any $y\in C$, at least one bit corresponding to an incorrect query answer is set to $1$ (whereas the correct query answer would have been $0$). Such an error can only arise due to the bounded-error of our QMA verifier in Step 1(b).
\end{itemize}
Let $Y$ be a random variable corresponding to the query string $y$ obtained at the end of Step 3. To show correctness, we claim that it suffices to show that
\begin{equation}\label{eqn:goal}
\Delta:=\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y\in B\cup C]>0.
\end{equation}
To see this, let $p_1$, $p_2$, and $p_3$ denote the probability that after Step 3, $y=\#$, $y\in A$, and $y\in B\cup C$, respectively. Then, $p_1+p_2+p_3=1$ and $p_2-p_3=\Delta>0$. Suppose now that the input to $\Pi$ is a YES instance. Then, our protocol outputs $1$ with probability at least
\begin{equation}\label{eqn:Delta1}
\frac{p_1}{2}+p_2=\frac{1-p_2-p_3}{2}+p_2=\frac{1+\Delta}{2}>\frac{1}{2}.
\end{equation}
If the input is a NO instance, the protocol outputs $1$ with probability at most
$
\frac{p_1}{2}+p_3=\frac{1-\Delta}{2}<\frac{1}{2}.
$
We hence have a PQP computation, as desired. We thus now show that Equation~(\ref{eqn:goal}) holds.
To ease the presentation, we begin by making two assumptions (to be removed later): (i) $V$ is zero-error and (ii) $\Pi$ makes only valid queries. In this case, assumption (i) implies $C=\emptyset$ (i.e. all incorrect query strings belong to $B$), and (ii) implies $A$ is a singleton (i.e. there is a unique correct query string $y^*$). Thus, here $\Delta=\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y\in B]$.
To begin, note that for any $y\in\set{0,1}^m$, we have
\begin{equation}\label{eqn:prob}
\operatorname{Pr}[Y=y]=\operatorname{Pr}[y \text{ chosen in Step 2 }] \cdot \left(\frac{1}{2^q}\right)^{(n^c-1)-\abs{y}},
\end{equation}
where $\abs{y}$ denotes the non-negative integer represented by string $y$. Let ${\rm HW}(x)$ denote the Hamming weight of $x\in\set{0,1}^m$. Since each query corresponds to a verifier on $M$ proof qubits, we have for (the unique) $y^*\in A$ that
\begin{equation}\label{eqn:LB}
\operatorname{Pr}[y^* \text{ chosen in Step 2 }]\geq 2^{-M\cdot{\rm HW}(y^*)}\geq 2^{-Mm}
\end{equation}
(recall from Section~\ref{scn:preliminaries} that setting $\rho=I/2^M$ simulates ``guessing'' a correct proof with probability at least $1/2^M$).
It follows by Equations~(\ref{eqn:prob}) and~(\ref{eqn:LB}) that
\begin{eqnarray}
\Delta&\geq& \left(\frac{1}{2^q}\right)^{(n^c-1)-\abs{y^*}}\left[\frac{1}{2^{Mm}}-\sum_{y\in B} \left(\frac{1}{2^q}\right)^{\abs{y^*}-\abs{y}}\right],\nonumber\\
&\geq&\left(\frac{1}{2^q}\right)^{(n^c-1)-\abs{y^*}}\left[\frac{1}{2^{Mm}}- (2^m)\left(\frac{1}{2^q}\right)\right],\nonumber\\
&\geq&\left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}\right],\label{eqn:zeroerror}
\end{eqnarray}
where the first inequality follows since $\operatorname{Pr}[y \text{ chosen in Step 2 }]\leq 1$, the second inequality since $y\in B$ if and only if $\abs{y}<\abs{y^*}$, and the third inequality since $q=(M+2)m$. Thus, $\Delta>0$ as desired.\\
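The chain of inequalities above can be checked mechanically for small parameters; the sketch below (names ours) instantiates each line of the derivation with $n^c=2^m$ and exact rationals, where $B$ is taken as all strings below $y^*$:

```python
from fractions import Fraction

def check_chain(m, M, ystar):
    """Verify line-by-line the bound on Delta in the zero-error case."""
    q = (M + 2) * m
    K = 2 ** m - 1                      # n^c - 1, taking n^c = 2^m
    w = Fraction(1, 2 ** q)
    line1 = w ** (K - ystar) * (Fraction(1, 2 ** (M * m))
                                - sum(w ** (ystar - y) for y in range(ystar)))
    line2 = w ** (K - ystar) * (Fraction(1, 2 ** (M * m)) - (2 ** m) * w)
    line3 = w ** K * Fraction(1, 2 ** (M * m)) * (1 - Fraction(1, 2 ** m))
    assert line1 >= line2 >= line3 > 0  # each step of the derivation holds

for m, M in [(2, 1), (3, 2), (4, 3)]:
    for ystar in range(1, 2 ** m):
        check_chain(m, M, ystar)
```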
\noindent\textbf{Removing assumption (i).} We now remove the assumption that $V$ is zero error. In this case, $A$ is still a singleton; let $y^*\in A$. We can now also have strongly incorrect query strings, i.e. possibly $C\neq \emptyset$. Assume without loss of generality that $V$ acts on $M$ proof qubits, and by strong error reduction~\cite{MW05} has completeness $c:=1-2^{-p(n)}$ and soundness $s:=2^{-p(n)}$, for $p$ a polynomial to be chosen as needed. Then, since $V$ can err, Equation~(\ref{eqn:LB}) becomes
\begin{eqnarray}
\operatorname{Pr}[y^*\text{ chosen in Step 2 }]&\geq&\left(\frac{c}{2^M}\right)^{{\rm HW}(y^*)}\left(1-s\right)^{m-{\rm HW}(y^*)}\nonumber\\
&=&\left(\frac{1}{2^M}\right)^{{\rm HW}(y^*)}e^{m\ln\left(1-\frac{1}{2^p}\right)}\nonumber\\
&\geq& \frac{1}{2^{Mm}}\left(1-\frac{m}{2^p-1}\right),\label{eqn:8}
\end{eqnarray}
where the equality follows by the definitions of $c$ and $s$, and the second inequality by applying the Maclaurin series expansion of $\ln(1+x)$ for $\abs{x}<1$ and the fact that $e^t\geq 1+t$ for all $t\in {\mathbb R}$. Thus, the analysis of Equation~(\ref{eqn:zeroerror}) yields that
\begin{equation}\label{eqn:7}
\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y\in B]\geq \left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right],
\end{equation}
i.e. the additive error introduced when assumption $(i)$ is dropped scales roughly as $2^{-p}$, where recall $p$ can be set as needed. Note also that Equation~(\ref{eqn:7}) crucially holds for all $y\in B$ even with assumption (i) dropped since the analysis of Equation~(\ref{eqn:zeroerror}) used only the trivial bound $\operatorname{Pr}[y \text{ chosen in Step 2 }]\leq 1$ for any $y\in B$.
Next, we upper bound the probability of obtaining $y\in C$ in Step 2.
For any fixed $y\in C$, suppose the first bit on which $y$ and $y^*$ disagree is bit $j$. Then, bits $j$ of $y$ and $y^*$ must be $1$ and $0$, respectively. This means $0$ is the correct answer for query $j$. By the soundness property of $V$, the probability of obtaining $1$ on query $j$ (and hence that of obtaining $y$ in Step 2) is at most $2^{-p}$. Thus,
\begin{equation}\label{eqn:Delta2}
\Delta\geq\left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right] - \frac{2^m}{2^p}.
\end{equation}
We conclude that setting $p$ to a sufficiently large fixed polynomial ensures $\Delta>0$, as desired.\\
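To illustrate how large $p$ must be taken, the following sketch (names ours, with $n^c=2^m$) evaluates the right-hand side of Equation~(\ref{eqn:Delta2}) exactly and searches for the smallest error exponent making it positive:

```python
from fractions import Fraction

def delta_lower_bound(m, M, p):
    """Right-hand side of the Delta bound with verifier error 2^-p."""
    q = (M + 2) * m
    K = 2 ** m - 1                          # n^c - 1, taking n^c = 2^m
    main = (Fraction(1, 2 ** q) ** K
            * Fraction(1, 2 ** (M * m))
            * (1 - Fraction(1, 2 ** m) - Fraction(m, 2 ** p - 1)))
    return main - Fraction(2 ** m, 2 ** p)

m, M = 3, 2
p = 1
while delta_lower_bound(m, M, p) <= 0:      # find the smallest sufficient p
    p += 1
assert delta_lower_bound(m, M, p) > 0
# Even for these tiny parameters, p must already be fairly large,
# reflecting the (1/2^q)^(n^c - 1) factor in the main term.
```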
\noindent\textbf{Removing assumption (ii).} We now remove the assumption that $\Pi$ only makes valid queries, which is the most involved step. Here, $A$ is no longer necessarily a singleton. The naive approach would be to let $y^*$ denote the \emph{lexicographically largest} string in $A$, and attempt to run a similar analysis as before. Unfortunately, this no longer necessarily works for the following reason. For any invalid query $i$, we do not have strong bounds on the probability that $V$ accepts in Step 1(b); in principle, this value can lie in the range $(2^{-p},1-2^{-p})$. Thus, running the previous analysis with the lexicographically largest $y^*\in A$ may cause Equation~(\ref{eqn:Delta2}) to yield a negative quantity. This is because if bit $i$ of $y^*$, denoted $b$, was set according to invalid query $i$, then the probability of obtaining bit $b$ in query $i$ may scale as $O(2^{-p})$; thus, both Equation~(\ref{eqn:8}) and Equation~(\ref{eqn:7}) would also scale as $O(2^{-p})$, and Equation~(\ref{eqn:Delta2}) may be negative. We hence require a more delicate analysis.
We begin by showing the following lower bound.
\begin{lemma}\label{l:LB}
Define $\Delta':=\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y \in B]$. Then,
\[
\Delta'\geq \left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right].
\]
\end{lemma}
\begin{proof}
We introduce the following definitions. For any string $y\in\set{0,1}^m$, let $I_y\subseteq\set{1,\ldots, m}$ denote the indices of all bits of $y$ set by invalid queries. We call each such $i\in I_y$ a \emph{divergence point}. Let $p_{y,i}$ denote the probability that (invalid) query $i$ (defined given answers to queries $1$ through $i-1$) outputs bit $y_i$, i.e. $p_{y,i}$ denotes the probability that at divergence point $i$, we go in the direction of bit $y_i$. We define the \emph{divergence probability} of $y\in\set{0,1}^m$ as $p_y = \prod_{i\in I_{y}}p_{y,i}$, i.e. $p_y$ is the probability of answering all invalid queries as $y$ did.
The proof now proceeds by giving an iterative process, $\Gamma(i)$, where $1\leq i\leq \abs{A}$ denotes the iteration number. Each iteration defines a $3$-tuple $(y_{i-1}^*, y_i^*, B_{y_i^*})\in \set{0,1}^m\times\set{0,1}^m\times \mathcal{P}(B)$, where $\mathcal{P}(X)$ denotes the power set of set $X$. Set
\begin{eqnarray*}
\Delta'_i:=\operatorname{Pr}[Y\in\set{y_1^*,\ldots,y_i^*}]-\operatorname{Pr}[Y\in B_{y_1^*}\cup\cdots\cup B_{y_i^*}],
\end{eqnarray*}
where it will be the case that $\set{B_{y_i^*}}_{i=1}^{\abs{A}}$ is a partition of $B$. Thus, we have $\Delta'\geq \Delta'_{\abs{A}}$, implying that a lower bound on $\Delta'_{\abs{A}}$ suffices to prove our claim. We hence prove via induction that for all $1\leq i\leq \abs{A}$,
\[
\Delta'_i\geq \left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right].
\]
The definition of process $\Gamma(i)$ is integrated into the induction proof below.\\
\noindent\emph{Base case (i=1).} In this case $y_0^*$ is undefined. Set $y_1^*$ to the string in $A$ with the largest divergence probability $p_1^*$. A key observation is that
\begin{equation}\label{eqn:9}
p_1^* = \prod_{i\in I_{y_1^*}}p_{y_1^*,i}\geq {2^{-\abs{I_{y_1^*}}}},
\end{equation}
since at each divergence point $i$, at least one of the outcomes in $\set{0,1}$ occurs with probability at least $1/2$. (It is important to note that queries are not being made to a QMA oracle here, but rather to a QMA verifier $V$ with a maximally mixed proof as in Step 1(a). Whereas in the former case the output of the oracle on an {invalid} query does not have to consistently output a value with any particular probability, in the latter case, there is some fixed probability $p$ with which $V$ outputs $1$ each time it is run on a fixed proof.) Finally, define $B_{y_1^*}:=\set{y\in B\mid \abs{y}<\abs{y_1^*}}$.
Let $k_*$ denote the number of divergence points of $y_1^*$ (i.e. $k_*=\abs{I_{y_1^*}}$), and $k_0$ ($k_1$) the number of zeroes (ones) of $y_1^*$ arising from valid queries. Thus, $k_*+k_0+k_1=m$. Then, Equation~(\ref{eqn:8}) becomes
\begin{eqnarray}
\operatorname{Pr}[y_1^*\text{ in Step 2 }]\geq\left(\frac{c}{2^M}\right)^{k_1}\left(1-s\right)^{k_0}p_1^*
\geq\left(\frac{1}{2^M}\right)^{k_1}\left(\frac{1}{2}\right)^{k_*}\left(1-\frac{m-k_*}{2^p-1}\right)
\geq \frac{1}{2^{Mm}}\left(1-\frac{m}{2^p-1}\right),\label{eqn:10}
\end{eqnarray}
where the second inequality follows from Equation~(\ref{eqn:9}), and the third since $k_*\geq 0$ and $k_1+k_*\leq m$. Thus, $\Delta'_1$ is lower bounded by the expression in Equation~(\ref{eqn:7}), completing the proof of the base case.\\
\noindent\emph{Inductive step.} Assume the claim holds for $1\leq i-1<\abs{A}$. We show it holds for $i$. Let $y_{i-1}^*$ be the choice of $y^*$ in the previous iteration $i-1$ of our process. Define $A_{y_i^*}:=\set{y\in A\mid \abs{y}>\abs{y_{i-1}^*}}$. Partition $A_{y_i^*}$ into sets $S_{k}$ for $k\in[m]$, such that $S_k$ is the subset of strings in $A_{y_i^*}$ which agrees with $y_{i-1}^*$ on the first $k-1$ bits, but disagrees on bit $k$. Note that if $S_k\neq\emptyset$, then bit $k$ of $y_{i-1}^*$ is $0$ and bit $k$ of any string in $S_k$ is $1$. For each $S_k\neq\emptyset$, choose an arbitrary representative $z_k\in S_k$, and define the \emph{bounded} divergence probability
\[
q_{i}(k):=\prod_{t\in I^{\leq k}_{z_k}}p_{z_k,t}\qquad\qquad\text{ where }\qquad\qquad I^{\leq k}_{z_k}:= \set{t\in I_{z_k}\mid t \leq k}.
\]
Note that $q_i(k)>0$ (since $S_k\neq\emptyset$). Else if $S_k = \emptyset$, set $q_i(k)=0$. Let $q_i^*$ denote the maximum such bounded divergence probability:
\begin{equation}\label{eqn:q}
q_i^*=\max_{k\in[m]} q_i(k) \qquad\qquad\text{and}\qquad\qquad k_i^*=\argmax_{k\in[m]} q_i(k).
\end{equation}
Finally, let $y_i^*$ be the query string in $S_{k_i^*}$ with the maximum divergence probability $p_i^*$ (ties broken by choosing the lexicographically largest such query string). Observe that
\begin{equation}\label{eqn:99}
p_i^* \geq q_i^*\cdot 2^{-\abs{I_{y_i^*}}+\abs{I^{\leq k}_{y_{i}^*}}},
\end{equation}
where the $2^{-\abs{I_{y_i^*}}+\abs{I^{\leq k}_{y_{i}^*}}}$ term arises from an argument similar to Equation~(\ref{eqn:9}) for all invalid queries of $y_i^*$ \emph{after} query $k$. Set $B_{y_i^*}:=\set{y\in B\mid \abs{y^*_{i-1}}<\abs{y}<\abs{y_i^*}}$. The following lemma will be useful.
\begin{lemma}\label{l:LB2}
For any $y\in B_{y_i^*}$, $\operatorname{Pr}[y\text{ chosen in Step 2}]\leq q_i^*$.
\end{lemma}
\begin{proof}
Fix any $y\in B_{y_i^*}$. Since $\abs{y}>\abs{y_{i-1}^*}$, there must be an index $k$ such that the $k$th bit of $y$ is $1$ and that of $y_{i-1}^*$ is $0$. Let $k$ denote the first such index. Since $y\not\in C$ (because $B_{y_i^*}\cap C=\emptyset$), it must be that query $k$ (defined given bits $y_1\cdots y_{k-1}$) is invalid. Thus, bit $k$ is a divergence point of $y_{i-1}^*$, and there exists a correct query string $y'\in S_k$. By Equation~(\ref{eqn:q}), $q_i^*$ was chosen as the maximum over all bounded divergence probabilities. Thus, $q_i^*\geq q_i(k)$, where recall $q_i(k)$ is the bounded divergence probability for $S_k$, where $y'\in S_k$. But since $y$ and $y'$ agree on bits $1$ through $k$ inclusive, we have $\operatorname{Pr}[y\text{ chosen in Step 2}]\leq\prod_{t\in I^{\leq k}_{y}}p_{y,t}=q_i(k)$, from which the claim follows.
\end{proof}
To continue with the inductive step, again consider $k_*$, $k_0$, and $k_1$, now corresponding to $y_i^*$. Then, an argument similar to Equation~(\ref{eqn:10}) yields that $\operatorname{Pr}[y_i^*\text{ chosen in Step 2 }]$ is at least
\begin{equation}
\left(\frac{c}{2^M}\right)^{k_1}\left(1-s\right)^{k_0}p_i^*\geq \left(\frac{1}{2^M}\right)^{k_1}\left(1-\frac{m-k_*}{2^p-1}\right)q_i^*\left(\frac{1}{2}\right)^{{\abs{I_{y_i^*}}-\abs{I^{\leq k}_{y_{i}^*}}}}\geq\frac{q_i^*}{2^{Mm}}\left(1-\frac{m}{2^p-1}\right),\label{eqn:12}
\end{equation}
where the first inequality follows from Equation~(\ref{eqn:99}), and the second since ${\abs{I_{y_i^*}}-\abs{I^{\leq k}_{y_{i}^*}}}\leq k_*$. Now, define $\zeta_i:= \operatorname{Pr}[Y=y_i^*]-\operatorname{Pr}[Y\in B_{y_i^*}]$. Applying the argument of Equation~(\ref{eqn:zeroerror}), we have
\[
\zeta_i\geq \left(\frac{1}{2^q}\right)^{(n^c-1)-\abs{y_i^*}}\left[\frac{q^*_i}{2^{Mm}}\left(1-\frac{m}{2^p-1}\right)-q^*_i\sum_{y\in B_{y^*_i}}\left(\frac{1}{2^q}\right)^{\abs{y_i^*}-\abs{y}}\right]
\]
where the first $q_i^*$ is due to Equation~(\ref{eqn:12}), and the second $q_i^*$ to Lemma~\ref{l:LB2}. Thus, similar to Equation~(\ref{eqn:7}),
\[
\zeta_i \geq \left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{q_i^*}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right]>0.
\]
Since for all $i$ we have the recurrence $\Delta'_{i}\geq\Delta'_{i-1}+\zeta_i$ with $\zeta_i>0$, unrolling this recurrence yields $\Delta'_i\geq \Delta'_1$, which by the base case gives the claim of Lemma~\ref{l:LB}.
\end{proof}
Combining Lemma~\ref{l:LB} with the following lemma will yield our desired claim.
\begin{lemma}\label{l:last}
$\operatorname{Pr}(Y\in C)\leq \frac{2^m}{2^p}$.
\end{lemma}
\begin{proof}
The argument is similar to that for Equation~(\ref{eqn:Delta2}); we state it formally for clarity. Any $y\in C$ must have a bit $j$ incorrectly set to $1$, whereas the correct query answer (given bits $1$ through $j-1$ of $y$) should have been $0$. The probability of this occurring for bit $j$ in Step 1(b) is at most $2^{-p}$, by the soundness property of $V$. Since $\abs{C}\leq 2^m$, the claim follows.
\end{proof}
To complete the proof, we have that $\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y \in B\cup C]$ is lower bounded by
\[
\operatorname{Pr}[Y\in A]-\operatorname{Pr}[Y \in B]-\operatorname{Pr}[Y\in C] \geq \left(\frac{1}{2^q}\right)^{(n^c-1)}\frac{1}{2^{Mm}}\left[1- \frac{1}{2^{m}}-\frac{m}{2^p-1}\right] - \frac{2^m}{2^p},
\]
which follows by Lemma~\ref{l:LB} and Lemma~\ref{l:last}. For sufficiently large fixed $p$, this quantity is strictly positive, yielding the claim of Theorem~\ref{thm:inPP}.
\end{proof}
\section{Estimating spectral gaps}\label{scn:spectralGap}
We now restate and prove Theorem~\ref{thm:spgap}. We begin by defining $\prob{SPECTRAL-GAP}$ and \class{UQMA}.
\begin{definition}[$\prob{SPECTRAL-GAP}(H,\alpha)$~(Ambainis~\cite{A14})]
Given a Hamiltonian $H$ and a real number $\alpha\geq n^{-c}$ for $n$ the number of qubits $H$ acts on and $c>0$ some constant, decide:
\begin{itemize}
\item If $\lambda_2 - \lambda_1 \leq \alpha$, output YES.
\item If $\lambda_2 - \lambda_1 \geq 2\alpha$, output NO.
\end{itemize}
where $\lambda_2$ and $\lambda_1$ denote the second and first smallest eigenvalues of $H$, respectively.
\end{definition}
\noindent For clarity, if the ground space of $H$ is degenerate, then we define its spectral gap as $0$.
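As a concrete illustration (an aside; function and variable names are ours), a $\prob{SPECTRAL-GAP}$ instance can be classified numerically for small explicit Hamiltonians, using the convention just stated that a degenerate ground space has spectral gap $0$:

```python
import numpy as np

def spectral_gap_instance(H, alpha):
    """Classify a SPECTRAL-GAP instance as YES / NO / promise violated."""
    evals = np.sort(np.linalg.eigvalsh(H))
    gap = evals[1] - evals[0]          # lambda_2 - lambda_1
    if gap <= alpha:
        return "YES"
    if gap >= 2 * alpha:
        return "NO"
    return "promise violated"

# Degenerate ground space: gap 0, hence a YES instance for any alpha.
H_yes = np.diag([0.0, 0.0, 1.0])
# Well-gapped diagonal Hamiltonian: gap 1 >= 2 * alpha.
H_no = np.diag([0.0, 1.0, 2.0])
assert spectral_gap_instance(H_yes, 0.25) == "YES"
assert spectral_gap_instance(H_no, 0.25) == "NO"
```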
\begin{definition}[Unique QMA (UQMA)~(Aharonov \emph{et al.}~\cite{ABBS08})]
We say a promise problem $A=(\ayes,\ano)$ is in Unique QMA if and only if there exist polynomials $p$, $q$ and a polynomial-time uniform family of quantum circuits $\set{Q_n}$, where $Q_n$ takes as input a string $x\in\Sigma^*$ with $\abs{x}=n$, a quantum proof $\ket{y}\in ({\mathbb C}^2)^{\otimes p(n)}$, and $q(n)$ ancilla qubits in state $\ket{0}^{\otimes q(n)}$, such that:
\begin{itemize}
\item (Completeness) If $x\in\ayes$, then there exists a proof $\ket{y}\in ({\mathbb C}^2)^{\otimes p(n)}$ such that $Q_n$ accepts $(x,\ket{y})$ with probability at least $2/3$, and for all $\ket{\hat{y}}\in ({\mathbb C}^2)^{\otimes p(n)}$ orthogonal to $\ket{y}$, $Q_n$ accepts $(x,\ket{\hat{y}})$ with probability at most $1/3$.
\item (Soundness) If $x\in\ano$, then for all proofs $\ket{y}\in ({\mathbb C}^2)^{\otimes p(n)}$, $Q_n$ accepts $(x,\ket{y})$ with probability at most $1/3$.
\end{itemize}
\end{definition}
The main theorem of this section is the following.
\begin{reptheorem}{thm:spgap}
$\prob{SPECTRAL-GAP}$ is $\class{P}^{\class{UQMA}[\class{log}]}$-hard for $4$-local Hamiltonians $H$ under polynomial time Turing reductions (i.e. Cook reductions).
\end{reptheorem}
\noindent We remark that Ambainis~\cite{A14} showed that $\prob{SPECTRAL-GAP}\in\class{P}^{\class{QMA}[\class{log}]}$, and gave a claimed proof that $\prob{SPECTRAL-GAP}$ is $\class{P}^{\class{UQMA}[\class{log}]}$-hard for $O(\log)$-local Hamiltonians under mapping reductions. ($\class{P}^{\class{UQMA}[\class{log}]}$ is defined as $\class{P}^{\class{QMA}[\class{log}]}$, except with a UQMA oracle in place of a QMA oracle.) As discussed in Section~\ref{scn:intro}, however, Ambainis' proof of the latter result does not hold if the $\class{P}^{\class{UQMA}[\class{log}]}$ machine makes invalid queries (which in general is the case). Here, we build on Ambainis' approach~\cite{A14} to show $\class{P}^{\class{UQMA}[\class{log}]}$-hardness of $\prob{SPECTRAL-GAP}$ under Turing reductions even when invalid queries are allowed, and we also improve the hardness to apply to $O(1)$-local Hamiltonians.
For this, we may assume that all calls to the \class{UQMA}~oracle $Q$ are for an instance $(H,a,b)$ of the Unique-Local Hamiltonian Problem (U-LH)~\cite{A14}: Is the ground state energy of $H$ at most $\epsilon$ with all other eigenvalues at least $3\epsilon$ (YES case), or is the ground state energy at least $3\epsilon$ (NO case), for $\epsilon\geq 1/\textup{poly}(n)$? We begin by showing the following modified version of Lemma \ref{l:amb} tailored to UQMA (instead of QMA).
\begin{lemma}\label{lem:spgap}
For any $x\in\set{0,1}^m$, let $\hat{x}$ denote its unary encoding. Then, for any $\class{P}^{\class{UQMA}[\class{log}]}$ circuit $U$ acting on $n$ bits and making $m$ queries to a \class{UQMA}~oracle, there exists a $4$-local Hamiltonian $H$ acting on space $({\mathbb C}^2)^{\otimes 2^m-1}\otimes\spa{Y}$ such that there exists a correct query string $x=x_1\cdots x_m$ such that:
\begin{enumerate}
\item The {unique} ground state of $H$ lies in subspace $\ketbra{\hat{x}}{\hat{x}}\otimes \spa{Y}$.
\item The spectral gap of $H$ is at least $(\epsilon - \delta )/4^{m}$ for inverse polynomial $\epsilon,\delta$ with $\epsilon-\delta\geq 1/\textup{poly}(n)$.
\item For all strings $x'\in\set{0,1}^m$, $H$ acts invariantly on subspace $\ketbra{\hat{x}'}{\hat{x}'}\otimes \spa{Y}$.
\end{enumerate}
\end{lemma}
\begin{proof}
As done in~\cite{A14}, we begin with $O(\log)$-local Hamiltonian
\begin{equation}\label{eqn:H'}
H' = \sum_{i=1}^m\frac{1}{4^{i-1}}\sum_{y_1,\ldots,y_{i-1}}\bigotimes_{j=1}^{i-1}\ketbra{y_j}{y_j}_{\spa{X}_{j}}\otimes G'_{y_{1}\cdots y_{i-1}},
\end{equation}
where we define
\begin{equation}\label{eqn:HamG}
G'_{y_{1}\cdots y_{i-1}} := \ketbra{0}{0}_{\spa{X}_i} \otimes A_{\spa{Y}_i} + \ketbra{1}{1}_{\spa{X}_i}\otimes H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}
\end{equation}
with $A$ any fixed $2$-local Hermitian operator with unique ground state of eigenvalue $2\epsilon$ and spectral gap $\epsilon$. Our approach is intuitively now as follows. We first run a \emph{query validation} phase, in which we modify $H'$ to obtain a new Hamiltonian $H''$ by {replacing} ``sufficiently invalid'' queries $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ with high-energy dummy queries. This creates the desired spectral gap. We then apply the technique of Lemma~\ref{l:amb} to reduce the locality of $H''$, obtaining a $4$-local Hamiltonian $H$, as desired. Note that our proof shows \emph{existence} of $H$; unlike Lemma~\ref{l:amb}, however, it is not clear how to construct $H$ in polynomial-time given $H'$, as detecting invalid UQMA queries with a P machine seems difficult.
The query validation phase proceeds as follows. Consider any $G'_{y_{1}\cdots y_{i-1}}$ whose spectral gap is at most $\epsilon - \delta$, for some fixed $\delta$ satisfying $\epsilon-\delta\geq 1/\textup{poly}(n)$. We claim this implies $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ is an invalid query. For if $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ were a valid YES query, then $\lambda(G'_{y_{1}\cdots y_{i-1}})\leq \epsilon$ and $\lambda_2(G'_{y_{1}\cdots y_{i-1}})=2\epsilon$ (by the $\ketbra{0}{0}$ block of $G'_{y_{1}\cdots y_{i-1}}$, and since a valid query to U-LH has a spectral gap of $2\epsilon$), where $\lambda_2(X)$ is the second-smallest eigenvalue of operator $X$. Conversely, if $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ were a valid NO query, then $\lambda(G'_{y_{1}\cdots y_{i-1}})=2\epsilon$ (by the $\ketbra{0}{0}$ block of $G'_{y_{1}\cdots y_{i-1}}$) and $\lambda_2(G'_{y_{1}\cdots y_{i-1}})\geq 3\epsilon$. Thus, $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ corresponds to an invalid query. Replace each such $G'_{y_{1}\cdots y_{i-1}}$ with
\begin{equation}\label{eqn:newG}
G_{y_{1}\cdots y_{i-1}} := \ketbra{0}{0}_{\spa{X}_i} \otimes A_{\spa{Y}_i} + \ketbra{1}{1}_{\spa{X}_i}\otimes 3\epsilon I
\end{equation}
in $H'$, denoting the new Hamiltonian as $H''$. Two remarks: First, the validation phase does not catch \emph{all} invalid queries, but only those which are ``sufficiently far'' from being valid. Second, setting $\delta\in \Omega(1/\textup{poly}(n))$ (as opposed to, say, $1/\exp(n)$) is required for our proof of Theorem~\ref{thm:spgap} later.
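The spectral reasoning behind the validation phase can be illustrated numerically. Since $G'_{y_1\cdots y_{i-1}}$ is block-diagonal, its spectrum is the union of the spectra of $A$ and of the query Hamiltonian; the toy check below (all names and example matrices ours) confirms that valid YES and NO queries yield a gap of at least $\epsilon$, whereas a small gap certifies invalidity:

```python
import numpy as np

eps = 0.1
A = np.diag([2 * eps, 3 * eps])          # unique ground energy 2*eps, gap eps

def gprime_gap(H_query):
    """Spectral gap of G' = |0><0| (x) A + |1><1| (x) H: union of spectra."""
    evals = np.sort(np.concatenate([np.linalg.eigvalsh(A),
                                    np.linalg.eigvalsh(H_query)]))
    return evals[1] - evals[0]

H_yes = np.diag([0.5 * eps, 3 * eps])    # valid YES: lambda <= eps, rest >= 3*eps
H_no = np.diag([3 * eps, 4 * eps])       # valid NO: all eigenvalues >= 3*eps
H_bad = np.diag([1.5 * eps, 1.6 * eps])  # invalid query

assert gprime_gap(H_yes) >= eps - 1e-12  # valid queries keep a gap >= eps
assert gprime_gap(H_no) >= eps - 1e-12
assert gprime_gap(H_bad) < eps           # a gap <= eps - delta flags invalidity
```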
We now show correctness. Observe first that for any ``sufficiently invalid'' query $i$, replacing $G'_{y_{1}\cdots y_{i-1}}$ with $G_{y_{1}\cdots y_{i-1}}$ in the validation phase ``forces'' query $i$ to become a valid NO query. Thus, henceforth in this proof, a query string which answers YES (i.e. $\ket{1}$) to query $i$ is considered incorrect with respect to $H''$. Crucially, if a string $x$ is a correct query string for $H''$, then it is also a correct query string for $H'$. The converse is false; nevertheless, $H''$ has at least one correct query string (since any sufficiently invalid query would have allowed both $\ket{0}$ and $\ket{1}$ as answers), which suffices for our purposes.
To begin, as in the proof of Lemma~\ref{l:amborig}, observe that $H''$ is block-diagonal with respect to register $\bigotimes_{i=1}^m\spa{X}_i$. Let $x\in\set{0,1}^m$ denote a correct query string which has minimal energy among all \emph{correct} query strings against $H''$, and for any $y\in\set{0,1}^m$, define $\lambda_y$ as the smallest eigenvalue in block $\spa{H}_y$. A similar analysis to that of Lemma~\ref{l:amborig} shows that for any incorrect query string $y$, $\lambda_y\geq \lambda_x+\epsilon/4^m$. This is because replacing the term $2\epsilon I$ in $M_{y_1\cdots y_{i-1}}$ from Lemma~\ref{l:amborig} with $A$ in $G_{y_{1}\cdots y_{i-1}}$ here preserves the property that answering NO on query $i$ yields minimum energy $2\epsilon$.
We now argue that $x$ is in fact \emph{unique}, and all other eigenvalues of $H''$ corresponding to correct query strings have energy at least $\lambda_x+(\epsilon - \delta )/4^{m}$. There are two cases to consider: Eigenvalues arising from different query strings, and eigenvalues arising from the same query string.
\paragraph{Case 1: Eigenvalues from different query strings.} Let $y=y_1\cdots y_m$ be a correct query string for $H''$. Since both $x$ and $y$ are correct strings, there must exist an invalid query $i$ where $x_i \neq y_i$. First consider the case where $G'_{y_{1}\cdots y_{i-1}}$ has spectral gap at most $\epsilon - \delta$. Then, after the validation phase, query $i$ is replaced with a valid NO query $G_{y_{1}\cdots y_{i-1}}$. Thus, whichever of $x$ or $y$ has a $1$ as bit $i$ is an incorrect string for $H''$, and from our previous analysis has energy at least $\lambda_x + \epsilon / 4^m$ against $H''$. (This, in particular, implies $x_i=0$ and $y_i=1$.) Alternatively, suppose $G'_{y_{1}\cdots y_{i-1}} = G_{y_{1}\cdots y_{i-1}}$ has spectral gap at least $\epsilon - \delta$. By construction of $A$ (which has spectral gap $\epsilon$), it follows that $\lambda(H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}})$ is at most $\epsilon+\delta$ or at least $3\epsilon-\delta$. In other words, query $i$ is ``approximately'' valid, and $y$ must be ``approximately'' incorrect on query $i$. A similar analysis as for Lemma~\ref{l:amborig} hence yields $\lambda_y\geq \lambda_x+(\epsilon-\delta)/4^m$.
\paragraph{Case 2: Eigenvalues from the same query string.} In block $\spa{H}_x$, $H''$ is equivalent to operator
\[
\sum_{i=1}^m\frac{1}{4^{i-1}} B_{x_{1}\cdots x_{i-1}},
\]
where $B_{x_{1}\cdots x_{i-1}} = A$ if $x_i=0$ and $B_{x_{1}\cdots x_{i-1}}$ can equal either $H_{\spa{Y}_i}^{i,x_{1}\cdots x_{i-1}}$ or $3\epsilon I$ (depending on how the validation phase proceeded) if $x_i=1$. In particular, $B_{x_{1}\cdots x_{i-1}}$ acts non-trivially only on space $\spa{Y}_i$ and has spectral gap at least $\epsilon-\delta$. Clearly, the ground state of $H''$ of form $\ket{\hat{x}}\ket{\psi}$ obtains the smallest eigenvalue of each term $B_{x_{1}\cdots x_{i-1}}$, and the first excited state (corresponding to the second eigenvalue) of $H''$ must take on the first excited state of at least one $B_{x_{1}\cdots x_{i-1}}$, implying a spectral gap of at least $(\epsilon-\delta)/4^m$, as claimed.
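The Case 2 bound is easy to verify numerically. The sketch below is a toy illustration (not part of the proof): each term is taken diagonal on its own qubit with an arbitrary ground energy and a gap of at least $\epsilon-\delta$; since the terms commute, the spectrum of the weighted sum is the set of all sums of per-qubit eigenvalues, and the gap is bounded below by $(\epsilon-\delta)/4^{m-1}\geq(\epsilon-\delta)/4^m$.

```python
import itertools
import random

random.seed(0)
eps, delta, m = 1.0, 0.1, 3

# each B_i is taken diagonal on qubit i: eigenvalues (ground, ground + gap_i)
# with gap_i >= eps - delta; the ground energies are arbitrary
terms = []
for _ in range(m):
    ground = random.uniform(0.0, 1.0)
    gap_i = (eps - delta) + random.uniform(0.0, 0.5)
    terms.append((ground, ground + gap_i))

# the terms commute, so the spectrum of sum_i 4^{-(i-1)} B_i is the set of
# all sums of per-qubit eigenvalues
spectrum = sorted(
    sum(terms[i][b] / 4 ** i for i, b in enumerate(bits))
    for bits in itertools.product((0, 1), repeat=m)
)
spectral_gap = spectrum[1] - spectrum[0]
assert spectral_gap >= (eps - delta) / 4 ** (m - 1)
```

The gap of the sum is exactly the minimum of the weighted per-qubit gaps, which is what forces the $4^{-(m-1)}$ scaling.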
Finally, the approach of Lemma~\ref{l:amb} allows us to convert $O(\log)$-local $H''$ to $4$-local $H$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:spgap}]
As done in~\cite{A14}, we start with the Hamiltonian $H'$ from Equation~(\ref{eqn:H'}). In \cite{A14}, it was shown (Section A.3, Claim 2) that if all query Hamiltonians $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ correspond to valid UQMA queries, $H'$ has a unique ground state and spectral gap at least $\epsilon/4^m$. When invalid queries are allowed, however, the spectral gap of $H'$ can vanish, invalidating the $\class{P}^{\class{UQMA}[\class{log}]}$-hardness proof of~\cite{A14}. Thus, we require a technique for identifying invalid queries and ``removing them'' from $H'$.
Unfortunately, it is not clear how a P machine alone can achieve such a ``property testing'' task of checking if a query is sufficiently invalid. However, the key observation is that an oracle $Q$ for SPECTRAL GAP can help. A bit of care is required here; naively, one might check if each query $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ has a spectral gap using $Q$, since in the YES case, this must hold. However, it is quickly seen that even invalid queries can have a spectral gap.
Instead, we proceed as follows. Given an arbitrary $\class{P}^{\class{UQMA}[\class{log}]}$ circuit $U$ acting on $n$ bits, construct $O(\log(n))$-local $H'$ from Equation~(\ref{eqn:H'}). For each term $G'_{y_{1}\cdots y_{i-1}}$ appearing in $H'$, perform binary search using $O(\log n)$ queries to $Q$ to obtain an estimate $\Delta$ for the spectral gap of $G'_{y_{1}\cdots y_{i-1}}$ to within sufficiently small but fixed additive error $\delta\in 1/\textup{poly}(n)$. (A similar procedure involving a QMA oracle is used in Ambainis' proof of containment of $\prob{APX-SIM}\in\class{P}^{\class{QMA}[\class{log}]}$ to estimate the smallest eigenvalue of a local Hamiltonian; we hence omit further details here.) As done in the proof of Lemma~\ref{lem:spgap}, if $\Delta\leq \epsilon-\delta$, we conclude $H_{\spa{Y}_i}^{i,y_{1}\cdots y_{i-1}}$ is ``sufficiently invalid'', and replace $G'_{y_{1}\cdots y_{i-1}}$ with $G_{y_{1}\cdots y_{i-1}}$ from Equation~(\ref{eqn:newG}). Following the construction of Lemma~\ref{lem:spgap}, we hence can map $H'$ to a $4$-local Hamiltonian $H$ such that $H$ has a unique ground state and spectral gap $(\epsilon-\delta)/4^m$, and the ground state of $H$ corresponds to a correct query string for $H'$. Note that implementing the mapping from $H'$ to $H$ requires polynomially many queries to the oracle, hence yielding a polynomial time \emph{Turing} reduction.
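The binary-search step can be sketched as follows. This is an illustrative simplification: it assumes an idealized threshold oracle \texttt{oracle(t)} that answers whether the gap is at least $t$, ignoring the promise-gap subtleties of an actual SPECTRAL GAP oracle.

```python
def estimate_gap(oracle, upper, delta):
    """Binary-search an additive-delta estimate of a spectral gap using
    O(log(upper / delta)) oracle calls.  `oracle(t)` is a hypothetical
    threshold oracle returning True iff the gap is at least t."""
    lo, hi = 0.0, upper
    while hi - lo > delta:
        mid = 0.5 * (lo + hi)
        if oracle(mid):
            lo = mid
        else:
            hi = mid
    return lo  # the true gap lies in [lo, lo + delta)

# usage: a hidden gap of 0.37 recovered to within delta = 0.01
true_gap = 0.37
est = estimate_gap(lambda t: true_gap >= t, upper=1.0, delta=0.01)
assert 0.0 <= true_gap - est < 0.01
```

With $\delta\in 1/\textup{poly}(n)$ and a polynomially bounded operator norm, this is $O(\log n)$ queries per term, as stated.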
Next, following~\cite{A14}, let $T:= \sum_{y_1 ... y_m} \bigotimes_{i=1}^m \ketbra{y_i}{y_i}\in\lin{\spa{Y}}$, where we sum over all query strings $y_1...y_m$ which cause $U$ to output $0$. Unlike~\cite{A14}, as done in Lemma~\ref{l:amb}, we apply Kitaev's unary encoding trick~\cite{KSV02} and implicitly encode the query strings in $T$ in unary. (We remark the term $H_{\rm stab}$ contained in $H$ will enforce the correct unary encoding in register $\spa{X}$). Finally, introduce a single-qubit register $\spa{B}$, and define
\[
H_{\rm final} := I_B \otimes H_{\spa{X},\spa{Y}} + 4\epsilon \ketbra{0}{0}_B \otimes T_{\spa{X}}\otimes I_{\spa{Y}}.
\]
The claim now follows via an analysis similar to \cite{A14}. Let $\ket{\psi}_{\spa{X},\spa{Y}}$ denote the unique ground state of $H$, whose $\spa{X}$ register contains the (unary encoding of) a correct query string for $U$. If $U$ accepts, then $\ket{i}_{\spa{B}}\otimes\ket{\psi}_{\spa{X},\spa{Y}}$ for $i\in\set{0,1}$ are degenerate ground states of $H_{\rm final}$, implying $H_{\rm final}$ has no spectral gap. Conversely, if $U$ rejects, observe that the smallest eigenvalue of $H_{\rm final}$ lies in the $\ket{1}_{\spa{B}}$ block of $H_{\rm final}$. This is because $H_{\rm final}$ is block-diagonal with respect to register $\spa{X}$, and we have from the proof of Lemma~\ref{l:amb} that $\lambda(H)< 3\epsilon$. Restricted to this $\ket{1}_{\spa{B}}$ block, the spectral gap of $H_{\rm final}$ is at least $(\epsilon-\delta)/4^m$ by Lemma~\ref{lem:spgap}. Alternatively, restricted to the $\ket{0}_\spa{B}$ block, any correct query string in $\spa{X}$ leads to spectral gap at least $4\epsilon$ (by construction of $T$, since $U$ outputs $0$ in this case), and any incorrect query string in $\spa{X}$ leads to spectral gap at least $(\epsilon-\delta)/4^m$ by Lemma~\ref{lem:spgap}. Hence, $H_{\rm final}$ has an inverse polynomial spectral gap, as desired.
\end{proof}
\section{Conclusions and open questions}\label{scn:conclusions}
We have studied the complexity of physical problems involving local Hamiltonians beyond the paradigm of estimating ground state energies. In this setting, we showed that measuring even a $1$-local observable against a $5$-local Hamiltonian's ground state (i.e. \prob{APX-SIM}) is $\class{P}^{\class{QMA}[\class{log}]}$-complete, and so is ``slightly harder'' than QMA. Similarly, we showed that estimating a two-point correlation function (i.e. \prob{APX-2-CORR}) is $\class{P}^{\class{QMA}[\class{log}]}$-complete. We upper bounded the complexity of $\class{P}^{\class{QMA}[\class{log}]}$ by showing it is contained in PP. Finally, we built on an approach of Ambainis~\cite{A14} to show $\class{P}^{\class{UQMA}[\class{log}]}$-hardness under Turing reductions of determining the spectral gap of a local Hamiltonian.
Although we resolve one of the open questions from~\cite{A14}, there are others we leave open, along with some new ones. Do our results for \prob{APX-SIM}\ and \prob{APX-2-CORR}\ still hold for $2$-local Hamiltonians, or (say) local Hamiltonians on a 2D lattice? Do such results also hold for specific Hamiltonian models of interest, such as the Heisenberg anti-ferromagnet on spin-$1/2$ particles? For example, when large coefficients for each local constraint are allowed, determining the ground state of the latter is QMA-complete~\cite{CM13,PM15}. Can \prob{SPECTRAL-GAP}\ be shown to be either $\class{P}^{\class{UQMA}[\class{log}]}$-complete or $\class{P}^{\class{QMA}[\class{log}]}$-complete (recall \prob{SPECTRAL-GAP}\ is in $\class{P}^{\class{QMA}[\class{log}]}$, and \cite{A14} and our work together show $\class{P}^{\class{UQMA}[\class{log}]}$-hardness)? What is the relationship between $\class{P}^{\class{QMA}[\class{log}]}$ and $\class{P}^{\class{UQMA}[\class{log}]}$? Finally, exploring the landscape of quantum Hamiltonian complexity beyond the confines of QMA has helped to characterize the complexity of physical tasks beyond estimating ground state energies --- what other relevant tasks are complete for $\class{P}^{\class{QMA}[\class{log}]}$, or for other classes beyond QMA?
\section*{Acknowledgements}
We thank Xiaodi Wu for stimulating discussions which helped motivate this project, including the suggestion to think about estimating two-point correlation functions (which arose via discussions with Aram Harrow, whom we also thank). We also thank Andris Ambainis and Norbert Schuch for helpful discussions, and remark that they independently conceived of some of the ideas behind Lemma~\ref{l:amb} and Theorem~\ref{thm:main1}, respectively (private communication). Part of this work was completed while SG was supported by a Government of Canada NSERC Banting Postdoctoral Fellowship and the Simons Institute for the Theory of Computing at UC Berkeley. SG acknowledges support from NSF grant CCF-1526189.
\bibliographystyle{alpha}
\section{Introduction}
GRO\,J1008$-$57\ is a transient high-mass X-ray binary (HMXB) system with a neutron
star primary and a Be companion. It was discovered by the Burst and
Transient Source Experiment aboard the \textit{Compton Gamma-Ray
Observatory} during a 1.4 Crab giant outburst in July 1993
\citep{Stollberg:93:GROJ1008Discovery,Wilson:1994:GROJ1008Discovery}.
Optical followup identified its Be-type companion and suggested a distance
to the source of 5\,kpc \citep{Coe:94:GROJ1008OptCounterpart}.
Like other Be/X-ray binaries (Be/XRBs),
GRO\,J1008$-$57\ exhibits regular outbursts (Type I) due to accretion
transfers during periastron passages as well as irregular giant (Type II)
outbursts
\citep[for a recent review of Be/XRB systems, see][]{Reig:2011:BeXRBReview}.
Its Type I outbursts occur predictably at the 249.48 day orbital period
\citep{Kuhnel:2013:GROJ1008,Levine:06:RXTEPeriodicities}.
\citet{Kuhnel:2013:GROJ1008} found that the spectra of GRO\,J1008$-$57\ during Type I
outbursts are similarly
regular: the continuum spectrum consists of an exponentially cutoff power-law
and a low-energy black body
component whose properties correlate strongly with source flux.
Accreting pulsars, of which Be/XRBs are a subclass,
characteristically exhibit cyclotron resonant scattering
features (CRSFs) in the hard X-ray band
due to Compton scattering off of electrons with
orbits quantized by the $\sim10^{12}$\,G
magnetic field of the neutron star.
The observed line energy provides a direct probe of the magnetic field
strength, with $E_{\rm cyc} = 11.6 B_{12}/(1+z)$~keV, where $B_{12}$ is
the magnetic field strength in units of 10$^{12}$\,G and $z$ is the
gravitational redshift at the emission radius \citep{Canuto:77:CRSF12B12}.
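For concreteness, inverting this relation shows why an 88\,keV fundamental would imply a near-record field. The snippet below assumes a representative neutron-star surface redshift $z\approx0.3$ (an illustrative value, not one fixed by the text).

```python
def b_field_from_crsf(e_cyc_kev, z=0.3):
    """Field strength in units of 1e12 G implied by a CRSF at e_cyc_kev,
    inverting E_cyc = 11.6 * B_12 / (1 + z) keV.  z = 0.3 is an assumed,
    representative neutron-star surface redshift."""
    return e_cyc_kev * (1.0 + z) / 11.6

# an 88 keV fundamental would imply a field near 1e13 G,
# while a ~78 keV line implies roughly 9e12 G
assert b_field_from_crsf(88.0) > 9.0
```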
Based on \textit{CGRO}/OSSE spectra, \citet{Grove:95:GROJ1008CRSF} and \citet{Shrader:99:GROJ1008CRSF} each reported
indications for a possible CRSF at $\sim$88\,keV at low significance
($\sim2\sigma$) for GRO\,J1008$-$57. Their data
did not provide energy coverage below 50\,keV
to search for a lower-energy fundamental CRSF at $\sim45$\,keV. If the
88\,keV feature were confirmed as the fundamental, it would imply that
GRO\,J1008$-$57\ has a
magnetic field strength near 10$^{13}$\,G, the highest of any known accreting
pulsar\footnote{\citet{LaBarbera:01:LMCX4Cyc} reported an extremely broad
CRSF centered at 100\,keV for LMC X-4, but these measurements were not
confirmed by \textit{INTEGRAL} \citep{Tsygankov:05:LMCX4NoCyc}.} \citep[e.g.,][]{Caballero:12:XRayPulsarReview}.
Subsequent modeling of data taken over a broader energy band with
\textit{RXTE}, \textit{INTEGRAL}, and \textit{Suzaku}\ did not reveal
a lower-energy fundamental line in the 40--50\,keV region
\citep{Coe:07:GROJ1008Disk,Kuhnel:2013:GROJ1008}, and
detection of the 88\,keV CRSF remained marginal.
\citet{tmp_Wang:14:GROJ1008_INTEGRAL} reported a $\sim3\sigma$ detection
of a CRSF at 74\,keV in a 2009 outburst with \textit{INTEGRAL}.
The regular GRO\,J1008$-$57\ Type I outburst of September 2012 was followed by several
months of irregular flaring before the source brightened into a
giant outburst in November 2012. The increased flux triggered
\textit{MAXI} on November 9
\citep{Nakajima:2012:GROJ1008Brightening} and \textit{Swift}-BAT on
November 13 \citep{Krimm:2012:GROJ1008Brightening}. Peak flux levels reached
1 Crab in the next week, providing an opportunity to obtain high-statistics
observations of the system in outburst. \textit{Suzaku}\ executed a
Target-of-Opportunity (ToO) observation on November 20 and reported
a detection of a cyclotron line at $E_{\rm cyc} =$74--80\,keV, with the
exact energy depending on the continuum modeling
\citep{Yamamoto:2012:GROSuzakuCyc,tmp_Yamamoto:14:GROCyc}.
Thanks to its focusing hard X-ray telescopes,
\textit{NuSTAR}\ \citep{Harrison:2013:NuSTAR} provides unprecedented sensitivity
in the broad 3--79\,keV band. \textit{NuSTAR}'s continuous energy coverage
removes a major source of systematic errors when fitting broad-band models,
while the large effective area and lack of pile-up
enables high-statistics time-resolved
spectroscopy for bright sources. \textit{NuSTAR}\ is capable of executing
ToO observations within 24\,hours of trigger and is thus an ideal
instrument with which to study cyclotron lines across a wide range of
magnetic field strengths in neutron star binary systems
\citep[e.g.,][]{Fuerst:13:HerX1,Fuerst:14:VelaX1}. \textit{NuSTAR}\ observed GRO\,J1008$-$57\
on November 30, shortly after the peak of the outburst (Figure
\ref{fig:lc}).
\begin{figure}
\includegraphics[width=\columnwidth]{batlc.pdf}
\caption{
\textit{Swift}-BAT light curve of the giant outburst of GRO\,J1008$-$57\ with the \textit{NuSTAR}\ and
\textit{Suzaku}\ observation times marked. The BAT count rate is in units of
counts\,cm$^{-2}$\,s$^{-1}$.
\label{fig:lc}}
\end{figure}
In this paper we combine \textit{NuSTAR}, \textit{Swift}\ \citep{Gehrels:04:Swift}, and
\textit{Suzaku}\ \citep{Mitsuda:07:Suzaku} observations of the November 2012 giant
outburst in order to obtain the best constraints on the existence
of the putative cyclotron line. \S\ref{sec:observations} describes the
observations and data reduction. In \S\ref{sec:fits}, we perform a series
of spectral fits of the \textit{NuSTAR}, \textit{Suzaku}, and \textit{Swift}\ data. We fit
continuum models (\S\ref{sec:cont_fits}) as well as the previously reported
CRSF (\S\ref{sec:cyc_line}) to the data. Monte Carlo tests confirm the
significance of the feature. We perform searches for generic CRSFs at
lower energies in both the time-integrated (\S\ref{sec:fundamental_search})
and phase-resolved data (\S\ref{sec:phase_resolved}). We conclude in
\S\ref{sec:discussion}.
\section{Observations} \label{sec:observations}
\textit{NuSTAR}\ performed a ToO observation of GRO\,J1008$-$57\ beginning at
UTC 2012-11-30 8:41:07 and ending at UTC 2012-11-30 17:31:07.
The total on-source observation time was 12.4~ksec after excluding
occultation intervals and South Atlantic Anomaly passages.
We processed the data with HEASOFT 6.15 and
the \textit{NuSTAR}\ Data Analysis Software (NuSTARDAS) v.\,1.3.0 using CALDB
version 20131223. We extracted source counts from circular regions with
4.5\,arcmin radius from both \textit{NuSTAR}\ modules. Because of the brightness of the
source, flux from the PSF wings was present over most of the focal plane,
preventing extraction of a representative background region. Instead, we
scaled the background observed during deep pointings on the
Extended \textit{Chandra} Deep Field South region
obtained immediately after the GRO\,J1008$-$57\
observations \citep[e.g.,][]{DelMoro:14:NuSTARECDFS}.
The background was selected from the \textit{NuSTAR}\ orbital phases matching the GRO\,J1008$-$57\
observation and was extracted from the same detector region as the source.
The background is negligible over most of the \textit{NuSTAR}\ band; it only reaches
10\% of the source count rate at 60\,keV, and is 30--60\% of the source
rate in the 70--78\,keV range.
\textit{Swift}\ obtained a 2.3\,ksec snapshot of GRO\,J1008$-$57\ during the \textit{NuSTAR}\ observation
beginning at UTC 2012-11-30 11:09:25.
We reduced the \textit{Swift}-X-Ray Telescope \citep[XRT;][]{Burrows:05:XRT}
Windowed Timing mode data using standard
procedures in HEASOFT 6.13 and CALDB version 20120830.
\textit{Suzaku}\ observed GRO\,J1008$-$57\ earlier in its outburst beginning at
UTC 2012-11-20 14:44:31; see \citet{tmp_Yamamoto:14:GROCyc} for an
independent analysis of these data.
The exposure time was 50.4\,ksec with
the Hard X-ray Detector
\citep[HXD;][]{Takahashi:07:HXD}.
The X-ray Imaging Spectrometer \citep[XIS;][]{Koyama:07:XIS}
observed the source in burst mode, resulting in an exposure
time of 9.1\,ksec.
We reduced data from XIS
modules 0, 1, and 3 using standard procedures in
HEASOFT 6.13 and CALDB version 20130724.
Response files were created using the FTOOL task
\texttt{xisresp} with the medium option, selecting the default binning.
We used extraction regions with
80 arcsec radius and excluded the inner parts of the PSF, roughly following the
5\% pile-up contours. Pile-up was estimated using the \texttt{pileest}
routine, after correcting for residual attitude wobble using
\texttt{aeattcor2}. We combined the $3\times3$ and $5\times5$ editing modes
where available into one spectrum using \texttt{XSELECT}.
We reduced data from the HXD
with the standard pipeline using calibration files as published
with HXD CALDB 20110913. Spectra were extracted using the tools
\texttt{hxdpinxbpi}
and \texttt{hxdgsoxbpi} for the PIN diodes and GSO scintillator,
respectively. We obtained the tuned
background models from the \textit{Suzaku}\
website\footnote{\url{ftp://legacy.gsfc.nasa.gov/suzaku/data/background/pinnxb_ver2.0_tuned/}
and \url{ftp://legacy.gsfc.nasa.gov/suzaku/data/background/gsonxb_ver2.6/
}}, as well as the recommended
additional ARF for the GSO.
\section{Spectral Fitting} \label{sec:fits}
We fit the data using the \textit{Interactive Spectral Interpretation
System} \citep[ISIS;][]{citeisis_joern}
v1.6.2-19. For all instruments except for the
\textit{Suzaku}\ GSO data (for which the binning scheme was determined by the
background modeling), we rebinned the data to $\sim$\nicefrac{1}{3} of the
FWHM of the energy resolution to avoid oversampling the intrinsic detector
resolution. We minimized $\chi^2$ in our fits to the data.
The high source flux highlights systematic uncertainties in the response
matrices, so we exclude some regions from spectral fits.
We fit the \textit{NuSTAR}\ data in the 5--78\,keV range. The \textit{NuSTAR}\ response
falls off sharply beginning around 78\,keV, so this upper bound minimizes the
effect of response modeling
uncertainties on our cyclotron line fits. The \textit{NuSTAR}\
data showed residual deviations in the 3--5\,keV range when fit with data from
\textit{Swift}\ and \textit{Suzaku}, so due to the unusual brightness of the source
we omit this region to avoid biasing the fit.
We also omit the \textit{NuSTAR}\ data from 68--70\,keV, which is near the
tungsten K-edge and has a known response feature which could bias our
cyclotron line searches.
Similarly, we omit the \textit{Swift}\ data in the 0.2--1\,keV range and above 9\,keV
due to residual
features not seen in the XIS data. We also apply a 3\% systematic error
per spectral bin. Finally, we fit the \textit{Suzaku}\ XIS data in
the bands suggested by \citet{Nowak:11:CygX1}: 0.8--1.72\,keV,
1.88--2.19\,keV, and 2.37--7.5\,keV. We fit the PIN data in the
20--70\,keV band and the GSO data in the 60--120\,keV band.
\subsection{Continuum Fitting} \label{sec:cont_fits}
We fit two models frequently used in modeling accreting pulsar
spectra to the time-integrated continuum spectra:
a powerlaw with a high-energy cutoff, and an \texttt{npex} model consisting
of two powerlaws with negative and positive spectral indices and an
exponential cutoff \citep{Makishima:99:NPEX}.
We also included a Gaussian iron line and a low-energy black body component.
For fits including data from \textit{Suzaku}\ XIS, a second Gaussian component was
needed to adequately fit the iron line complex.
We used an updated version of the \citet{Wilms:2000:wilmAbundances}
absorption model (\texttt{tbnew}) as a neutral absorber
with \texttt{wilm} abundances \citep{Wilms:2000:wilmAbundances} and
\texttt{vern} cross-sections \citep{Verner:96:vernCrossSections}.
For the powerlaw with high-energy cutoff, we
removed residuals due to the discontinuity at the cutoff energy with a
Gaussian absorber tied to the cutoff energy \citep[e.g.,][and references
therein]{Coburn:02:AccretingPulsars}. We allowed the normalizations to
vary between all instruments being fit.
In contrast to the fits to Type~I bursts reported by
\citet{Kuhnel:2013:GROJ1008},
we found the \texttt{npex} model provides a better fit for all combinations
of instruments despite having fewer free parameters, so we restrict our
attention to this model for further analysis.
\citet{tmp_Yamamoto:14:GROCyc} similarly found that the \texttt{npex} model
provided the best fit to the \textit{Suzaku}\ data from this giant outburst.
Table \ref{tab:npex_fits}
provides the best-fit values for the time-integrated continuum parameters, and
Figures \ref{fig:spectra_n}--\ref{fig:spectra_nws}
show the best fits.
\begin{figure}
\includegraphics[width=\columnwidth]{spectra_n.pdf}
\caption{
Panel a) Count spectrum and \texttt{npex} model fit to the \textit{NuSTAR}\ data.
Panel b) Residual plot for the \texttt{npex} fit.
Panel c) Residual plot for a \texttt{npex} fit with \texttt{cyclabs}
component.
An arrow in panel b shows the centroid of the CRSF fit in panel c.
\label{fig:spectra_n}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{spectra_nw.pdf}
\caption{
Joint \textit{NuSTAR}--\textit{Swift}-XRT fit. \textit{NuSTAR}\ data are in blue, XRT data are in
green. Panels as in Figure \ref{fig:spectra_n}.
\label{fig:spectra_nw}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{spectra_s.pdf}
\caption{
\textit{Suzaku}-only fit. XIS data are red, pink, and purple, PIN data are
yellow, and GSO data are orange. Panels as in Figure \ref{fig:spectra_n}.
\label{fig:spectra_s}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{spectra_nws.pdf}
\caption{
Joint \textit{NuSTAR}--\textit{Swift}--\textit{Suzaku}\ fit. Panels as in Figure \ref{fig:spectra_n}.
\label{fig:spectra_nws}}
\end{figure}
The brightness of the source highlights systematic effects in joint fits
between multiple instruments, producing poor goodness of fit. Only the
\textit{NuSTAR}-only fit has a reasonable goodness of fit, at $\chi^2_\nu = 1.18$
for 536 degrees of freedom.
There is substantial disagreement
between the instruments below 10\,keV (e.g., Figure \ref{fig:spectra_nws}).
The \textit{NuSTAR}\ and \textit{Suzaku}\ observations are not simultaneous, and so some
spectral evolution may have occurred between the two epochs. However, there
is also
disagreement even among the three XIS modules (Figure \ref{fig:spectra_s}).
This disagreement at low energies, driven primarily by XIS1,
leads to differences in the best-fit blackbody
temperature and power-law indices (Table \ref{tab:npex_fits}).
These discrepancies were also noted in this dataset
by \citet{Kuhnel:2013:GROJ1008} and \citet{tmp_Yamamoto:14:GROCyc}; these
authors elected to excise several energy ranges from the XIS
backside-illuminated detectors or exclude the data entirely.
The fit for the blackbody temperature shows multiple minima
in the \textit{Suzaku}-only
fit, for example, with the $\sim3$\,keV temperature preferred to the
$0.4$\,keV temperature suggested by the joint fit to all instruments.
Similarly, our coarser binning and inclusion of all three XIS modules
results in better fits to the iron line complex with two broadened
Gaussians rather than the narrow 6.4, 6.67, and 7\,keV lines fit by
\citet{Kuhnel:2013:GROJ1008} and \citet{tmp_Yamamoto:14:GROCyc}.
We are primarily interested in the
spectral behavior at high energies, which is well above the folding energy
$E_{\rm fold}$ and hence relatively insensitive to these parameters.
\subsection{Evidence for a Cyclotron Line} \label{sec:cyc_line}
Next, we fit the data using the above
continuum model and a multiplicative cyclotron
scattering feature using a pseudo-Lorentzian optical depth profile
(the XSPEC \texttt{cyclabs} model, \citealp{Mihara:90:cyclabs}).
We initially confined
our search to line centers above 50\,keV, with fits initialized near the
75\,keV value reported by \citet{Yamamoto:2012:GROSuzakuCyc}.
Table \ref{tab:npex88_fits} reports the best-fit parameters, while Figures
\ref{fig:spectra_n}--\ref{fig:spectra_nws} compare
the residuals to fits without a cyclotron line.
Fits to the \textit{NuSTAR}\ data alone do not provide strong constraints on the
CRSF parameters, as there are degeneracies with the continuum modeling
because the cyclotron line lies at the upper edge of the \textit{NuSTAR}\ bandpass.
However, there are clear residuals in the \textit{NuSTAR}\ data above 70\,keV, and
\textit{NuSTAR}-only fits are significantly improved by the CRSF.
The best-fit \textit{Suzaku}\ CRSF parameters are a reasonable match to those reported in
\citet{tmp_Yamamoto:14:GROCyc} given the minor differences in analysis
methods.
Combining the \textit{NuSTAR}\ data with \textit{Suzaku}\ provides an independent confirmation
of the line.
In the joint fit, the line centroid moves to 78\,keV and the best-fit
width is 11\,keV, matching the values obtained by
\citet{tmp_Yamamoto:14:GROCyc} in their \texttt{npex} fits including XIS.
The GSO
cross-normalization changes from 1.19 relative to \textit{NuSTAR}\ in the
\texttt{npex} fit to 1.38 in the \texttt{npex} with cyclotron feature fit.
All other cross-normalization constants remain constant within errors
(Table \ref{tab:norms}). The cyclotron-line fit thus correctly reproduces
the expected agreement between the normalizations of \textit{Suzaku}\ PIN and GSO.
\input{table_norms}
Both the \textit{NuSTAR}\ and \textit{Suzaku}\ data thus independently show evidence for a
CRSF in the 70--80\,keV range.
Because the \textit{NuSTAR}\ data do not cover the
entire CRSF, the joint fit provides the best constraint on the line
parameters, but the parameters are sensitive to the \textit{NuSTAR}\ and \textit{Suzaku}-HXD
cross-calibration.
We assessed the significance of the detections using the method of
posterior predictive p-values \citep[ppp-values,][]{Protassov:02:LRTest}.
Briefly, we simulate many
instances of a model without a \texttt{cyclabs} feature
by folding the spectral model through instrumental
responses; the exposure and binning are matched to the real data, and the
data and background are perturbed to account for counting statistics.
For each simulated dataset, we
then fit the null model and a test model with a \texttt{cyclabs} feature.
For each simulated realization, we determine
$\Delta \chi^2$ between the two models and
compare the distribution of $\Delta \chi^2$ values for the simulations
to the observed value.
If few of the simulated $\Delta \chi^2$ values are as large as observed in
the real data, this provides evidence for the CRSF.
Rather than restricting the simulated model parameters to those of the
best-fit null model, we use the Cholesky method to
draw the simulated parameters from a multivariate
Gaussian derived from the covariance matrix obtained in the fit to the null
model \citep{Hurkett:08:XRTLineSearches}.
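The logic of the ppp-value test can be illustrated with a stdlib-only toy model: Gaussian-distributed spectral bins, a fixed absorption-line profile, and a closed-form refit of a single free line amplitude. This stands in for the full \texttt{cyclabs}/\texttt{npex} fits, which require a real fitting engine.

```python
import math
import random

random.seed(1)
n_bins, n_sims = 64, 2000
# fixed Gaussian absorption-line profile centered on bin 40 (arbitrary toy values)
line = [math.exp(-0.5 * ((k - 40) / 3.0) ** 2) for k in range(n_bins)]
norm = sum(L * L for L in line)

def delta_chi2(resid):
    """chi^2 improvement from adding one free line amplitude: closed form
    for a single linear parameter with unit-variance Gaussian errors."""
    return sum(r * L for r, L in zip(resid, line)) ** 2 / norm

# "observed" residuals: null continuum plus a real absorption line of depth 2
obs = [-2.0 * L + random.gauss(0, 1) for L in line]
obs_dchi2 = delta_chi2(obs)

# posterior predictive distribution: simulate the null model repeatedly,
# refit, and record the chance improvement in chi^2
sims = [delta_chi2([random.gauss(0, 1) for _ in range(n_bins)])
        for _ in range(n_sims)]
ppp = sum(d >= obs_dchi2 for d in sims) / n_sims
```

Under the null, the chance improvement follows a $\chi^2_1$ distribution here; a real line drives the observed $\Delta\chi^2$ into the far tail, so few (or no) simulations exceed it.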
We performed 10,000 simulations of the \textit{NuSTAR}\ data alone as well as the
joint \textit{NuSTAR}, \textit{Swift}, and \textit{Suzaku}\ data. The line energy was allowed to
vary in the 50--100\,keV range and the line width between 1 and 30\,keV. In all
cases, the simulated $\Delta \chi^2$ was less than the value observed in
the real data, providing $>3.9\sigma$ evidence for the existence of
the line in each of the two fits.
In most of the simulated cases, the best-fit depth of the line
is zero, and so the two models are indistinguishable. The largest
deviation in $\chi^2$ was 17.0 (21.3) for the combined datasets (\textit{NuSTAR}\
only), far smaller than the $\Delta \chi^2$ values of 278.5 (62.8) seen in
the real data. Based on the difference between the observed and simulated
$\Delta \chi^2$ distributions, it is clear that the CRSF detection is much
more significant in the joint fit than when using \textit{NuSTAR}\ alone.
Given the distribution of $\Delta \chi^2$ in these simulations,
it would be computationally infeasible to simulate enough realizations to
expect a $\Delta \chi^2$ value near the true value and
obtain a true ppp-value significance.
We can obtain a simple estimate of the significance (and hence the number of
simulations required to obtain that chance deviation) by summing the data
and model counts in the $\pm1\sigma$ energy window around the best-fit
78\,keV cyclotron line.
Dividing the difference between the \texttt{npex} model without the cyclotron line
and the data in this region by the statistical error
allows us to estimate the level of chance
fluctuation needed. The deviations in the
\textit{NuSTAR}\ data (which do not cover the full
cyclotron line) are 1.8$\sigma$ and 2.3$\sigma$ when the modules are
considered independently;
the deviation in the GSO data taken alone is 8.0$\sigma$.
We thus expect the GSO measurement to dominate the fit.
(We do not correct for trials over energy
because the high-energy line was previously reported in other observations.)
If taken at face value, the statistical errors would require more than
$8\times10^{14}$ simulations to achieve deviations in $\chi^2$ comparable to the
observed values.
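As a cross-check of that number, the count of null simulations needed before a chance deviation of a given significance is expected is roughly the inverse of the two-sided Gaussian tail probability:

```python
import math

def sims_needed(n_sigma):
    """Expected number of null simulations before one chance deviation
    at >= n_sigma (inverse of the two-sided Gaussian tail probability)."""
    p = math.erfc(n_sigma / math.sqrt(2.0))
    return 1.0 / p

print(f"{sims_needed(8.0):.1e}")   # ~8e14, matching the estimate in the text
```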
We considered whether systematic calibration uncertainties could be
responsible for the observed feature.
While the method of ppp-values provides a robust assessment of line
significance \citep{Hurkett:08:XRTLineSearches}, it is sensitive to false
positives if systematic errors are present. If a line feature is due to
inaccuracy in the instrumental responses or the modeled background,
ppp-value tests will confirm its statistical significance but not its
physical reality. The calibration of the \textit{NuSTAR}\ responses in the
70--78\,keV range is less certain than at lower energies due to the
increasing faintness of astrophysical calibrators. However, measured deviations
from a fiducial spectrum of the Crab Nebula are $<15$\% from 70--78\,keV
(Madsen et al. 2014, in prep).
Similarly, few-percent deviations of the Crab spectrum have been measured in
\textit{Suzaku}\ GSO spectra near 70\,keV \citep{Yamada:11:GSOCal}. These effects
are not large enough to produce the $\sim$30\% deviation
seen here, so we conclude that the feature is both significant and real.
\subsection{Search for a Lower-Energy Fundamental Line}
\label{sec:fundamental_search}
We searched for a cyclotron line at half the energy of the 78\,keV line
reported in Section \ref{sec:cyc_line}.
The \textit{NuSTAR}\ data enable a more sensitive search than
is possible with PIN: the combined \textit{NuSTAR}\ data have an SNR of 135 per keV at
40\,keV, compared to 60 for PIN. (Data from both instruments
are strongly source-dominated, given the brightness of the outburst.)
No obvious residuals are apparent
in the time-integrated \textit{NuSTAR}\ data near $\sim$38\,keV (Figure
\ref{fig:spectra_n}), consistent with previous non-detections in
time-integrated data. Some residual structure is present in the
\textit{Suzaku}-PIN data below 40\,keV (Figure \ref{fig:spectra_s}). Using PIN
data, \citet{tmp_Yamamoto:14:GROCyc} reported a possible fundamental with
$E_0 = 36.8\spm{1.1}{0.7}$\,keV, optical depth at line center
of $0.06\spm{0.08}{0.03}$, and width
of $11.1\spm{7.2}{10.2}$\,keV in their \textit{Suzaku}-only fit, but concluded it
is not statistically significant. A double-cyclotron-line
\textit{NuSTAR}--\textit{Suzaku}--\textit{Swift}\ joint fit, with the fundamental energy restricted to
half of the 90\% error limits of the 78\,keV line, does fit a line at
39.8\spm{0.5}{1.2}\,keV with depth 1.0\spm{0.7}{0.2} and width
6.0\spm{8.4}{4.5}\,keV. However, the improvement in $\chi^2$ is modest,
only 6.3 for three additional free parameters. Our ppp-value simulations
of the higher energy line found 47 out of 10,000 simulations had $\Delta
\chi^2$ values larger than this, implying $<2.9\sigma$ significance. A
\textit{NuSTAR}-only fit to a line at this position results in a fundamental with
depth consistent with zero. The 90\% CL upper limit on the optical depth
at 39\,keV is 0.04. The possible 39\,keV fundamental fit by the \textit{Suzaku}\
data is thus disfavored by the more sensitive \textit{NuSTAR}\ data. A broader
\textit{NuSTAR}\ search from 34--40\,keV (at half of the line centroid identified by
the independent \textit{NuSTAR}\ and \textit{Suzaku}\ fits) similarly yielded line depths
consistent with zero.
We also performed a generic search for lower-energy lines by stepping a
\texttt{cyclabs} feature through a 2\,keV grid of energies over the
10--60\,keV range.
We used the \textit{NuSTAR}\ data only, as in the joint fit the residuals show
dips (due to response differences highlighted by the brightness of the
source, Figure \ref{fig:spectra_nws}c) not present in the \textit{NuSTAR}-only
fit (Figure \ref{fig:spectra_n}c).
For speed, the continuum parameters were frozen in the initial search.
Only one trial (at 26\,keV) fit a line depth greater than zero, and in
this case the best fit line width was 1\,keV, at the narrowest limit.
The value of $\Delta \chi^2$ is 5.4, less than the largest values
obtained by chance coincidence in the Monte Carlo simulations of the
80\,keV line, and there is a known response calibration feature at this
energy.
Over all energies, the largest 90\% CL upper limit on the optical depth was
0.09 at 52\,keV. Accordingly, we can rule out a lower-energy fundamental with
greater depths in the time-integrated data.
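The 90\% CL upper limits quoted here are conventionally read off a one-parameter $\chi^2$ profile: the line depth is increased from its best-fit value until $\chi^2$ rises by 2.71. A minimal sketch with a purely hypothetical quadratic $\chi^2$ surface (the curvature below is illustrative, not from these fits):

```python
def chi2_profile_upper_limit(chi2_of_depth, best_depth, step=1e-3,
                             delta=2.71):
    """Walk the depth upward from the best fit until chi^2 has risen by
    `delta` (2.71 for a one-sided 90% CL on one parameter)."""
    chi2_min = chi2_of_depth(best_depth)
    d = best_depth
    while chi2_of_depth(d) - chi2_min < delta:
        d += step
    return d

# Hypothetical quadratic chi^2 surface with sigma(depth) = 0.025:
chi2 = lambda d: (d / 0.025) ** 2
print(round(chi2_profile_upper_limit(chi2, 0.0), 3))   # ~ sqrt(2.71) * 0.025
```

In practice the continuum parameters are refit at each step, so the profile is not exactly quadratic.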
\subsection{Phase-Resolved Fits}
\label{sec:phase_resolved}
Because cyclotron line intensities and energies may vary with pulse phase
\citep[e.g.,][]{Fuerst:13:HerX1},
we split the \textit{NuSTAR}\ observation into phase bins of roughly
constant signal-to-noise ratio and conducted spectral fits on each.
We barycentered the \textit{NuSTAR}\ event data with \texttt{barycorr} using the
DE200 solar system reference frame \citep{Standish:82:DE200Ephemeris} and
applied a correction for light-travel time across the binary orbit using the
ephemeris of \citet{Kuhnel:2013:GROJ1008}. Figure \ref{fig:phase_folded}
shows the \textit{NuSTAR}\ data folded at the best-fit spin period of 93.57434\,sec.
\begin{figure}
\includegraphics[width=\columnwidth]{phased_lc_axisedit.pdf}
\caption{\textit{NuSTAR}\ data (3--79\,keV) folded at the 93.57\,sec spin period
(top panel)
and the best-fit phase-resolved \texttt{npex} photon indices (middle
panel) and folding energies (bottom panel).
\label{fig:phase_folded}
}
\end{figure}
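Phase-folding of the barycentered events amounts to reducing each arrival time modulo the spin period and binning the resulting phases; a minimal sketch with synthetic event times (the epoch, exposure, and sinusoidal pulse shape below are illustrative, not the observation's):

```python
import numpy as np

def fold(times, period, t0=0.0, nbins=32):
    """Phase-fold event arrival times and return the binned pulse profile."""
    phase = ((times - t0) / period) % 1.0
    profile, edges = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return profile, 0.5 * (edges[:-1] + edges[1:])

# Synthetic events with a sinusoidal pulse at the GRO J1008-57 spin period:
period = 93.57434                      # s
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 5000.0, 200_000)
keep = rng.uniform(size=t.size) < 0.5 * (1 + np.sin(2 * np.pi * t / period))
profile, centers = fold(t[keep], period)
print(profile.argmax())                # the profile peaks near phase 0.25
```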
The time-resolved data were well-fit by the \texttt{npex} spectral model.
We fixed the positive power-law index to 2, consistent with the values
obtained in the time-integrated fits.
The photon indices show a correlation with intensity (Figure
\ref{fig:phase_folded}), while the folding energy shows a mild secular
increase throughout the pulse.
No obvious deficits are present in the residuals at lower energies.
We attempted to trace the phase evolution of the 80\,keV CRSF. Using four
phase bins, \citet{tmp_Yamamoto:14:GROCyc} found only a slight dependence
of the CRSF energy on pulse phase in the \textit{Suzaku}\ data. We froze the
width and depth of the CRSF to the best-fit values from the joint
\textit{NuSTAR}--\textit{Suzaku}--\textit{Swift}\ fit but left the energy free to vary. However,
the relatively limited \textit{NuSTAR}\ energy coverage of the line does not permit
firm constraints on the line energy.
We also performed a harmonic CRSF fit to the phase-resolved data. We fit
for a fundamental line in the 34--47\,keV range and froze the $\sim$80\,keV
harmonic width and depth to the best-fit time-integrated values. In all
epochs, the line depth was consistent with zero, the linewidth
unconstrained, and the improvement in $\chi^2 < 2$.
The 90\% C.L. line depth upper limits were in the range 0.09--0.3.
We repeated the generic grid search for low-energy CRSFs in the
time-resolved spectra in case phase-dependent intensity was pushing a
fundamental below detectability in the time-integrated fit. As in the
time-integrated case, the additional CRSF fit widths pegged at the
minimum value of 1\,keV, fit depths that were consistent with zero,
and/or were associated
with the small known response feature at 26\,keV.
Over all energies, the largest upper limits on the line depth were
0.2--0.4 and occurred in the 50--60\,keV range.
Accordingly, our phase-resolved fits rule out a phase-dependent fundamental
CRSF below 70\,keV with depth greater than one third of the depth of the
80\,keV CRSF.
\section{Discussion} \label{sec:discussion}
Observations of the November 2012 giant outburst of GRO\,J1008$-$57\ with modern
instruments provide the best available constraints on the magnetic field
strength of this HMXB.
Our spectral fits have confirmed that the previously reported CRSF
is indeed the fundamental for GRO\,J1008$-$57. The best-fit line center for the
combined datasets is
78\spm{3}{2}\,keV. This
matches the CRSF reported by \citet{tmp_Wang:14:GROJ1008_INTEGRAL} using
\textit{INTEGRAL} data
but is lower than the 88$\pm$5\,keV value first reported by
\citet{Grove:95:GROJ1008CRSF}.
This difference is not highly significant, however.
The \textit{Suzaku}\ data provide a better
constraint on the line and a higher-significance detection because the line
centroid is at the upper edge of the \textit{NuSTAR}\ bandpass,
but the \textit{NuSTAR}\ data provide an
independent confirmation of the detection.
The higher sensitivity provided by \textit{NuSTAR}\ below 79\,keV enabled us to
perform the most constraining search for a fundamental CRSF at lower
energies.
Our \textit{NuSTAR}\ double-cyclotron line fits require
the ratio of the optical depths of the fundamental to the harmonic to be
less than 5\% in the time-integrated data.
This is less than the most extreme ratios observed
in other accreting pulsars, including Vela X-1 ($\sim$10\%;
e.g., \citealp{Fuerst:14:VelaX1}) and 4U~0115$+$634 ($\geq11\%$,
e.g., \citealp{Muller:13:4U0115}).
In both of those systems, however, phase-resolved fitting reveals intervals
of greater fundamental strength. While our phase-resolved limits on the
fundamental/harmonic optical depth ratios are less stringent ($<$11--37\%) than the
time-integrated constraint, we do not
detect a significant fundamental at any phase.
Photon spawning can weaken the strength of the observed fundamental:
an electron that scatters into an excited Landau state
will release one or more secondary photons with energy comparable to the
line energy that may escape to the observer.
Calculations suggest this process can
replace as much as 75\% of the flux in the fundamental CRSF
\citep{Schonherr:07:CRSF}.
It is thus difficult to account for the low time-integrated fundamental-to-harmonic
depth ratio we observe with spawning alone.
Moreover, spawning is influenced by the hardness
of the spectral continuum, with harder spectra producing weaker fundamental
lines with more pronounced emission wings \citep{Schonherr:07:CRSF}.
Our nondetection of a low-energy fundamental in
the time-resolved fits despite the phase variation in the continuum
spectrum (Figure \ref{fig:phase_folded}) thus argues against such masking.
We therefore conclude that the 78\,keV CRSF is likely the fundamental.
The inferred magnetic field strength for GRO\,J1008$-$57\ is
$6.7\times10^{12} (1+z)$\,G, the highest among known accreting pulsars.
\acknowledgments
This work was supported under NASA Contract No. NNG08FD60C and uses
data from the \textit{NuSTAR}\ mission, a project led by the California Institute of
Technology, managed by the Jet Propulsion Laboratory, and funded by the
National Aeronautics and Space Administration. We thank the \textit{NuSTAR}\
Operations team for executing the ToO observation and the Software and
Calibration teams for analysis support.
This research has used the
\textit{NuSTAR}\ Data Analysis Software (NuSTARDAS) jointly developed by the ASI
Science Data Center (ASDC, Italy) and the California Institute of
Technology (USA).
JW acknowledges partial support from Deutsches Zentrum f\"ur Luft- und Raumfahrt
grant 50\,OR\,1113.
{\it Facilities:} \facility{NuSTAR}, \facility{Swift}, \facility{Suzaku}.
\bibliographystyle{yahapj}
\section{Neutrino -- Two-Photon Vertex in Vacuum}
The electroweak process of the transition of two photons into a
neutrino -- antineutrino pair is described by two Feynman diagrams with
charged fermions in the loop, differing by the photon interchange.
\def\markphotonatomur{\begin{picture}(2,2)(0,0)
\put(2,1){\oval(2,2)[tl]}
\put(0,1){\oval(2,2)[br]}
\end{picture}
}
\def\markphotonatomdr{\begin{picture}(2,2)(0,0)
\put(1,0){\oval(2,2)[bl]}
\put(1,-2){\oval(2,2)[tr]}
\end{picture}
}
\def\photonurhalf{\begin{picture}(30,30)(0,0)
\multiput(0,0)(2,2){5}{\markphotonatomur}
\end{picture}
}
\def\photondrhalf{\begin{picture}(30,30)(0,0)
\multiput(0,0)(2,-2){5}{\markphotonatomdr}
\end{picture}
}
\def\photonatomright{\begin{picture}(3,1.5)(0,0)
\put(0,-0.75){\tencircw \symbol{2}}
\put(1.5,-0.75){\tencircw \symbol{1}}
\put(1.5,0.75){\tencircw \symbol{3}}
\put(3,0.75){\tencircw \symbol{0}}
\end{picture}
}
\def\photonright{\begin{picture}(30,1.5)(0,0)
\multiput(0,0)(3,0){10}{\photonatomright}
\end{picture}
}
\def\photonrighthalf{\begin{picture}(30,1.5)(0,0)
\multiput(0,0)(3,0){5}{\photonatomright}
\end{picture}
}
\begin{minipage}[t]{100mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\vspace*{35mm}
\begin{picture}(100.00,7.00)(-20,10)
\put(30.00,38.00){\circle{16.00}}
\put(37.0,38.00){\circle*{2.3}}
\put(24.0,34.00){\circle*{1.0}}
\put(24.0,42.00){\circle*{1.0}}
\put(37.0,18.00){\makebox(0,0)[cc]{Fig. 1}}
\put(7.00,50.00){\makebox(0,0)[cc]{\large $\gamma(k_1)$}}
\put(7.00,26.00){\makebox(0,0)[cc]{\large $\gamma(k_2)$}}
\put(14.0,24.00){\photonurhalf}
\put(14.0,52.00){\photondrhalf}
\put(52.00,48.00){\makebox(0,0)[cc]{$\nu_i(p_1)$}}
\put(52.00,28.00){\makebox(0,0)[cc]{$\bar \nu_i(p_2)$}}
\put(37.0,38.50){\line(3,1){16.00}}
\put(37.0,38.50){\vector(3,1){10.00}}
\put(37.0,37.50){\line(3,-1){16.00}}
\put(37.0,37.50){\vector(3,-1){10.00}}
\put(76.00,38.00){\makebox(0,0)[cc]
{$+ \, (\gamma_1 \leftrightarrow \gamma_2)$}}
\end{picture}
\end{minipage}
\noindent
Here the bold circle corresponds to the weak effective interaction
carried by both the $W$ and $Z$ bosons.
The main interest in this process lies in astrophysics, where it could provide
an additional channel of neutrino creation in a stellar thermal bath.
The most general
amplitude can be written in the following form
\begin{eqnarray}
{\cal M} = {\alpha \over \pi} \, {G_F \over \sqrt{2}}\,
\left [\bar\nu_i (p_1) \,
T_{\alpha \beta \mu \nu} \,
\nu_i (- p_2) \right ]\,
f^{\alpha \beta}_1 f^{\mu \nu}_2,
\label{Mgen}
\end{eqnarray}
\noindent
where $i$ is the neutrino flavor, $i = e, \mu, \tau$, and the tensors
$f^{\alpha \beta} = k^\alpha \varepsilon^\beta - k^\beta \varepsilon^\alpha$ are the photon
field tensors in the momentum space.
The tensor $T_{\alpha \beta \mu \nu}$, which is to be constructed, has the
physical dimension of inverse mass.
The earliest conclusion on this amplitude, the so-called Gell-Mann
theorem~\cite{Gell-M}, states that for massless neutrinos, with both photons
on-shell and in the local limit of the weak interaction, the amplitude
is exactly zero.
Indeed, in the center-of-mass frame the left-handed neutrino and right-handed
antineutrino carry a total angular momentum equal to unity in the local limit
of the weak interaction.
However, two photons cannot exist
in a state with total angular momentum equal to unity
(the Landau--Yang theorem~\cite{Lan,Yan}).
In other words, there are no covariants from which
the tensor $T_{\alpha \beta \mu \nu}$ could be constructed.
Any deviation from the conditions of the Gell-Mann theorem gives rise to a
non-zero amplitude. In the case of a massive neutrino the process is
allowed~\cite{Crew,Dode} due to the change of the neutrino helicity.
The electron, as the lightest charged fermion, gives the main contribution to
the amplitude. To illustrate the Lorentz structure, we present here
the tensor $T_{\alpha \beta \mu \nu}$
for the case of low photon energies $(\omega \ll m_e)$
\begin{eqnarray}
T_{\alpha \beta \mu \nu} = {{i g_A}\over {12}} \,
{m_{\nu_i} \over m^2_e} \, \gamma_5 \,
\varepsilon_{\alpha \beta \mu \nu}.
\label{Tmn}
\end{eqnarray}
\noindent
Here $g_A$ is the axial-vector constant of the effective electron -- neutrino
interaction in the standard model.
In the case of non-locality of the weak interaction
the neutrino momenta become separated and the following structure
arises~\cite{Levi,Dicu}
\begin{eqnarray}
T_{\alpha \beta \mu \nu} = 2 i \left (1 + {4 \over 3} \ln{m^2_W \over m^2_e}
\right ) {1 \over m^2_W}
\left [\gamma_\alpha \, g_{\beta \mu} (p_1 - p_2)_\nu +
\gamma_\mu \, g_{\nu \alpha} (p_1 - p_2)_\beta \right ]
(1 - \gamma_5).
\label{Tnl}
\end{eqnarray}
\noindent
It is seen that in both cases the amplitude is suppressed, either by the
small neutrino mass in the numerator or by the large $W$-boson mass in the
denominator, so the contribution of this channel to the stellar
energy loss appears to be small.
One more exotic case of a non-zero amplitude is realized for off-shell
photons~\cite{Cung,KM93}, $k_\mu f^{\mu \nu} \ne 0$, when the photon momenta
can be included in
the tensor $T_{\alpha \beta \mu \nu}$
\begin{eqnarray}
T_{\alpha \beta \mu \nu} = - {{i g_A}\over {12}} \,
{1 \over m^2_e} \, \gamma^\rho (1 - \gamma_5)
\left (\varepsilon_{\rho \alpha \mu \nu} \, k_{1 \beta}
\;
+ \varepsilon_{\rho \mu \alpha \beta} \, k_{2 \nu}
\right ).
\label{Trp}
\end{eqnarray}
\section{
What Can the Magnetic Field Change?}
Regarding possible astrophysical applications of the process discussed,
it should be noted that a strong
electromagnetic field can also influence the process and could allow it.
An external electromagnetic field opens a possibility to construct the
amplitude because a new covariant, the electromagnetic field tensor
$F_{\mu \nu}$, arises. However, the dimensionless parameter which appears in
the amplitude is $e F_{\mu \nu}/m_e^2$, which is a suppressing factor
for fields smaller than the critical Schwinger value of the electromagnetic
field, $B_e = m^2_e/e \simeq 4.41 \cdot 10^{13}\ $ G.
So, for the process to be significant, fields of the order of or larger
than the critical value are needed.
It should be mentioned that macroscopic electric
fields above the critical value are impossible because $e^+ e^-$
pair creation short-circuits the vacuum. Only microscopic
fields are possible, in the close vicinity of heavy nuclei, in a region much
smaller than the electron Compton wavelength. On the other hand, the vacuum is
stable in a magnetic field above the Schwinger value. The maximal magnetic
field strength achieved in a laboratory is no more than $10^9$ G.
However, it is now believed that magnetic fields stronger than the
Schwinger value ($10^{14}, \ 10^{15}$ G or more)
exist in astrophysical objects at the earlier stage of
their evolution.
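The critical value quoted above follows from $B_e = m_e^2 c^3/(e\hbar)$ in CGS units (the text's $m_e^2/e$ is the same expression in natural units); a quick numerical check with rounded constants:

```python
# Critical (Schwinger) magnetic field B_e = m_e^2 c^3 / (e * hbar) in Gauss.
m_e  = 9.1094e-28     # electron mass, g
c    = 2.9979e10      # speed of light, cm/s
e    = 4.8032e-10     # elementary charge, esu
hbar = 1.0546e-27     # reduced Planck constant, erg s

B_e = m_e**2 * c**3 / (e * hbar)
print(f"{B_e:.3e} G")   # ~4.41e13 G, as quoted in the text
```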
Our purpose is to analyse the photon--neutrino process in a strong magnetic
field. In the Feynman diagrams of Fig.2, describing the process,
the double lines correspond to the
electron propagators constructed on the base of the exact solutions of the
Dirac equation in a magnetic field.
\begin{minipage}[t]{100mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\vspace*{35mm}
\begin{picture}(100.00,7.00)(-20,10)
\put(30.00,38.00){\circle{13.00}}
\put(36.6,38.00){\circle*{1.3}}
\put(24.30,34.30){\circle*{1.0}}
\put(24.30,41.70){\circle*{1.0}}
\put(30.00,38.00){\circle{16.00}}
\put(34.50,38.00){\makebox(0,0)[cc]{$x$}}
\put(26.00,36.00){\makebox(0,0)[cc]{$y$}}
\put(26.00,40.50){\makebox(0,0)[cc]{$z$}}
\put(37.0,18.00){\makebox(0,0)[cc]{Fig. 2}}
\put(9.00,50.00){\makebox(0,0)[cc]{\large $\gamma$}}
\put(9.00,26.00){\makebox(0,0)[cc]{\large $\gamma$}}
\put(14.0,24.00){\photonurhalf}
\put(14.0,52.00){\photondrhalf}
\put(52.00,48.00){\makebox(0,0)[cc]{$\nu$}}
\put(52.00,28.00){\makebox(0,0)[cc]{$\bar \nu$}}
\put(37.0,38.00){\line(3,1){16.00}}
\put(37.0,38.00){\vector(3,1){10.00}}
\put(37.0,38.00){\line(3,-1){16.00}}
\put(37.0,38.00){\vector(3,-1){10.00}}
\put(76.00,38.00){\makebox(0,0)[cc]
{$+ \, (\gamma_1 \leftrightarrow \gamma_2)$}}
\end{picture}
\end{minipage}
In the recent paper~\cite{Shai} this process was studied in the case
of a relatively small magnetic field and low photon energies, and the
amplitude obtained depends linearly on the field strength.
An attempt at a calculation in a strong field was made earlier~\cite{Losk}
for the case of low photon energies; however,
the amplitude presented in that paper was not gauge invariant.
We use the effective local Lagrangian ($|q^2| \ll m^2_W$) of the
electron -- neutrino interaction in the standard model
\begin{eqnarray}
{\cal L} = \frac{G_F}{\sqrt 2}
\big [ \bar e \gamma_\rho (g_V - g_A \gamma_5) e \big ] \,
\big [ \bar \nu_i \gamma^\rho (1 - \gamma_5) \nu_i \big ], \;
\label{Lef}\\
g_V = \delta_{i e} - \frac{1}{2} + 2 \sin^2 \theta_W, \;
g_A = \delta_{i e} - \frac{1}{2}.
\nonumber
\end{eqnarray}
\noindent
The process amplitude contains two essentially different parts caused by
the vector and axial-vector parts of the electron current in the
Lagrangian~\eq{Lef}. It should be
mentioned that the axial-vector part of the amplitude in the local limit of
the weak interaction contains the triangle Adler anomaly. Fortunately, the
field-induced part of the amplitude is free of the triangle anomaly.
It could be demonstrated by expansion of the amplitude of the process in
Fig.2 over the external field.
\begin{minipage}[t]{100mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\vspace*{35mm}
\begin{picture}(100.00,17.00)(-0,10)
\put(36.0,38.00){\circle*{1.0}}
\put(24.0,32.00){\circle*{1.0}}
\put(24.0,44.00){\circle*{1.0}}
\put(24.0,32.00){\line(0,1){12.00}}
\put(24.0,32.00){\line(2,1){12.00}}
\put(24.0,44.00){\line(2,-1){12.00}}
\put(57.0,13.00){\makebox(0,0)[cc]{Fig. 3}}
\put(14.0,22.00){\photonurhalf}
\put(14.0,54.00){\photondrhalf}
\put(36.0,38.00){\line(4,1){16.00}}
\put(36.0,38.00){\vector(4,1){10.00}}
\put(36.0,38.00){\line(4,-1){16.00}}
\put(36.0,38.00){\vector(4,-1){10.00}}
\put(66.00,38.00){\makebox(0,0)[cc]{$+$}}
\put(76.0,22.00){\photonurhalf}
\put(76.0,54.00){\photondrhalf}
\put(86.0,32.00){\circle*{1.0}}
\put(86.0,44.00){\circle*{1.0}}
\put(98.0,32.00){\circle*{1.0}}
\put(98.0,44.00){\circle*{1.0}}
\put(86.0,32.00){\line(0,1){12.00}}
\put(86.0,32.00){\line(1,0){12.00}}
\put(98.0,44.00){\line(0,-1){12.00}}
\put(98.0,44.00){\line(-1,0){12.00}}
\put(98.0,44.00){\line(2,1){16.00}}
\put(98.0,44.00){\vector(2,1){10.00}}
\put(98.0,44.00){\line(4,1){16.00}}
\put(98.0,44.00){\vector(4,1){10.00}}
\put(110.0,20.00){\makebox(0,0)[cc]{$+$}}
\multiput(98.00,32.00)(4.20,-4.20){3}{\line(1,-1){3.6}}
\put(98.00,32.00){\line(1,-1){2.50}}
\put(101.00,29.00){\line(1,-1){2.50}}
\put(104.00,26.00){\line(1,-1){2.50}}
\put(107.00,23.00){\line(1,-1){2.50}}
\put(120.00,38.00){\makebox(0,0)[cc]{$+ \dots$}}
\end{picture}
\end{minipage}
\noindent
Here the dashed line corresponds to the external field.
It is obvious that only the vacuum part, which is zero, could contain the anomaly.
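For reference, the couplings in the Lagrangian~\eq{Lef} take simple numerical values once $\sin^2 \theta_W$ is fixed; the sketch below assumes $\sin^2 \theta_W = 0.231$, a value not taken from this paper:

```python
# Effective nu-e couplings from the standard-model Lagrangian above.
# Assumed input: sin^2(theta_W) = 0.231 (not specified in the text).
SIN2_THETA_W = 0.231

def couplings(flavor):
    """Return (g_V, g_A) for neutrino flavor 'e', 'mu' or 'tau'."""
    delta_ie = 1.0 if flavor == "e" else 0.0
    g_V = delta_ie - 0.5 + 2.0 * SIN2_THETA_W
    g_A = delta_ie - 0.5
    return g_V, g_A

print(couplings("e"))    # ~(0.962, 0.5): charged-current piece included
print(couplings("mu"))   # ~(-0.038, -0.5): neutral current only
```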
The vector part of the amplitude is interesting by itself because it defines
the process of photon splitting in a magnetic field which is widely
discussed in the literature.
\section{
Details of Calculations in a Strong Magnetic Field}
The electron propagator in a magnetic field
could be presented in the form
\begin{eqnarray}
S(x,y) &=& e^{\mbox{\normalsize $i \Phi (x,y)$}}\, \hat S(x-y),\label{Sxy}\\
\Phi(x,y) &=& - e \int \limits^y_x d\xi_\mu \left [ A_\mu(\xi) + \frac{1}{2}
F_{\mu \nu}(\xi - y)_\mu \right ],\label{Phi}
\end{eqnarray}
\noindent where $e$ is the elementary charge,
$A_\mu$ is a 4-potential of the uniform magnetic field.
The translational invariant part $\hat S(x-y)$ of the propagator has several
representations. It is convenient for our purposes to take it in the form of
a partial Fourier integral expansion
\begin{eqnarray}
\hat S(X) &=& - \frac{i}{4 \pi} \int \limits^{\infty}_{0}
\frac{d\tau}{\tanh \tau}\,
\int \frac{d^2 p}{(2 \pi)^2}
\Biggl \{
[\prl{(p \gamma)} + m]\Pi_{-}(1 + \tanh \tau) +
\nonumber\\
&+& [\prl{(p \gamma)} + m]\Pi_{+}(1 - \tanh \tau)
- \prp{(X \gamma)}\; \frac{i eB}{2\, \tanh \tau} (1 - \tanh^2 \tau)
\Biggr \}
\times
\nonumber\\
&\times& \exp\left(- \frac{eB X_{\mbox{\tiny $\bot$}}^2}{4\, \tanh \tau} -
\frac{\tau(m^2 - p_{\mbox{\tiny $\|$}}^2)}{eB} - i \prl{(pX)} \right),
\label{Sx}\\[3mm]
d^2 p &=& dp_0 dp_3, \quad \Pi_\pm = \frac{1}{2} (1 \pm i \gamma_1 \gamma_2),
\quad \Pi^2_\pm = \Pi_\pm, \quad [\Pi_\pm, \prl{(a \gamma)}] = 0,
\nonumber
\end{eqnarray}
\noindent where $\gamma_\alpha$ are the Dirac matrices in the standard
representation, the four-vectors with the indices $\bot$ and $\parallel$
belong
to the \{1, 2\} plane and the Minkowski \{0, 3\} plane correspondingly,
when the field $\bf B$ is directed along the third axis. Then for arbitrary
4-vectors $a_\mu$, $b_\mu$ one has
\begin{eqnarray}
\prp{a} &=& (0, a_1, a_2, 0), \quad \prl{a} = (a_0, 0, 0, a_3), \nonumber \\
\prp{(ab)} &=& (a \Lambda b) = a_1 b_1 + a_2 b_2 , \quad
\prl{(ab)} = (a \widetilde \Lambda b) = a_0 b_0 - a_3 b_3,
\label{ab}
\end{eqnarray}
\noindent where the matrices are introduced
$\Lambda_{\alpha \beta} = (\varphi \varphi)_{\alpha \beta}$,\,
$\widetilde \Lambda_{\alpha \beta} =
(\tilde \varphi \tilde \varphi)_{\alpha \beta}$, connected by the relation
$\widetilde \Lambda_{\alpha \beta} - \Lambda_{\alpha \beta} =
g_{\alpha \beta} = diag(1, -1, -1, -1)$,
$\varphi_{\alpha \beta} = F_{\alpha \beta} /B,\;
{\tilde \varphi}_{\alpha \beta} = \frac{1}{2} \varepsilon_{\alpha \beta
\mu \nu} \varphi_{\mu \nu}$ are the dimensionless tensor of the external
magnetic field and the dual tensor,
$(a \Lambda b) = a_\alpha \Lambda_{\alpha \beta} b_\beta$.
In spite of the translational and gauge noninvariance of the phase
$\Phi(x, y)$ in the propagator~\eq{Sxy}, the total phase of three propagators
in the loop is translational and gauge invariant
\begin{eqnarray}
\Phi(x, y) + \Phi(y, z) + \Phi(z, x) = - \frac{e}{2} (z - x)_\mu
F_{\mu \nu} (x - y)_\nu. \nonumber
\end{eqnarray}
\noindent Some papers are known in which incorrect handling of these
phases was a source of mistakes.
The amplitude of the photon -- neutrino process $\ggnunu$ takes the form
\begin{eqnarray}
{\cal M} &=& e^2 {G_F \over \sqrt{2}} \int d^4 X\, d^4 Y\, Sp \{\hat j
\hat S(Y) \hat \varepsilon''
\hat S(-X-Y) \hat \varepsilon' \hat S(X)\} \times \nonumber \\
&\times& e^{- i e\,(X F Y)/2}\; e^{i (k' X - k'' Y)} +
(\varepsilon', k' \leftrightarrow \varepsilon'', k''),
\label{M}
\end{eqnarray}
\noindent where $j$ is the neutrino current in the momentum space,
$\varepsilon',\; k'$ and $\varepsilon'',\;k''$ are the polarization vectors and
the 4-momenta of initial photons,
$X = z - x, \, Y = x - y$.
Manipulations with the propagator~\eq{Sx} in the three-point loop lead to
a very cumbersome expression in the general case. Relatively simple results
were obtained for the process of photon splitting
only in the two limits of a weak field~\cite{Ad71} and of the
strong field with collinear kinematics~\cite{Baier}.
It is advantageous to use the asymptotic expression for the electron
propagator in an analysis of the process amplitude in the strong field.
This asymptotic form can easily be derived
from Eq.~\eq{Sx} by evaluating the integral over $\tau$ in the limit
\noindent
$eB /\vert m^2 - \prl{p}^2 \vert \gg 1$. In this case the propagator
takes the simple form
\begin{eqnarray}
\hat S(X) \simeq S_a(X) = \frac{i eB}{2 \pi} \exp (- \frac{eB X_{\mbox{\tiny $\bot$}}^2}{4})
\int \frac{d^2 p}{(2 \pi)^2}\, \frac{\prl{(p\gamma)} + m}{\prl{p}^2 - m^2}
\Pi_{-}e^{-i \prl{(pX)}},
\label{Sa}
\end{eqnarray}
\noindent
which was obtained for the first time in Ref.~\cite{Skob}.
Substituting the propagator~\eq{Sa} into the amplitude, one finds that the two
parts of it which differ by the photon interchange
are linear in the field $B$, but they cancel each other.
Thus, the asymptotic form
of the electron propagator~\eq{Sa} only shows that the
linear-in-field part of the amplitude is zero and provides no way of
extracting the next term of the expansion over the field strength.
We have also studied the process $\gamma \gamma\to \nu \bar\nu$ with the
asymptotic propagator~\eq{Sa} for the more
general case when the effective two-fermion -- two-neutrino interaction
contains
scalar, pseudoscalar, vector, and axial-vector couplings.
We have found that only the scalar coupling
gives an amplitude growing linearly with $B$.
It is known that the effective scalar
two-fermion -- two-neutrino interaction
arises in models with supersymmetry, left--right symmetry,
and leptoquarks (in the last case the fermion in the loop is a quark).
Thus, the process $\gamma \gamma\to \nu \bar\nu$ could be a channel
for the creation of sterile neutrinos, amplified by a strong magnetic field.
This could be an interesting direction for further investigation.
As the analysis shows, in the standard model the linear increase of the
amplitude with the field takes place in the next order of perturbation theory,
in the process $\gamma \gamma\to \nu \bar\nu \gamma$.
The probability of this process contains
the extra factor $\alpha (B/B_e)^2$ as compared with the process
$\gamma \gamma\to \nu \bar\nu$. Thus, for the field strength
$B > 10^{15}\, $ G, the process with additional photon would dominate.
\section{
Partial Amplitudes of the Process
$\gamma \gamma\to \nu \bar\nu $}
The next term of expansion of the amplitude over the field strength
can be found by the insertion of two
asymptotic~\eq{Sa} and one exact propagator~\eq{Sx} into the amplitude~\eq{M},
with all interchanges.
It is worthwhile to turn from the general
amplitude~\eq{M} to the partial polarization amplitudes.
For this purpose we use the orthogonal basis of 4-vectors
\begin{eqnarray}
b_\alpha^{(1)} = \frac{(k \varphi)_\alpha}
{\sqrt{k_{\mbox{\tiny $\bot$}}^2}},
\;
b_\alpha^{(2)} = \frac{(k \tilde \varphi)_\alpha}
{\sqrt{k_{\mbox{\tiny $\|$}}^2}},
\;
b_\alpha^{(3)} = \frac{k^2 (k \Lambda)_\alpha - (k \Lambda k) k_\alpha}
{\sqrt{k^2 k_{\mbox{\tiny $\bot$}}^2 k^2_{\mbox{\tiny $\|$}}}},
\;
b_\alpha^{(4)} = \frac{k_\alpha}{\sqrt{k^2}},
\label{b's}
\end{eqnarray}
\noindent
with the definitions of Eq.~\eq{ab}.
The vectors $b_\alpha^{(1)}$ and $b_\alpha^{(2)}$
correspond to the stationary photon states with definite dispersion
relations in a magnetic field, and an arbitrary polarization vector
$\varepsilon_\alpha$ of a photon with the momentum $k$ can be expanded over these
two vectors.
The neutrino current $j$ in the case of massless neutrinos is orthogonal to
the total 4-momentum of the neutrino -- antineutrino pair, and it could be
expanded over the three vectors,
$b_\alpha^{(1)}\;$, $b_\alpha^{(2)}$ and $b_\alpha^{(3)}$, where $k$ is
the total 4-momentum of the pair.
Making these expansions in the amplitude~\eq{M}, one obtains
18 independent partial amplitudes
\begin{eqnarray}
{\cal M} = \sum_{\scriptstyle\lambda = 1,2,3 \atop\scriptstyle\lambda',\lambda'' = 1,2}
(b^{(\lambda)} j) (b^{(\lambda')} \varepsilon') (b^{(\lambda'')} \varepsilon'')
\; \left (g_V {\cal M}^V_{\lambda\lm'\lambda''} + g_A {\cal M}^A_{\lambda\lm'\lambda''}\right )
\label{Mva}
\end{eqnarray}
\noindent
There are 6 amplitudes ${\cal M}^V$ for $\lambda = 1,2$ which
also describe the process of photon splitting, with the substitution
$G_F/\sqrt{2} \to e$.
We have obtained the following expressions for the partial amplitudes,
up to terms of order $1/B$
\begin{eqnarray}
{\cal M}^V_{111} &=& {\cal M}^A_{111} = {\cal M}^A_{211} = {\cal M}^V_{311} = 0,
\label{MV111} \\
[4mm]
{\cal M}^V_{112} &=&
i \;\frac{2 \alpha}{\pi} \;\frac{G_F}{\sqrt{2}} \;
\frac{(k' \varphi k'')(k' \tilde \varphi k'')}
{[
\prl{(k')^2}\prp{(k'')^2}\prp{k^2}
]^{1/2}}\; H\!\!\left(\frac{4 m^2}{\prl{(k')^2}}\right),
\label{MV112} \\
[4mm]
{\cal M}^A_{112} &=&
i \;\frac{2 \alpha}{\pi} \;\frac{G_F}{\sqrt{2}} \;
\frac{(k' \varphi k'')(k' \widetilde \Lambda k'')}
{[
\prl{(k')^2}\prp{(k'')^2}\prp{k^2}
]^{1/2}}\; H\!\!\left(\frac{4 m^2}{\prl{(k')^2}}\right),
\label{MA112} \\
[4mm]
{\cal M}^V_{122} &=&
i \;\frac{2 \alpha}{\pi} \;\frac{G_F}{\sqrt{2}} \;
\frac{(k' \widetilde \Lambda k'')}
{[\prl{(k')^2}\prl{(k'')^2}\prp{k^2}]^{1/2}}
\Biggl \{ (k \Lambda k'') H\!\!\left(\frac{4 m^2}{\prl{(k')^2}}\right)
+ (k \Lambda k') H\!\!\left(\frac{4 m^2}{\prl{(k'')^2}}\right) \Biggr \},
\label{MV122} \\
[4mm]
{\cal M}^A_{122} &=&
i \;\frac{2 \alpha}{\pi} \;\frac{G_F}{\sqrt{2}} \;
\frac{(k' \tilde \varphi k'')}
{[\prl{(k')^2}\prl{(k'')^2}\prp{k^2}]^{1/2}}
\Biggl \{ (k \Lambda k'') H\!\!\left(\frac{4 m^2}{\prl{(k')^2}}\right)
- (k \Lambda k') H\!\!\left(\frac{4 m^2}{\prl{(k'')^2}}\right) \Biggr \}.
\label{MA122}
\end{eqnarray}
\noindent
Here $k',\;k''$ are the photon momenta, $k$ is the total momentum of the
$\nu \bar \nu$ pair, and the function $H(z)$ is defined as
\begin{eqnarray}
H(z) &=&\frac{z}{\sqrt{z - 1}} \arctan \frac{1}{\sqrt{z - 1}} - 1, \quad z > 1,
\nonumber \\
H(z) &=& - \frac{1}{2} \left ( \frac{z}{\sqrt{1-z}}
\ln \frac{1 + \sqrt{1-z}}{1 - \sqrt{1-z}} + 2 +
i \pi \frac{z}{\sqrt{1-z}} \right ), \quad z < 1.
\label{Hz}
\end{eqnarray}
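For numerical work it is convenient to have both branches of $H(z)$ in a
single routine. The following Python sketch (our own illustration, not part
of the original calculation) implements Eq.~\eqref{Hz} and checks the
expected behaviour: $H(z) \to 2/(3z)$ for $z \gg 1$, and square-root growth
on both sides of the pair-creation threshold $z = 1$, where the expression
is singular and the function below must not be evaluated.

```python
import math

def H(z):
    # H(z) of Eq. (Hz), with z = 4 m^2 / (k')^2_parallel; returns a complex
    # number whose imaginary part is nonzero only above threshold (z < 1).
    # The point z = 1 (the resonance) is excluded.
    if z > 1.0:
        s = math.sqrt(z - 1.0)
        return complex(z / s * math.atan(1.0 / s) - 1.0, 0.0)
    s = math.sqrt(1.0 - z)
    return -0.5 * complex(z / s * math.log((1.0 + s) / (1.0 - s)) + 2.0,
                          math.pi * z / s)
```

Below threshold the imaginary part, $-\pi z/(2\sqrt{1-z})$, reflects the
opening of the $e^+e^-$ channel.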
\noindent
The remaining amplitudes have a more complicated form and will be published
elsewhere.
\section{
Photon Splitting in a Strong Magnetic Field}
As was mentioned above, 6 partial amplitudes of the process $\ggnunu$
also describe the photon splitting $\gamma \rightarrow \gamma \gamma$. This process has a long
history of investigation, see e.g.~\cite{PapRit89}.
Recent progress in astrophysics
has drawn attention again to the photon splitting induced by a magnetic
field~\cite{Hard}.
The study of this process in a strong
magnetic field~\cite{Baier,Ad96,W} has so far been confined to the
collinear limit, in which the only allowed
transition between photon polarizations is $\gamma_{\mprl} \rightarrow \gamma_{\mprp} \gamma_{\mprp}$ (in Adler's
notation~\cite{Ad71}). However, photon dispersion in a strong
magnetic field, $B \gg B_e$, leads to significant deviations
from collinear kinematics in this process.
This is due to the fact that the eigenvalues of the
photon polarization operator (the photon effective mass squared)
become large near the so-called cyclotron
resonances~\cite{Shab}.
On the other hand, a large value of the polarization operator near the
resonance requires taking account of large radiative
corrections which reduce to a renormalization of the photon
wave-function
\begin{eqnarray}
\varepsilon_{\alpha}^{(\lambda)} \to \varepsilon_{\alpha}^{(\lambda)} \sqrt{Z_\lambda}, \quad
Z^{-1}_\lambda = 1 - \frac{\partial {\cal P}^\lambda}{\partial \omega^2}, \quad
\lambda = \parallel, \perp.
\label{Z}
\end{eqnarray}
\noindent Here
$\varepsilon_\alpha^{(\mbox{\tiny $\|$})} = b^{(1)}_\alpha,\;
\varepsilon_\alpha^{(\mbox{\tiny $\bot$})} = b^{(2)}_\alpha$
are the polarization
four-vectors of the photon modes and ${\cal P}^\lambda$ are the eigenvalues of the
photon polarization operator, corresponding to these modes~\cite{Shab}.
Neither the effect of noncollinearity nor these radiative
corrections have so far been taken into account.
Substituting $G_F/\sqrt{2} \to e$ in the amplitudes~\eq{MV112}
and~\eq{MV122} we obtain from ${\cal M}^V_{112}$ the amplitude of the
process $\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprp}$, forbidden in the collinear limit and from ${\cal M}^V_{122}$ the
amplitude of the allowed process $\gamma_{\mprl} \rightarrow \gamma_{\mprp} \gamma_{\mprp}$.
As for remaining amplitudes, we note that ${\cal M} (\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprl})$
is equal to zero in this approximation, see Eq.~\eq{MV111}.
On the other hand, the photon of the
$\perp$ mode due to its dispersion can split into two photons only in the
kinematic region $\prl{k^2} > 4 m^2$ where the tree-channel
$\gamma_{\mbox{\tiny $\bot$}} \to e^+ \,e^-$~\cite{Klep} strongly dominates.
Thus we will analyse further the photon splitting of the $\|$ mode
in the region $\prl{k^2} < (m + \sqrt{m^2 + 2 e B})^2$ where the tree-channel
$\gamma_{\mbox{\tiny $\|$}} \to e^+ \,e^-$ does not exist.
In the formal limit of collinear photon momenta, the amplitude
${\cal M} (\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprp})$ goes to zero while the amplitude
${\cal M} (\gamma_{\mprl} \rightarrow \gamma_{\mprp} \gamma_{\mprp})$ coincides with the amplitude obtained in
Ref.~\cite{Baier}.
However, the collinear limit $\vert q^2 \vert/\omega^2\ll 1$ is inadequate in
a strong field, since
${\vert q^2 \vert}/{\omega^2} \simeq ({\alpha}/{3 \pi})({B}/{B_e})
\sim 1$ when ${B}/{B_e} \simeq {3 \pi}/{\alpha}\sim 10^3$.
Although the process involves three particles, its amplitude is not a
constant,
because it contains the external field tensor in addition to the photon
4-momenta. The general expression for the splitting probability can be
written in the form
\begin{eqnarray}
W (\gamma_\lambda \to \gamma_{\lambda'} \gamma_{\lambda''}) &=&
\frac{g}{32 \pi^2 \omega} \int \vert {\cal M}
(\gamma_\lambda \to \gamma_{\lambda'} \gamma_{\lambda''})
\vert^2
Z_\lambda Z_{\lambda'} Z_{\lambda''} \times
\nonumber\\
&\times& \; \delta(\omega_\lambda({\bf k}) - \omega_{\lambda'}({\bf k'}) -
\omega_{\lambda''}({\bf k} - {\bf k'}))
\frac{d^3 k'}{\omega_{\lambda'} \omega_{\lambda''}},
\label{defW}
\end{eqnarray}
\noindent where the factor $g = 1 - {1 \over 2}\delta_{\lambda' \lambda''}$ is
inserted to account for the possible identity of the final photons.
The integration over phase space of two final photons in Eq.~\eq{defW}
has to be performed using the photon energy dependence on the
momenta, $\omega = \omega_\lambda({\bf k})$, which can be found from
the dispersion equations
\begin{eqnarray}
\omega_\lambda^2({\bf k}) - {\bf k}^2 - {\cal P}^\lambda = 0.
\label{disp}
\end{eqnarray}
A calculation of the splitting probability~\eq{defW} is rather complicated
in the general case.
In the limit $m^2 \ll \omega^2 \sin^2 \theta \ll eB$,
where $\theta$ is an angle between the initial photon momentum
$\bf k$ and the magnetic field direction, we derive the
following analytical expression for the probability of the channel $\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprp}$:
\begin{eqnarray}
W &\simeq& \frac{\alpha^3 \omega \sin^2 \theta}{16} (1 - x)
[1 - x + 2 x^2 + 2(1 - x)(1 + x)^2 \ln (1 + x) -
\nonumber\\[2mm]
&-& 2 x^2 \,\frac{2-x^2}{1-x} \, \ln \frac{1}{x}], \qquad
x = \frac{2 m}{\omega \sin \theta} \ll 1.
\label{Wa}
\end{eqnarray}
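As a check of Eq.~\eqref{Wa}, the square bracket tends to unity as
$x \to 0$, so that $W \to \alpha^3 \omega \sin^2\theta / 16$ when
$\omega \sin\theta \gg 2m$. A short Python sketch of the formula
(illustrative only; all inputs are in units where $m$ sets the scale):

```python
import math

def w_bracket(x):
    # the square bracket of Eq. (Wa), valid for 0 < x < 1
    return (1 - x + 2 * x**2
            + 2 * (1 - x) * (1 + x)**2 * math.log(1 + x)
            - 2 * x**2 * (2 - x**2) / (1 - x) * math.log(1.0 / x))

def W_split(alpha, omega, theta, m):
    # probability of Eq. (Wa) for the channel gamma_par -> gamma_par gamma_perp
    x = 2 * m / (omega * math.sin(theta))
    return alpha**3 * omega * math.sin(theta)**2 / 16 * (1 - x) * w_bracket(x)
```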
Within the same approximation we obtain the spectrum of
final photons in the frame where the initial photon momentum is orthogonal
to the field direction:
\begin{eqnarray}
\frac{d W}{d \omega'} \simeq \frac{\alpha^3}{2}\cdot
\frac{\sqrt{(\omega - \omega')^2 - 4 m^2}}
{\omega' + \sqrt{(\omega' - \omega)^2 - 4 m^2}},
\label{spectr}\\
\frac{\omega}{2} - \frac{2 m^2}{\omega} < \omega' < \omega - 2 m,
\nonumber
\end{eqnarray}
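The endpoints of the spectrum~\eqref{spectr} provide simple consistency
checks: $dW/d\omega'$ vanishes at the upper endpoint
$\omega' = \omega - 2m$, while at the lower endpoint
$\omega' = \omega/2 - 2m^2/\omega$ the square root equals $\omega'$ and the
spectrum reduces to $\alpha^3/4$. A hedged numerical sketch:

```python
import math

def spectrum(omega_p, omega, m, alpha):
    # dW/d omega' of Eq. (spectr), defined on the interval
    # omega/2 - 2 m^2/omega < omega' < omega - 2 m
    root = math.sqrt((omega - omega_p)**2 - 4 * m**2)
    return 0.5 * alpha**3 * root / (omega_p + root)
```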
\noindent where $\omega, \omega'$ are the energies of the initial and final
photons of the $\parallel$ mode.
We have performed numerical calculations of the process probabilities
below and near the pair-creation threshold
for both channels~\cite{CKM}; these calculations are valid in the limit
$\omega^2 \sin^2 \theta \ll eB$.
In this region the channel $\gamma_{\mprl} \rightarrow \gamma_{\mprp} \gamma_{\mprp}$ (allowed in the
collinear limit) is shown to dominate the channel $\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprp}$ (forbidden in
this limit). The probability
obtained without considering the noncollinearity of the kinematics and
radiative corrections is shown to be inadequate.
For example, this probability becomes infinite just above the threshold.
Both channels give essential contributions
to the probability at high photon energies, with the
``forbidden'' channel dominating.
It should be stressed that taking account of the photon polarization
leads to an essential dependence of the splitting probabilities on the
magnetic field, even though the amplitudes themselves do not depend
on the field strength.
\section{Conclusions}
In this paper we have considered the photon-neutrino process
$\gamma \gamma\to \nu \bar\nu $ and photon splitting
in a strong magnetic field.
It is shown that various types of neutrino-electron
effective interactions lead to different dependences of the
amplitudes on the field strength.
The partial polarization amplitudes are calculated within the standard model
in the limit of a strong field. The amplitudes do not depend on the
field strength in this limit.
Using the vector parts of the amplitudes, the photon
splitting $\gamma \rightarrow \gamma \gamma$ is investigated.
The collinear limit is shown to be an inadequate approximation for
this process
in a strong magnetic field ($B \gg B_e$), because of the significant
deviation of
the photon dispersion in the strong field from the vacuum dispersion.
The ``allowed'' channel $\gamma_{\mprl} \rightarrow \gamma_{\mprp} \gamma_{\mprp}$ does not exhaust the splitting in
the strong field. The ``forbidden'' channel $\gamma_{\mprl} \rightarrow \gamma_{\mprl} \gamma_{\mprp}$ is also essential; moreover,
it dominates at high energies of the initial photon.
The photon splitting probabilities are calculated
in the strong field limit for both channels. The probabilities depend
essentially on the field strength value,
due to the photon polarization in the strong magnetic field.
\vspace{10mm}
\section{Acknowledgements}
The authors are grateful to V.A.~Rubakov for useful discussions of a
problem of large radiative corrections in the vicinity of a cyclotron
resonance.
A.K. expresses his deep gratitude to the organizers of the
Ringberg Neutrino Euroconference for the possibility to participate in it.
This work was supported in part by the INTAS Grant N~96-0659
and by the Russian Foundation for Basic Research Grant N~98-02-16694.
The work of A.K. and N.M. was supported in part by the
International Soros Science Education Program under the Grants
N~d98-179 and N~d98-181.
\section{References}
\section{Introduction}
While there is no evidence of the presence of primordial antimatter in
the universe, the amount of primordial matter, i.e. baryons, has been determined
quite precisely from two independent observables. From the measurements of
the primordial deuterium abundance of the Big Bang Nucleosynthesis (BBN) and
the temperature anisotropies in the Cosmic Microwave Background (CMB), the amount of baryons
as a fraction of cosmic critical energy density is determined to be
$\Omega_{B}^{\rm{BBN}} = 0.048 \pm 0.002$~\cite{Cooke:2013cba} and
$\Omega_{B}^{\rm{CMB}} = 0.048 \pm 0.001$~\cite{Ade:2015xua}, respectively.
In addition, the CMB measurement also yields the amount of nonbaryonic matter,
i.e. the so-called Dark Matter (DM), to be $\Omega_X = 0.258 \pm 0.008$~\cite{Ade:2015xua}.
Given that the evidences of DM arise only from gravitational effects, it could be some
form of exotic matter or a particle very similar to its baryonic counterpart,
in particular it could be {\it asymmetric}.
The simplest asymmetric DM is either a complex scalar $\phi$ or a Dirac fermion $\psi$
uncharged under the SM gauge group.
A single Weyl fermion is not a suitable
candidate since it would be either massless or have a Majorana mass, meaning that it cannot
carry an asymmetry.
The idea of an asymmetric DM giving rise to comparable DM
and baryon densities is a few decades old~\cite{Nussinov:1985xr,Roulet:1988wx,Barr:1990ca}\footnote{The connection between DM and the baryon asymmetry has also been explored in the context of {\it symmetric} DM (See e.g. Refs.~\cite{McDonald:2011zza,McDonald:2011sv,D'Eramo:2011ec,Cui:2011ab,Canetti:2012vf,Davidson:2012fn,
Bernal:2012gv,Bernal:2013bga,Racker:2014uga,Cui:2015eba}).}.
In recent years, this idea has received renewed impetus, and the plethora of new ideas
has culminated in some recent review articles~\cite{Davoudiasl:2012uw,Petraki:2013wwa,Zurek:2013wia, Boucenna:2013wba}
(see also~\cite{Cui:2011qe,Servant:2013uwa, Davidson:2013psa}).
Just like in a baryogenesis scenario, when Sakharov's conditions~\cite{Sakharov:1967dj}
are fulfilled, one could dynamically generate an asymmetry in $X\equiv\left\{ \phi,\,\psi\right\} $.
One can take this one step further and expect that like in the Standard
Model (SM), there are additional fast interactions among the DM sector
which efficiently annihilate away the symmetric component ($X\bar{X}\to\cdots$), ending up with only the asymmetric part.
Up to now there is no compelling evidence
that DM communicates with the SM via interactions other than gravity;
if this is all there is, the prospects of testing DM properties in the lab are very challenging.
On the other hand,
there are hints of a deeper connection between DM and the SM baryons.
For instance, their energy densities today are of similar order, which would suggest a common mechanism for the origin of the two species:
\begin{equation}
r\equiv\frac{\Omega_X}{\Omega_{B}} =\frac{Y_X^{0}\,m_X}{Y_{B_{\rm{SM}}}^{0}\,m_{n}} =\frac{\left|Y_{\Delta X}\right|\,m_X}{Y_{\Delta B_{\rm{SM}}}\,m_{n}}=5.4\,,\label{eq:ratio_DM_B}
\end{equation}
where $m_X$ and $m_{n}$ are the DM and the nucleon mass, respectively.
Here we denote $Y_{\Delta i}\equiv Y_{i}-Y_{\bar{i}}$, where $Y_{i}=n_{i}/s$
is the number density of $i$ normalized by the
entropic density $s=\frac{2\pi^{2}}{45} g_{\star}T^{3}$, $g_{\star}$
is the relativistic degrees of freedom that contribute to the entropy and $T$ the temperature of the thermal bath.
In Eq.~\eqref{eq:ratio_DM_B}, the superscript `$0$' denotes the value today, $Y_{B_\text{SM}}^{0}= (8.66 \pm 0.09) \times 10^{-11}$~\cite{Ade:2015xua} and the third
equality is due to the assumption that both DM and baryon are {\it maximally} asymmetric.\footnote{Ref.~\cite{Graesser:2011wi} considered both the symmetric and the asymmetric DM components.}
We also denote $Y_X^{0}=\left|Y_{\Delta X}\right|$ because DM today can consist of particles $X$ or antiparticles $\bar{X}$.
Notice that the asymmetric DM scenario itself does not
justify why $r\sim 1$; further theoretical inputs
which relate $m_X$ to $m_n$ or $Y_{\Delta X}$ to $Y_{\Delta B_\text{SM}}$ are needed (see e.g. Ref.~\cite{Bai:2013xga,Garcia:2015toa,Farina:2015uea,Farina:2016ndq}).
In this work, we are not trying to dynamically generate $r\sim 1$, but we will rather take it as a starting point.
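For reference, Eq.~\eqref{eq:ratio_DM_B} fixes the DM number asymmetry once
the DM mass is chosen. The following Python lines make this explicit (the
inputs are the observed values quoted above, with $m_n$ rounded to
$0.939$~GeV; purely illustrative):

```python
# Maximally asymmetric DM: Eq. (ratio_DM_B) gives |Y_{Delta X}| for a given
# DM mass, using r = Omega_X / Omega_B and the observed baryon asymmetry.
r = 5.4             # Omega_X / Omega_B
Y_B0 = 8.66e-11     # Y_{B_SM}^0, observed SM baryon asymmetry
m_n = 0.939         # nucleon mass in GeV (rounded)

def Y_DM_asymmetry(m_X_GeV):
    # |Y_{Delta X}| = r * Y_B^0 * m_n / m_X
    return r * Y_B0 * m_n / m_X_GeV
```

For $m_X = r\, m_n \simeq 5$~GeV the DM asymmetry equals the baryon
asymmetry; heavier DM requires a correspondingly smaller asymmetry.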
Our aim is to consider an Effective Field Theory (EFT) description
where the DM and the SM sectors {\it share} a common asymmetry via interactions that were typically in equilibrium.
After the chemical decoupling of the two sectors, they barely interact, i.e. they evolve {\it without caring} about each other.
In particular,
we consider an asymmetric DM scenario in which the DM is {\it not} charged under
the SM gauge symmetry,\footnote{DM candidates can also be the lightest neutral components of
some $SU(2)_L$ multiplets (with zero or nonzero hypercharge). The quest to find such particles
which are automatically stable was carried out in Ref.~\cite{Cirelli:2005uq}.
Asymmetric DM from $SU(2)_L$ multiplets with nonzero hypercharge
were considered in Refs.~\cite{Boucenna:2015haa,Dhen:2015wra}.} which makes its
detection particularly challenging through SM interactions. We further assume that the DM particles
do carry nonzero lepton and/or baryon number~\cite{Agashe:2004ci,Agashe:2004bm,Farrar:2005zd,Kaplan:2009ag, Dulaney:2010dj,Perez:2013tea,Feng:2013zda, Zhao:2014nsa, Perez:2014qfa,Fukuda:2014xqa,Ohmer:2015lxa,Fornal:2015boa, Cheung:2015mea}\,\footnote{For DM realizations within baryon and lepton number as gauge symmetries see e.g. Refs.~\cite{FileviezPerez:2010gw, Duerr:2013dza, Duerr:2013lka, Duerr:2014wra}.} which is fixed by their coupling to
the SM fields through higher dimensional operators of the form
\begin{eqnarray}
\frac{1}{\Lambda^{(2-p/2)N+n-4}}X^N\bar{{\cal O}}_{{\rm SM}}^{\left(n\right)}\,,
\label{eq:operator_gen}
\end{eqnarray}
where $\Lambda\equiv\Lambda'/\lambda$ with $\Lambda'$ being the effective
scale below which the effective operators are valid and
$\lambda$ the coupling constant between the DM and the SM sectors.
${\cal O}_{\rm SM}^{\left(n\right)}$ is a SM gauge invariant operator of
dimension $n$ consisting of {\it only} SM fields and $p=1,\,2$ for
$X$ being a fermion or a scalar, respectively. To ensure the stability of DM particles, one needs $N \geq 2$, which can be due to specific baryonic and/or leptonic charges carried by the DM.
In this work, we consider the minimal scenario with $N=2$,
where these higher dimensional operators play the crucial role of
distributing the asymmetry between the SM and the DM sectors.
As it will be shown, this scenario is predictive since there is only a limited number
of possible higher dimensional operators that can be written down; for a given DM mass, their
Wilson coefficients are fixed by the requirement that the asymmetry between the SM and the DM sectors is correctly distributed to match the observations.
The mass of the DM particles can span a wide range, from few GeV up to $\sim 100$~TeV.
If $X$ carries nonzero baryon number, we have to restrict $2\,m_X \gtrsim m_{n}$
to prevent fast nucleon decays.
Requiring all the DM symmetric component to be annihilated away, the upper bound ($m_X \lesssim 100$~TeV) has to be imposed in order to avoid unitarity violation~\cite{Griest:1989wd,Hui:2001wy}.
The heavier the DM particles are,
the more nonrelativistic they have to be during chemical decoupling. This happens because it is necessary to suppress their asymmetry
density through a Boltzmann suppression factor to obtain the correct relic abundance (see e.g. Ref.~\cite{Buckley:2010ui}).
In the sharing scenario, we assume a net nonzero charge asymmetry is generated at some high scale.\footnote{This is in contrast to scenarios which consider a net zero asymmetry where the dark and visible sectors carry an equal but opposite sign asymmetry e.g. Ref.~\cite{Davoudiasl:2010am}.} Since the sharing operator~\eqref{eq:operator_gen} does not violate the charge, its role is to distribute the asymmetry among the dark and visible sectors.
Fig.~\ref{fig:sharing} illustrates the sharing mechanism for the cases where the dark and visible sectors get into chemical equilibrium or not. In the case where the system achieves chemical equilibrium (left panel), both asymmetries depend on the same chemical potential ($\mu$). If the DM is nonrelativistic when the two sectors decouple at $T=T_f$, a Boltzmann factor suppresses its number asymmetry. On the other hand, for the scenario in which the system never reaches the chemical equilibrium (right panel), the sector where the initial asymmetry resides does matter, and the asymmetries in the dark and the SM sectors are characterized by the chemical potentials $\mu_X$ and $\mu_{\rm SM}$, respectively. For instance, if the initial total asymmetry is stored in the dark sector, the amount of asymmetry transferred to the SM depends on the strength of the coupling between these two sectors, which is represented in Fig.~\ref{fig:sharing} by $1/\Lambda$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{sharing.pdf}
\caption{Schematic representation of the asymmetry sharing scenario. Here $m_i$ refers to the mass of particle $i$. For the chemical equilibrium case, the system is characterized by a common chemical potential $\mu$.
For the scenario where the system never achieves chemical equilibrium, the DM and the SM sectors are described respectively
by $\mu_X$ and $\mu_{\rm SM}$.
See text for further explanation.
}
\label{fig:sharing}
\end{figure}
The present work is complementary to previous studies in the following ways:
$i)$ our discussion is model-independent: we write down all possible effective operators and focus on the lowest dimensional ones;
$ii)$ we cover the whole DM mass range where the effective operator description is valid;
$iii)$ we determine the viable parameter space taking into account various
phenomenological constraints. Note that our study
does not apply for the cases where the effective operator description is not valid
such as the scenario proposed in Ref.~\cite{Fonseca:2015rwa} where the
mediator which links the SM and DM sectors is light, with a mass comparable to the DM one.
The paper is organized as follows: in Section~\ref{sec2} we introduce the generalities
of the model where DM particles carry baryon and/or lepton number. In Section~\ref{sec3},
we discuss in detail the transfer of the asymmetry, whether it is effective before or
after the freeze out of the electroweak (EW) sphaleron processes.
A number of phenomenological constraints and future detection prospects are discussed in
Section~\ref{sec4}: DM direct and indirect detection, and the bounds coming
from the Large Hadron Collider (LHC). Concluding remarks are presented in Section~\ref{sec5}.
\section{Baryonic and Leptonic Dark Matter}
\label{sec2}
In order to connect the DM and the SM sectors, our working assumption is that the
DM particles carry baryon $B$ and/or lepton $L$ numbers. Of course, one could only
define these quantum numbers if there exist operators
which relate the DM and the SM baryons and/or leptons.
In particular, we consider {\it minimal} models in the sense
that the DM particles are singlets under the SM gauge group and they
are either complex scalars or Dirac fermions.
We consider that they couple in pairs ($N=2$) to the SM, so that the operator~\eqref{eq:operator_gen} reduces to
\begin{eqnarray}
\frac{1}{\Lambda^{n-p}}XX\bar{{\cal O}}_{{\rm SM}}^{\left(n\right)}\,.\label{eq:operator}
\end{eqnarray}
To make sure that the effective operator description remains valid,
we require $\Lambda' = \lambda\,\Lambda \gg E$, where $E$ is the characteristic energy scale being probed.
Taking the largest coupling before the perturbative expansion breaks down, i.e. $\lambda = 4\pi$,
we have $\Lambda \gg E/(4\pi)$; however, one should keep in mind that the description
could also break down earlier, for $\lambda < 4\pi$.
As we will see in more detail later, for the scenario after EW symmetry breaking,
we also impose $\Lambda > v$, where $v = \left<H\right>=174$~GeV is the Higgs
vacuum expectation value (vev) such that this framework remains consistent.
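As a quick consistency check of the power counting: for $N = 2$ the exponent
$(2 - p/2)N + n - 4$ of the general operator~\eqref{eq:operator_gen} reduces
to $n - p$, as used in Eq.~\eqref{eq:operator}. In Python (trivial, for
illustration):

```python
def lam_power(N, n, p):
    # exponent of 1/Lambda in Eq. (operator_gen);
    # p = 1 for fermionic DM, p = 2 for scalar DM,
    # n = mass dimension of the SM operator O_SM^(n)
    return (2 - p / 2) * N + n - 4
```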
Notice that part of our minimality criterion is to assume that
${\cal O}_{{\rm SM}}^{\left(n\right)}$ does not contain new fields beyond the SM.
We further assume that the operator~\eqref{eq:operator}
preserves $B$ and $L$, which implies that
$B\left(X\right)=B\left({\cal O}_{{\rm SM}}^{\left(n\right)}\right)/2$
and $L\left(X\right)=L\left({\cal O}_{{\rm SM}}^{\left(n\right)}\right)/2$.
Assuming that the total asymmetry in a charge $q$ is preserved,
the operator~\eqref{eq:operator} plays a role in distributing the asymmetry
among the DM and the SM sectors with
\begin{eqnarray}
Y_{\Delta q} & \equiv & q_{X}\,Y_{\Delta X}+Y_{{\rm \Delta q}_{{\rm SM}}}
={\rm constant}\neq0\,,
\label{eq:tot_asym}
\end{eqnarray}
where $Y_{{\rm \Delta q}_{{\rm SM}}}
=\sum_{\Psi_{\rm SM}}q_{\Psi_{\rm SM}}\,Y_{\Delta {\Psi_{\rm SM}}}$
and $q_{i}$ is the charge of the field $i$ under $q$.
In principle, the generation of the total asymmetry in Eq.~\eqref{eq:tot_asym} and its transfer due to the operator~\eqref{eq:operator} could happen simultaneously.
In this case, instead of being constant, the asymmetry in Eq.~\eqref{eq:tot_asym} is an evolving
quantity which should be described by its corresponding Boltzmann Equation (BE).
For definiteness and to be as model independent as possible, we do not specify
the genesis mechanism at the high scale, instead, we assume that the asymmetry generation either
in the DM or in the SM sector, is completed before the transfer operator~\eqref{eq:operator}
becomes effective. As we will discuss in more detail in the next section, if the reactions induced by
the transfer operator get into {\it chemical equilibrium}
(i.e. proceeding faster than the expansion rate of the universe) at some point,
the initial conditions (e.g. where the initial asymmetry resides) become irrelevant.
On the other hand, if the transfer
operator never gets into {\it chemical equilibrium},
the initial conditions do play an important role.
The moment where the transfer of the asymmetry is efficient determines which conserved charge has to be studied.
If this transfer happens at temperatures higher than the one of the EW sphaleron processes freeze out ($T_{{\rm EWSp}}$),
the relevant conserved charge in the SM is $q=B-L$.\footnote{Notice that $B$ and $L$
are violated by EW sphaleron processes but the linear combination $B-L$ remains conserved.
Implicitly, we assume that whatever beyond the SM mechanism which violates $B-L$
and generates a nonzero $B-L$ asymmetry is no longer operative.}
On the other hand, if the transfer is operative at $T < T_{{\rm EWSp}}$, one can directly consider $q=B$.
This transfer, however, should be completed before $T\sim 1$~MeV to avoid spoiling the standard BBN predictions.
Additionally, in order to make the scenario {\it predictive}, we need a further assumption:
we suppose that as in the SM, there are additional
fast (gauge-like) interactions among the DM sector which efficiently
annihilate away the DM symmetric component ($X\bar{X}\to...$) and one ends up
with only its asymmetric component.\,\footnote{The typical annihilation freeze out temperature is about $T \sim m_X/20$ while to avoid nucleon decay, the lowest DM mass we consider is about 2 GeV. Hence the lowest freeze out temperature in our scenario is about 0.1 GeV and this will not affect BBN which takes place at much lower temperature, around MeV.
If the annihilation products are some light dark particles, due to pure redshift in their temperature after decoupling, the contribution to the dark radiation during BBN or later can be estimated from the ratio of relativistic degrees of freedom $(g_{\rm BBN}/g_{\rm decoupling})^{4/3} \lesssim 0.1$ which is allowed by current Planck observations \cite{Ade:2015xua}.} Without this assumption, one could still
study the model but it strays away from the philosophy of this work, since
the connection between the DM and the SM through the asymmetry as in Eq.~\eqref{eq:ratio_DM_B} is lost.
Under these considerations, the value of the conserved asymmetry $Y_{\Delta q}$ is fixed by Eqs.~\eqref{eq:ratio_DM_B} and~\eqref{eq:tot_asym}.
For instance, for $q=B-L$ and $q=B$ one has, respectively
\begin{eqnarray}
Y_{\Delta\left(B-L\right)}
& = & Y_{\Delta\left(B-L\right)}^{0} = \left|\left(B-L\right)_{X}\right|
Y_{X}^{0}+Y_{\left(B-L\right)_{{\rm SM}}}^{0}\nonumber \\
& = & \left[\left|\left(B-L\right)_{X}\right|r\,\frac{m_n}{m_{X}}+ \kappa \right]Y_{B_{{\rm SM}}}^{0}
\label{eq:YB-Ltot}
\end{eqnarray}
and
\begin{eqnarray}
Y_{\Delta B} & = & Y_{\Delta B}^{0}=\left|B_{X}\right|Y_{X}^{0}+Y_{B_{{\rm SM}}}^{0}\nonumber \\
& = & \left[\left|B_{X}\right|r\,\frac{m_n}{m_{X}}+1\right]Y_{B_{{\rm SM}}}^{0}\,,
\label{eq:YBtot}
\end{eqnarray}
where $\kappa$ is an order one coefficient that relates $Y_{\left(B-L\right)_\text{SM}}^0$ and $Y_{B_\text{SM}}^0$ and which depends on the relativistic
degrees of freedom at $T_{{\rm EWSp}}$ (e.g. $\kappa=\frac{79}{28}$ if the EW sphaleron processes freeze out before EW phase transition and assuming only the SM degrees of freedom~\cite{Harvey:1990qw}).
Let us stress that, since the nucleon mass $m_{n}\sim 1$~GeV, the SM baryon asymmetry $Y_{B_{{\rm SM}}}^{0}$
and the ratio of energy densities $r$ are fixed by observations; hence, for a given DM mass $m_{X}$, the total asymmetry (either $Y_{\Delta B}$
or $Y_{\Delta\left(B-L\right)}$) is also fixed.
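The following Python sketch evaluates Eqs.~\eqref{eq:YB-Ltot}
and~\eqref{eq:YBtot} for a given DM mass and charge assignment (the numerical
inputs are the observed values quoted above, to three digits, and are
illustrative):

```python
# Conserved charge asymmetries of Eqs. (YB-Ltot) and (YBtot);
# kappa = 79/28 holds for sphaleron freeze-out in the unbroken phase
# with only SM degrees of freedom.
r, Y_B0, m_n = 5.4, 8.66e-11, 0.939   # illustrative inputs (GeV for m_n)
kappa = 79.0 / 28.0

def Y_BmL_total(m_X, BmL_X):
    # Eq. (YB-Ltot): transfer active before EW sphaleron freeze-out
    return (abs(BmL_X) * r * m_n / m_X + kappa) * Y_B0

def Y_B_total(m_X, B_X):
    # Eq. (YBtot): transfer active after EW sphaleron freeze-out
    return (abs(B_X) * r * m_n / m_X + 1.0) * Y_B0
```

For very heavy DM the first term is negligible and the total asymmetry
approaches $\kappa\,Y_{B_{\rm SM}}^0$ (respectively $Y_{B_{\rm SM}}^0$).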
In order to obtain the correct $Y_{B_{{\rm SM}}}^{0}$, one has to determine the value of $\Lambda$ for each DM mass.
To do so, one needs to track the transfer of the asymmetry from one sector to the other by solving numerically the BE (see Appendix~\ref{app:A}):
\begin{eqnarray}
\dot{Y}_{\Delta X} & = &
-2 \sum_{i,j,...}\left[\gamma\left(XX\leftrightarrow ij\cdots\right)
+ \gamma\left(X\bar{i}\leftrightarrow\bar{X}j\cdots\right)\right] \nonumber \\
& &\qquad \times \left[2\frac{Y_{\Delta X}}{g_X\, \zeta_X\, Y_0}
- \left(\frac{Y_{\Delta i}}{g_{i}\,\zeta_i\, Y_0}
+ \frac{Y_{\Delta j}}{g_{j}\, \zeta_j\, Y_0}+...\right)\right], \label{eq:YDeltaX}
\end{eqnarray}
where $\dot{Y}_{i}\equiv s\,H\,z\,\frac{dY_{i}}{dz}$,
$z\equiv m_X/T$, $H=1.66\sqrt{g_{\star}}\frac{m_{X}^{2}}{M_{{\rm Pl}}\,z^{2}}$ is the Hubble expansion rate,
$M_{{\rm Pl}}$ is the Planck mass and
$Y_{0}\equiv \frac{15}{8\pi^2\,g_{\star}}$.
$\gamma (a b \leftrightarrow i j\cdots)$
is the thermally averaged reaction density for the scattering $a b \leftrightarrow i j\cdots$ .
For the particle $i$, $g_i$ denotes the corresponding degrees of freedom while the statistical function $\zeta_i$ is given by
\begin{equation}
\zeta_{i} \equiv \frac{6}{\pi^{2}}\int_{z_{i}}^{\infty}dx\,x\,
\sqrt{x^{2}-z_{i}^{2}}\,\frac{e^{x}}{\left(e^{x}\pm1\right)^{2}}\,,\label{eq:zeta}
\end{equation}
where $z_i \equiv m_i/T$ and the $+\,(-)$ sign corresponds to a fermionic (bosonic) field $i$.
For relativistic particles ($z_i\ll 1$), $\zeta_i \sim 1\, (2)$.
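These limits can be verified by direct numerical integration of
Eq.~\eqref{eq:zeta}. A standalone Python sketch using Simpson's rule (the
cutoff $x_{\rm max}$ and the number of steps are numerical choices, not part
of the model):

```python
import math

def zeta(z, fermion=True, x_max=50.0, n=100000):
    # Simpson integration of Eq. (zeta) for z = m_i/T > 0; n must be even.
    sign = 1.0 if fermion else -1.0
    h = (x_max - z) / n
    total = 0.0
    for k in range(n + 1):
        x = z + k * h
        ex = math.exp(x)
        f = x * math.sqrt(max(x * x - z * z, 0.0)) * ex / (ex + sign) ** 2
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        total += w * f
    return 6.0 / math.pi ** 2 * h / 3.0 * total
```

One recovers $\zeta \to 1$ (fermions) and $\zeta \to 2$ (bosons) in the
relativistic limit, and the Boltzmann suppression for $z \gg 1$.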
Notice that from Eq.~\eqref{eq:tot_asym}, we have
$\dot{Y}_{{\rm \Delta q}_{{\rm SM}}}=-q_{X}\dot{Y}_{\Delta X}$, which reflects the conservation of the charge $q$.
Hence, the symmetry of the system allows to describe the dynamics of the asymmetries using a single BE for either
$Y_{{\rm \Delta q}_{{\rm SM}}}$ or $Y_{\Delta X}$: all the asymmetries on the right hand side of Eq.~\eqref{eq:YDeltaX}
can be written only in terms of $Y_{{\rm \Delta q}_{{\rm SM}}}$ or $Y_{\Delta X}$~\cite{Fong:2015vna}.
Once the BE~\eqref{eq:YDeltaX} is solved and the value of $\Lambda$ (for each $m_X$) is determined, the operator~\eqref{eq:operator} is completely fixed and one
can duly calculate its phenomenological consequences, which
are discussed in detail in Section~\ref{sec4}.
In the next section, we will consider different scenarios for the sharing of the asymmetries.
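The qualitative behaviour behind both scenarios can be illustrated with a
toy relaxation equation,
$dY/dz = -(\Gamma/H)\,(Y - Y_{\rm eq}(z))/z$. The rate scaling
$\Gamma/H = (z_f/z)^7$ (as expected for a rate $\Gamma \sim T^9/\Lambda^8$
from a dimension-5 SM-side operator, with $H \sim T^2$), the shape of
$Y_{\rm eq}$ and all numbers below are schematic assumptions, not the actual
model input:

```python
import math

def y_eq(z):
    # schematic equilibrium asymmetry: constant while relativistic,
    # Boltzmann suppressed (~ z^{3/2} e^{-z}) once z = m_X/T >> 1
    return 1.0 if z < 1.0 else z ** 1.5 * math.exp(-(z - 1.0))

def evolve(z_f, y0, z_start=0.5, z_end=40.0, steps=100000):
    # integrate dY/dz = -(z_f/z)^7 (Y - y_eq(z)) / z with an exact
    # one-step relaxation update (stable even for very large rates)
    y, z = y0, z_start
    h = (z_end - z_start) / steps
    for _ in range(steps):
        rate = (z_f / z) ** 7 / z
        ye = y_eq(z)
        y = ye + (y - ye) * math.exp(-h * rate)
        z += h
    return y
```

For a large $z_f$ the system reaches chemical equilibrium and the frozen
asymmetry is independent of the initial condition, as stated in the text;
for a small $z_f$ equilibrium is never reached and the initial condition
survives.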
\section{Scenarios for the Sharing of the Asymmetries}
\label{sec3}
In this study, we discuss two different scenarios for the transfer processes.
In the first scenario, we consider the situation where the operator~\eqref{eq:operator} is operative and then freezes out at $T_f$, before the
EW sphaleron processes freeze out at $T_{{\rm EWSp}}$, i.e. $T_f > T_{{\rm EWSp}}$.\footnote{There could be in-between cases where the operator Eq.~\eqref{eq:operator}
can be operative at $T > T_{{\rm EWSp}}$ and then freezes out at $T_f < T_{{\rm EWSp}}$ , but in that scenario, one loses part of the predictive power
of the framework because the symmetries which relate the DM and the SM sectors
are no longer the same and the relation spelled out in Eq.~\eqref{eq:tot_asym} (or
more specifically in Eq.~\eqref{eq:YB-Ltot}) no longer applies.
Hence we restrict the analysis to $T_f > T_{{\rm EWSp}}$.} In this regime, the initial temperature $T_i$, defined to be the temperature when the total asymmetry generation is completed, depends on the unspecified UV completion of the model which the EFT cannot describe.
In fact, there are solutions that strongly depend on the initial conditions
(and in particular on $T_i$) and they correspond to cases where the dynamics is UV-dominated.
Hence, we only consider solutions which do not depend on $T_i$, i.e. those that achieve chemical equilibrium.
In the second scenario, we consider the situation where the operator Eq.~\eqref{eq:operator}
is only operative after the EW sphaleron processes freeze out. In particular,
the initial temperature is taken to be $T_i = T_{{\rm EWSp}}$, which we fix to $T_{{\rm EWSp}} = 132$~GeV~\cite{D'Onofrio:2014kta} and also for simplicity take this temperature to be the EW symmetry breaking scale.
In this case,
we have a well-defined initial temperature and can also entertain solutions in
which the reactions induced by Eq.~\eqref{eq:operator} never reach the chemical
equilibrium.
\subsection{Before the Electroweak Sphaleron Processes Freeze Out}
\label{sec:before_EW}
Here we consider the scenario where the operator Eq.~\eqref{eq:operator}
is relevant before the EW sphaleron processes freeze out.
In this case, the relevant symmetry of the SM is $B-L$.
Our minimality criterion is to consider SM gauge invariant operators
consisting only of SM fields but carrying nonzero $B-L$.
Then, the lowest dimensional realization of the operator ${\cal O}_{{\rm SM}}^{\left(n\right)}$
corresponds to $n=5$~\cite{Weinberg:1979sa,Weinberg:1980bf}\footnote{In the following,
the 2-component Weyl spinor notation is used. Notice that the operator
$\epsilon_{ij} \epsilon_{kl} \left(\ell_{L_\alpha}^{i} \ell_{L_\beta}^{j} \right) H^k H^l =
{\cal O}^{\left(5\right)}_{\alpha\beta} - {\cal O}^{\left(5\right)}_{\beta\alpha}$
and hence it is not independent.}:
\begin{equation}
{\cal O}^{\left(5\right)}_{\alpha\beta}
= \epsilon_{ik} \epsilon_{jl}
\left(\ell_{L_\alpha}^{i} \ell_{L_\beta}^{j} \right) H^k H^l, \label{eq:before_EW_op}
\end{equation}
where $\ell_L$ and $H$ are respectively the lepton and the Higgs doublets,
$\alpha,\beta, ...$ label the family indices, whereas
$i,j,...$ the $SU(2)_L$ indices. $\epsilon_{ij}$ is the totally
antisymmetric tensor with $\epsilon_{12}=1$.
The operator ${\cal O}^{\left(5\right)}_{\alpha\beta}$
has $B=0$ and $L=2$, which fixes $B_X=0$ and $L_X=1$.
Next we derive the relation between the particle asymmetries and
the (effective) $U(1)$ symmetries of the system~\cite{Fong:2015vna}.
Notice that while the operator~\eqref{eq:operator} with $\cal{O}_{\rm SM}$ given by
Eq.~\eqref{eq:before_EW_op} conserves the
total lepton number, it generally breaks the individual lepton flavor numbers.
In the case considered here, the relevant symmetries are $B-L$, the hypercharge $Y$ and the $X$ number.
While the former two symmetries remain exact,
the $X$ number is approximate in the sense that as $\Lambda \to \infty$
(or when the reactions induced by the operator~\eqref{eq:operator} decouple),
the $X$ number becomes a conserved quantity. We assume that the total $B-L$ is fixed by Eq.~\eqref{eq:YB-Ltot} and that the hypercharge is zero. Furthermore, the relevant SM degrees of freedom are the ones
of the unbroken EW phase, where all the fermions are massless. Besides $\ell_L$
and $H$, one also needs to take into account the SM fields which carry nonzero chemical potentials:
the quark doublets $q_L$, the up- and down-type quark singlets ($u_R$ and $d_R$),
and the charged lepton singlets $e_R$, where we have suppressed the family and the color indices.
Let us consider the DM particle $X$ which carries $X$ number equal to one,
baryon minus lepton number $\Delta_X \equiv (B-L)_X$ and zero hypercharge,
while all the SM particles carry the standard
charge assignments. Assuming that the total hypercharge remains zero $n_{\Delta Y} = 0$,
the number asymmetries of particles per degree of freedom
($SU(2)_L$ and $SU(3)_c$ multiplicities) normalized over the statistical function (Eq.~\eqref{eq:zeta})
can be expressed in terms of the $B-L$ and the $X$ charge asymmetries
($n_{\Delta (B-L)}$ and $n_{\Delta X}$) as follows~\cite{Fong:2015vna}:
\begin{equation}
\frac{n_{\Delta i}}{g_i\,\zeta_i} =
c_i \left(n_{\Delta (B-L)} - \Delta_X\, n_{\Delta X}\right),
\label{eq:before_EW_relation}
\end{equation}
with $c_i$ given in Table~\ref{tab:before_EW}, where the family indices for quarks and leptons have been suppressed.
\begin{table}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$\boldsymbol{i}$ & $\boldsymbol{q_{L}}$ & $\boldsymbol{u_{R}}$ & $\boldsymbol{d_{R}}$ & $\boldsymbol{\ell_{L}}$ & $\boldsymbol{e_{R}}$ & $\boldsymbol{H}$\\
\hline\hline
$\boldsymbol{c_{i}}$ & $\frac{7}{237}$ & $-\frac{5}{237}$ & $\frac{19}{237}$ & $-\frac{7}{79}$ & $-\frac{3}{79}$ & $-\frac{4}{79}$\\
\hline
\end{tabular}
\caption{The coefficients relating the number asymmetries of the corresponding
fields to the charge asymmetries, as in Eq.~\eqref{eq:before_EW_relation}.}\label{tab:before_EW}
\end{table}
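As a sanity check of Table~\ref{tab:before_EW}, the coefficients $c_i$ can be verified to give a vanishing total hypercharge and a total $B-L$ equal to the charge combination on the right-hand side of Eq.~\eqref{eq:before_EW_relation}. The sketch below uses exact rational arithmetic; the conventions $\zeta_i=1$ for relativistic fermions and $\zeta_i=2$ for relativistic bosons are our assumptions. It also recovers the familiar ratios $B = \frac{28}{79}\,(B-L)$ and $L/B = -\frac{51}{28}$ used later in Sec.~\ref{s:numerical}.

```python
from fractions import Fraction as F

# Exact-arithmetic check of Table "before EW".
# Per field: (c_i, internal dof g_i, statistical factor zeta_i,
#             hypercharge Y, baryon number B, lepton number L).
# Conventions assumed here: zeta_i = 1 for relativistic fermions,
# zeta_i = 2 for relativistic bosons; three fermion families.
fields = {
    "qL": (F(7, 237),  6, 1, F(1, 6),  F(1, 3), 0),
    "uR": (F(-5, 237), 3, 1, F(2, 3),  F(1, 3), 0),
    "dR": (F(19, 237), 3, 1, F(-1, 3), F(1, 3), 0),
    "lL": (F(-7, 79),  2, 1, F(-1, 2), 0,       1),
    "eR": (F(-3, 79),  1, 1, F(-1),    0,       1),
    "H":  (F(-4, 79),  2, 2, F(1, 2),  0,       0),
}

def total(idx):
    """Sum of charge * n_i over all fields, with n_i = g_i zeta_i c_i
    (families counted explicitly; the Higgs carries no family index)."""
    s = F(0)
    for name, (c, g, zeta, Y, B, L) in fields.items():
        fam = 1 if name == "H" else 3
        s += fam * g * zeta * c * (Y, B, L)[idx]
    return s

Y_tot, B_tot, L_tot = total(0), total(1), total(2)
print(Y_tot, B_tot, L_tot)   # expect 0, 28/79, -51/79
```

The outputs confirm hypercharge neutrality, $B-L=1$ in units of the bracket of Eq.~\eqref{eq:before_EW_relation}, and $L/B=-51/28$.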
For the operator~\eqref{eq:before_EW_op},
$\Delta_X = -1$ and the BE for $Y_{\Delta X}$
from Eq.~\eqref{eq:YDeltaX} reduces to
\begin{equation}\label{BEbefore}
\dot{Y}_{\Delta X}
= - 2\gamma_{\ell\ell H H}
\left[2\frac{ Y_{\Delta X}}{g_X \zeta_X Y_0}
+\frac{22}{79}\left(\frac{Y_{\Delta (B-L)}}{Y_0}
+\frac{Y_{\Delta X}}{Y_0}\right)\right],
\end{equation}
where $\gamma_{\ell\ell H H}\equiv\gamma_{XX\to\ell\ell HH}+\gamma_{X\bar\ell\to \bar X\ell HH}+\gamma_{XH^\dagger\to\bar X\ell\ell H}$ collectively denotes the thermally averaged reaction
densities (defined in Eq.~\eqref{eq:reaction_density_z}) resulting from the operator~\eqref{eq:operator} with $\cal{O}_{\rm SM}$ given by
Eq.~\eqref{eq:before_EW_op}. Notice that $Y_{\Delta (B-L)}$ here is
fixed by observation to be $Y_{\Delta (B-L)}^0$, as in Eq.~\eqref{eq:YB-Ltot}.
The relevant reduced cross sections are collected in Appendix~\ref{app:reduced_cross_section}.
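To illustrate how Eq.~\eqref{BEbefore} drives $Y_{\Delta X}$ toward chemical equilibrium, one can integrate a toy version of the equation. The reaction density $\gamma_{\ell\ell HH}(z)$, the normalization $Y_0$ and $g_X\zeta_X$ below are placeholders chosen only to make the interaction fast at early times; the physical inputs follow from the reduced cross sections in Appendix~\ref{app:reduced_cross_section}.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy version of the BE for Y_DeltaX (before the EW sphaleron freeze-out).
# All numerical inputs are ILLUSTRATIVE placeholders, not the values derived
# from the reduced cross sections of the appendix.
gX_zetaX = 2.0    # g_X * zeta_X (assumed)
Y0 = 1.0e-10      # normalization (assumed)
Y_BL = 9.0e-11    # fixed total B-L asymmetry

def gamma(z):
    """Placeholder reaction density: fast at small z, then shut off."""
    return 1.0e-9 * np.exp(-z)

def rhs(z, Y):
    return [-2.0 * gamma(z) * (2.0 * Y[0] / (gX_zetaX * Y0)
                               + (22.0 / 79.0) * (Y_BL + Y[0]) / Y0)]

sol = solve_ivp(rhs, (0.1, 20.0), [0.0], rtol=1e-10, atol=1e-18)
Y_final = sol.y[0, -1]

# Chemical-equilibrium fixed point: bracket of the BE set to zero.
Y_eq = -(22.0 / 79.0) * Y_BL / (2.0 / gX_zetaX + 22.0 / 79.0)
print(Y_final, Y_eq)
```

Since the toy interaction is fast before shutting off, $Y_{\Delta X}$ relaxes to the fixed point and then freezes there. The fixed point is negative for $\Delta_X=-1$ and positive $Y_{\Delta(B-L)}$, i.e. the asymmetry is stored in $\bar X$.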
\subsection{After the Electroweak Sphaleron Processes Freeze Out}
\label{sec:after_EW}
Let us consider the scenario where the transfer
is operative after the EW sphaleron processes freeze out.
In this case, $B$ and $L$ are the effective symmetries of the system.
Although $L$ is not of interest here, an existing lepton asymmetry
can affect the results, as will be shown later.
We assume that the EW symmetry is already broken, so the fermions and the weak gauge bosons
are massive. In this case, there is another relevant scale, namely the Higgs vev $v$.
In our EFT approach with an effective scale $\Lambda$,
we should impose $\Lambda \gg v$ to make sure the whole framework remains consistent.
As before, our minimality criterion is to consider SM gauge invariant operators
consisting of only SM fields and carrying nonzero $B$.
The lowest dimensional realizations of the operator ${\cal O}_{{\rm SM}}^{\left(n\right)}$ are those for $n=6$~\cite{Weinberg:1979sa,Wilczek:1979hc,Abbott:1980zj,Alonso:2014zka}:
\begin{eqnarray}
{\cal O}^{\left(6\right){\rm I}}_{\alpha\beta\delta\gamma}
& = & \epsilon_{abc}\epsilon_{ij} \left(q_{L_\alpha}^{ia} \ell_{L_\beta}^{j} \right)
\left(d_{R_\delta}^{b} u_{R_\gamma}^{c} \right), \label{eq:after_EW_op1}\\
{\cal O}^{\left(6\right){\rm II}}_{\alpha\beta\delta\gamma}
& = & \epsilon_{abc}\epsilon_{ij} \left(q_{L_\alpha}^{ia} q_{L_\beta}^{jb}\right)
\left(u_{R_\delta}^{c} e_{R_\gamma} \right), \label{eq:after_EW_op2}\\
{\cal O}^{\left(6\right){\rm III}}_{\alpha\beta\delta\gamma}
& = & \epsilon_{abc}\epsilon_{il}\epsilon_{jk} \left(q_{L_\alpha}^{ai} q_{L_\beta}^{jb} \right)
\left( q_{L_\delta}^{kc} \ell_{L_\gamma}^{l} \right), \label{eq:after_EW_op3}\\
{\cal O}^{\left(6\right){\rm IV}}_{\alpha\beta\delta\gamma}
& = & \epsilon_{abc}\left( d_{R_\alpha}^{a} u_{R_\beta}^{b} \right)
\left( u_{R_\delta}^{c} e_{R_\gamma} \right), \label{eq:after_EW_op4}
\end{eqnarray}
where $a$, $b$ and $c$ denote the color indices and $\epsilon_{abc}$ is the totally antisymmetric tensor.\,\footnote{If there are right-handed neutrinos $\nu_R$, one could also have an operator of the type $u_R d_R d_R \nu_R$. If $\nu_R$ has no Majorana mass term (or if $M_{\nu_R} \ll m_X$), the sharing scenario is completely analogous to the one described by the operators \eqref{eq:after_EW_op1}--\eqref{eq:after_EW_op4} (up to gauge multiplicities). For $M_{\nu_R} \gg m_X$, the sharing operator is not relevant due to Boltzmann suppression at $T \sim m_X$. If $M_{\nu_R} \sim m_X$, a separate analysis would be required to account for the dependence on the Majorana mass term and also for the washout of the lepton asymmetry due to the lepton-number-violating Majorana mass term. In any case, the qualitative effect is captured by our later analysis, since we consider different initial conditions for the lepton asymmetry: from $Y_{\Delta L} = -51/28\, Y_{\Delta B}$ to $Y_{\Delta L} = 0$, to account for various degrees of erasure of the lepton asymmetry (see Sec.~\ref{s:numerical}).}
All the operators above have both $B$ and $L$ equal to one, which fixes $B_X=L_X=1/2$.
One might also consider the dimension-7 operator
${\cal O}_{\alpha\beta\delta\gamma}^{\left(7\right)}
= \epsilon_{abc}\epsilon_{ij} \left(u_{R_\alpha}^{a} d_{R_\beta}^{b} \right)
\left( \bar\ell_{L_\delta}^{i} d_{R_\gamma}^{c} \right) H^{j\dagger}$
which upon the EW symmetry breaking gives rise to
$\epsilon_{abc}\,v\,\left(u_{R_\alpha}^{a} d_{R_\beta}^{b} \right) \left( \bar\nu_{L_\delta} d_{R_\gamma}^{c} \right)$.
For consistency, we impose $\Lambda \gg v$ such that
this operator (and other higher dimensional operators proportional to powers of $v/\Lambda$)
is subdominant; we therefore do not consider it any further.
As before, we derive the relation between the particle asymmetries and
the (effective) $U(1)$ symmetries of the system.
In general, the operator~\eqref{eq:operator} conserves total lepton number
but violates individual lepton flavor numbers, and we assume that this is the case.
For simplicity, we also assume that the EW symmetry is already broken and hence,
one should consider the conservation of the electric charge and
also take into account the effect of the masses of the SM fermions.
Due to the mass terms, the chemical potentials of the left- and right-handed fields become equal.
Therefore, in the following, we will not differentiate the fermion chiralities:
$u = u_L = u_R$ and similarly for all the other SM fermions.
The SM fields which carry nonzero chemical potentials are the up-type
$u$ and down-type $d$ quarks, the neutrinos $\nu$, the charged leptons $e$
and the charged weak boson $W$. As before, we have suppressed the family
and the color indices. Hence, $u$ refers to the $u$, $c$ and $t$ quarks; analogously,
$d$ refers to $d$, $s$ and $b$; and $e$ refers to $e$, $\mu$
and $\tau$.
Taking the total electric charge to be zero ($n_{\Delta Q} = 0$) and
assuming the decoupling of the top quark ($\zeta_t = 0$), one can express
the particle number asymmetries in terms of the charge asymmetries $n_{\Delta B}$,
$n_{\Delta L}$ and $n_{\Delta X}$ as follows:
\begin{equation}
\frac{n_{\Delta i}}{g_i\, \zeta_i} =
\frac{1}{c_0} \left(c_B^i\, n_{\Delta B} + c_L^i\, n_{\Delta L} + c_X^i\, n_{\Delta X}\right)\,,
\label{eq:after_EW_relation}
\end{equation}
where $c_0 \equiv 6 \left[ 5 + (2 + \zeta_b + \zeta_c + \zeta_s) \frac{3\zeta_{W}}{2}
+ (4+3\zeta_c)\zeta_b + (4+3\zeta_s)\zeta_c + 4\zeta_s \right]$,
$c_X^i = - B_X\, c_B^i - L_X\, c_L^i$, and $c_B^i$ and $c_L^i$ are given in
Table~\ref{tab:after_EW}.
\begin{table}
\begin{tabular}{|c||c|c|}
\hline
$\boldsymbol{i}$ & $\boldsymbol{c_{B}^{i}}$ & $\boldsymbol{c_{L}^{i}}$\tabularnewline
\hline
\hline
$\boldsymbol{u}$ & $3\left(2+\zeta_{b}+\zeta_{s}+\frac{3\zeta_{W}}{2}\right)$ & $2\left(1+\zeta_{b}+\zeta_{s}\right)$\tabularnewline
\hline
$\boldsymbol{d}$ & $3\left(3+2\zeta_{c}+\frac{3\zeta_{W}}{2}\right)$ & $-2\left(1+\zeta_{c}\right)$\tabularnewline
\hline
$\boldsymbol{\nu}$ & $-2\left(1-\zeta_{b}+2\zeta_{c}-\zeta_{s}\right)$ & $2\left[3+\left(2+\zeta_{b}+\zeta_{c}+\zeta_{s}\right)\frac{\zeta_{W}}{2}+\left(2+\zeta_{c}\right)\zeta_{b}+\left(2+\zeta_{s}\right)\zeta_{c}+2\zeta_{s}\right]$\tabularnewline
\hline
$\boldsymbol{e}$ & $1-\zeta_{b}+2\zeta_{c}-\zeta_{s}$ & $2\left[\left(2+\zeta_{b}+\zeta_{c}+\zeta_{s}\right)\frac{\zeta_{W}}{2}+\left(1+\zeta_{c}\right)\left(1+\zeta_{b}+\zeta_{s}\right)\right]$\tabularnewline
\hline
$\boldsymbol{W}$ & $3\left(1-\zeta_{b}+2\zeta_{c}-\zeta_{s}\right)$ & $-2\left(2+\zeta_{b}+\zeta_{c}+\zeta_{s}\right)$\tabularnewline
\hline
\end{tabular}
\caption{The coefficients relating the number asymmetries of the corresponding
fields to the charge asymmetries, as in Eq.~\eqref{eq:after_EW_relation}.}\label{tab:after_EW}
\end{table}
For the operators~\eqref{eq:after_EW_op1}--\eqref{eq:after_EW_op4} we have $B_X = L_X = 1/2$;
the BE for $Y_{\Delta X}$ is
\begin{equation}\label{BEafter}
\dot{Y}_{\Delta X}
= - 2\gamma_{qqq\ell}
\left[2\frac{Y_{\Delta X}}{g_X\, \zeta_X\, Y_0}
-\frac{1}{c_0\,Y_0}\left(
c_B\, Y_{\Delta B} + c_L\, Y_{\Delta L}
-\frac{1}{2}(c_B+c_L) Y_{\Delta X}
\right)\right]\,,
\end{equation}
where
\begin{eqnarray}
c_B & = & 22
+5\zeta_s + \frac{27\zeta_W}{2}
+ 5\zeta_b + 8\zeta_c\,, \label{cb}\\
c_L & = & 2 \left[ 2 + (2+\zeta_c+\zeta_s)\frac{\zeta_W}{2}
+ \left(3+\zeta_c+\frac{\zeta_W}{2}\right)\zeta_b + (3+\zeta_c)\zeta_s \right]\,.\label{cl}
\end{eqnarray}
$\gamma_{qqq\ell}\equiv\gamma_{XX\to qqq\ell}+\gamma_{X\bar\ell\to\bar Xqqq}+\gamma_{X\bar q\to\bar Xqq\ell}$ collectively denotes the thermally averaged reaction densities
resulting from the operator~\eqref{eq:operator} with $\cal{O}_{\rm SM}$ given by
any of the operators~\eqref{eq:after_EW_op1}--\eqref{eq:after_EW_op4}.
Although in the numerical calculations of the next section we consider
the statistical functions $\zeta_i$ in Eq.~\eqref{eq:zeta} for $W$, $b$, $c$ and $s$,
it turns out that the only particle which might decouple during the evolution is the $W$,
and its effect is negligible. Hence, it is a good approximation
to consider the case where these particles are fully relativistic, i.e.
$\zeta_W/2 = \zeta_b = \zeta_c = \zeta_s = 1$ with $\{c_0,\,c_B,\,c_L\}=\{228,\,67,\,30\}$.
Notice that $Y_{\Delta B}$ here is fixed by observations
to be $Y_{\Delta B}^0$ as in Eq.~\eqref{eq:YBtot} while the value for
$Y_{\Delta L}^0$ is model-dependent.
As before, the relevant reduced cross sections are collected in Appendix~\ref{app:reduced_cross_section}.
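As a numerical cross-check, the fully relativistic limit quoted above and the corresponding chemical-equilibrium value of $Y_{\Delta X}$ (obtained by setting the bracket in Eq.~\eqref{BEafter} to zero) can be sketched as follows; $g_X\zeta_X = 2$ and $Y_{\Delta L}=0$ are assumed inputs.

```python
from fractions import Fraction as F

# Fully relativistic limit: zeta_W/2 = zeta_b = zeta_c = zeta_s = 1.
zW, zb, zc, zs = F(2), F(1), F(1), F(1)

c0 = 6 * (5 + (2 + zb + zc + zs) * 3 * zW / 2
          + (4 + 3 * zc) * zb + (4 + 3 * zs) * zc + 4 * zs)
cB = 22 + 5 * zs + 27 * zW / 2 + 5 * zb + 8 * zc
cL = 2 * (2 + (2 + zc + zs) * zW / 2
          + (3 + zc + zW / 2) * zb + (3 + zc) * zs)
print(c0, cB, cL)            # expect 228 67 30

# Chemical-equilibrium value of Y_DeltaX: set the bracket of the BE to zero,
# 2 Y/(gX zX) = (cB Y_B + cL Y_L - (cB+cL)/2 Y)/c0, and solve for Y.
gX_zetaX = F(2)              # assumed value
Y_B = F(9, 10**11)           # Y_DeltaB fixed by observation
Y_L = F(0)                   # one of the two limiting initial conditions
Y_eq = (cB * Y_B + cL * Y_L) / (2 * c0 / gX_zetaX + (cB + cL) / 2)
print(float(Y_eq))
```

For these inputs the equilibrium asymmetry comes out positive and a factor of a few below $Y_{\Delta B}$, consistent with the sharing picture in which only part of the total asymmetry resides in the dark sector.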
\subsection{Numerical Results }
\label{s:numerical}
In principle, for each DM mass $m_X$ one can solve the BE and find the appropriate value for $\Lambda$ in order to distribute the asymmetries of the two sectors such that the observed DM relic abundance and the baryon asymmetry are reproduced.
However, the results depend on the following assumptions:
\begin{enumerate}
\item[$(i)$] {\it The total asymmetry.}
We assume that the total asymmetry is fixed by
Eq.~\eqref{eq:YB-Ltot} or Eq.~\eqref{eq:YBtot}, for
scenarios where the transfer of the asymmetry happens before or after the EW sphaleron processes freeze out, as described in
the preceding section. In other words, the genesis of the asymmetry
is already completed prior to the transfer. For the case after the
freeze-out of the EW sphaleron processes,
since the total lepton number is conserved, there is an extra dependence on the
initial total lepton asymmetry. In this case, we vary
$Y_{\Delta L}$ from $-\frac{51}{28} Y_{\Delta B}$ to 0.
The former value is set by the EW sphaleron processes freezing out
before the EW phase transition, assuming the SM relativistic degrees of
freedom~\cite{Harvey:1990qw}, while the latter considers the possibility of a mechanism that erases the lepton asymmetry
(after the freeze out of the EW sphaleron processes). This variation is represented by the bands of solutions in Fig.~\ref{mainresults}.
\item[$(ii)$] {\it The initial distribution of the asymmetry.}
Even if the total asymmetry is fixed by Eq.~\eqref{eq:YB-Ltot} or Eq.~\eqref{eq:YBtot}, one
has to consider whether the initial asymmetry resides
in the DM or in the SM sector.
Let us remember that this discussion is only relevant when the sharing operator does not reach chemical equilibrium.
If chemical equilibrium is attained, the evolution of the system becomes independent of the initial conditions.
In this study, we assume that all the initial
asymmetry resides either in the DM or in the SM sector. Of course, it is also possible that part
of the initial asymmetry resides in the dark sector while the rest in the visible sector.
However, without a specific model for the asymmetry generation, this assumption appears too
contrived and also results in a loss of predictivity.
\end{enumerate}
In order to explore the solutions of the BE corresponding to the cases where the asymmetry is transferred before and after the freeze out of the EW sphalerons (Sections~\ref{sec:before_EW} and~\ref{sec:after_EW}, respectively),
we consider
the two cases where $X$ is a complex scalar or a Dirac fermion.
For $X$ a Dirac fermion, for each $\cal{O}_{\rm SM}$ there are two possible couplings,
one involving $X_L X_L$ and the other involving $X_R X_R$. For simplicity,
we assume the two couplings to be equal. For definiteness,
we further assume that DM only couples to the {\it first family} SM fermions
in Eqs.~\eqref{eq:before_EW_op} and~\eqref{eq:after_EW_op1}, and hence all the SM fermions
involved can be taken to be massless. The corresponding results for the operators~\eqref{eq:after_EW_op2}--\eqref{eq:after_EW_op4}
can be obtained by the rescaling due to different gauge multiplicities listed in
Table~\ref{tab:multiplicity_factors} (Appendix~\ref{app:reduced_cross_section}).
With the assumption of couplings only to the first family of SM fermions,
the limits from collider searches presented in the next section
are the most stringent. Furthermore, there are no flavor violating
processes. If the assumption of couplings only to the first family is relaxed,
these processes have to be taken into account. However, this introduces model dependence since
for a complete analysis, one would need to consider a UV completion for the operators
(see for instance Ref.~\cite{Kim:2013ivd}). For the purpose of the present work,
this possibility is not considered.
\begin{figure}[t]
\centering
\includegraphics[height=7.5cm]{scalar.pdf}
\includegraphics[height=7.5cm]{fermion.pdf}
\caption{Parameter space where the measured baryon asymmetry of the universe and the DM relic abundance can be reproduced simultaneously, for
complex scalar (upper panel) and fermionic (lower panel) DM, and for the scenarios where the transfer of the asymmetry is efficient before and after the freeze out of the EW sphalerons (see text).
In the upper left hatched regions $\Lambda$ is smaller than the Higgs vev.
}
\label{mainresults}
\end{figure}
In Fig.~\ref{mainresults}, we show the regions, in the plane $[m_X/\Lambda\,,m_X]$, where the measured baryon asymmetry of the universe and the DM relic abundance can be reproduced simultaneously, for complex scalar (upper panel) and fermionic (lower panel) DM.
In the upper left hatched parts of the figure, the effective scale $\Lambda$ is smaller than the Higgs vev. This region is disregarded,
as discussed in Section~\ref{sec:after_EW}.
Moreover, during the freeze out the relevant energy scale is $E\sim 2\,m_X$
and the perturbative constraint yields $m_X/\Lambda\ll 2\pi$,
taking the coupling to be $4\pi$. In order to be conservative,
in Fig.~\ref{mainresults} we cut off at $m_X/\Lambda = 1$.
For the numerical analysis we fix $r=\Omega_X/\Omega_B=5.4$ and $Y_{\Delta B_\text{SM}}=9\cdot 10^{-11}$.
The two scenarios discussed previously in Section~\ref{sec3}, namely when the transfer of the
asymmetry is efficient before and after the freeze out of the EW sphalerons, are depicted in the figure.
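For orientation, the target DM asymmetry implied by these inputs follows from $r=\Omega_X/\Omega_B \simeq m_X\,Y_{\Delta X}/(m_p\,Y_{\Delta B})$, i.e. $Y_{\Delta X} \simeq r\,(m_p/m_X)\,Y_{\Delta B}$, up to order-one charge factors. A quick scan, with the nucleon mass and the sample masses as illustrative inputs:

```python
# Required DM asymmetry from r = Omega_X / Omega_B ~ m_X Y_X / (m_p Y_B),
# i.e. Y_X = r * (m_p / m_X) * Y_B (order-one charge factors neglected).
r = 5.4
Y_B = 9e-11
m_p = 0.938  # GeV, nucleon mass

def required_YX(m_X):
    return r * (m_p / m_X) * Y_B

for m_X in (5.0, 50.0, 500.0, 2000.0):  # GeV, illustrative sample masses
    print(f"m_X = {m_X:7.1f} GeV  ->  Y_DeltaX = {required_YX(m_X):.2e}")
```

The required asymmetry falls as $1/m_X$, which is why heavier DM demands a more Boltzmann-suppressed (or less efficiently transferred) dark-sector asymmetry.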
For the scenario where the asymmetry sharing takes place before the EW sphaleron processes freeze out, we only consider the solutions when the system achieves chemical equilibrium during its evolution. As $m_X$ increases, to obtain the right relic abundance, the number asymmetry of $X$ needs to decrease. This can be achieved by increasing the ratio $m_X/\Lambda$ such that the chemical decoupling happens at a later time when the number density is more Boltzmann suppressed. Notice that the increase in $m_X/\Lambda$ is quite mild due to strong Boltzmann suppression from the increase in $m_X$.
Note that for this case to work the DM has to be heavier than $\sim 500$~GeV, otherwise the freeze out occurs after the EW sphaleron freeze out.
For the scenario where the transfer happens after the EW sphaleron processes freeze out,
the upper and lower bounds of the bands represent the two different initial total lepton asymmetries
discussed previously: $Y_{\Delta L}=0$ and $Y_{\Delta L}=-\frac{51}{28} Y_{\Delta B}$, respectively.
For $Y_{\Delta L}=-\frac{51}{28} Y_{\Delta B}$, the system never reaches
chemical equilibrium,
which implies a dependence on the initial conditions: the initial asymmetry can reside either in the DM or in the SM sector.
Let us first consider the scenario where all the asymmetry is stored in the dark sector.
The fact that the system never reaches chemical equilibrium implies that there is no Boltzmann suppression.
Hence, as $m_X$ increases, the ratio $m_X/\Lambda$ has to be strongly enhanced in order to deplete the number asymmetry of the DM (or equivalently, increase the transfer of the number asymmetry to the SM sector) to obtain the right relic abundance.
In this case, the increase in $m_X/\Lambda$ has to be quite steep as $m_X$ increases.
Next we consider the case where the initial asymmetry is stored in the SM sector.
We find that this scenario is not viable for the following reason:
the transfer from the visible to the dark sector increases the DM asymmetry,
but its value cannot exceed the chemical equilibrium value, which lies below the observed one.
As we raise $Y_{\Delta L} \to 0$, in the mass range
$10$~GeV $\lesssim m_X \lesssim 500$~GeV the system gets into
chemical equilibrium. In this regime,
the results depend quite significantly on the assumed initial lepton asymmetry. For $m_X \lesssim 10$~GeV
or $m_X \gtrsim 500$~GeV, the system is not able to reach chemical equilibrium and
interestingly, the results become rather independent of the existing lepton asymmetry.
This can be understood as follows: in this regime,
the first term on the right-hand side of the BE~\eqref{BEafter},
which is independent of $Y_{\Delta L}$, becomes the dominant one,
while the other terms, including the one that depends on $Y_{\Delta L}$, are subdominant.
In this case, where the transfer happens after the EW sphaleron processes freeze out, the DM spans a large range of masses, from a few GeV to $\sim 2$~TeV.
\section{Phenomenological Constraints}\label{sec4}
Now we discuss several constraints and experimental
opportunities due to the following two realizations of the operator~\eqref{eq:operator}:
\begin{eqnarray}
\frac{1}{\Lambda^{5-p}}XX \bar{\cal O}^{(5)}
&=& \frac{1}{\Lambda^{5-p}}XX \epsilon_{ik} \epsilon_{jl}
\left(\bar\ell_{L}^{i} \bar\ell_{L}^{j} \right) H^{k\dagger} H^{l\dagger}\,, \label{eq:op5}\\
\frac{1}{\Lambda^{6-p}}XX \bar{\cal O}^{\rm (6)I}
&=& \frac{1}{\Lambda^{6-p}}XX \epsilon_{abc}\epsilon_{ij} \left(\bar q_{L}^{ia} \bar\ell_{L}^{j} \right)
\left(\bar d_{R}^{b} \bar u_{R}^{c} \right)\, , \label{eq:op6}
\end{eqnarray}
where we have considered the coupling only to the first family SM fermion (the indices are dropped)
and selected only ${\cal O}^{\rm (6)I}$ among the four operators from
Eqs.~\eqref{eq:after_EW_op1}--\eqref{eq:after_EW_op4}, as it has been done
in the previous section. As we mentioned before, the results for the operators~\eqref{eq:after_EW_op2}--\eqref{eq:after_EW_op4}
can be obtained through the rescaling due to the different gauge multiplicities as specified in
Table~\ref{tab:multiplicity_factors} (Appendix~\ref{app:reduced_cross_section}); though the
phenomenological signatures can be different (for instance, in some cases they involve only charged leptons while in others only neutrinos).
Furthermore, let us note that the requirement of fast $X\bar{X}$ annihilations generally gives rise to both DM direct detection and collider signatures which, however, are model-dependent.
On the other hand, this kind of process does not contribute to DM indirect detection because only either $X$ or $\bar{X}$ is present today (i.e. the DM is maximally asymmetric).
Since the present shared asymmetry scenario cannot constrain this type of operators, we will not consider them further, beyond assuming that they are efficient enough to remove the DM symmetric component.
\subsection{Collider}\label{sec:collider}
\begin{figure}[t]
\centering
\includegraphics[height=5.3cm]{scalar-8TeV.pdf}
\includegraphics[height=5.3cm]{scalar-13TeV.pdf}
\includegraphics[height=5.3cm]{fermion-8TeV.pdf}
\includegraphics[height=5.3cm]{fermion-13TeV.pdf}
\caption{Inclusive cross sections for monojet and missing transverse energy with (light pink) and without (blue) monolepton for scalar (first row) and fermionic (second row) DM at the LHC, for a center of mass energy of 8~TeV (first column) and 13~TeV (second column).
The blue solid line corresponds to the ATLAS exclusion limit on monojet searches; the dashed lines to the conservative limits for the break down of the EFT description (see text).
}
\label{LHC}
\end{figure}
The operator~\eqref{eq:op5} can lead to measurable signatures at colliders.
At the LHC, one can have the vector boson scattering with production of two same-sign leptons together with two jets and missing transverse energy: $pp \to W^\pm W^\pm jj$ and from our operator we will have $W^+ W^+\to e^+ e^+ XX$ and the conjugate process.
There are dedicated searches at ATLAS~\cite{Aad:2014zda,Aad:2016tuk} and CMS~\cite{Khachatryan:2014sta,Khachatryan:2016kod} that can be used to constrain this scenario. The study of this process will be included in an upcoming work~\cite{projectLHC}.
Furthermore, one could also have $e^- e^- \to W^- W^- XX$ and the conjugate process; however, this requires an electron-electron or a positron-positron
collider, which will not be available in the near future.
On the other hand, the operator~\eqref{eq:op6} gives rise to two types of signatures
at the LHC:
\begin{itemize}
\item[$(a)$] monojet with missing energy;
\item[$(b)$] monojet plus monolepton and missing energy.
\end{itemize}
There are several features of this operator that we would like to highlight.
First, for processes involving a charged lepton in the final state, we can
further distinguish between $p p \to j\, e^+ + E_T^{\rm miss}$ and $p p \to j\, e^- + E_T^{\rm miss}$.
In particular, the production cross section of the latter at the LHC is
about two to three orders of magnitude smaller than that of the former, purely due to the
scarcity of antiquarks in the proton. This type of asymmetry is a distinguishing feature
of our scenario. Hence, we will focus only on the dominant process $p p \to j\, e^+ + E_T^{\rm miss}$.
Secondly, due to the steep energy dependence of the cross section of our operator
$\sigma \propto E^{2(5-p)}/\Lambda^{2(6-p)}$, the LHC
will be more sensitive than direct and indirect searches.
For fermionic DM ($p=1$), the production cross section at the LHC is further enhanced. This can be understood as follows: equating the cross sections for fermion and scalar DM during the asymmetry transfer ($E \sim m_X$), we have $\Lambda_{\rm fermion}^{10} \sim E^2 \Lambda_{\rm scalar}^{8} \sim m_X^2 \Lambda_{\rm scalar}^{8}$. Hence taking the ratio of fermion to scalar DM cross section at the LHC, we have $E^8/\Lambda_{\rm fermion}^{10} \times \Lambda_{\rm scalar}^{8}/E^6 = E^2/m_X^2$ enhancement (see Fig.~\ref{LHC}).
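The power counting above can be reproduced numerically. In the sketch below, $p=1$ (fermionic $X$) and $p=2$ (scalar $X$) follow from the operator dimensions in Eqs.~\eqref{eq:op5}--\eqref{eq:op6}; the mass, collider energy and $\Lambda_{\rm scalar}$ are illustrative inputs.

```python
# Fermion-vs-scalar collider enhancement: sigma ~ E^{2(5-p)} / Lambda^{2(6-p)}
# with p = 1 (fermionic X) and p = 2 (scalar X). Matching the transfer-era
# cross sections at E ~ m_X fixes Lambda_fermion^10 = m_X^2 Lambda_scalar^8;
# the collider ratio then reduces to (E/m_X)^2.
m_X = 100.0          # GeV, illustrative
E_LHC = 1000.0       # GeV, typical partonic energy (illustrative)
Lam_scalar = 2000.0  # GeV, illustrative

def sigma(E, Lam, p):
    return E ** (2 * (5 - p)) / Lam ** (2 * (6 - p))

Lam_fermion = (m_X**2 * Lam_scalar**8) ** 0.1
ratio = sigma(E_LHC, Lam_fermion, 1) / sigma(E_LHC, Lam_scalar, 2)
print(ratio, (E_LHC / m_X) ** 2)   # the two numbers agree
```

With these inputs the fermionic cross section is enhanced by $(E_{\rm LHC}/m_X)^2 = 100$ relative to the scalar case, illustrating why the fermionic bands sit higher in Fig.~\ref{LHC}.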
Thirdly, the two types of signatures $(a)$ and $(b)$ depend on the same coupling,
and hence one can utilize the more sensitive channel to constrain our scenario.
In Fig.~\ref{LHC}, we show the total production cross sections for $(a)$ (blue bands) and $(b)$ (light pink bands)
at the LHC with a center of mass energy $\sqrt{s}=$ 8~TeV (left panels) and 13~TeV (right panels), using the solutions presented
in Fig.~\ref{mainresults}. We also plot the line $\Lambda = \sqrt{s}/(4\pi)$, below which
the effective operator description is estimated to break down,
for the LHC at $\sqrt{s}=$ 8~TeV (dashed-dotted lines) and 13~TeV (dashed lines). The effective description breaks down at lower
energies if the coupling is smaller. We have conservatively taken the typical transferred energy to be its maximum value, i.e. $\sqrt{s}$. Here we do not consider the unitarity bound, which we expect to be of a similar order~\cite{Marciano:1989ns}.
Based on monojet searches by ATLAS~\cite{Aad:2015zva} and CMS~\cite{Khachatryan:2014rra},
we provide an estimate of the upper bound on the cross section. Using {\tt MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca},
we estimate the efficiency times acceptance
$\epsilon \times A$ and the most sensitive regime by imposing a cut on the final-state quark
transverse momentum, which in this case is equivalent to $E_T^{\rm miss}$.
From the ATLAS and the CMS analyses,
we determine the most sensitive regimes to be $E_T^{\rm miss} > 600$~GeV
and $E_T^{\rm miss} > 550$~GeV, which give upper bounds on
$\sigma \times \epsilon \times A$ of 3.8~fb
and 7~fb at 95\% CL, respectively. Our estimation gives $\epsilon \times A \sim 0.5$,
which implies the upper bounds $\sigma \sim $ 3.8~fb/0.5 = 7.6~fb and 7~fb/0.5 = 14~fb.
In Fig.~\ref{LHC}, the solid blue line refers to the ATLAS exclusion limit on monojet searches, which is the more stringent one. This bound will weaken once the full analysis is performed, since the efficiency will
be lower~\cite{projectLHC}.
The lower left panel of Fig.~\ref{LHC} shows that
monojet searches at the LHC with 8~TeV (horizontal solid blue line) can already constrain part of the parameter space for fermionic DM, corresponding to masses between $\sim 12$~GeV and $\sim 70$~GeV.
However, that region is very close to the zone where we conservatively estimate our EFT approach to break down (indicated by the dashed-dotted lines), and a UV description would be required. In this regime, the EFT is not reliable because the LHC could in principle resolve our effective operators and reveal new heavy degrees of freedom. Nevertheless, one could still use the EFT description by applying the truncation method, i.e. by removing high-momentum-transfer events which violate the EFT, to derive a weaker bound \cite{Berlin:2014cfa, Busoni:2013lha, Busoni:2014sya, Busoni:2014haa,Racco:2015dxa}. Alternatively, one could analyze UV complete models to obtain model-dependent bounds (e.g. on new heavy states).
A more comprehensive analysis including constraints from monolepton searches
(e.g. Ref.~\cite{Khachatryan:2014tva}) with a UV complete model will be presented in Ref.~\cite{projectLHC}. Finally, in Fig.~\ref{boring}, we translate the EFT bounds and ATLAS limit of 7.6~fb on monojet searches at 8~TeV to the $[m_X/\Lambda,\,m_X]$ plane.
\begin{figure}[t]
\centering
\includegraphics[height=5.3cm]{scalar-boring.pdf}\hspace{-.2cm}
\includegraphics[height=5.3cm]{fermion-boring.pdf}
\caption{Same as Fig.~\ref{mainresults} but adding the thermal averaged cross section $\langle\sigma v\rangle=3\cdot 10^{-26}$~cm$^3$/s for indirect detection (solid green), the ATLAS exclusion limit from monojet searches (solid red) and the conservative limits for the break down of the EFT description for the LHC at 8 and 13~TeV (dashed-dotted and dashed lines, respectively). The stars indicate the most optimistic points for indirect detection.
}
\label{boring}
\end{figure}
\subsection{Dark Matter Indirect Detection}
For DM indirect detection, both operators~\eqref{eq:op5} and~\eqref{eq:op6}
can give rise to observable astrophysical signatures. From the operator~\eqref{eq:op5},
the possible annihilation channels are: $XX \to \nu \nu$,
$XX \to \nu \nu h$, $XX \to \nu \nu h h$ and $XX \to e^- e^- W^+ W^+$ and the conjugate process, where $\nu$ and $h$ are respectively the SM
neutrino and the Higgs boson.\footnote{In our sharing scenario,
the sign of the $B-L$ asymmetry in the DM sector is the same as the one of the SM sector (which is positive).
Hence we will be left with only $\bar X$ today and their annihilations will necessarily contain
antineutrinos in the final states~\cite{Fukuda:2014xqa}.} The first process dominates over the others, which suffer additional phase-space suppression. Hence, let us consider the thermally averaged cross section
$\left<\sigma v\right>_{XX \to \nu \nu}$. Even for the most optimistic point
in our parameter space $m_X = 400\,(500)$~GeV (indicated by the stars in Fig.~\ref{boring}), we have
$\left<\sigma v\right>_{XX \to \nu \nu}\sim 5\times 10^{-32}$ cm$^3$/s
for $X$ being a complex scalar (Dirac fermion). This value is about nine
orders of magnitude smaller than the current sensitivities of the IceCube~\cite{Aartsen:2013dxa,Aartsen:2015xej} and the ANTARES~\cite{Adrian-Martinez:2015wey}
experiments.\footnote{In fact,
Ref.~\cite{Queiroz:2016zwd} pointed out that for $m_X$ greater than $\sim 200$~GeV, one could
obtain up to an order of magnitude stronger bounds utilizing the gamma-ray flux generated from EW bremsstrahlung.}
Similarly for the operator~\eqref{eq:op6}, the annihilation $XX \to$ 3 quarks + 1 lepton
gives rise to potentially observable fluxes of
gamma rays, neutrinos, positrons, etc. For the most optimistic case of
$m_X\sim 2$~TeV, we determine $\left<\sigma v\right>_{XX \to \text{3 quarks + 1 lepton}}
\sim$~few~$10^{-30}$~cm$^3$/s for both complex scalar and fermionic DM.
This is about four orders of magnitude below the current sensitivities of detectors like VERITAS~\cite{Zitzer:2015eqa}, H.E.S.S.~\cite{Lefranc:2015vza} or
MAGIC and Fermi-LAT~\cite{Ahnen:2016qkx}, which are closing in around the thermal cross section
for standard WIMP DM $\left<\sigma v\right>_{\rm WIMP} \sim$ few $10^{-26}$ cm$^3$/s~\cite{Steigman:2012nb}.
Hence, currently and in the near future, the prospects of probing this scenario via indirect detection are very challenging.
We want to stress that this conclusion does not hold when the effective operator description does not apply
(e.g. in the proposal in Ref.~\cite{Fonseca:2015rwa}), where one can indeed have promising indirect signatures.
\subsection{Dark Matter Direct Detection}
The operator~\eqref{eq:op5} generates inelastic scatterings of the type
$X + \text{nucleon} \rightarrow \bar{X} + \text{nucleon} + 2\,\nu$
through a Higgs exchange. Since the neutrinos can carry away momentum, the kinematics
of this process differ from the usual 2-to-2 scattering, and the sensitivity
of the experiments should decrease accordingly.
One can estimate the spin-independent cross section
$\sigma_{X-n}$ by comparing the scalar $X$ scenario to the Scalar Singlet DM model (SSDM)~\cite{McDonald:1993ex,Burgess:2000yq} (which also proceeds via a Higgs exchange), as follows
\begin{equation}
\frac{\sigma_{X-n}}{\sigma_{{\rm SSDM}}} \sim \frac{1}{\lambda_\text{HP}^2}\left(\frac{m_{X}}{\Lambda}\right)^{6}\frac{2^{3}\pi}{2^{11}\pi^{5}}\,,
\end{equation}
where $\lambda_\text{HP}\sim \mathcal{O}\left(10^{-2}\right)$ is the Higgs-portal coupling in the SSDM. Considering the most optimistic points for $m_X = 400$~GeV and
$m_X/\Lambda =$ 0.07, one obtains $\frac{\sigma_{X-n}}{\sigma_{{\rm SSDM}}} \sim 10^{-8}$
(for the fermionic $X$, the ratio will be similarly suppressed).
This is much smaller than any experimental sensitivity and puts us well within the regime
where the coherent neutrino scattering background becomes relevant~\cite{Billard:2013qya}.
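The quoted suppression is easy to verify by evaluating the ratio above with the benchmark numbers of the text ($\lambda_\text{HP}=10^{-2}$, $m_X/\Lambda = 0.07$):

```python
import math

# Benchmark values quoted in the text
lam_HP = 1e-2           # Higgs-portal coupling of the SSDM
mX_over_Lambda = 0.07   # most optimistic EFT ratio, for m_X = 400 GeV

# sigma_{X-n} / sigma_{SSDM} from the estimate above
ratio = mX_over_Lambda**6 * (2**3 * math.pi) / (2**11 * math.pi**5) / lam_HP**2

print(f"sigma ratio ~ {ratio:.1e}")  # ~5e-8, i.e. of order 10^-8 as quoted
```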
For the operator~\eqref{eq:op6}, although we have restricted the analysis to $2\,m_X > m_n$
such that the nucleon decay is kinematically forbidden,
one might have to consider the possibility of an induced nucleon
decay (IND) as originally proposed in Ref.~\cite{Davoudiasl:2010am}.
However, due to baryon number conservation, there is no IND in the shared asymmetry scenario
as we explain in the following.
First let us define the SM baryon asymmetry to be positive.
Then for the system which achieves chemical equilibrium, as shown on the left panel
of Fig.~\ref{fig:sharing}, the net baryon asymmetry has to be positive,
which also implies that the DM asymmetry has to be positive.
From the operator~\eqref{eq:op6}, $B_X=1/2$ and hence we are left with
only $X$ today. Also the same operator causes the IND:
$\bar X+p\to X+e^+$, $\bar X+n\to X+\bar\nu_e$, $\bar X+p\to X+\pi^0$ and $\bar X+n\to X+\pi^0$; and since there is no
$\bar X$ present today, these processes cannot happen.\footnote{One could
consider processes like $X$ + nucleon $\to \bar{X}$ + 2 nucleons
+ lepton, but these are kinematically forbidden.}
On the other hand, in the case where the DM and the SM sectors never reach
chemical equilibrium, the initial distribution of the asymmetries becomes
relevant. Since we assume that the initial total asymmetry is stored
in either one of the sectors (e.g. on the right panel of Fig.~\ref{fig:sharing}),
its sign has to be the same as the SM baryon asymmetry, which is positive.
Therefore, we are left with only $X$ today and as before, IND cannot occur.
In the case where non-equilibrium dynamics generates asymmetries equal in magnitude
and opposite in sign in the visible and DM sectors,
the DM with opposite sign baryon number today can result in IND~\cite{Davoudiasl:2010am,Davoudiasl:2011fj,Blinov:2012hq,Huang:2013xfa,Demidov:2015bea}.
In these scenarios, unlike in the shared asymmetry case, the transfer operators need to
always be out of equilibrium, otherwise the asymmetry would be washed out.
\section{Conclusions}
\label{sec5}
In this work, we have considered the case where DM is a singlet under the SM gauge interactions
but carries nonzero baryon and/or lepton numbers. In this case, the DM can be asymmetric
just like the SM baryons, and the asymmetries could be {\it shared}.
We then assumed the DM to be {\it maximally} asymmetric, and either a complex scalar or a Dirac fermion.
The DM mass spans the range between few GeV and $\sim 100$~TeV.
The connection between the dark and the visible sectors was described
by effective operators in the context of an EFT, and was separated into two different regimes depending on whether the transfer of the asymmetries was effective before or after the EW sphaleron processes freeze out.
The main difference between these two regimes is the following:
before the EW sphaleron processes freeze out the relevant symmetry is $B-L$, while after the EW
sphaleron processes freeze out $B$ and $L$ become separately the appropriate symmetries.
The leading operators consisting of only the SM fields
come in a limited number: one dim-5 operator with $B-L$ charge and four dim-6 operators
with $B$ charge. This feature makes the present scenario predictive in the following sense.
For a given DM mass, the total conserved asymmetry is fixed by the measurements of the ratio of energy densities $\Omega_X/\Omega_B$ and the SM baryon asymmetry $Y^0_{B_\text{SM}}$: the main role of the effective operators is to distribute the asymmetry between the visible and the dark sectors.
Furthermore, the requirement of obtaining the correct sharing (the observed DM relic abundance
and the SM baryon asymmetry) fixes the Wilson coefficients of the operators.
Once the coefficients are fixed, one can determine the phenomenology of this scenario.
Regarding possible signatures at different facilities, we found that
while DM indirect and direct detection are very challenging to current experimental searches,
the LHC is already probing relevant parts of the parameter space.
This fact is due to the steep energy dependence of our operators.
The LHC phenomenology for this model is very rich and goes beyond the scope of the present work.
We will present a detailed analysis in an upcoming work~\cite{projectLHC}.
\section*{Acknowledgments}
The authors want to thank Julien Billard, María Eugenia Cabrera Catalán, Oscar Éboli, André Lessa, Boris Panes
and Alberto Tonero for helpful discussions.
NB, CSF and NF are supported by the São Paulo Research Foundation (FAPESP) under grants 2011/11973-4 \& 2013/01792-8, 2012/10995-7 \& 2013/13689-7 and 2015/15361-4, respectively.
NB is partially supported by the Spanish MINECO under Grant FPA2014-54459-P and by the `Joint Excellence in Science and Humanities' (JESH) program of the Austrian Academy of Sciences.
\section{Numerical Details}
In order to solve the equation,
\begin{equation}\label{eq:continuity}
\nabla\cdot\left(n({\bf r},t)\nabla \alpha({\bf r},t)\right)= - \frac{\partial}{\partial {t}}n({\bf r},t)
\end{equation}
we construct the explicit matrix representation of the operator $ \nabla\cdot\left(n({\bf r},t)\nabla \alpha({\bf r},t)\right)$ subject to the boundary conditions,
\begin{equation}\label{boundary conditions}
\alpha({\bf r}\to\infty,t)=0 \;\;\;\;\text{and}\;\;\;\;
\frac{\partial}{\partial{\theta}}\alpha({\bf r},t)\vert_{\theta=\pi,0}=0\,.
\end{equation}
In the rectangular computational domain that we use to solve the problem, the grid ($r,\theta$) extends from $0\to R=30$~a.u. in $r$ (the density is negligible this far from the nucleus) and from $0\to \pi$ in $\theta$. Consequently, the boundary conditions of Eq. (\ref{boundary conditions}) translate to
\begin{eqnarray}
\nonumber
\alpha(r=R,\theta,\varphi,t)&=&0\\
\frac{\partial}{\partial{\theta}}\alpha(r,\theta,\varphi, t)\vert_{\theta=\pi}&=&0\;\;\;\;\;\;
\frac{\partial}{\partial{\theta}}\alpha(r,\theta,\varphi, t)\vert_{\theta=0}=0
\end{eqnarray}
The finite difference approximation of the derivative operator has the nice property that the resulting matrix is sparse. Consequently the operator, which by definition is local in space, remains so in this representation as well, since only a few adjacent grid points are coupled. The high sparsity of the matrix also allows for efficient matrix inversion. Despite the computational efficiency it offers, caution is required to avoid numerical inaccuracies, especially where the density becomes small.
We ensure that our conclusions are numerically robust by interpreting the results only in regions where the inversion is accurate, and by checking that the action of the matrix representing
$\nabla\cdot n({\bf r},t)\nabla$ on the solution vector $\alpha({\bf r}, t)$ agrees with the right-hand side of Eq.~(\ref{eq:continuity}).
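As an illustration of this construction, the following is a minimal one-dimensional sketch (not the actual 2D $(r,\theta)$ solver of this work) that assembles the sparse flux-form operator for $\frac{d}{dx}\big(n(x)\frac{d\alpha}{dx}\big)$ with SciPy and checks the inversion against a manufactured solution:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Minimal 1D analogue of inverting d/dx( n(x) d alpha/dx ) = rhs with
# Dirichlet boundaries, using a sparse second-order finite-difference matrix.
N = 201
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
n = np.ones(N)  # constant density for this test; any n(x) > 0 works the same way

# Flux form: (n_{i+1/2}(a_{i+1}-a_i) - n_{i-1/2}(a_i-a_{i-1})) / dx^2
n_half = 0.5 * (n[:-1] + n[1:])
main = np.zeros(N); lower = np.zeros(N - 1); upper = np.zeros(N - 1)
main[1:-1] = -(n_half[:-1] + n_half[1:]) / dx**2
lower[:-1] = n_half[:-1] / dx**2
upper[1:] = n_half[1:] / dx**2
main[0] = main[-1] = 1.0  # Dirichlet rows: alpha = 0 at both ends
L = diags([lower, main, upper], [-1, 0, 1], format="csc")

# Manufactured solution: alpha = sin(pi x)  =>  rhs = -pi^2 sin(pi x) for n = 1
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0
alpha = spsolve(L, rhs)
print(np.max(np.abs(alpha - np.sin(np.pi * x))))  # ~2e-5: second-order accurate
```

Only the boundary treatment and the two-dimensional stencil differ in the actual calculation; the sparsity pattern and the consistency check are the same.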
In a similar way, we calculate the exact Hartree potential $v_{\sss H}({\bf r}, t)$, by numerically inverting
\begin{equation}
\nabla^2 v_{\sss H}({\bf r},t)=-4\pi n({\bf r},t)
\end{equation}
and then use $
v_{\sss XC}({\bf r}, t) = v_{\sss S}({\bf r},t) - v_{\sss H}({\bf r},t) - v_{\rm ext}({\bf r},t)
\label{eq:vxc}$ to obtain the xc potential, and we isolate the correlation potential noting that, for our choice of KS state, $v_{\sss X}({\bf r},t) = -v_{\sss H}({\bf r},t)/2$.
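The Poisson inversion can be validated in the same spirit on a spherically symmetric toy density, for which $v_{\sss H}$ is known in closed form; the hydrogenic $1s$ density below is purely illustrative:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# For n(r) = exp(-2r)/pi (atomic units), the exact Hartree potential is
# v_H(r) = 1/r - (1 + 1/r) exp(-2r).
r = np.linspace(1e-4, 30.0, 20000)
n = np.exp(-2.0 * r) / np.pi

# Shell theorem: v_H(r) = (1/r) int_0^r 4 pi r'^2 n dr' + int_r^inf 4 pi r' n dr'
q_in = cumulative_trapezoid(4.0 * np.pi * r**2 * n, r, initial=0.0)
I_out = cumulative_trapezoid(4.0 * np.pi * r * n, r, initial=0.0)
vH = q_in / r + (I_out[-1] - I_out)

vH_exact = 1.0 / r - (1.0 + 1.0 / r) * np.exp(-2.0 * r)
print(np.max(np.abs(vH - vH_exact)))  # small discretization error
```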
\section{ Movies}
As part of supplementary material, we provide the following two movies:
\begin{enumerate}
\item \textbf{DenVc.mp4} depicts the dynamics of the correlation potential $v_{\sss C}({\bf r},t)$ along ($\phi=0,\theta=\frac{\pi}{4}$) and ($\phi=0,\theta=\frac{3\pi}{4}$) in the lower left and right panels respectively; the corresponding density along those angles is displayed in the top panels.
\item \textbf{CurrentVec.mp4} shows the current density vector in the x-z half-plane. Note that the other half of the plane is symmetric to the one displayed in the movie.
\end{enumerate}
\end{document}
\section{Introduction}
Dispersion forces arise between any two polarizable media, and are therefore
ubiquitous in nature. Even if their strength is often relatively weak at short
range\cite{derjaguin87}, they appear to have a significant contribution to the
explanation of a wide range of phenomena
\cite{israelachvili91,parsegian05,ninham10}, such as ice premelting
\cite{fiedler20,esteso20}, the stabilization of thin lipid
films\cite{parsegian69}, or the adhesion of Geckos to vertical surfaces\cite{autumn00}.
\vspace{0.1cm}
\\
The characterization of interactions between macroscopic bodies is often
described as the result of the summation over all individual interactions
between pairs of particles\cite{hamaker37,dietrich88,schick90,israelachvili91}. For the interaction between two plates across vacuum, this view results in an energy function, $g(h)$, that decreases as the squared inverse distance of separation between the plates, $h$:
\begin{equation}
g(h) = -\frac{A_{ham}(0)}{12\pi h^{2}}
\label{eq:energy}
\end{equation}
The Hamaker constant, $A_{ham}(0)$, is a fundamental property of the interacting bodies that lumps together the magnitude of the forces established between pairs of molecules. From this perspective, it is a mean field parameter that is dictated by the properties of pairs of interacting particles within two macroscopic objects.
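To fix orders of magnitude, Eq. \ref{eq:energy} can be evaluated for a typical (illustrative) Hamaker constant of $10^{-19}$ J:

```python
import math

A_ham = 1.0e-19  # J; illustrative order of magnitude for condensed media
for h_nm in (1.0, 10.0, 100.0):
    h = h_nm * 1e-9
    g = -A_ham / (12.0 * math.pi * h**2)
    print(f"h = {h_nm:5.1f} nm   g = {g:.2e} J/m^2")  # ~ -2.7e-3 J/m^2 at 1 nm
```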
\vspace{0.1cm}
\\
Alternatively, the more modern Lifshitz theory of the van der Waals forces\cite{dzyaloshinskii61,lifshitz56} computes the Hamaker constant considering instead the energies assigned to the electromagnetic modes of vibration allowed inside the system. The dispersion forces emerge in this approach from simultaneous fluctuations of the particles as a response to these electromagnetic waves. In this framework, $A_{ham}(0)$ is dictated instead by the collective dielectric response of the materials.
However, when the distance of separation takes large values, the picture is much more complex. As $h$ increases, the electromagnetic waves require a significant amount of time to promote fluctuations between material patches. This happens because the speed of light at which the electromagnetic waves travel is finite, so that when the vibration frequency of the polarizable particles is large, the time for the wave to travel from atom to atom may be comparable to the period of the fluctuations. This phenomenon is called retardation\cite{parsegian05}, and effectively weakens the dispersive interactions at large distances.
\vspace{0.1cm}
\\
The retardation effect is also accounted for in the Lifshitz theory of van der Waals forces \cite{lifshitz56,dzyaloshinskii61}. When it is taken into account, the Hamaker coefficient of Eq. \ref{eq:energy} becomes a function of the distance of separation, $A_{ham} = A_{ham}(h)$. For distances close to zero, $A_{ham}(h\rightarrow 0)$ reaches the constant value expected in the absence of retardation, the Hamaker constant $A_{ham}(0)$. This short-distance regime is dubbed the London regime, i.e., we speak of London dispersion forces.
\vspace{0.1cm}
\\
On the other hand, as the distance of separation between the plates increases, $A_{ham}(h)$ decays gradually and develops a distinct $h$ dependence, which cannot be described in terms of pairwise summation of dispersion interactions. Particularly, for two perfect metal plates in vacuum at large separation and zero temperature, Casimir showed that the energy of interaction adopts the celebrated form \cite{casimir48}:
\begin{equation}\label{eq:casimir}
g(h) = -\frac{\pi^2 c\hbar}{720 h^3}
\end{equation}
This result explicitly points to the non-trivial dependence of the 'Hamaker
constant', which more accurately corresponds to a Hamaker function $A(h)$
decaying as $1/h$ in this limit
\cite{israelachvili72,sabisky73,white76,chan77,gregory81,palasantzas08,vanzwol10}.
As a matter of fact, the interest of this result goes well beyond the mere study of surface interactions, as it conveys invaluable information on the nontrivial structure of the quantum vacuum and the role of zero-point energy in physics \cite{milton98,lamoreaux07}. For this reason, there has been considerable interest in verifying this prediction experimentally \cite{lamoreaux97,mohideen98,bressi02,munday09,man09,garret18}. In practice, however, it must be recognized that Eq.\ref{eq:casimir} is an asymptotic result which can only be realized at zero temperature, for perfect metals. Verification of the underlying physics of Eq.\ref{eq:casimir} needs to take into account the simultaneous effect of finite temperature and finite conductivity of metals \cite{obrecht07,fisher20}. Nevertheless, attempts to single out asymptotic corrections to the more general result of Lifshitz and collaborators \cite{lifshitz56,dzyaloshinskii61} is difficult, and remains
still a matter of discussion
\cite{ninham70,parsegian70,chan77,schwinger78,milton98,ninham98,bostrom00,lambrecht00,bezerra04,geyer05,ninham14,obrecht07,fisher20}.
Recently, a promising approach for the study of the crossover regime from retarded to non-retarded interactions was proposed \cite{macdowell19,luengo21}. The approach recovers well known exact analytic results over the full range of plate separations, albeit within the so-called dipole approximation of the Lifshitz equation. The key idea in that study is a quadrature rule which allows one to describe the crossover regime in a non-perturbative manner. This is a significant issue, because in the Lifshitz result, plate separation, temperature and dielectric properties are entangled in a highly non-trivial manner, so it is not clear whether each effect can be singled out separately.
In this study, we aim to extend that work beyond the linear dipole approximation, which is required in order to obtain the exact Casimir limit for perfect metals at zero temperature.
In the next section we provide the essential background to the Lifshitz theory of intermolecular forces.
In section III we extend the recently introduced Weighted Quadrature
Approximation (WQA) beyond the dipole approximation. The working formulae
relies on knowledge of the exact Hamaker constant at zero plate separation.
Therefore, we devote section IV to derive a new, simple and accurate
approximation for the Hamaker constant $A_{ham}(0)$. In the next section, we
work out analytical results for retarded interactions between materials obeying
the Drude model of the dielectric responses.
In Section VI
we compare the resulting interaction coefficients with exact numerical
solutions of Lifshitz theory calculated from a detailed description of
dielectric properties published recently \cite{tolias18,gudarzi21},
showing how the new
methodology for the computation of the Hamaker constant proposed in this
article yields values of $A_{ham}(0)$ in excellent agreement with the ones
predicted by the exact Lifshitz formula. Our results are summarized in the
Conclusions.
\section{Lifshitz theory}
In the frame of Lifshitz theory, the Hamaker function for retarded interactions
between two metal plates across vacuum takes the form\cite{supplementary21}
\@fleqntrue\@mathmargin0pt
\begin{equation}
A_{ham}(h) =
\notag
\end{equation}
\@fleqnfalse
\begin{equation}
\frac{3 k_{B}T}{2}{\sum_{n=0}^{\infty}}'\int_{r_{n}}^{\infty}\sum_{k=1}^{\infty}x \ dx \frac{(\Delta^{E}_{mv})^{2k}+(\Delta^{M}_{mv})^{2k}}{k}e^{-kx}
\label{eq:lifshitz}
\end{equation}
where $r_{n} = 2h\epsilon_{v}^{1/2}\omega_{n}/c$, $k_{B}$ is the Boltzmann constant and $T$ is the temperature in Kelvin. All along this study we employ room temperature, $T=300\ K$. Besides, we have that
\begin{equation}
\Delta^{E}_{mv} = \frac{x_{v}-x_{m}}{x_{v}+x_{m}},\hspace{1 cm} \Delta^{M}_{mv} = \frac{\epsilon_{m}x_{v}-\epsilon_{v}x_{m}}{\epsilon_{m}x_{v}+\epsilon_{v}x_{m}}
\label{eq:delta}
\end{equation}
with $x_{i}^{2} = x^{2}+(\epsilon_{i}-\epsilon_{v})(2h\omega_{n}/c)^{2}$, where
$c$ is the speed of light. The subscripts $m$ and $v$ denote metal and vacuum,
respectively. In these equations, the magnetic susceptibilities of the media
have been assumed to be equal to unity. The $\epsilon_{i}$ is the dielectric response of each medium $i$, which reflects the tendency of that substance to polarize reacting to an electromagnetic wave. It is thus a function of the frequency of the incoming wave, which is also the frequency at which the particles of that material will oscillate in response\cite{parsegian05}. Here the $\epsilon_{i}(\omega_{n})$ are evaluated at the discrete set of Matsubara frequencies $\omega_{n} = 2\pi n k_{B}T/\hbar$, being $\hbar$ the Planck's constant in units of angular frequency.
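For orientation, the Matsubara grid at room temperature can be evaluated directly; the $20$ eV ultraviolet scale used below is only an illustrative cutoff:

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K
T = 300.0

omega_T = 2.0 * math.pi * kB * T / hbar   # Matsubara spacing, rad/s
print(f"omega_1 = {omega_T:.3e} rad/s")   # ~2.47e14 rad/s, in the mid infrared

eV = 1.602176634e-19
spacing_eV = hbar * omega_T / eV          # ~0.16 eV between Matsubara energies
print(f"spacing = {spacing_eV:.3f} eV")
print(int(20.0 / spacing_eV))             # roughly 120 terms below ~20 eV
```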
\vspace{0.1cm}
\\
Notice that the result of Eq. \ref{eq:lifshitz} greatly generalizes the well known
result of Casimir, and allows for a panoply of interactions, including
non-monotonous dependence of $A_{ham}(h)$, as well as monotonic repulsive interactions. For instance, retardation-driven repulsion has been found for the force between gold and silica surfaces immersed in bromobenzene\cite{munday09,bostrom12}. This repulsive interaction in the Casimir regime has been found to allow supersliding between two surfaces, arising from an extremely low friction coefficient\cite{feiler08}.
\vspace{0.1cm}
The prime in Eq. \ref{eq:lifshitz} means that the first term of the summation in
$n$ has half weight. This term corresponds to the contribution of the zero
frequency, the only one that remains as $h\rightarrow \infty$, and it thus
provides the interaction coefficient for very large distances. Singling out
this term, the zero frequency contribution is given as\cite{supplementary21}:
\begin{equation}
A_{ham}^{\omega_{n}=0} = \frac{3k_{B}T}{4}\sum_{k=1}^{\infty}\left(\frac{\epsilon_{m}(0)-\epsilon_{v}(0)}{\epsilon_{m}(0)+\epsilon_{v}(0)}\right)^{2k}\frac{1}{k^{3}}
\label{eq:zero}
\end{equation}
where $\epsilon_{i}(0)$ is the static dielectric response, which for metals goes to infinity. Consequently, Eq. \ref{eq:zero} reveals that the interaction of two plates of metal across vacuum has $A_{ham}^{\omega_{n}=0} = A_{ham}(h\rightarrow \infty) = \frac{3k_{B}T}{4} \zeta(3)$, which, at $T=300$~K, amounts to $3.73\times 10^{-21} \ J$.
\\
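This value is straightforward to reproduce from the metallic limit of Eq. \ref{eq:zero}:

```python
kB = 1.380649e-23  # J / K
T = 300.0
zeta3 = sum(1.0 / k**3 for k in range(1, 100001))  # Apery's constant ~1.2020569

A_zero = 0.75 * kB * T * zeta3  # (3 kB T / 4) zeta(3), metal limit of Eq. (zero)
print(f"A(h -> inf) = {A_zero:.3e} J")  # 3.73e-21 J, as quoted in the text
```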
After these considerations, we can focus on the remaining contributions of Eq.
\ref{eq:lifshitz}, namely $A_{ham}^{\omega_{n}>0}(h)$. A usual treatment of the
Hamaker function is to consider only the first term of the summation in $k$
\cite{tabor69,hough80,prieve88,bergstrom97}. This is called the linear or dipole approximation, and works well in those cases where the $ (\Delta^{E}_{mv})^{2k}+(\Delta^{M}_{mv})^{2k}$ function vanishes rapidly. In the situation that we handle, the large values of $\epsilon_{m}(\omega_{n})$, especially for low frequencies due to the plasma resonance, make the use of the linear approximation unreliable.
\vspace{0.1cm}
\\
Additionally, the complex interplay between London and Casimir regime that we have described is encapsulated inside the Eq. \ref{eq:lifshitz} in a non trivial way. In previous works\cite{macdowell19,luengo21} we have taken advantage of several mathematical tools to provide insightful expressions within the linear approximation attempting to clarify the physical interpretation of the Lifshitz formula.
\vspace{0.1cm}
\\
\section{WQA beyond the dipole approximation}
The Weighted Quadrature Approximation (WQA) introduced recently within the
linear dipole approximation\cite{macdowell19}, employs the Gaussian Quadrature
as an analytical tool to simplify the Lifshitz formula. Here we use the same idea to generalize the WQA to the infinite
order sum of Eq. \ref{eq:lifshitz}. After a first Gaussian Quadrature, using
$xe^{-kx}$ as the weight function, and the approximation of the summation in $n$
to an integral via the Euler-MacLaurin formula, we reach\cite{supplementary21}
\begin{equation}
A_{ham}^{\omega_{n}>0}(h) = \frac{3c\hbar}{8\pi}\int_{\nu_{T}}^{\infty}d\nu \ \sum_{k = 1}^{\infty} \widetilde{R}_{k}(\nu,x_{1,k},h)[k\nu h+1]e^{-k\nu h}
\label{eq:start}
\end{equation}
where $\nu_{T} = 2\epsilon_{v}^{1/2}\omega_{T}/c$, $\omega_{T} = 2\pi k_{B}T/\hbar$ and $\widetilde{R}_{k}(\nu,x_{1,k},h) = \epsilon_{v}^{-1/2}j_{v}^{-1}R_{k}(\nu,x_{1,k},h)$, with $j_{v} = \left(1+\frac{1}{2}\frac{d \ln \epsilon_{v}}{d \ln \omega_{n}}\right)$, $x_{1,k} = (kr_{n}^{2} + 2r_{n} + 2/k)/(kr_{n} + 1)$, and
\begin{equation}
R_{k}(\nu,x,h) = \frac{(\Delta^{E}_{mv})^{2k}+(\Delta^{M}_{mv})^{2k}}{k^{3}}
\label{eq:erre}
\end{equation}
In practice, for metallic plates interacting across vacuum, $\epsilon_{v}(\omega)=1$, so that $j_{v}=1$, and $\widetilde{R}_{k}(\nu,x,h)= R_{k}(\nu,x,h)$.
Recall that the dependence on $\nu=\nu_{T}n$, $x$, and $h$ enters by the hand of $\Delta^{E,M}_{mv}$, as dictated by Eq. \ref{eq:delta}. The function $R_{k}(\nu,x,h)$ exhibits a very complicated dependence on the distance of separation, arising from the fact that the frequencies captured by the integral in Eq. \ref{eq:start} are being reduced as the distance increases. At this stage, this result is still too arid to infer intuitively the expected functional behavior.
\vspace{0.1cm}
\\
We proceed by introducing $\nu_{\infty}$, an effective parameter specific to
each system, meant to describe the decay length of $R_{k}(\nu,x,h)$ as
$\nu\to\infty$. Then a second Gaussian Quadrature is
performed\cite{supplementary21}, and we achieve the WQA extended to include the complete summation over $k$
\begin{equation}
A_{ham}^{\omega_{n} >0}(h) = \frac{3c\hbar\nu_{\infty}}{8\pi}\sum_{k=1}^{\infty}\widetilde{R}_{k}(\nu^{*}_{k},x_{1,k},h)e^{\xi_{k}}e^{-\nu_{T}kh}\widetilde{F}_{k}
\label{eq:wqa}
\end{equation}
\begin{equation}
\widetilde{F}_{k} = \frac{(\nu_{T}kh+1)(\nu_{\infty}kh+1)+\nu_{\infty}kh}{(\nu_{\infty}kh+1)^{2}}
\notag
\end{equation}
\begin{equation}
\xi_{k}= \frac{(\nu_{T}kh+1)(\nu_{\infty}kh+1)+2\nu_{\infty}kh}{(\nu_{\infty}kh+1)^{2}(\nu_{T}kh+1)+(\nu_{\infty}kh+1)\nu_{\infty}kh}
\notag
\end{equation}
With $\xi_{k} = (\nu^{*}_{k} - \nu_{T})/\nu_{\infty}$.
Truncating the sum in Eq.\ref{eq:wqa} beyond $k=1$, this result becomes the original WQA proposed recently\cite{macdowell19}. To this order of approximation, WQA is very accurate for dielectric materials with low dielectric response\cite{macdowell19,luengo21}. However, the extension provided here is required to describe interactions for materials with large dielectric response, because the terms $\Delta^{M/E}_{mv}$ are close to unity and the convergence of the series is not fast enough to warrant truncation at first order.
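A quick numerical experiment makes the point concrete: for $\Delta$ close to unity, the $k=1$ (dipole) term captures only part of the full sum $\sum_k \Delta^{2k}/k^{3}$:

```python
def series(delta2, kmax=100000):
    # sum_k delta2^k / k^3, i.e. Li_3(delta2)
    return sum(delta2**k / k**3 for k in range(1, kmax + 1))

for delta in (0.3, 0.9, 0.99):
    full = series(delta**2)
    first = delta**2            # dipole (k = 1) truncation
    print(delta, first / full)  # fraction of the sum captured by k = 1
```

For $\Delta=0.3$ the dipole term captures about 99\% of the sum, but for $\Delta=0.99$ (representative of a good metal) it captures only about 84\%.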
Equation \ref{eq:wqa} has not only the advantage of being entirely analytic, but also allows straightforward interpretation of the transition from London to Casimir regime of $A_{ham}(h)$ through the comparison of the magnitudes of $h$, $\nu_{\infty}$ and $\nu_{T}$.
\vspace{0.1cm}
\\
For short distances, $h\ll \nu_{\infty}^{-1}$,
\rev{we find the auxiliary functions $\widetilde{F}_k=1$ and $\xi_k=1$, so that
\begin{equation}
A_{ham}^{\omega_{n} >0}(h\to 0) = \frac{3c\hbar\nu_{\infty}}{8\pi}\sum_{k=1}^{\infty}
\left( \frac{\epsilon_m-1}{\epsilon_m+1} \right)^{2k} \frac{e}{k^3}
\label{eq:lim0}
\end{equation}
with the dielectric function evaluated at a constant wave number $\nu^{*}=\nu_T + \nu_{\infty}$. Accordingly, the value of $A_{ham}(h\rightarrow 0)$ in Eq. \ref{eq:wqa} becomes independent of $h$, and the usual Hamaker constant is recovered.
}
\rev{
At large values of the distance of separation $h\gg \nu_{T}^{-1}$,
we find $\xi_k\to 0$, while $\widetilde{F}_k=\nu_T/\nu_{\infty}$. This yields:
\begin{equation}
A_{ham}^{\omega_{n} >0}(h\to\infty) = \frac{3c\hbar\nu_{T}}{4\pi}\sum_{k=1}^{\infty} \left( \frac{\epsilon_m^{1/2}-1}{\epsilon_m^{1/2}+1}\right)^{2k} \frac{e^{-\nu_{T}kh}}{k^3}
\label{eq:liminf}
\end{equation}
where the dielectric functions are now evaluated at the thermal wave-number $\nu_T$. In this limit, the retarded interactions are supressed exponentially. Accordingly, only the static term in the Hamaker constant survives. This leads to the other asymptotic behavior $A_{ham}(h\rightarrow \infty)$, also independent of $h$.
}
\rev{
In between these two limits, $\nu_{\infty}^{-1} \ll h \ll \nu_{T}^{-1}$, the
$h$ dependence of $A_{ham}(h)$ is governed by the factor
$\widetilde{F}_{k}(h)$, which in this range effectively decays as
$2/(\nu_{\infty}kh)$. This leads to:
\begin{equation}
A_{ham}^{\omega_{n} >0}(h) = \frac{3c\hbar}{4\pi h}\sum_{k=1}^{\infty} \frac{\left( \Delta_{mv}^E \right)^{2k} + \left(\Delta_{mv}^M \right)^{2k}}{k^4}
\label{eq:limh}
\end{equation}
where the dielectric functions are now to be evaluated at frequencies lying between $\nu_T$ and $\nu_{\infty}$.
For a perfect metal, implying $\Delta_{mv}^{E/M}=1$, Eq.\ref{eq:limh} readily yields
$A_{ham}^{\omega_{n} >0}(h) = \frac{3c\hbar}{2\pi h}\zeta(4)$, which corresponds to the exact result of Casimir for two interacting metals at zero temperature.
}
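The consistency of this limit with Eq. \ref{eq:casimir} can be checked numerically: inserting $A_{ham}(h) = \frac{3c\hbar}{2\pi h}\zeta(4)$ into Eq. \ref{eq:energy} must reproduce the Casimir coefficient $\pi^{2}/720$ of $c\hbar/h^{3}$:

```python
import math

zeta4 = sum(1.0 / k**4 for k in range(1, 100001))  # zeta(4) = pi^4 / 90

# g(h) = -A(h)/(12 pi h^2) with A(h) = 3 c hbar zeta(4)/(2 pi h),
# expressed as the dimensionless coefficient of c hbar / h^3:
coeff_wqa = 3.0 * zeta4 / (2.0 * math.pi) / (12.0 * math.pi)
coeff_casimir = math.pi**2 / 720.0

print(coeff_wqa, coeff_casimir)  # both ~0.013708
```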
\vspace{0.1cm}
\\
This inspection highlights one major strength of the WQA: it provides a parametric clarification of the qualitative change of the distance dependence of the dispersion forces. The distance implied in $\nu_{\infty}^{-1}$ signals the point at which the surface van der Waals free energy in Eq. \ref{eq:energy} switches its dependence on the distance of separation from the $\sim 1/h^{2}$ behavior proper of the London regime to the $\sim 1/h^{3}$ behavior characteristic of Casimir interactions, which is exact at zero temperature. At finite temperature, however, $\nu_T$ is finite, and the Casimir regime becomes fully suppressed for $h>\nu_T^{-1}$.
\section{A simple Quadrature method for the Hamaker constant}
For practical matters, use of Eq. \ref{eq:wqa} requires knowledge of the system's parameter $\nu_{\infty}$, \rev{which must be chosen so as to obtain optimal agreement with the exact results. A look at the limiting regimes of $A_{ham}^{\omega_{n}>0}(h)$ displayed in Eq.\ref{eq:lim0}-\ref{eq:limh} shows that the choice of $\nu_{\infty}$ is only significant in the limit of small $h$. Accordingly, we seek $\nu_{\infty}$ by matching the $h\rightarrow 0$ limit of Eq. \ref{eq:wqa} to the exact $A_{ham}^{\omega_{n}>0}(0)$ from Eq. \ref{eq:lifshitz} (c.f. Ref.\cite{luengo21}).} Unfortunately, the numerical calculation of $A_{ham}^{\omega_{n}>0}(0)$ is also a cumbersome task. Therefore, in this section we present a very accurate one-point quadrature for $A_{ham}(0)$ that adopts a particularly simple analytic form for two metallic plates obeying the Drude model.
Our starting point is Eq.\ref{eq:start} in the limit of $h\rightarrow 0$:
\begin{equation}
A_{ham}^{\omega_{n}>0}(h\to 0) = \frac{3c\hbar}{8\pi}\int_{\nu_{T}}^{\infty}d\nu \ Li_3(R_1)
\label{eq:start0}
\end{equation}
where $Li_3(x)=\sum_{k=1}^{\infty} x^k/k^3$ is the polylogarithm function, and
$R_1(\nu)=R_1(\nu,x,h=0)$ takes the particularly simple form:
\begin{equation}
R_{1}(\nu) = \left(\frac{\epsilon_{m}(i\nu)-\epsilon_{v}(i\nu)}{\epsilon_{m}(i\nu)+\epsilon_{v}(i\nu)}\right)^{2}
\label{eq:erre1}
\end{equation}
Analytical approximations for the Hamaker constant are usually obtained by replacing $Li_3(R_1)$ by its zeroth order approximation $Li_3(R_1)\approx R_1$ and truncating beyond first order. Here, we notice that $R_1(\nu)$ has the properties of a well behaved distribution, so that applying the first mean value theorem, Eq.\ref{eq:start0} may be expressed exactly as:
\begin{equation}
A_{ham}^{\omega_{n}>0}(h\to 0) =
\frac{3c\hbar}{8\pi}
\frac{Li_3(R_1^*)}{R_1^*}
\int_{\nu_{T}}^{\infty} d\nu {R}_{1}(\nu)
\label{eq:start1}
\end{equation}
where $R_1^* = R_1(\nu^*)$ is evaluated at a mean value frequency, $\nu^*$, in
the interval between $\nu_T$ and $\infty$. An accurate estimate of $R^*_1$,
without needing an explicit evaluation of $\nu^*$, may be obtained right away by
requiring the second order expansion of the quadrature rule Eq.\ref{eq:start1}
to match exactly that of Eq.\ref{eq:start0}. This then leads to the simplified
prescription (see supplementary material, Ref.\cite{supplementary21}):
\begin{equation}
A_{ham}^{\omega_{n}>0}(h\to 0) =
\frac{Li_3(R^*_1)}{R^*_1} \,
A_{ham,0}^{\omega_{n}>0}
\label{eq:quadrature}
\end{equation}
where
\begin{equation}
R^{*}_1 = \frac{I_{2}}{I_{1}}, \hspace{1cm} I_{m} = \int_{\nu_{T}}^{\infty}\left[R_{1}(\nu)\right]^{m}d\nu
\notag
\end{equation}
while $A_{ham,0}^{\omega_{n}>0}=\frac{3c\hbar}{8\pi}I_1$ is just the usual zeroth order approximation of the Hamaker constant.
In practice, $R_1(\nu)<1$ at all frequencies, so that $R_1^*$ is always small
and $Li_3(R_1^*)/R^*_1=1+R_1^*/8+(R_1^*)^2/27+\dots$ is a correction factor of order unity.
In fact, by construction, Eq.\ref{eq:quadrature} is exact up to second order in
$R_1^*$, and provides very accurate results when compared with the numerical
solution of Eq. \ref{eq:lifshitz}. Henceforth, we will refer to the prescription
of Eq.\ref{eq:quadrature} as the Q rule. Its performance will be tested later
for the interaction between two plates of Al, Be, Cr, W and Au using
recently published data as a benchmark.\cite{tolias18,gudarzi21}
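The size of the correction factor of the Q rule is easily gauged against the quoted expansion:

```python
def li3(x, kmax=200):
    # partial sum of the polylogarithm Li_3(x) = sum_k x^k / k^3
    return sum(x**k / k**3 for k in range(1, kmax + 1))

for x in (0.1, 0.3, 0.5):
    exact = li3(x) / x                       # correction factor Li_3(x)/x
    approx = 1.0 + x / 8.0 + x**2 / 27.0     # expansion quoted in the text
    print(x, exact, approx)
```

Even at $x=0.5$ the two agree to better than half a percent, confirming that the factor stays close to unity.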
\begin{figure*}[htb!]
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{drude_al.pdf}
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{drude_w.pdf}
\\
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{drude_ag.pdf}
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{drude_pb.pdf}
\\
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{drude_au.pdf}
\caption{Retarded interaction coefficients computed with the Drude model
at 300 K. Hamaker coefficients for the interaction of two plates of Al
(top left), W (top right), Ag (bottom left), Pb (bottom right) and Au (bottom), across
vacuum. The $A_{ham}(h)$ are displayed in Joules. In each case, we show a comparison of the $A_{ham}(h)$ resulting from the Lifshitz equation (Eq. \ref{eq:lifshitz}) in black, the WQA with $\nu_{\infty}$ obtained via the Numerical Method to match the Quadrature $A_{ham,Q}(0)$ (NM + Q + WQA) in red, and the WQA supported by the analytic solution for $\nu_{\infty}$ in green.}
\label{fig:drude}
\end{figure*}
\section{Analytic solution for the Drude model}
In the following, we will gauge Eq.\ref{eq:wqa} at $h=0$ with Eq.\ref{eq:quadrature} in order to estimate the system parameter $\nu_{\infty}$. Once this is known, we can then use Eq.\ref{eq:wqa} to estimate the Hamaker function for arbitrary values of the plate separation.
We begin the exploitation of the previous formulae by assuming the single-oscillator Drude model as a sufficient description of the dielectric response of the metal. This model assumes freely moving electrons\cite{youn07}, i.e. electrons displacing with no restoring force\cite{parsegian05}, oscillating at the plasma frequency, $\omega_{P}$. These electrons can experience collisions with defects, lattice vibrations or other electrons\cite{tanner13}, resulting in a damping coefficient, $\gamma$. All together, the Drude model reads
\begin{equation}
\epsilon_{m}(i\omega) = 1 + \frac{\omega_{P}^{2}}{\gamma\omega+\omega^{2}}
\label{eq:drudemodel}
\end{equation}
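As a minimal numerical illustration of Eq.\ref{eq:drudemodel} (a sketch, not part of the published code; the Al parameters are those quoted in Table \ref{table:drude}):

```python
# Drude-model dielectric response on the imaginary frequency axis,
# eps(i*omega) = 1 + omega_P^2 / (gamma*omega + omega^2), all energies in eV.

def eps_drude(omega_ev, omega_p_ev, gamma_ev):
    """Dielectric function at imaginary frequency i*omega (arguments in eV)."""
    return 1.0 + omega_p_ev**2 / (gamma_ev * omega_ev + omega_ev**2)

omega_p_al, gamma_al = 14.78, 8.04e-2  # Al parameters from the Ordal et al. fit
print(eps_drude(0.1, omega_p_al, gamma_al))    # large: conductor-like IR response
print(eps_drude(100.0, omega_p_al, gamma_al))  # close to 1 well above omega_P
```

Note that $\epsilon_{m}(i\omega)$ is smooth and monotonically decreasing, as the argument of the previous paragraph requires.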
The Drude representation has a fundamental limitation. It is not clear that its use for real metals is physically justified, since most of them are expected to exhibit interband transitions\cite{ordal83}. This objection may be set aside by regarding the Drude model as a purely phenomenological description of $\epsilon_{m}(i\omega)$, which is always a smooth function irrespective of the complexity of the optical response of the metal.
\\
It is in this spirit that Ordal \textit{et al.}\cite{ordal83} published a set of Drude model parameters for various metals, obtained from a fit of data in the near and far IR regimes. These parameters are tabulated in Table \ref{table:drude} for Al, W, Ag and Pb, together with the parameters of Gudarzi and Aboutalebi for Au \cite{gudarzi21}.
\\
However, the Drude model clearly does not account for possible contributions from high frequencies. This can have a great impact on calculations within the Lifshitz theory, because most of the Matsubara frequencies fall in a high energy regime. Even though the fact that metals are good conductors implies that the IR frequencies contribute strongly to $A_{ham}(h)$, neglecting the remaining frequencies might yield a poor estimate. The consequences of this approximation will be discussed later.
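To make this concrete, the Matsubara energies $\hbar\xi_{n}=2\pi n k_{B}T$ at 300 K can be listed explicitly; only the first few lie in the IR, while $n\gtrsim 60$ already probes the ultraviolet. A short sketch:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def matsubara_energy_ev(n, temperature=300.0):
    """Energy hbar*xi_n = 2*pi*n*k_B*T of the n-th Matsubara frequency, in eV."""
    return 2.0 * math.pi * n * K_B_EV * temperature

print(matsubara_energy_ev(1))    # ~0.16 eV: infrared
print(matsubara_energy_ev(100))  # ~16 eV: vacuum ultraviolet
```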
\vspace{0.1cm}
\\
Nevertheless, the use of the Drude model presents a very important advantage: it
allows us to get an explicit expression of $\nu_{\infty}$ for the interaction
between two plates of a certain metal across vacuum, depending fundamentally on
the plasma frequency of that metal\cite{supplementary21}. Indeed, assuming $\omega_{P} \gg \gamma,\ \omega_{T}$, which is usually the case, Eq.\ref{eq:quadrature} provides the following simple result for the Hamaker constant between two metallic plates:
\begin{equation}
A_{ham}^{\omega_{n}>0}(h\to 0) =
\frac{3\, Li_3(5/8) }{5\sqrt{8}} \,\hbar \omega_P
\label{eq:hamakerdrude}
\end{equation}
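Eq.\ref{eq:hamakerdrude} is straightforward to evaluate numerically; the following sketch (an illustration, with $Li_{3}$ computed by direct series) gives $A_{ham}^{\omega_{n}>0}(h\to 0)\approx 3.4\times 10^{-19}$ J for the Al plasma frequency of Table \ref{table:drude}:

```python
import math

EV = 1.602176634e-19    # J per eV
HBAR = 1.054571817e-34  # J*s

def li3(x, terms=200):
    """Trilogarithm Li_3(x) by direct series (adequate for 0 <= x < 1)."""
    total, p = 0.0, 1.0
    for k in range(1, terms + 1):
        p *= x
        total += p / k**3
    return total

def hamaker_drude(omega_p_ev):
    """Eq. (hamakerdrude): A_ham^{omega_n>0}(h->0) in Joules,
    for a plasma energy hbar*omega_P given in eV."""
    omega_p = omega_p_ev * EV / HBAR  # rad/s
    return 3.0 * li3(5.0 / 8.0) / (5.0 * math.sqrt(8.0)) * HBAR * omega_p

print(hamaker_drude(14.78))  # Al: ~3.4e-19 J
```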
Matching Eq.~\ref{eq:hamakerdrude} to the $h\to 0$ limit of Eq.\ref{eq:wqa} provides a transcendental equation for $\nu_{\infty}$ which can be solved exactly to yield:
\begin{equation}
\nu_{\infty} = f\nu_{P} - \nu_{T}
\label{eq:drudeanalytic}
\end{equation}
where $\nu_{P} = \frac{2}{c}\omega_{P}$ and $f = 0.553656$. Using Eq.\ref{eq:drudeanalytic} together with Eq.\ref{eq:wqa} provides a fully analytical description of the Hamaker function at all ranges of plate separation.
\\
We test this result for a number of metals. The analytical solution is compared with two numerical estimates of different levels of refinement. Firstly, results are compared with the exact Hamaker function of Eq.\ref{eq:lifshitz}. Secondly, results are given for the WQA, Eq.\ref{eq:wqa}, with the parameter $\nu_{\infty}$ obtained by
forcing $A_{ham}^{\omega_{n}>0}(h\rightarrow 0)$ of the WQA (Eq.\ref{eq:wqa}) to match $A_{ham,Q}^{\omega_{n}>0}(h \rightarrow 0)$ of the Quadrature in Eq. \ref{eq:quadrature}. We call the latter prescription the NM + Q + WQA method. Table~\ref{table:drude} displays the values of $\nu_{\infty}$ that result from this prescription for a number of metals.
Fig. \ref{fig:drude} illustrates the results of this comparison. The analytic solution shows excellent agreement with the two numerical methods for all tested metals, which highlights the accuracy of our closed-form results, Eqs.\ref{eq:wqa}, \ref{eq:hamakerdrude} and \ref{eq:drudeanalytic}.
\vspace{0.1cm}
\\
\\
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$ $ & $Al$ & $W$ & $Ag$ & $Pb$ & $Au$\\
\hline
$\omega_{P}/eV$ & $14.78$ & $6.01$ & $9.01$ & $7.70$ & $9.1$ \\
\hline
$\gamma\cdot 10^2/eV$ & $8.04$ & $5.38$ & $1.80$ & $18$ & $6.0$ \\
\hline
$\nu_{\infty}\cdot 10^{-7}/m^{-1}$ & $3.80$ & $1.56$ & $2.32$ &
$2.00$ & $9.27$\\
\hline
\end{tabular}
\caption{Plasma frequency, $\omega_{P}$, and damping coefficient, $\gamma$, used in this work. Results for Al, W, Ag and Pb from Ref.\cite{ordal83}. Results for Au from Ref.\cite{gudarzi21}. The last line shows the $\nu_{\infty}$ parameter obtained with the Numerical Method, employed to get the red lines in Fig. \ref{fig:drude}.}
\label{table:drude}
\end{table}
To check whether the Drude model is adequate for a complete description of $\epsilon_{m}(\omega)$, we take advantage of
improved dielectric parametrizations reported recently, which account for
interband transitions in the ultraviolet frequencies and
beyond.\cite{tolias18,gudarzi21} We use this
data to compute the retarded Hamaker coefficients with the Lifshitz equation,
and compare it with the output of the analytic solution for a single Drude
oscillator.
Fig.\ref{fig:tolias} displays $A_{ham}(h)$ of Al and W emerging from Eq. \ref{eq:lifshitz} with the parameterization of Tolias\cite{tolias18}, compared to the output of the analytic expression with the Drude model using the parameters published by Ordal \textit{et al.}\cite{ordal83}. In the case of Al, the Drude model seems to provide a good characterization of $\epsilon_{m}(\omega)$, resulting in a very similar $A_{ham}(h)$ function at all separation distances.
\vspace{0.1cm}
\\
On the contrary, for W, the high-frequency contributions to $A_{ham}(h)$ that are missed by the single Drude fit lead to a severe underestimate of the Hamaker constant by the analytic solution. As the retardation effect sets in, these large frequencies are cut off, and the remaining ones are properly captured by the Drude model, so that the very-large-distance regime is well described by the single oscillator model of Ordal \textit{et al}. This highlights that the analytic solution will provide a poor approximation whenever the metal presents significant high-frequency contributions to its dielectric response, revealing the insufficiency of the Drude model at those frequencies.
This observation is relevant for the experimental measurement of the Casimir regime of van der Waals forces.\cite{lamoreaux97,mohideen98,bressi02,munday09,man09,garret18} Particularly, high precision experiments aimed at testing the low temperature limit, Eq.2, often rely on the asymptotic expansion of Eq.3 based on a single Drude oscillator. It is therefore very important to assess to what extent one can neglect contributions of interband transitions of small wave-length. We check this for the particularly significant case of gold, which is most often the choice for high precision measurements of the Casimir force.\cite{lamoreaux97,mohideen98,bressi02,munday09,man09,garret18} Fig.\ref{fig:gold} displays the exact Lifshitz result for gold, with dielectric properties as obtained in Ref.\cite{gudarzi21} The black line is the result obtained with the complete dielectric response, while the blue line provides the Hamaker function when only the conducting electrons are considered. At short distances, we see that more than half of the Hamaker constant results from contributions due to core electrons, as expected. Starting at about 100~nm, however, the contribution of core electrons has vanished almost completely. Hence, for gold it appears that asymptotic expansions based on simple Drude or plasma models should be reliable in the micrometer range. Care must be taken, however, when asymptotic expansions are used below the hundred-nanometer range.
\section{Comparison with numerical solutions}
So far we have tested the Quadrature proposed in
Eq.~\ref{eq:quadrature} for metals described by
the Drude model, where the integrals implied by $I_{m}$ are analytically
solvable\cite{supplementary21}.
As demonstrated in the preceding section, despite its potential strength, the use of the analytic solution of the Drude model is quite circumstantial, depending on whether the high energy regime is negligible or not.
Now we show that the single-point quadrature of Eq.\ref{eq:quadrature} alone is
able to provide a simple method to get the Hamaker constant, $A_{ham}(0)$, upon
numerical integration, even for those cases where $\epsilon_{m}(\omega)$
exhibits a complex high-frequency behavior. We use once again the fits for the
dielectric response of metals published by Tolias and Gudarzi and Aboutalebi to
get the $I_{1}$ and $I_{2}$ appearing in Eq. \ref{eq:quadrature} through
numerical integration with the composite trapezoidal
rule.\cite{tolias18,gudarzi21}
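For reference, the non-retarded limit that the Quadrature is designed to approximate can also be brute-forced as the textbook Matsubara sum of Lifshitz theory, $A=\frac{3}{2}k_{B}T\,{\sum_{n}}'\,Li_{3}(\Delta_{n}^{2})$ with $\Delta_{n}=(\epsilon(i\xi_{n})-1)/(\epsilon(i\xi_{n})+1)$ and the $n=0$ term taken with half weight. The sketch below assumes the single Drude oscillator for Al, so it approximates the Drude-only constant rather than the full-spectrum values of Table \ref{table:tolias}:

```python
import math

K_B = 1.380649e-23      # J/K
EV = 1.602176634e-19    # J per eV

def li3(x, terms=250):
    """Trilogarithm Li_3(x) by direct series (adequate for 0 <= x <= 1)."""
    total, p = 0.0, 1.0
    for k in range(1, terms + 1):
        p *= x
        total += p / k**3
    return total

def hamaker_matsubara(omega_p_ev, gamma_ev, temperature=300.0, n_max=3000):
    """Non-retarded Hamaker constant (J) of two identical Drude-metal plates
    across vacuum: A = (3/2) k_B T sum' Li_3(Delta_n^2), where the n = 0
    term enters with half weight and Delta_0 -> 1 for a conductor."""
    kt = K_B * temperature
    total = 0.5 * li3(1.0)
    for n in range(1, n_max + 1):
        xi_ev = 2.0 * math.pi * n * kt / EV  # Matsubara energy, in eV
        eps = 1.0 + omega_p_ev**2 / (gamma_ev * xi_ev + xi_ev**2)
        delta = (eps - 1.0) / (eps + 1.0)
        total += li3(delta * delta)
    return 1.5 * kt * total

print(hamaker_matsubara(14.78, 8.04e-2))  # Al, Drude only: ~3.4e-19 J
```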
\vspace{0.1cm}
\\
\begin{figure*}[htb!]
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{tolias_al.pdf}
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{tolias_w.pdf}
\caption{Retarded interaction coefficients of several metals
at 300 K. The blue lines represent the Hamaker coefficients for the
interaction
of Al (left), and W (right), using the Lifshitz equation and detailed
dielectric data from Ref.~\cite{tolias18}. The analytic solution within the
Drude model is also displayed in green for comparison. The Hamaker coefficients
are provided in Joule.}
\label{fig:tolias}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{complete_au.pdf}
\caption{$A_{ham}(h)$ for the interaction of gold plates across air. The black line is obtained using Lifshitz theory, with full dielectric properties as reported in Ref.\cite{gudarzi21} (Drude oscillator + core electrons). The blue line shows results obtained using only the Drude oscillator for conduction electrons. The red line shows predictions from NM + Q + WQA for the full dielectric response.
\label{fig:gold}}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{convergence_al.pdf}
\includegraphics[width=0.48\textwidth,height=0.35\textwidth,keepaspectratio]{convergence_w.pdf}
\caption{$A_{ham}(h)$ of Aluminium (left) and Tungsten (right) resulting
from the Lifshitz formula (blue line) and the WQA (red line), with
detailed optical properties as described in Ref.\cite{tolias18}.
The $\nu_{\infty}$ parameter of the WQA has been obtained via the Numerical Method
to match the Quadrature, and the dashed violet lines represent the WQA solution
with one, two, and three terms of the summation in k. Hamaker functions are
provided in Joule.}
\label{fig:convergence}
\end{figure*}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$ $ & $Al$ & $W$ & $Be$ & $Cr$ & $Au$\\
\hline
$A_{ham,Q}(0)/(10^{-19}\ J)$ & $3.44$ & $4.74$ & $3.39$ & $3.61$ & $4.40$ \\
\hline
$A_{ham}(0)/(10^{-19}\ J)$ & $3.53$ & $4.85$ & $3.47$ & $3.69$ & $4.49$ \\
\hline
r. e. $\%$ & $2.55$ & $2.27$ & $2.30$ & $2.17$ & $2.00$ \\
\hline
\end{tabular}
\caption{Hamaker constants for several metals as obtained from
detailed optical properties of Ref.\cite{tolias18,gudarzi21}.
The first line displays the Quadrature values computed with numerical
integration, where $A_{ham,Q}(0)=
A_{ham,Q}^{\omega_{n}>0}(0)+A_{ham}^{\omega_{n}=0}$. The second line shows exact
results from Lifshitz theory. Results for Al, W, Be, Cr from Ref.\cite{tolias18}. Results for Au calculated in this work using the dielectric function provided in Ref.\cite{gudarzi21}. The last line contains the relative error made by the Quadrature method.}
\label{table:tolias}
\end{table}
Fig.\ref{fig:convergence} represents the Hamaker function of Al, comparing the result of the exact Lifshitz equation with the NM+Q+WQA prescription (Eq.\ref{eq:wqa} and Eq.\ref{eq:quadrature}).
We plot in dashed purple lines the solution of Eq. \ref{eq:wqa} including an increasing number of terms in the summation over $k$ to check the convergence of the series. The second term already provides acceptable accuracy with respect to the complete summation, and almost complete convergence is achieved with the third term. This examination supports the use of a Quadrature exact up to second order, which was the assumption under which Eq. \ref{eq:quadrature} was derived. In fact, the difference between the second order ($k=2$) and third order ($k=3$) results is smaller than the uncertainty that results from the parameterization of the dielectric response in most systems\cite{burger20}.
\vspace{0.1cm}
\\
When numerically solving the equality between
$A_{ham,Q}^{\omega_{n}>0}(h\rightarrow 0)$ of the Quadrature and
$A_{ham}^{\omega_{n}>0}(h\rightarrow 0)$ of the WQA for tungsten, it was found
that there is no $\nu_{\infty}$ parameter that provides exactly that match. In
this case, the validity of the Q+WQA method can be
compromised. In practice, we find that taking the parameter that gives the largest possible $A_{ham}^{\omega_{n}>0}(h \rightarrow 0)$ already results in a very good approximation for both the value of the Hamaker constant and the behavior of the Hamaker function in the Casimir regime. Indeed, Fig. \ref{fig:convergence} shows that the WQA closely reproduces the functional behavior of the Hamaker function of the Lifshitz theory. We found that this problem does become quite significant for the relevant case of gold. Here, we did not find a choice of $\nu_{\infty}$ that could match the Hamaker constant, and the optimal value provides a Hamaker function that underestimates $A_{ham}(h\to 0)$ by 15\%, as shown in Fig.\ref{fig:gold}.
\vspace{0.15cm}
\\
Despite this deficiency in predicting the full Hamaker function for some metals, the proposed method for the calculation of Hamaker constants does remarkably well for all metals studied. Table \ref{table:tolias} displays the comparison between the Hamaker constants of Al, W, Be, Cr and Au and those obtained via Eq. \ref{eq:quadrature}. The Quadrature provides good agreement with the result of the Lifshitz equation without retardation, and its relative error never exceeds $3\%$. This suggests the use of this novel Quadrature as a straightforwardly solvable alternative to the more intricate Lifshitz formula. The use of this formula is expected to be particularly helpful for estimating Hamaker constants between materials across a dielectric medium, where the first order approximation or Tabor-Winterton approximation often fails\cite{bergstrom97}.
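The last row of Table \ref{table:tolias} follows directly from the first two; as a quick check:

```python
# Relative error of the Quadrature Hamaker constants against the exact
# Lifshitz values (both in units of 1e-19 J), as in Table (tolias).
a_quad = {"Al": 3.44, "W": 4.74, "Be": 3.39, "Cr": 3.61, "Au": 4.40}
a_lifshitz = {"Al": 3.53, "W": 4.85, "Be": 3.47, "Cr": 3.69, "Au": 4.49}

rel_err = {m: 100.0 * (a_lifshitz[m] - a_quad[m]) / a_lifshitz[m] for m in a_quad}
for metal, err in rel_err.items():
    print(f"{metal}: {err:.2f} %")  # 2.55, 2.27, 2.30, 2.17, 2.00 -- all below 3 %
```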
\section{Conclusions}
\label{sec:conclusion}
Understanding the dispersive interactions between two surfaces is a crucial feature in the study of adhesion and friction phenomena\cite{israelachvili91,parsegian05,munday09}. At zero temperature, the van der Waals free energy exhibits a crossover from the non-retarded (London) behavior at short distances to the retarded (Casimir) behavior at long distances of separation\cite{palasantzas08}. At finite temperature, however, the Casimir regime of retarded interactions is suppressed at sufficiently large distances. This complex crossover behavior is non-trivially embodied in the Lifshitz equation for retarded interactions. The influence of the retardation effect switches the distance dependence of the interaction energy from the $\sim 1/h^{2}$ typical of London dispersion interaction, to the $\sim 1/h^{3}$ associated to the Casimir regime\cite{parsegian05}, and back to $\sim 1/h^{2}$ at finite temperature and inverse distances smaller than the thermal wave-number $\nu_T$.
\vspace{0.1cm}
\\
In this paper, we have worked out an analytical approximation for the Hamaker function which illustrates the crossover behavior between these three different regimes and remains accurate at all ranges of separation. In this study, we also present an accurate Quadrature method to compute the Hamaker constant corresponding to the limit of small plate separation. Our quadrature rule consistently reproduces the values of $A_{ham}(0)$ provided by previous studies\cite{tolias18}, with less than 3\% error.
We have illustrated our results with the special case of two metallic plates in vacuum, but the method can be applied just as well for any two materials interacting across a dielectric medium.
\vspace{0.1cm}
\\
Finally, we made use of the Quadrature method to infer a fully analytical equation for the computation of the retarded interaction coefficients between two metal plates with dielectric response as given by a single Drude oscillator. However, we highlight that the use of this formula is limited to those metals with little response at large frequencies.
\vspace{0.1cm}
\\
We believe that this work provides a comprehensive picture of the behavior of the retarded interactions between metallic plates.
Methodologically, we hope that the quadrature rules employed here can be quantitatively exploited in a broad range of studies where dispersion interactions play an important role.
\section*{Acknowledgments}
We acknowledge funding from the Spanish Agencia Estatal de Investigaci\'on under grant FIS2017-89361-C3-2-P.
\section*{Authors Contributions}
JLM and LGM discussed and formulated theory. JLM performed calculations and drafted manuscript. LGM designed research and revised manuscript.
\section{Introduction}\label{intro}
In the last decade or so gravitationally lensed QSOs, both doubles and
quads, have been used mostly for the determination of the Hubble parameter
(see \citet{coles08} for the latest work, and summary of earlier results),
and for the estimation of the mass distribution in the lensing galaxies.
In this paper we will concentrate on the latter.
One can loosely divide the information on the lens mass distribution into
two categories: radial and angular. Much attention has been paid in the
literature to the sky-projected radial mass distribution
in lenses because the slope of the density profile, and its variation with
radius is a test of cosmological models \citep{nfw96,nfw97}.
The density profile slope in the
central regions is also important because it is affected by the (adiabatic)
contraction of dark matter halos in response to the collapsing baryons during
galaxy formation \citep{fsw05,fsb08,g08}.
The angular distribution of lensing mass, for example, the degree of
ellipticity, the change in the ellipticity position angle with radius,
etc. have received some attention as well \citep{cdk08,sw06,ok04}, but mostly
as ``nuisance'' parameters in determining the radial density profile or the
Hubble constant. It is somewhat ironic that the
generally uninteresting ellipticity position angle can be unambiguously
estimated by any reasonable modeling method, even by eye \citep{sw03},
whereas, the more interesting density profile slope is often very
uncertain because of the mass-sheet, or steepness degeneracy \citep{fgs85,s00}.
The positions of lensed images of a quad or a double can also be looked
at as consisting of angular and radial information. By radial information
we mean the relative spread of images in distance from the lens center.
The angular information is the angular separation of the images as viewed
from the lens center. For example, in the Cloverleaf, H1413+117, and
the Einstein Cross, Q2237+030 any two adjacent images are roughly $90^\circ$
apart. In doubles, the two non-central images tend to be separated by
$\sim 150^\circ-180^\circ$.
Obviously there is no simple one-to-one relation between, say, the radial
structure of the lensing mass and the radial distribution of lensed images.
However, there are some qualitative connections between the two.
For example, a steep lens mass distribution tends to produce quads
with narrow radial spread of images, largely independent of the angular
distribution of these images, or the ellipticity of the lensing mass.
Conversely, if the lensing mass has a shallow density profile the images
tend to have a wider radial spread. In the Appendix of this paper we show
that one angular property of the lensing mass, its ellipticity position
angle can be straightforwardly and rather precisely estimated from the
angular positions of the four images of the quad (Section~\ref{estimatingPA}).
The main work presented in this paper is loosely motivated by the
preceding paragraph. Specifically, we ask what information about the
lensing mass can be retrieved by looking solely at the angular distribution
of lensed images around the lens center.
\section{Defining angles and bisector rays}\label{defining}
Following \citet{sw03}, we refer to the four images of a quad by their
arrival time, as 1,~2,~3,~4. Image 1 is the global minimum of the arrival
time surface and hence is the first arriving image. Image 2 is the second
arriving image, and is also a minimum. Images 3 and 4 are saddles of the
arrival time surface. Image 5, a maximum, is the central demagnified image,
and is usually not detected. (See Figure~\ref{fourpanels}). As explained in
\citet{sw03} figuring out the arrival {\it order} of images in observed quads
can be done, in most cases, based on the morphology of the image distribution
alone, without measuring the time delays.
Images 2 and 3 (minimum and saddle) often appear close together; these are
the two images that merge and disappear when the source moves away from the
lens center. Because of that, the angular separation of these two images
(as seen from the lens center), which we will call $\theta_{23}$ can be a
measure of the "quadrupoleness" of a quad system. When 2 and 3 are close
together the system is barely a quad, and could have been a double if the
source happened to be somewhat further away from the lens center, whereas
a quad with images 2 and 3 about $90^\circ$ apart is a ``well established''
quad.
We also define $\beta_{12}$, as the ray anchored at the lens center that bisects
the angle between images 1 and 2. If we further specify that $\beta_{12}$
points roughly away from image 4, then the definition of $\beta_{12}$ is
unambiguous.
Similarly, we define $\beta_{34}$ as the ray bisecting the angle between
images 3 and 4, and pointing roughly away from image 1. The two lower panels
in Figure~\ref{fourpanels} show both these rays for a synthetic mass distribution,
whose projected density contours are shown in the upper left panel.
The images are filled circles. The arrival time surface
is shown in the upper right. The lower left panel shows that the images are
found as the intersection of the solution of the lens equation in the $x$ and
$y$ directions, shown by thick (red) and thin (blue) curves, respectively.
The lower right panel shows the source plane caustics, the source position
(empty green circle), and the two bisector rays.
These angles and bisector rays turn out to have some very interesting
properties, which relate to certain aspects of the lens mass distributions.
\section{Mass distribution: Lenses with two-fold symmetry}\label{twofold}
\subsection{Defining two-fold symmetric lenses}\label{deftwofold}
A two-fold symmetric lens is a projected mass distribution that has
two orthogonal axes of bilateral symmetry. A wide class of popular lens models
are two-fold symmetric. For example, this category includes elliptical
lenses, with any radial density profile. The degree of ellipticity can be a
function of radius, but the ellipticity position angle (PA) should not change
with radius. Lenses with single or multiple external shear axes, as long as
the shear axes are arranged so as to obey the symmetry, also belong in this
category. Two lens classes commonly used for parametric modeling, Pseudo
Isothermal Elliptical Mass Distributions (PIEMD) and Pseudo Isothermal
Elliptical Potentials (PIEP) \citep{kk93} are also members of the two-fold
symmetric family of lenses.
We exclude lenses that, even though two-fold symmetric, have `wavy'
isodens. (Isodens are contours of equal projected surface mass density in the lens.)
Examples are lenses whose isodens follow $\cos(2n\theta)$, with $n>1$, or whose
isodens look like petals. In other words, mass distributions with non-convex
isodens are excluded. This is further discussed in Section~\ref{invariant}.
The mass distributions thus defined will be referred to as two-fold symmetric.
In this paper we examine mass distributions through the properties of the
quad lenses they generate. Our study is statistical in nature; we use the
properties of the entire quad population produced by a given mass distribution.
Insights gained from this study help to draw conclusions from the real data, where
a given galaxy lenses one, or maybe a small handful of sources.
In this Section we discuss two-fold symmetric lenses and show that
members of this family are indistinguishable when viewed in a diagnostic
plane whose axes are certain combinations of image angles. Next, we discuss
this diagnostic 'bisector' plane.
\subsection{Introducing the bisector plot}\label{bisector}
The lower right panel of Figure~\ref{fourpanels} suggests that the axes
containing $\beta_{12}$ and $\beta_{34}$ are good indicators of the
orientation of the diamond caustic, and by extension, the PA of the major
and minor axes of the lensing mass distribution around the image ring.
This statement is quantified in the Appendix; here we use this observation
to motivate our choice of $\beta_{12}-\beta_{34}$ as an angle that contains
useful information about the lensing mass.
In the main portion of Figure~\ref{bisector_twofold} (upper right panel)
we plot $\beta_{12}-\beta_{34}$ vs. $\theta_{23}$. Each (red) dot represents a
4-image lens configuration (a quad); all the dots arise from the same galaxy,
but each dot has a different source position, picked randomly on the source
plane. (Sources that do not produce quads did not make it into this plot.)
The galaxy lens used here has an ``isothermal''
projected density profile $\Sigma(R)\propto R^{-1}$ with a small core to avoid
central singularity. The ellipticity, $\epsilon=0.2$, is constant with radius.
(The relation between $\epsilon$ and the axis ratio, $r$ is,
$\epsilon=[1-r]/[1+r]$.)
We call the distribution of points in the $\beta_{12}-\beta_{34}$ vs.
$\theta_{23}$ plane, the bisector plot. The first thing to note is that
the distribution of points in the bisector plot is not random.
There are no quads with the bisector difference less
than $90^\circ$. More interestingly, there is a well defined envelope, a
curved line above and to the right of which there are no quads. We will
call this the `envelope'.
The bisector plot of Figure~\ref{bisector_twofold} is
flanked by two panels. The solid line histogram in the left side panel shows the
distribution of bisector plot points along the $\theta_{23}$ direction; the
$\beta_{12}-\beta_{34}$ values have been ``marginalized'' over. The solid line
histogram in the bottom panel is the distribution of $\beta_{12}-\beta_{34}$
values; here, the $\theta_{23}$ values have been marginalized over. These two
histograms do not fully quantify the distribution of points in the main
two-dimensional bisector plot, but they do give us an easy, though incomplete
way of examining that distribution.
As an example consider a hypothetical quad lens at ($100^\circ,60^\circ$).
When projected on to the two histograms the point falls in the middle of
both the distributions. So, if one is to ask if this point could have been
drawn from the two distributions, the answer would be 'yes' in both cases.
However, looking at the full 2-d bisector plane it is obvious that the quad
cannot be generated by this lens, as it lies above the bounding envelope,
well outside the distribution.
\subsection{The bisector plot: an invariant property?}\label{invariant}
In the previous section we looked at the bisector plot of one type of lens,
with a certain density profile and certain ellipticity. We have also generated
bisector plots for many types of lenses, with varying density profiles,
varying degrees of ellipticity, including ellipticity $\epsilon(r)$ which
changes in radius, lenses with and without external shear, etc.
Our numerous experiments suggest that {\it all lenses that possess two-fold
symmetry, regardless of the radial density distribution and the magnitude or
radial dependence of ellipticity and external shear, generate the same
distribution of points in the bisector plot, bounded by a vertical line
and a concave envelope.} We conclude that all two-fold symmetric lenses, as
defined in Section~\ref{deftwofold} are indistinguishable
in the bisector plot. This is one of the main findings of this paper.
This invariance must derive from the shape of the caustic in the source plane.
From our experiments we have noticed that the inner (five image) caustics of
all two-fold symmetric lenses are diamond-shaped, and appear to share
the following two features. First, the diamond caustic itself has two-fold
symmetry (and so the two lines connecting the opposite cusps are perpendicular
to each other), and second, the diamond caustics of any two such lenses can be
made to have the same shape if one is allowed to linearly stretch or shrink
them in the directions along the lines connecting the opposite cusps.
By symmetry arguments, the first feature seems natural for lens mass
distributions that have two-fold symmetry. The lines connecting opposite
cusps of the diamond caustic of a lens with no such symmetry, for
example the one shown in Figure~\ref{fourpanels} (lower right panel),
do not intersect at right angles. The second feature implies the invariance
of the caustic itself (modulo linear stretching of the $x$ or $y$ coordinates),
and is probably the crux of the bisector plot invariance shown in
Figure~\ref{bisector_twofold}.
The invariance does not extend to lenses that have `wavy' isodens; such lenses
tend to produce caustics more complicated than diamond shapes.
The invariance does not apply to lenses with naked cusps, i.e.
lenses whose diamond caustic cusps stick outside of the oval caustic because
of large ellipticity in the mass distribution.
\subsection{The bisector plot envelope for a specific lensing potential}\label{SISell}
The set of quads that delineate the upper bounding envelope of the bisector
plane, shown, for example in Figure~\ref{bisector_twofold}, must correspond
to a continuous set of sources in the source plane of any two-fold
symmetric lens. We speculate, and confirm using experiments with synthetic
lenses, that the envelope quads, when mapped back to the source plane, form
a straight line that connects the center of the lens to the point on the
diamond caustic closest to the center; we call this the point of closest
approach, and denote it $\vec r_c$.
If the bisector plane is indeed universal, as we claim, then the envelope
must be described by a universal analytical expression. Here we derive the
equation for the envelope for a specific type of a two-fold symmetric lens.
We start with a lensing potential of the form, $\phi(r,\theta)=r\,f(\theta)$
\citep{wmk00}, and work in cylindrical coordinates on the plane of the sky.
The arrival time surface is,
$\psi(r,\theta)=\frac{1}{2}|\vec r-\vec r_s|^2-\phi(r,\theta)$.
The lensing equation, $\vec\nabla\psi=0$, in the $\hat r$ and $\hat \theta$
directions is written as,
\begin{equation}
r_s\cos(\theta-\theta_s)=r-f,\quad\quad\quad\quad
r_s\sin(\theta-\theta_s)={{\partial f}\over{\partial\theta}}
\label{lenseq}
\end{equation}
Using these, the square of the distance of the source from the lens
center is,
\begin{equation}
r_s^2=(r-f)^2+\Bigl({{\partial f}\over{\partial\theta}}\Bigr)^2.
\label{rs2}
\end{equation}
The determinant of the magnification matrix for our lensing potential is,
\begin{equation}
\det A={1\over r}\Bigl[(r-f)-{{\partial^2f}\over{\partial\theta^2}}\Bigr]
\label{detA}
\end{equation}
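Eq.~\ref{detA} can be checked numerically against a finite-difference Hessian of the lensing potential; the sketch below uses $f(\theta)=b(1+\gamma\cos 2\theta)$ (the model introduced in Section~\ref{SISell}) purely as a test case:

```python
import math

def phi(x, y, b=1.0, g=0.2):
    """Lensing potential phi = r * f(theta), with f = b(1 + g cos 2 theta)."""
    r = math.hypot(x, y)
    th = math.atan2(y, x)
    return r * b * (1.0 + g * math.cos(2.0 * th))

def det_a_numeric(x, y, h=1e-5):
    """det A = det(I - Hessian(phi)) via central finite differences."""
    pxx = (phi(x + h, y) - 2 * phi(x, y) + phi(x - h, y)) / h**2
    pyy = (phi(x, y + h) - 2 * phi(x, y) + phi(x, y - h)) / h**2
    pxy = (phi(x + h, y + h) - phi(x + h, y - h)
           - phi(x - h, y + h) + phi(x - h, y - h)) / (4 * h**2)
    return (1 - pxx) * (1 - pyy) - pxy**2

def det_a_formula(x, y, b=1.0, g=0.2):
    """Eq. (detA): det A = [(r - f) - f''] / r, here for f = b(1 + g cos 2 th)."""
    r = math.hypot(x, y)
    th = math.atan2(y, x)
    f = b * (1.0 + g * math.cos(2.0 * th))
    fpp = -4.0 * b * g * math.cos(2.0 * th)
    return ((r - f) - fpp) / r

print(det_a_numeric(1.3, 0.7), det_a_formula(1.3, 0.7))
```

The agreement relies on the fact that $\phi=r f(\theta)$ is homogeneous of degree one, so its Hessian is singular and only the trace term survives in the determinant.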
For sources on the caustic, $\det A=0$, and so
$r-f={\partial^2f}/{\partial\theta^2}$. The caustic equation becomes
\begin{equation}
r_s^2=\Bigl({{\partial^2f}\over{\partial\theta^2}}\Bigr)^2
+\Bigl({{\partial f}\over{\partial\theta }}\Bigr)^2.
\label{rs2caus}
\end{equation}
The two lensing equations, eq.\ref{lenseq} can then be rewritten as,
\begin{equation}
r_s\cos(\theta-\theta_s)={{\partial^2 f}\over{\partial\theta^2}},
\quad\quad\quad\quad
r_s\sin(\theta-\theta_s)={{\partial f}\over{\partial\theta}}.
\label{lenseqcaus}
\end{equation}
Equations~\ref{rs2caus} and \ref{lenseqcaus} make it apparent that
the caustic is oval shaped in the plane defined by orthogonal axes equal
to the second and first derivatives of $f$ with respect to $\theta$,
respectively. The angle that specifies position in that plane is
$(\theta-\theta_s)$. This oval is illustrated in Figure~\ref{oval}, with
filled points, and the right and upper axes. Note that this plane, where the
caustic has an oval shape, is not the same as the source plane. For comparison,
the caustic in the source plane is also shown in Figure~\ref{oval}, with empty
points, and the left and lower axes. In the source plane the caustic has the
usual diamond shape.
The point of closest approach belongs to the oval and is either on the
${{\partial^2 f}\over{\partial\theta^2}}$ axis, or on the
${{\partial f}\over{\partial\theta}}$ axis, i.e. either
${{\partial f}\over{\partial\theta}}=0$, or
${{\partial^2 f}\over{\partial\theta^2}}=0$, respectively.
To proceed further we specify the form of $\phi$,
\begin{equation}
\phi(r,\theta)=br(1+\gamma\cos 2\theta),
\label{phi}
\end{equation}
where $b$ and $\gamma$ are constant for any given lens. This is the lensing
potential of a singular isothermal sphere with an added elliptical perturbation,
$\gamma$, which generates shear. If there were no shear, $b$ would be the
Einstein ring radius of the SIS lens. This SIS+elliptical lens model
is discussed, for example, in \citet{dalal98}. For this lens,
\begin{equation}
{{\partial^2f}\over{\partial\theta^2}}=-4b\gamma\cos 2\theta,\quad\quad\quad\quad
{{\partial f}\over{\partial\theta}}=-2b\gamma\sin 2\theta,
\label{derivs}
\end{equation}
which implies that the point of closest approach corresponds to
${{\partial^2 f}\over{\partial\theta^2}}=0$. (This is shown as the solid line
segment in Figure~\ref{oval}.) From the first of equations~\ref{lenseqcaus},
and restricting ourselves to the 1st and 4th quadrants
(the other two are redundant because of symmetry) we derive that
$\theta-\theta_{c}=\pi/2$, $\theta=\pi/4$, and so $\theta_{c}=-\pi/4$. Here,
$\theta$ is the lens plane angle of only one of the images. $\theta_{c}$ is the
angle of the point of the closest approach, $\vec r_c$ in the source plane,
which is shown as the dashed line segment in Figure~\ref{oval} (left and lower
axes refer to the source plane).
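Equations~\ref{rs2caus} and \ref{derivs} can be checked numerically: scanning the caustic of this lens confirms both that the closest approach to the lens center occurs where $\partial^2f/\partial\theta^2=0$, and that its value is $r_c=2b\gamma$. The sketch below uses illustrative numerical values for $b$ and $\gamma$, not a fit to any real lens.

```python
import numpy as np

b, gam = 1.0, 0.1  # illustrative lens parameters

th  = np.linspace(0.0, 2.0 * np.pi, 100001)
fp  = -2.0 * b * gam * np.sin(2.0 * th)    # df/dtheta,       eq. (derivs)
fpp = -4.0 * b * gam * np.cos(2.0 * th)    # d2f/dtheta2,     eq. (derivs)
r_s = np.sqrt(fpp ** 2 + fp ** 2)          # caustic radius,  eq. (rs2caus)

i, r_c = r_s.argmin(), r_s.min()
print(r_c / (2.0 * b * gam))   # ~1: closest approach at r_c = 2 b gamma
print(abs(fpp[i]) < 1e-3)      # -> True: minimum lies where d2f/dtheta2 = 0
```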
According to our hypothesis all the points defining the bisector plot envelope
lie on a straight line. Therefore, having found its angle, namely $\theta_{c}$,
we can now solve for the source positions themselves. To do this we use the
second of equations~\ref{lenseqcaus}. Squaring it, and using
~$\sin^2\theta_{c}=\cos^2\theta_{c}=\frac{1}{2}$ we get,
\begin{equation}
\frac{1}{2}\Bigl[{{r_s}\over{2b\gamma}}\Bigr]^2(1-\sin 2\theta)
=\sin^2 2\theta
\label{quadratic}
\end{equation}
Here, $\theta$ refers to any one of four images, two minima and two
saddles, and in fact this quadratic equation does have four solutions.
There are two solutions for $\sin 2\theta$ from the quadratic itself, and
each one of these gives two solutions because
~$\cos 2\theta=\pm \sqrt{1-\sin^2 2\theta}$.
The two images with $\sin 2\theta>0$ are in the 1st and 2nd quadrants, while
the other two are in the 3rd and 4th. For each of these two pairs of images
their $x$-coordinates place them equidistantly on either side of the $y$-axis.
This implies that the angular distribution of the four images is symmetric about
the $y$-axis. We can take advantage of this in determining how to sort these 4
images in order of arrival time. First note that
images 2 and 3 are interchangeable; the same is true for images 1 and 4.
Images 2 and 3 are the ones that merge together when the
source is on the caustic. This happens for the largest possible $r_s$, i.e.
$r_c=2b\gamma$. By considering various pairs of adjacent images in turn, one
can show that of the 4 images the two that satisfy the merging criterion are
the ones with $\sin 2\theta=-(\Delta+K)/2$, where $\Delta=\sqrt{K^2+4K}$, and
$K=\frac{1}{2}[r_s/r_c]^2=\frac{1}{2}[r_s/2b\gamma]^2$.
When the source is on the caustic $2\theta=-\pi/2$
for both of these. The other two images have to be 1 and 4.
The angular separation between images 2 and 3 is then
\begin{equation}
\theta_{23}=
\pi/2-\tan^{-1}\Biggl[{{(\Delta+K)/2}\over{\sqrt{1-\frac{(\Delta+K)^2}{4}}}}\Biggr].
\label{th23}
\end{equation}
Similarly, the angular separation between images 1 and 4, which is always
greater than $\pi/2$ is,
\begin{equation}
\theta_{14}=
\pi/2+\tan^{-1}\Biggl[{{(\Delta-K)/2}\over{\sqrt{1-\frac{(\Delta-K)^2}{4}}}}\Biggr].
\label{th14}
\end{equation}
Then, by visualizing the angles involved, one arrives at the bisector angle difference,
\begin{equation}
\beta_{12}-\beta_{34}=[2\pi-(\theta_{23}+\theta_{14})]/2
\label{bisd}
\end{equation}
This is what is plotted as the solid curve in Figure~\ref{bisector_twofold},
and subsequent similar figures.
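A short numerical sketch ties eqs.~\ref{th23}--\ref{bisd} together. Parameterizing the source position by $K=\frac{1}{2}[r_s/r_c]^2$, the bisector difference runs from $90^\circ$ for a source at the lens center (the apex of the bisector plot) to $120^\circ$ for a source on the caustic, where $\theta_{23}=0$.

```python
import numpy as np

def angles(K):
    # K = (1/2)(r_s/r_c)^2: 0 at the lens center, 1/2 on the caustic
    D = np.sqrt(K ** 2 + 4 * K)                 # Delta
    s23 = (D + K) / 2.0                         # |sin 2theta| for images 2, 3
    s14 = (D - K) / 2.0                         # |sin 2theta| for images 1, 4
    # arcsin(s) equals arctan(s / sqrt(1 - s^2)), but is safe at s = 1
    th23 = np.pi / 2 - np.arcsin(s23)           # eq. (th23)
    th14 = np.pi / 2 + np.arcsin(s14)           # eq. (th14)
    beta = (2 * np.pi - (th23 + th14)) / 2.0    # eq. (bisd)
    return np.degrees(th23), np.degrees(th14), np.degrees(beta)

print(angles(1e-9))  # ~ (90, 90, 90): the apex of the bisector plot
print(angles(0.5))   # ~ (0, 120, 120): source on the caustic
```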
\section{Real quads}\label{realquads}
Our quad lenses are taken from the CASTLeS data set \citep{castles}.
We used all quads, except,
PMNJ0134-0931, whose lensing galaxy's position is ambiguous;
B0128+437, whose lens center is unknown;
SDSS1406+6126, which has partial data; and
Q0047-2808, SDSS1029+2623, SDSS1402+6321 which have no data at all.
We also used two lenses that are not in CASTLeS:
SDSS J125107.57+293540.5 \citep{k07}, and
HE1113-0641 \citep{bws07}.
Cluster lens SDSS J1004+4112, with QSO image separation of $\sim 15''$ was
excluded because the images are formed by the central part of a galaxy cluster,
not a galaxy. The source in B1933+503 is a double lobed radio source,
whose core and one of the lobes are each lensed into quads. These two quads
were included as two separate lenses. This gives us a total of 26 quad lenses
listed in Tables~\ref{table1} and \ref{table2}. Lenses in Table~\ref{table1} have
unambiguous arrival time ordering of images.
In some cross-like quads it is hard to know what the correct numbering of
images should be. In the most ambiguous cases we can only be certain that images
1 and 2 should lie across from one another, and so should images 3 and 4. Using
this as the only rule gives us four distinct
$(\beta_{12}-\beta_{34},~\theta_{23})$ pairs. However, two of these have
unrealistically large $\theta_{23}$ values, generally in excess of $100^\circ$,
and can therefore be discarded, leaving us with two possibilities for the
$(\beta_{12}-\beta_{34},~\theta_{23})$ pair. There are 10 ambiguous lenses,
and each one generates two lines in Table~\ref{table2}.
The quad data is shown in the bisector plot of Figure~\ref{bisector_data}.
The unambiguous arrival time order lenses are represented by bold star symbols.
Each one of the 10 ambiguous time order lenses is represented by two smaller
star symbols, connected by a thin line.
It is apparent from Figure~\ref{bisector_data} that the real quads are not
drawn from the quad distribution generated by two-fold symmetric
lenses. This is most clearly seen close to the `apex' of the bisector
plot, near $(\beta_{12}-\beta_{34},~\theta_{23})=(90^\circ,90^\circ)$.
Here, nearly all star symbols lie outside of the apex outlined by two-fold
symmetric lenses. The lower portion of the two-fold symmetric lens bisector
plot, roughly below $\theta_{23}\approx 60^\circ$, also appears to be inconsistent
with the observed quad population: the latter are distributed more or less
evenly in the region below the envelope, whereas the density of small points
(from two-fold symmetric lenses) in Figure~\ref{bisector_twofold} increases
sharply as one approaches the envelope from below. The final major difference
is that there is an apparent dearth of real lenses with $\theta_{23}\sim 50^\circ$,
which is not reproduced in the two-fold symmetric lenses.
The two solid line histograms in (the two side panels of)
Figure~\ref{bisector_data} represent two-fold symmetric lenses, while the
histogram delineated with star symbols represents the quad data. The Kolmogorov--Smirnov
(KS) test as applied to the $\theta_{23}$ distribution states
that the real quads could not have been drawn from the two-fold symmetric
lens population at the 95\% confidence level. The main reason for this is the lack of
real quads with $\theta_{23}$ around $50^\circ$, exactly where the two-fold
symmetric lenses predict most of the quads to lie.
The KS test applied to the $(\beta_{12}-\beta_{34})$ distribution is far less
conclusive, but note that the KS test is not the optimal test here. In
Section~\ref{twofold} we saw that no strictly two-fold symmetric lens can produce
$(\beta_{12}-\beta_{34})$ even a degree smaller than $90^\circ$. So the
presence of real quads with $(\beta_{12}-\beta_{34})\sim 85^\circ$ rules
out these lenses. We conclude that the population of real quads could not have
been generated by two-fold symmetric galaxy lenses only. Many lensing galaxies
must have more complicated mass distributions.
In the next section we explore lenses with twisting isodens and lenses
with various degrees of substructure. That substructure may be important is
already suggested by HE0230. The image time ordering of this lens is unambiguous.
Its coordinates in the bisector plot of Figure~\ref{bisector_data} are at
approximately ($116^\circ$,$41^\circ$), quite far above the envelope.
According to the arguments of Section~\ref{twofold}, the lens mass distribution
must deviate strongly from two-fold symmetric. And in fact, looking
at the optical image of the lens (see CASTLeS) it is apparent that in
addition to the main lensing galaxy there is a secondary galaxy, located
close to image 4. The spectroscopic data of \citet{e06} show that the main
lensing galaxy and the smaller secondary one are most probably members of a
galaxy group. A tentative conclusion, to be tested in the next section, is
that lens substructure in HE0230 and other lenses is responsible for the
disagreement between the bisector plots of two-fold symmetric lenses
and the real quad population.
\section{Mass distribution: Lenses lacking two-fold symmetry}\label{notwofold}
This is a large class of lens models, for example, lenses with twisting
density contours, lenses with internal and external shear of different
amplitudes and PAs, lenses with substructure, etc. Many real lenses belong in
this vast category.
As a first example we take a synthetic galaxy lens with highly twisting
isodens, the one shown in Figure~\ref{fourpanels}, and also in the
lower left inset in Figure~\ref{bisectorRT}. The thick (blue) contour has
the surface mass density equal to the critical value for lensing.
The main portion of the same
figure is the bisector plot. The single peak of
Figure~\ref{bisector_twofold} has now split into two peaks. The upper right inset
in a plain line box shows the source plane caustics. In contrast to the
caustics of two-fold symmetric lenses, this diamond caustic is not
two-fold symmetric, for example, the lines connecting its opposite cusps are not
perpendicular to each other.
The left and bottom side panels of Figure~\ref{bisectorRT} show, in bold,
the $\theta_{23}$ and $\beta_{12}-\beta_{34}$ histograms for this lens.
As in the case of two-fold symmetric lenses, the real quad
$\theta_{23}$ distribution does not match that of the synthetic lens with
twisting isodens, because the latter peaks, instead of dipping, around $50^\circ$.
The mass distribution of the Figure~\ref{bisectorRT} lens was not meant
to represent any real projected galaxy. Isoden twists in real galaxies result
from the projection of intrinsically triaxial galaxies with radially
dependent axes ratios. To produce a more realistic isoden twisting we start
with a three dimensional mass distribution given by,
\begin{equation}
\rho(r)=(1+r/r_0)^{-2},\quad\quad\quad {\rm and} \quad
~r^2={x^2\over{a^2}}+{y^2\over{b^2/t}}+{z^2\over{c^2/t}},
\end{equation}
where $t$, a parameter proportional to $x$, governs the rate of change of
axis ratios with radius. We used $a:b:c=1:10:2$. Projecting this triaxial
galaxy on to the plane of the
sky using Euler angles $\phi=30^\circ$, $\theta=40^\circ$ and $\psi=100^\circ$
we get the mass map shown in the lower left inset of Figure~\ref{bisectorX1}.
The normalization of the mass distribution is such that the thick (blue) contour
has the critical surface mass density for lensing. The difference in the PA of
the inner and outer isodens is about $70^\circ$, consistent with what is
observed for nearby galaxies \citep{l05}. For our purposes, this synthetic galaxy
is a reasonable approximation for a typical projected triaxial galaxy.
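The projection itself can be sketched numerically. The fragment below is illustrative only: it takes $r_0=1$, adopts $t=1+|x|$ (the text specifies only that $t$ is proportional to $x$), and integrates along the $z$ axis rather than the Euler-rotated line of sight used for Figure~\ref{bisectorX1}.

```python
import numpy as np

# Illustrative axis ratios a:b:c = 1:10:2 and scale radius r0 = 1.
a, b, c, r0 = 1.0, 10.0, 2.0, 1.0

def rho(x, y, z):
    # triaxial density with radially varying axis ratios;
    # t = 1 + |x| is an assumed form of the twist parameter
    t = 1.0 + np.abs(x)
    r = np.sqrt(x ** 2 / a ** 2 + y ** 2 / (b ** 2 / t) + z ** 2 / (c ** 2 / t))
    return (1.0 + r / r0) ** -2

def sigma(x, y, zmax=50.0, n=4001):
    # surface density: line-of-sight integral of rho (simple Riemann sum)
    z = np.linspace(-zmax, zmax, n)
    return rho(x, y, z).sum() * (z[1] - z[0])

# the projected density falls off with distance from the lens center:
print(sigma(0.0, 0.0) > sigma(3.0, 0.0) > sigma(6.0, 0.0))  # -> True
```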
Sampling the source plane caustic, shown in the upper right inset, using
randomly placed sources we get the main panel of Figure~\ref{bisectorX1}.
This bisector plot looks similar to the one in Figure~\ref{bisectorRT},
only the separation of the peaks around $\beta_{12}-\beta_{34}=90^\circ$ is
smaller. In general, the spread of the peaks is directly related to the degree
of isoden twisting in the lens. Just as in the case of Figure~\ref{bisectorRT},
this lens model, and by extension the population of realistic triaxial galaxies
cannot reproduce the bisector plot distribution of the real quads, primarily
because of the dearth of observed quads with $\theta_{23}$ near $50^\circ$.
Before we leave lenses with twisting isodens we note that elliptical lenses
with external shear whose axis does not coincide with the PA of the lens
produce bisector plots similar to the ones in Figures~\ref{bisectorRT} and
~\ref{bisectorX1}.
Next, we turn to lenses with substructure lumps, like secondary or satellite
galaxies located close to the primary lens galaxy. Our goal here is to consider a few
representative substructure types. A systematic exploration of the substructure
and what matches observations best will be done in a later paper.
Figures~\ref{bisectorR5} and \ref{bisectorR7} show results for lenses with one
subclump each. In the first case, Figure~\ref{bisectorR5}, the subclump
represents a small perturbation to the lens, so the caustic is only slightly
distorted from its two-fold symmetric diamond shape. Because the lens is now
more complex, the bisector plot is also more complex. However, the
$\theta_{23}$ distribution still does not look like that of the real quads.
In the second case, Figure~\ref{bisectorR7}, the subclump
is compact and relatively more massive. Here, the lens' $\theta_{23}$
distribution (left side panel) looks qualitatively different from all the ones
we have considered so far: it is no longer a single-peaked distribution centered at
about $55^\circ$. The main peak has moved to $40^\circ$, and there is an
incipient second peak close to $\theta_{23}=90^\circ$. Furthermore, the bisector
plot points are beginning to extend far above the envelope, almost reaching
HE0230, the `outlier' at ($116^\circ,~41^\circ$). Perhaps it is not surprising
that this lens model (almost) reproduces HE0230; the lens model contains a major
secondary perturber, just as the real lens in the HE0230 system.
Figure~\ref{bisectorR6} shows the results for a lens with two substructure
clumps. The caustic bears no resemblance to a diamond shape, and the bisector
plot distribution is very complex. This lens model reproduces, at least
qualitatively, major features of the observed quad distribution in the bisector
plane. Note that we did not aim to do so; no effort was put
into matching the observed distribution in any detail. The dearth of quads at
$\theta_{23}\sim 50^\circ$ is present in the synthetic lens, and the distribution
of points in the bisector plane extends all the way to HE0230, something that even
the lens of Figure~\ref{bisectorR7} could not do.
Figures~\ref{bisectorX1}--\ref{bisectorR6} are meant only as qualitative
guides to different types of non two-fold symmetric lenses. Based on these
we tentatively conclude that the real population of quad lenses requires
lumpy substructure; features like twisting isodens and external shear are
not enough. However, a thorough exploration of the parameter space of lenses
is needed to make robust conclusions. This will be the subject of a later paper.
\section{Real doubles}\label{doub}
As the source of a quad system moves further away from the lens center,
images 2 and 3 move closer to each other, and closer to the critical
line, and eventually disappear, transforming the lens into a double.
As a quad turns into a double, $\theta_{23}=0$ and the remaining images,
1 and 4, become the two images of a double. Figure~\ref{bisector_twofold}
tells us that the largest bisector difference in a quad is $120^\circ$.
Combining this with eq.~\ref{bisd} tells us that ``newly formed'' doubles
should have $(2\pi-\theta_{14})/2\leq 120^\circ$, i.e. their image separation
should be at least $\theta_{14}=120^\circ$. So, there should be no doubles with
image separation $<120^\circ$. If the lens is not two-fold symmetric this
limiting angle can change a little.
Because doubles have only two images there is no such thing as a bisector
plot for doubles, however, one can make a plot equivalent to the bottom
panels of Figures~\ref{bisector_twofold}-\ref{bisectorR6}. This is shown in
Figure~\ref{doubles}. The thick solid line histograms the angle between the
two images of 39 doubles taken from CASTLeS. As expected, the angle between
the two images generally stays above $120^\circ$.
The other four histograms in Figure~\ref{doubles} represent synthetic lenses.
The two thin solid line histograms correspond to galaxy lenses whose
projected density profile is proportional to $\exp(-R^{0.25})$.
The two dashed histograms represent ``isothermal'' lenses with a small core;
outside the core the projected density scales as $R^{-1}$. Each one of these
density profiles was given two, constant in radius, ellipticities:
$\epsilon=0.1$ (axis ratio, $r=0.82$) and $\epsilon=0.2$ (axis ratio, $r=0.67$).
Each one of the two
shallower lenses was given the same ellipticities. The ellipticities are
labeled in the plot. All four synthetic lenses are two-fold symmetric,
but, in contrast to the quads, the distributions of these lenses in the
equivalent of the $\beta_{12}-\beta_{34}$ histogram are different.
The conclusion we draw is that the distribution of doubles in angles is a
more complex function of the galaxy lens parameters than is the case for quads.
A more detailed exploration of the doubles distribution in angles, perhaps
coupled to the analysis of the quads, will be a subject of another paper.
\section{Summary and Conclusions}
We introduce a novel way of analyzing the projected mass distribution in galaxy
lenses that relies on the angular distribution of images in quads and doubles
around the lens center. If the images of a quad are numbered in order of arrival,
as $\theta_1$, through $\theta_4$, and $\theta_{ij}$ is the angle between images
$i$ and $j$ then we define the bisector plane whose axes are linear combinations
of $\theta_{23}$ and $\theta_{14}$. We show empirically that all two-fold symmetric
lenses with convex isodensity contours are identical when considered in the
bisector plane. We derive an analytical expression for the boundary envelope of
the allowed region, for a specific type of lens. These results concerning the
invariance of the bisector plane for two-fold symmetric lenses are among the main
findings of the paper. It means, for example, that from the point of view of
$\theta_{23}$ and $\theta_{14}$ of quads, a Pseudo Isothermal Elliptical Mass
Distribution is identical to a circular lens, with any density profile plus an
external shear.
This invariance of the bisector planes of two-fold symmetric lenses can be used
to examine the structure of the real galaxy lenses. We conclude that the
observed quad population was not produced by two-fold symmetric lenses.
We also look at three realistic types of non two-fold symmetric mass distributions,
(1) galaxies with twisting isodensity contours, and elliptical galaxies with
a misaligned external shear axis, (2) galaxies with single substructure clumps, and
(3) galaxies with two substructure clumps. It appears that only the last type
of lenses is able to reproduce the real quad population. This of course does not
mean that all galaxies with observed quads are of type (3), but it does
suggest that kpc-scale substructure is a common feature in galaxy lenses.
To confirm and quantify this conclusion a much more detailed exploration of the
parameter space of non two-fold symmetric lenses is needed. Such a study should
also include potential sources of bias in the quads. For example,
in this paper we have assumed that the real lenses represent a random
sampling of the relevant region in the source plane; in other words, all
sources have the same weights. This means that we have neglected magnification
bias, which makes sources at certain source plane locations more magnified,
and hence more likely to enter a magnitude limited sample. The bias is probably
negligible for quads, since they are already highly magnified; after all,
quads are closely related to Einstein rings. It is unlikely that there is a
missing population of faint quads. However, the magnification bias could be an
issue for the doubles, and will need to be taken into account in future work.
Two final notes are in order. First, the lumpy substructure we refer to here
is different from that searched for using image flux anomalies, e.g. \citet{m04}.
In the latter case substructure lumps are small, and have to lie close to the
line of sight to the images. Our substructure lumps are larger, kpc-sized, more
extended and can live anywhere within the central several kpc of the galaxy lens
center. Second, the varied and complex lumpy substructure that our analysis implies
the lenses should have argues strongly for the use of non-parametric or
semi-parametric modeling techniques.
\acknowledgements
This work was supported in part by NSF grant AST 03-07604, and
HST-AR-10985.01-A.
\section{Introduction}
\label{intro}
\IEEEPARstart{I}{n} the context of exploration, ergodic trajectory
optimization computes control laws that drive a dynamic system along
trajectories such that the amount of time spent in regions of the state
space is proportional to the expected information gain in those regions.
Using ergodicity as a metric encodes both exploration and
exploitation---both the need for nonmyopic search when variance is high
and convexity is lost, as well as myopic search when variance is low and
the problem is convex. By encoding these needs into a metric
\cite{Mathew}, generalization to nonlinear dynamics is possible using
tools from optimal control. We show here that different dynamical
systems can achieve nearly identical estimation performance using ergodic exploration of distributed information (EEDI).
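In the spectral formulation of \cite{Mathew}, the ergodic metric compares the time-averaged Fourier coefficients of a trajectory with those of the target density, so driving the metric down makes time spent match expected information. A minimal one-dimensional sketch, in which the basis, weights, and target density are all illustrative choices:

```python
import numpy as np

K = 20
k = np.arange(1, K + 1)
Lam = (1.0 + k ** 2) ** -1.0            # Sobolev-type mode weights

def fourier(x):
    # orthonormal cosine basis on [0, 1]; rows: modes, cols: samples
    return np.sqrt(2.0) * np.cos(np.pi * k[:, None] * x[None, :])

def ergodic_metric(traj, xi):
    c = fourier(traj).mean(axis=1)       # time-averaged trajectory coefficients
    return np.sum(Lam * (c - xi) ** 2)   # weighted distance to target coefficients

# target: uniform density on [0, 1]  ->  xi_k = 0 for all k >= 1
xi = np.zeros(K)
traj_good = np.linspace(0.0, 1.0, 1000)  # covers the domain uniformly
traj_bad  = np.full(1000, 0.1)           # parked at a single point

# the uniformly covering trajectory is more ergodic w.r.t. the target:
print(ergodic_metric(traj_good, xi) < ergodic_metric(traj_bad, xi))  # -> True
```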
\begin{figure}[!b] \centering \vspace{-10pt}
\subfloat[The SensorPod robot (white cylinder at end of vertical Z stage) is used
to demonstrate EEDI for active search.] {\label{FishB}\includegraphics[trim=.18in .4in
.02in .33in,clip=true,width= \columnwidth ]{sensorpod.png}}\\
\subfloat[\emph{Apteronotus albifrons} (photograph courtesy
of Per Erik Sviland.) ]
{\label{FishA}\includegraphics[width= \columnwidth ]{FishA2}}
\caption{ The SensorPod (a) uses a sensing modality
inspired by weakly electric fish such as the black ghost
knifefish (b). The SensorPod is mounted on a 4DOF gantry and submerged within a 1.8~m $\times$ 2.4~m $\times$ 0.9~m (l,w,h) tank (see multimedia attachment).
}
\label{FIG}
\end{figure}
The SensorPod robot (Fig. \ref{FishB}), which we use as a motivating
example and an experimental platform in Section \ref{results}, measures
disturbances in a self-generated, weak electric field. This sensing
modality, referred to as electrolocation, is inspired by a type of
freshwater tropical fish (Fig. \ref{FishA}, \cite{Krah13a,
Nels06a,Neve13a}), and relies on the coupled emission and detection of
a weak electric field. Electrolocation is ideally suited for low
velocity, mobile vehicles operating in dark or cluttered environments
\cite{Solb08a,MacI04a,Neve13a}. The sensing range for electrolocation is
small, however, so the fish or robot must be relatively close to an
object to localize it. Also, as the sensors are rigid with respect to
the body, the movement of those sensors involves the dynamics of the
entire robot. As we will see in Section \ref{expsetup}, the measurement
model for electrolocation is also highly nonlinear and the dynamics of
both biological fish and underwater robots are generally nonlinear.
Consideration of sensor physics and robot dynamics when planning
exploration strategies is therefore particularly important. The same
applies to many near-field sensors such as tactile sensors,
ultra-shortwave sonar, and most underwater image sensors (e.g.
\cite{Cowen97}). Experiments carried out using the SensorPod robot
demonstrate that the ergodic exploration of distributed information
(EEDI) algorithm is successful in several challenging search scenarios
where other algorithms fail.
\medbreak
\noindent The contributions of this paper can be summarized as
follows:
\begin{enumerate}
\item application of ergodic exploration for general, nonlinear,
deterministic control systems to provide closed-loop coverage with
respect to the evolving expected information density, and
\item validation of ergodic search in an experimental and simulated
underwater sensing setting. We demonstrate both that ergodic search performs as well as alternative algorithms in nominal scenarios, and that ergodic search outperforms alternatives when distractors are present.
\end{enumerate}
Section \ref{relatedwork} begins with a discussion of related work.
Ergodicity as an objective for active sensing is presented in Section
\ref{ergodicitydiscussion}, including the benefits and distinguishing
features of ergodic trajectory optimization. Section \ref{trajopt}
includes an overview of ergodic trajectory optimization. In Section
\ref{expsetup}, we describe the SensorPod experimental platform and
nonlinear measurement model, and introduce the stationary target
localization task used to demonstrate EEDI. We also discuss the
components of closed-loop EEDI for target localization using the
SensorPod in Section \ref{expsetup}. In Section \ref{results}, we
present data from multiple estimation scenarios, including comparison to
several alternative algorithms, and closed-loop EEDI implementation
using different dynamic models for the SensorPod. We also include a
multimedia video attachment with an extended description of the
SensorPod platform and measurement model used in Sections \ref{expsetup}
and \ref{results}, and an animated overview of the steps of the EEDI
algorithm for this system.
\section{Motivation \& related work}
\label{relatedwork}
The ability to actively explore and respond to uncertain scenarios is
critical in enabling robots to function autonomously. In this paper, we
examine the problem of control design for mobile sensors carrying out
active sensing tasks. Active sensing \cite{kreucher05s, Fox98} or sensor
path planning \cite{Cai2009}, refers to control of sensor parameters,
such as position, to acquire information or reduce uncertainty.
Applications include prioritized decision making during search and
rescue \cite{Toh06, Cooper08}, inspection for flaws
\cite{hollinger2013}, mine detection \cite{Cai2009}, object
recognition/classification \cite{Denzler02, Arbel99, Ye99},
next-best-view problems for vision systems \cite{vazquez2001,
massios1998, takeuchi1998}, and environmental modeling/field
estimation \cite{Cao2013, bender2013,marchant2014}. Planning for
search/exploration is challenging as the planning step necessarily
depends not only on the sensor being used but on the quantity being
estimated, such as target location versus target size. Methods for
representing and updating the estimate and associated uncertainty---the
belief state---and a way of using the belief state to determine expected
information are therefore required.
Figure \ref{flow} illustrates the high level components for a general
estimation or mapping algorithm that iteratively collects sensor
measurements, updates an expected information map, and decides how to
acquire further measurements based on the information map. In this
section, we touch on related work for components A-C, although the
differentiating feature of the EEDI algorithm is the way in which
control decisions are made based on the expected information (step C in
Fig. \ref{flow}). The advantages of EEDI are discussed in Section
\ref{ergodicitydiscussion}.
\subsection{Representing the belief state}\label{relatedworkbelief}
The best choice for representing and updating the belief state for a
given application will depend on robot dynamics, sensor physics, and the
estimation task (modeling a field vs. target localization). Designing
appropriate representations for active sensing is a well-studied area of
research. For many applications, such as active sensing for
localization, parametric filters (e.g. the extended Kalman filter (EKF))
\cite{Sim05, Leun06a, Feder99, VanderHook2012} may be used. When the
posterior is not expected to be approximately Gaussian, nonparametric
filters, e.g. Bayesian filters \cite{Marchant2012, Wong05}, histogram
filters \cite{stachniss2003}, or particle filters \cite{kreucher2007,
Roy2006, lu11} are often used. Mapping applications often use
occupancy grids \cite{Bourgault02i, Elfes89} or coverage maps
\cite{stachniss2003}, and much of the most recent work utilizes
Gaussian processes to represent spatially varying phenomena or higher
dimensional belief spaces, and the associated uncertainty
\cite{Cao2013, Singh2009, Hoang2014, bender2013, low2008, souza2014}.
For the experimental work presented in this paper using the SensorPod
robot, we use a Bayesian filter as it is appropriate for general
(non-Gaussian) PDFs, sufficient for representing stationary estimation
objectives, and allows us to take into account sensor physics and
uncertainty in the estimate (see Section \ref{expsetup}). The
differentiating feature of the EEDI algorithm however---the ergodic
trajectory optimization---does not depend on the choice of belief
representation, so long as the choice enables calculation of an expected
information density map.
\begin{figure}[t]
\centering
\begin{tikzpicture}[node distance = 2cm, auto]
\tikzstyle{every node}=[font=\small]
\node [block2] (EID) {A) Update belief };
\node [block2, right of=EID,node distance=4cm] (control) {B) Calculate
expected information };
\node [block,below of=control,node distance=3cm] (explore) {C) Plan
control action };
\node [block2, left of=explore, node distance=4cm] (update) {D)
Execute \& collect measurements};
\path[->] (EID.east) edge[bend left=45] node[above] {} (control.west);
\path[->] (control.south) edge[bend
left=45]
(explore.north);
\path[->] (explore.west) edge[bend left=45] node[above] {}
(update.east);
\path[->] (update.north) edge[bend left=45] node[above] {}
(EID.south);
\end{tikzpicture}
\caption{Illustration of the necessary components for a general
closed-loop, information-based sensing algorithm. Our primary
contribution in this paper is using ergodic trajectory optimization
for estimation (step C). We demonstrate implementation of closed-loop
estimation for a particular sensing task using the SensorPod robot
(Fig. \ref{FishB}), where the sensing task motivates choice of steps
A, B, D. Section \ref{relatedwork} discusses alternative choices for
steps A through C. \vspace{-10 pt} }\label{flow}
\end{figure}
\subsection{ Calculating expected measurement utility}\label{measutcalc}
For a given sensing task and belief state, not all measurements are
equally informative. The quality of a measurement depends on the sensor
and may be distance, orientation, or motion dependent. To ensure useful
measurements are obtained given realistic time or energy restrictions,
sensing strategies for mobile sensors should seek to optimize
measurement quality \cite{Bajcsy88, spletzer03}. In some cases, it is
sufficient to consider only sensor field-of-view (i.e. useful
measurements can be obtained anywhere within a given distance from a
target), often called ``geometric sensing'' \cite{lu11, dasgupta2006,
zhang09, hager91}. In many scenarios---if the search domain is
significantly larger than the sensor range---a geometric sensing
approach is sufficient. Many sensors, however, have sensitivity
characteristics within the range threshold that affect sensing efficacy.
Infrared range sensors, for example, have a maximum sensitivity region
\cite{Benet02}, and cameras have an optimal orientation and focal length
\cite{Denzler03}.
There are several entropy-based measures from information
theory and optimal experiment design that can be used to predict
expected information gain prior to collecting measurements. Shannon
entropy has been used to measure uncertainty in estimation problems
\cite{Fox98, Arbel99, vazquez2001, takeuchi1998}, as have related
metrics including R\'enyi divergence \cite{Leun06a, kreucher05s},
mutual information \cite{Toh06, Denzler02, zhang09, Tisd09a,
Grocholsky06, Singh2009, Roy2006, lu2014}, and entropy reduction or
information gain maximization \cite{zhang09B, hollinger2013}. In our
problem formulation we use Fisher information \cite{liao04, emery1998,
Ucinski1999, Ucinski2000} to predict measurement utility. Often used
in maximum likelihood estimation, Fisher information quantifies the
ability of a random variable, in our case a measurement, to estimate an
unknown parameter \cite{Frie04, emery1998, liao04}. Fisher information
predicts that the locations where the ratio of the derivative of the
expected signal to the variance of the noise is high will give more
salient data (see Appendix \ref{Fisher Information} and the multimedia
attachment), and thus will be more useful for estimation.
In this paper, the Bayesian update mentioned in Section
\ref{relatedworkbelief} and the use of Fisher information to formulate
an information map are tools that allow us to close the loop on ergodic
control (update the map, step A in Fig. \ref{flow}), in a way that is
appropriate for the experimental platform and search objective (see
Appendix \ref{eediappendix}). The Bayesian update and the Fisher
information matter only in that they allow us to create a map of
expected information for the type of parameter estimation problems
presented in the examples in Section \ref{results}. Ergodic exploration
could, however, be performed over the expected information calculated
using different methods of representing the belief and expected
information, and for different applications such as those mentioned in
Section \ref{relatedworkbelief}.
\subsection{ Control for information acquisition }\label{measforcontrol}
In general, the problem of exhaustively searching for an optimally
informative solution over sensor state space and belief state is a
computationally intensive process, as it is necessary to calculate an
expectation over both the belief and the set of candidate control
actions \cite{Leun06a,Tisd09a,Singh2009,Atanasov2014}. Many algorithms
therefore rely on decomposing/discretizing the search space, the action
space, or both, and locally selecting the optimal sensing action
myopically (selecting only the optimal next configuration or control
input) \cite{kreucher05s, Feder99}. The expected information gain can,
for example, be locally optimized by selecting a control action based on
the gradient of the expected information \cite{Grocholsky06,
kreucher2007, lu11, Bourgault02i}. As opposed to local information
maximization, a sensor can be controlled to move to that state which
maximizes the expected information globally over a bounded workspace
\cite{vazquez2001, Li05, liao04, Wong05, VanderHook2012}. Such global
information maximizing strategies are generally less sensitive to local
minima than local or gradient based strategies, but can result in
longer, less efficient trajectories when performed sequentially
\cite{Roy2006, stachniss2003}. While myopic information maximizing
strategies have an advantage in terms of computational tractability,
they are typically applied to situations where the sensor dynamics are
not considered \cite{vazquez2001, Li05, liao04, Wong05, VanderHook2012},
and even the global strategies are likely to suffer when uncertainty is
high and information diffuse (as argued in \cite{Rahimi2005,
stachniss2003, low2008}, when discussing re-planning periods), as we
will see in Section \ref{results}.
To avoid sensitivity of single-step optimization methods to local
optima, methods of planning control actions over longer time
horizons---nonmyopic approaches---are often used. A great deal of
research in search strategies point out that the most general approach
to solving for nonmyopic control signals would involve solving a dynamic
programming problem \cite{Singh2009, Cao2013, low2008}, which is
generally computationally intensive. Instead, various heuristics are
used to approximate the dynamic programming solution \cite{Cao2013,
low2008, Hoang2014, stachniss2003}. Variants of commonly used
sampling-based motion planners for maximizing the expected information
over a path for a mobile sensor have also been applied to sensor path
planning problems \cite{Cai2009, zhang09B, Hollinger2014, Sim05,
Leun06a, Ryan2010}.
Search-based approaches are often not suitable for systems with dynamic
constraints; although they can be coupled with low-level (e.g. feedback
control) planners \cite{MartinezCantin2009,lu2014}, or dynamics can be
encoded into the cost of connecting nodes in a search graph
(``steering'' functions) \cite{Hollinger2014}, solutions are not
guaranteed to be optimal even in a local sense---both in terms of the
dynamics and the information---without introducing appropriate
heuristics \cite{Cao2013, low2008, Hoang2014, stachniss2003}. As we will
see in Section \ref{dynamics}, one of the advantages of EEDI is that it
naturally applies to general, nonlinear systems. We take advantage of
trajectory optimization techniques, locally solving for a solution to
the dynamic programming problem---assuming that the current state of the
system is approximately known.
Use of an ergodic metric for determining optimal control strategies was
originally presented in \cite{Mathew} for a nonuniform coverage problem.
The strategy in \cite{Mathew} involves discretizing the exploration time
and solving for the optimal control input at each time-step that
maximizes the rate of decrease of the ergodic metric. A similar method
is employed in \cite{Jacobs}, using a Mix Norm for coverage on
Riemannian manifolds. While our objective function includes the same
metric as \cite {Mathew}, the optimal control problem and applications
are different, notably in that we compute the ergodic trajectory for the
entire planning period $T$, and apply it to a changing belief state.
Additionally, the feedback controllers derived in \cite{Mathew} are
specific to linear, first- or second-order integrator systems, whereas
our method applies to general, nonlinear dynamic systems.
\section{Ergodic optimal control}
\label{ergodicitydiscussion}
Ergodic theory relates the time-averaged behavior of a system to the
space of all possible states of the system, and is primarily used in the
study of fluid mixing and communication. We use ergodicity to compare
the statistics of a search trajectory to a map of expected information
density (EID). The idea is that an efficient exploration strategy---the
path followed by a robot---should spend more time exploring regions of
space with higher expected information, where useful measurements are
most likely to be found. The robot should not, however, only visit the
highest information region (see Fig. \ref{infomaxcartoon}), but
distribute the amount of time spent searching proportional to the
overall EID (Fig. \ref{ergodiccartoon}).
\label{infomaxcompare} This is the key distinction
between using ergodicity as an objective and previous work in active
sensing (e.g. information maximization); the ergodic metric encodes the
idea that, unless the expected information density is a delta function,
measurements should be \emph{distributed} among regions of high expected
information. Information maximizing strategies (that are also nonmyopic)
otherwise require heuristics in order to force subsequent measurements
away from previously sampled regions so as not to only sample the
information maxima.
\begin{figure}
\centering \subfloat [Ergodic trajectory ]
{\label{ergodiccartoon}\includegraphics[width=.7\columnwidth
]{erg.png}}\vfill
\subfloat
[Information maximizing trajectory
]
{\label{infomaxcartoon}\includegraphics[width=.7\columnwidth ]{nonerg.png}}
\caption{Two candidate trajectories $x(t)$ for exploring the EID
(depicted as level sets) are plotted in (a) and (b), both from $t=0$
to $t=T$. Ergodic control provides a way of designing trajectories
that spend time in areas proportional to how useful potential
measurements are likely to be in those areas (a). This is in
contrast to many alternative algorithms, which directly maximize
integrated information gain over the trajectory based on the
current best estimate, as in Fig. \ref{infomaxcartoon}. As illustrated in
Fig. \ref{ergodiccartoon}, a trajectory $x(t)$ is \emph{ergodic} with
respect to the PDF (level sets) if the percentage of time spent in
any subset $N$ from $t=0$ to $t=T$ is equal to the measure of $N$;
this condition must hold for all possible subsets. \vspace{-10 pt} }
\label{cartoon}
\end{figure}
As mentioned in Section \ref{relatedwork}, many commonly used algorithms
for active sensing, e.g. \cite{Feder99, Bourgault02i,
Wong05,wilson2014}, involve a version of the type of behavior
illustrated in Fig. \ref{infomaxcartoon}, iteratively updating the EID
and maximizing information gain based on the current best estimate,
whether or not that estimate is correct. While computationally
efficient, globally information maximizing approaches are likely to fail
if the current estimate of the EID is wrong. In Section \ref{results},
for example, we show that even when the information map is updated while
calculating the information maximizing control, the estimate may get
trapped in a local maximum, e.g. when there is a distractor object that
is similar but not exactly the same as the target object.
Many sampling-based algorithms for information gathering therefore rely
on heuristics related to assuming submodularity between measurements,
e.g. assuming no additional information will be obtained from a point
once it has already been observed \cite{Singh2009, Sim05,
Hollinger2014}. This assumption forces subsequent measurements away
from previously sampled regions so as not to only sample the information
maxima. As another way to distribute measurements, many nonmyopic
strategies select a set of waypoints based on the expected information,
and drive the system through those waypoints using search-based
algorithms \cite{zhang09B, zhang09, lu2014, dasgupta2006, Rahimi2005,
souza2014, MartinezCantin2009}. Such approaches result in a predefined
sequence that may or may not be compatible with the system dynamics. If
the ordering of the waypoints is not predefined, target-based search
algorithms may require heuristics to avoid the combinatoric complexity
of a traveling salesman problem \cite{Song12, Kim2014}. In some cases,
search algorithms are not well-posed unless both an initial and final
(goal) position are specified \cite{zhang09, Cao2013}, which is not
generally the case when the objective is exploration.
Ergodic control allows the way a robot searches a space to depend directly
on the dynamics, and is well posed for arbitrary dynamical systems. In
the case of nonlinear dynamics and nontrivial control synthesis,
encoding the search ergodically allows control synthesis to be solved
directly in terms of the metric, instead of in a hierarchy of problems
(starting with target selection and separately solving for the control
that acquires those targets, for example \cite{zhang09B,zhang09,
dasgupta2006, Rahimi2005, souza2014, MartinezCantin2009}). In ergodic
trajectory optimization, the distribution of samples results not from
introducing heuristics into the trajectory optimization, but from encoding
the statistics of the trajectory and the information map directly in the
objective. Using methods from optimal control, we directly calculate
trajectories that are ergodic with respect to a given information
density \cite{miller13R, miller13SE}. It is noteworthy, however, that
even if one wants to add waypoints to a search objective, ergodic search
is an effective means to drive the system to each waypoint in a
dynamically admissible manner (by replacing each waypoint with a
low-variance density function, thus avoiding the traveling salesman
problem). Further, ergodic control can be thought of as a way to
generate a continuum of dynamically compatible waypoints; it is similar
to \cite{zhang09B, zhang09, Rahimi2005}, but allows the number of
waypoints to go to $\infty$, making the control synthesis more tractable
for a broad array of systems.
Many active sensing algorithms are formulated to
either prioritize exploitation (choosing locations based on the current
belief state) or exploration (choosing locations that reduce uncertainty
in the belief state); they are best suited for greedy, reactive
sampling, requiring a prior estimate \cite{Sim05}, or for coverage
\cite{Singh2009, Acar2003, choset2001}. Algorithms that balance both
exploration and exploitation typically involve encoding the two
objectives separately and switching between them based on some condition
on the estimate, \cite{Krause2007, low2008}, or defining a (potentially
arbitrary) weighted cost function that balances the tradeoff between the
two objectives \cite{Hoang2014, MartinezCantin2009, souza2014,
marchant2014}. Using ergodicity as an objective results in an
algorithm that is suitable for both exploration-prioritizing coverage
sampling and exploitation-prioritizing ``hotspot'' sampling, without
modification (such as policy switching or user-defined weighted objectives
\cite{Krause2007, low2008, Hoang2014, MartinezCantin2009,
marchant2014}). Moreover, the ergodic metric can be used in
combination with other metrics, like a tracking cost or a terminal cost,
but does not require either to be well-posed.
\subsection{Measuring Ergodicity}
We use the \emph{distance from ergodicity} between the time-averaged
trajectory and the expected information density as a metric to be
minimized. We assume a bounded, $n$-dimensional workspace (the search
domain) $X \subset {\mathbb{R}}^{n}$ defined as
$[0,L_1]\times[0,L_2]...\times[0,L_n]$. We define $\bm x(t)$ as the
sensor trajectory in workspace coordinates, and the density function
representing the expected information density as $EID(\bm x)$.
The spatial statistics of a trajectory $\bm x(t)$ are quantified by the
percentage of time spent in each region of the workspace,
\begin{equation}\label{timeavedist}C(\bm x)=\frac{1}{T}\int_0^T \delta\left[\bm x-\bm x(t)\right]dt,
\end{equation}
where $\delta$ is the Dirac delta \cite{Mathew}. The goal is to drive
the spatial statistics of a trajectory $\bm x(t)$ to match those of the
distribution $EID(\bm x)$; this requires choice of a norm on the
difference between the distributions $EID(\bm x)$ and $C(\bm x)$. We
quantify the difference between the distributions, i.e. the distance
from ergodicity, using the sum of the weighted squared distance between
the Fourier coefficients $\phi_{\bm k}$ of the EID, and the coefficients
$c_{\boldsymbol k}$ of the distribution representing the time-averaged
trajectory.\footnote{The Fourier coefficients
$\phi_{\bm k}$ of a distribution $\Phi(\bm x)$ are computed using
an inner product,
$\phi_{\boldsymbol k}=\int_{X}\Phi(\bm x)F_{\boldsymbol k}(\bm x)d
\bm x,$
and the Fourier coefficients of the basis functions along a
trajectory $\bm x(t)$, averaged over time, are calculated as
$\label{ck2} c_{\boldsymbol k}(\bm
x(t))=\frac{1}{T}\int_{0}^{T}F_{\boldsymbol k}(\bm x(t))dt, $
where $T$ is the final time and $F_k$ is a Fourier basis function.}
The ergodic metric $\mathcal{E}$ is then defined as
\begin{equation}\label{ephi}
\mathcal{E}(\bm x(t))=\sum_{\boldsymbol k=0 \in \mathbb{Z}^n}^{\boldsymbol K \in \mathbb{Z}^n}\Lambda_{\boldsymbol k}\left[c_{\boldsymbol k}(\bm x(t))-\phi_{\boldsymbol k}\right]^2,
\end{equation}
where $\boldsymbol K$ is the number of coefficients calculated along each of the $n$
dimensions, and $\boldsymbol {k}$ is a multi-index
$(k_1,k_2,...,k_n)$. Following \cite{Mathew},
$\Lambda_{\boldsymbol k} = \frac{1}{(1+||{\boldsymbol k}||^2)^s}$ is a
weight where $s=\frac{n+1}{2}$, which places larger weight on lower
frequency information.
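As a concrete illustration of Eq. \eqref{ephi}, the following numerical sketch evaluates the metric on a one-dimensional domain $[0,L]$ using a normalized cosine basis (the form used in \cite{Mathew}); the EID and the two candidate trajectories are illustrative placeholders, not data from the SensorPod:

```python
import numpy as np

L, K = 1.0, 10                       # domain [0, L]; highest coefficient index

def basis(k, x):
    """Normalized cosine basis on [0, L] (assumed form, following Mathew & Mezic)."""
    hk = np.sqrt(L) if k == 0 else np.sqrt(L / 2.0)
    return np.cos(k * np.pi * x / L) / hk

def ergodic_metric(traj, eid, xs):
    """Weighted squared distance between the coefficients c_k of the
    time-averaged trajectory statistics and the coefficients phi_k of the EID."""
    dx = xs[1] - xs[0]
    eid = eid / (eid.sum() * dx)                    # normalize to a density
    E = 0.0
    for k in range(K + 1):
        phi_k = (eid * basis(k, xs)).sum() * dx     # inner product with the EID
        c_k = basis(k, traj).mean()                 # time average along x(t)
        Lam = (1.0 + k ** 2) ** -1.0                # weight, s = (n+1)/2 = 1 for n = 1
        E += Lam * (c_k - phi_k) ** 2
    return E

# Illustrative comparison: a trajectory sweeping the support of a Gaussian EID
# scores as "more ergodic" (smaller metric) than one parked at the EID's peak.
xs = np.linspace(0.0, L, 401)
eid = np.exp(-((xs - 0.5) ** 2) / (2 * 0.15 ** 2))
t = np.linspace(0.0, 1.0, 2000)
sweeping = 0.5 + 0.25 * np.sin(6 * np.pi * t)
parked = np.full_like(t, 0.5)
```

The parked trajectory matches the EID's peak location but not its spread, so its time-averaged statistics remain far from the density; the sweeping trajectory distributes time across the support and achieves a lower metric.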
Note that the notion of ergodicity used here does not strictly require
the use of Fourier coefficients in constructing an objective function.
The primary motivation in using the norm on the Fourier coefficients to
formulate the ergodic objective is that it provides a metric that is
differentiable with respect to the trajectory $\bm x(t)$. This
particular formulation is not essential---any differentiable method of
comparing the statistics of a desired expected information density to
the spatial distribution generated by a trajectory will suffice;
finding such a method is, however, nontrivial. The Kullback-Leibler (KL)
divergence or Jensen-Shannon (JS) divergence \cite{bender2013}, for
example, commonly used metrics on the distance between two
distributions, are not differentiable with respect to the trajectory
$\bm x(t)$.\footnote{Due to the Dirac delta in Eq. \eqref{timeavedist},
the JS divergence ends up involving evaluating the EID along the
trajectory $\bm x(t)$. In general we do not expect to have a closed form
expression for the EID, so this metric is not differentiable in a way
that permits trajectory optimization. Alternatively, replacing the Dirac
delta in Eq. \eqref{timeavedist} with a differentiable approximation
(e.g. a Gaussian) would expand the range of metrics on ergodicity, but
would introduce additional computational expense of evaluating an $N$
dimensional integral when calculating the metric and its derivative.} On
the other hand, by first decomposing both distributions into their
Fourier coefficients, the inner product between the transform and the
expression for the time-averaged distribution results in an objective
that is differentiable with respect to the trajectory.
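This differentiability can be checked directly: because each $c_{\boldsymbol k}$ is an average of basis functions along the trajectory, the gradient of $\mathcal{E}$ with respect to a sampled trajectory point involves only derivatives of the basis functions, with no need for a closed-form EID. A sketch in one dimension, using an assumed normalized cosine basis and arbitrary target coefficients for the check:

```python
import numpy as np

L, K = 1.0, 8

def basis(k, x):
    """Normalized cosine basis on [0, L] (assumed form)."""
    hk = np.sqrt(L) if k == 0 else np.sqrt(L / 2.0)
    return np.cos(k * np.pi * x / L) / hk

def dbasis(k, x):
    """Analytic derivative of the basis with respect to x."""
    hk = np.sqrt(L) if k == 0 else np.sqrt(L / 2.0)
    return -(k * np.pi / L) * np.sin(k * np.pi * x / L) / hk

def metric(traj, phi):
    cks = np.array([basis(k, traj).mean() for k in range(K + 1)])
    lam = 1.0 / (1.0 + np.arange(K + 1) ** 2)
    return np.sum(lam * (cks - phi) ** 2)

def metric_grad(traj, phi):
    """dE/dx_j = sum_k 2 Lam_k (c_k - phi_k) F_k'(x_j) / M, since c_k is the
    mean of F_k over the M sampled trajectory points."""
    M = len(traj)
    lam = 1.0 / (1.0 + np.arange(K + 1) ** 2)
    cks = np.array([basis(k, traj).mean() for k in range(K + 1)])
    g = np.zeros(M)
    for k in range(K + 1):
        g += 2.0 * lam[k] * (cks[k] - phi[k]) * dbasis(k, traj) / M
    return g

# Verify the analytic gradient against a central finite difference.
rng = np.random.default_rng(0)
traj = rng.uniform(0.1, 0.9, size=50)
phi = 0.1 * rng.normal(size=K + 1)   # arbitrary target coefficients
g = metric_grad(traj, phi)
eps = 1e-6
tp = traj.copy(); tp[3] += eps
tm = traj.copy(); tm[3] -= eps
fd = (metric(tp, phi) - metric(tm, phi)) / (2 * eps)
```

It is exactly this closed-form dependence of the gradient on the trajectory samples that permits the trajectory optimization described next.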
\subsection{Trajectory Optimization} \label{trajopt}
For a general, deterministic, dynamic model for a mobile sensor
$ \dot{\bm x}(t)=f(\bm x(t),\bm u(t)) $, where $\bm x\in\mathbb{R}^N$ is
the state and $\bm u\in\mathbb{R}^n$ the control, we can solve for a
continuous trajectory that minimizes an objective function based on both
the measure of the ergodicity of the trajectory with respect to the EID
and the control effort, defined as
\begin{equation}\label{Jdis2}
J(\bm x(t))=\underbrace{\gamma \mathcal{E}[\bm x(t)]
}_\text{ergodic cost} +\underbrace{\int_{0}^{T}\frac{1}{2}
\bm u(\tau)^TR \bm u(\tau)d\tau}_\text{control effort}.
\end{equation}
In this equation, $\gamma \in \mathbb{R}$ and
$R(\tau)\in \mathbb{R}^{n\times n}$ are arbitrary design parameters that
affect the relative importance of minimizing the distance from
ergodicity and the integrated control effort. The choice of ratio
of $\gamma$ to $R$ plays the exact same role in ergodic control as it
does in linear quadratic control and other methods of optimal control;
the ratio determines the balance between the objective---in this case
ergodicity---and the control cost of that objective. Just as in these
other methods, changing the ratio will lead to trajectories that perform
better or worse with either more or less control cost.
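A minimal sketch of evaluating Eq. \eqref{Jdis2} for a time-discretized control signal, illustrating the role of the $\gamma$-to-$R$ ratio (all numeric values here are illustrative):

```python
import numpy as np

def objective(erg_metric, u, dt, gamma=10.0, R=None):
    """Ergodic cost plus integrated control effort, as in Eq. (J).

    erg_metric: precomputed distance from ergodicity of the trajectory
    u: (num_steps, n) discretized control signal
    gamma, R: design weights trading off ergodicity against effort
    """
    if R is None:
        R = np.eye(u.shape[1])
    # 0.5 * integral of u^T R u dt, via a per-step quadratic form
    effort = 0.5 * np.sum(np.einsum('ti,ij,tj->t', u, R, u)) * dt
    return gamma * erg_metric + effort

# Raising R relative to gamma penalizes effort more heavily, as in LQR-style
# weight tuning; the same trajectory then costs more.
u = 0.1 * np.ones((100, 2))
J_cheap_control = objective(0.2, u, dt=0.1, R=0.1 * np.eye(2))
J_costly_control = objective(0.2, u, dt=0.1, R=10.0 * np.eye(2))
```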
In \cite{miller13R} we show that minimization of Eq. \eqref{Jdis2} can
be accomplished using an extension of trajectory optimization
\cite{Hauser}, and derive the necessary conditions for optimality. The
extension of the projection-based trajectory optimization method from
\cite{Hauser} is not trivial as the ergodic metric is not a Bolza
problem; however, \cite{miller13R} proves that the first-order
approximation of minimizing Eq. \eqref{Jdis2} subject to the dynamics
$ \dot{\bm x}(t)=f(\bm x(t),\bm u(t))$ is a Bolza problem and that
trajectory optimization can be applied to the ergodic objective. The
optimization does not require discretization of search space or control
actions in space or time. While the long time horizon optimization we
use is more computationally expensive than the myopic, gradient-based
approach in \cite{Mathew}, each iteration of the optimization involves a
calculation with known complexity. The EID map and ergodic objective
function could, however, also be utilized within an alternative
trajectory optimization framework (e.g. using sequential quadratic
programming). Additionally, for the special case of
$\dot{\bm x} = \bm u$, sample-based algorithms \cite{Hollinger2014} may
be able to produce locally optimal ergodic trajectories that are
equivalent (in ergodicity) to the solution obtained using trajectory
optimization methods; this would not, however, be the case for general,
nonlinear dynamics $ \dot{\bm x}(t)=f(\bm x(t),\bm u(t))$.
Ergodic optimal control allows for the time of exploration to be
considered as an explicit design variable. It can, of course, be of
short duration or long duration, but our motivation is largely long
duration. The idea is that one may want to execute a long exploration
trajectory prior to re-planning. The most straightforward motivation is
that integrating measurements and updating the belief may be the more
computationally expensive part of the search algorithm \cite{Roy2006,
stachniss2003, Sim05, low2008}. Overly reactive/adaptive
strategies---strategies that incorporate measurements as they are
received---are also likely to perform poorly when the estimate
uncertainty is high \cite{Rahimi2005, stachniss2003,low2008} or in the
presence of (inevitable) modeling error. If, for example, the
measurement uncertainty is not perfectly captured by the measurement
model, the idealized Bayesian update can lead to overly reactive control
responses. Instead, one may wish to take enough data such that the
central limit theorem can be applied to the measurement model, so that
the measurement model is only anticipated to be applicable on average
over the length of the exploratory motion \cite{Chirikjian2009}. Future
work will involve exploring the effects of reducing the re-planning
horizon on the success of the estimation algorithm.
\subsection{Assumptions: Ergodic Trajectory Optimization}\label{ergoassumptions}
Ergodic trajectory optimization requires a controllable motion model for
the robot, and an expected information density function defined over the
sensor state space. The motion model can be nonlinear and/or dynamic,
one of the primary benefits of a trajectory optimization approach. For
this paper we consider calculating search trajectories in one and two
dimensions (although the sensor dynamics can be higher dimensional). The
trajectory optimization method can be extended to search in higher
dimensional search spaces such as ${\mathbb{R}}^3$ and $SE(2)$, so long as a
Fourier transform \cite{Chirikjian} exists for the manifold
\cite{miller13SE}. We consider only uncertainty of static environmental
parameters (e.g. fixed location and radius of an external target)
assuming noisy measurements. We assume deterministic dynamics.
\section{Experimental methods: search for stationary targets using the
SensorPod robot}
\label{expsetup}
Although ergodic trajectory optimization is general to sensing
objectives with spatially distributed information, we describe an
application where the belief representation and expected information
density (EID) calculation (steps A, B, and D in Fig. \ref{flow}) are
chosen for active localization of stationary targets using the
SensorPod robot. This allows us to experimentally test and validate a
closed-loop version of the ergodic optimal control algorithm, described
in Section \ref{ergodicitydiscussion}, against several established
alternatives for planning control actions based on an information
map.
\begin{figure}[!t]
\centering
\vspace{-0pt}
\subfloat[+0.2 mV expected voltage difference between the sensors on the SensorPod for a target located as shown.] {\label{isopotentialsb}\includegraphics[trim=-.3in 0in -.3in
.0in,clip=true,width=1\columnwidth ]{isopotentialsa.png}}\\
\subfloat[-0.2 mV expected voltage difference between the sensors (A-B) on the SensorPod for a target located as shown.] {\label{isopotentialsa}\includegraphics[trim=-.3in 0in -.3in
0in,clip=true,width=1\columnwidth ]{isopotentialsb.png}}
\caption{The SensorPod (grey) measures the difference between the
field voltage at the two sensors (A-B). The field is generated by
two excitation electrodes on the SensorPod body. The field
(simulated isopotential lines plotted in black) changes in the
presence of a target with different conductivity than the
surrounding water. The 0V line is bolded. The perturbation cause by
the same object results in a different differential measurement
between sensors A and B based on the position of the object relative
to the SensorPod. For more information and an animation of this
plot, please see the multimedia video attachment. Note that the
SensorPod is not measuring the field itself (which is emitted by,
and moves with, the robot), but the voltage differential between two
sensors induced by disturbances in the field. }
\label{isopotentials}
\vspace{-10pt}
\end{figure}
\begin{figure}[!t] \centering \vspace{-0 pt} \subfloat[The expected
differential voltage measurement (A-B from Fig. \ref{isopotentials}) is plotted as a function of robot
centroid for a target (pink) located at X,Y=(0,0). Two possible
SensorPod trajectories are plotted in black (solid and dashed). The target is placed below the robot's plane of motion to prevent collisions.]
{\label{Fishmeasa}\includegraphics[trim=-1in .02in -1in
.0in,clip=true,width=\columnwidth ]{bettermeasa.png}}\\
\subfloat[Simulated differential voltage measurements for the trajectories in
(a) are plotted as a function of time. ]
{\label{Fishmeasb}\includegraphics[trim=-1in .02in -1in
.025in,clip=true,width= \columnwidth ]{bettermeasb.png}}
\caption{
Measurements collected by the SensorPod have a nonlinear
and non-unique mapping to target
location. The dashed trajectory in Fig. \ref{Fishmeasa}
yields uninformative measurements (possible to observe for many potential target locations); the solid trajectory in Fig.
\ref{Fishmeasa} produces a series of measurements that are unique
for that target position, and therefore useful for estimation. }
\label{Fishmeas} \vspace{-10pt}
\end{figure}
Inspired by the electric fish, the SensorPod (Fig. \ref{FishB}) has two
excitation electrodes that create an oscillating electric field. We use
a single pair of voltage sensors---hence, a \emph{one-dimensional
signal}---on the body of the SensorPod to detect perturbations in the
field due to the presence of underwater, stationary, nonconducting
spherical targets. The expected measurement depends on the location,
size, shape, and conductivity of an object as well as the strength of
the electric field generated by the robot; for details, see
\cite{Bai2015}. The perturbed electric fields and resulting differential
measurements for targets in two locations relative to the SensorPod are
shown in Fig. \ref{isopotentials}, and the differential voltage
measurement is plotted in Fig. \ref{Fishmeasa}. Figure \ref{Fishmeasb}
shows the expected differential measurement for two candidate sensor
trajectories. The multimedia attachment provides additional intuition
regarding the SensorPod and the observation model. The solid line
trajectory is more informative, as measured using Fisher Information,
than the dashed line; our goal is to automatically synthesize
trajectories that are similarly more informative.
The objective in the experimental results presented in Section
\ref{results} is to estimate a set of unknown, static, parameters
describing individual spherical underwater targets. Details and
assumptions for implementation of both the Bayesian filter and the
calculation of the expected information density for the SensorPod robot,
including for the multiple target case, can be found in Appendix
\ref{eediappendix}; an overview of the algorithm is provided here,
corresponding to the diagram in Fig. \ref{flow}. For a graphical,
animated overview of the algorithm, please also see the attached
multimedia video.
The algorithm is initialized with the sensor state at the
initial time $\bm x(0)$ and an initial probability distribution
$p(\bm \theta)$ for the parameters $\bm\theta$. We represent and update
the parameter estimate using a Bayesian filter, which updates the
estimated belief based on collected measurements (Fig. \ref{flow}, step
A). The initial distribution can be chosen based on prior information
or, in the case of no prior knowledge, assumed to be uniform on
bounded domains. At every iteration of the EEDI algorithm, the EID is
calculated by taking the expected value of the Fisher information with
respect to the belief $p(\bm \theta)$ (Fig. \ref{flow}, step B). For
estimation of multiple parameters, we use the D-optimality metric on the
expected Fisher information matrix, equivalent to maximizing the
determinant of the expected information \cite{emery1998}.\footnote{Note
that alternative choices of optimality criteria may result in
different performance for different problems based on, for example,
the conditioning of the information matrix. D-optimality is commonly
used for similar applications and we found it to work well
experimentally; however the rest of the EEDI algorithm is not
dependent on this choice of optimality criterion.} In Fig.
\ref{FIsmaps}, the corresponding EIDs for two different belief maps for
2D target location (Figs. \ref{FItight} and \ref{FIsmear}), as well as
the EID for estimating both 2D target location and target radius
(Fig. \ref{FIrad}), are shown. The EID is always defined over the sensor
configuration space (2D), although the belief map may be in a different
or higher dimensional space (e.g. over the 2D workspace and the space of
potential target radii). The normalized EID is used to calculate an
optimally ergodic search trajectory for a finite time horizon (Fig.
\ref{flow}, step C). The trajectory is then executed, collecting a
series of measurements (Fig. \ref{flow}, step D, for time $T$).
Measurements collected in step D are then used to update the belief
$p(\bm \theta)$, which is then used to calculate the EID in the next
EEDI iteration. The algorithm terminates when the norm on the variance
of the estimate falls below a specified value.
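One iteration of this loop (steps A and B in Fig. \ref{flow}) can be sketched for a one-dimensional toy problem; the measurement model, noise level, and grids below are illustrative stand-ins for the SensorPod model detailed in Appendix \ref{eediappendix}:

```python
import numpy as np

SIGMA = 0.05                                   # assumed measurement noise std

def measure_model(x, theta):
    """Hypothetical expected measurement at sensor position x, target theta."""
    return np.exp(-((x - theta) ** 2))

def bayes_update(belief, thetas, x, z):
    """Step A: Bayesian update of a grid belief from one measurement z at x,
    assuming zero-mean Gaussian measurement noise."""
    like = np.exp(-0.5 * ((z - measure_model(x, thetas)) / SIGMA) ** 2)
    post = belief * like
    return post / post.sum()

def eid(belief, thetas, xs, eps=1e-4):
    """Step B: expected information density, E_theta[Fisher information](x)."""
    dm = (measure_model(xs[:, None], thetas + eps)
          - measure_model(xs[:, None], thetas - eps)) / (2 * eps)
    fisher = dm ** 2 / SIGMA ** 2              # (num_x, num_theta)
    density = fisher @ belief                  # expectation over the belief
    return density / density.sum()

# One iteration: a measurement taken at x = 0.6 with a target at theta = 0.5
# (noiseless here for illustration) concentrates the belief, and the EID
# shifts toward locations expected to be informative for the updated belief.
thetas = np.linspace(0.0, 1.0, 201)
xs = np.linspace(0.0, 1.0, 201)
belief = np.full_like(thetas, 1.0 / len(thetas))      # uniform prior
z = measure_model(0.6, 0.5)
belief = bayes_update(belief, thetas, 0.6, z)
info_map = eid(belief, thetas, xs)
```

Note that the posterior in this sketch is bimodal (both $\theta=0.5$ and $\theta=0.7$ explain the measurement equally well), mirroring the non-unique signal-to-location mapping illustrated in Fig. \ref{Fishmeas}; measurements from additional locations on subsequent iterations resolve the ambiguity.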
For localizing and estimating parameters for multiple targets, we
initialize the estimation algorithm by assuming that there is a single
target present, and only when the norm on the variance of the parameters
describing that target falls below the tolerance $\epsilon$ do we
introduce a second target into the estimation. The algorithm stops
searching for new targets when one of two things happen: 1) parameters
for the last target added converge to values that match those describing
a target previously estimated (this would only happen if all targets
have been found, as the EID takes into account expected measurements
from previous targets), or 2) parameters converge to an invalid value
(e.g. a location outside of the search domain), indicating failure. The
algorithm terminates when the entropy of the belief map for all targets
falls below a chosen value; for the zero-target case, this means that the
SensorPod has determined that there are no objects within the search
space. Note that the EID for new targets takes into account the
previously located targets.
\subsection{Assumptions: stationary target localization using the SensorPod (EEDI example)}\label{expassumptions}
We make a number of assumptions in choosing steps A, B, and D in Fig.
\ref{flow}, detailed in Appendix \ref{eediappendix}. We assume a
measurement is made according to a known, differentiable measurement
model (a function of sensor location and parameters), and assume the
measurements have zero-mean, Gaussian, additive
noise.\footnote{Related work in active
electrosense has shown that zero mean Gaussian is a reasonable
assumption for sensor noise \cite{Solb08a}.} We assume independence
between individual measurements, given that the SensorPod state is known
and the measurement model is not time-varying. Measurement independence
is commonly assumed, for example in occupancy grid problems
\cite{stachniss2003}, however more sophisticated likelihood functions
that do not rely on this assumption of independence \cite{Thrun2003}
could be used without significantly changing the structure of the
algorithm.
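Under these assumptions, the joint likelihood of a measurement series factors into a product of per-sample Gaussians centered on the model prediction. A minimal sketch, with a hypothetical placeholder for the measurement model (the noise level matches the $100~\mu$V figure used in our experiments):

```python
import math

SIGMA = 1e-4  # 100 microvolts, the assumed measurement noise std

def upsilon(x, theta):
    # Hypothetical placeholder for the differentiable measurement model.
    return 1.0 / (1.0 + (x - theta) ** 2)

def log_likelihood(measurements, states, theta, sigma=SIGMA):
    # Independence given the sensor states lets the joint log-likelihood
    # decompose into a sum of zero-mean Gaussian terms.
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (v - upsilon(x, theta)) ** 2 / (2 * sigma ** 2)
               for v, x in zip(measurements, states))

xs = [0.0, 0.5, 1.0]
vs = [upsilon(x, 0.5) for x in xs]  # noise-free samples from theta = 0.5
print(log_likelihood(vs, xs, 0.5) > log_likelihood(vs, xs, 0.4))
```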
\begin{figure}[!t] \centering \vspace{-10 pt}
\subfloat[A low-variance PDF of 2D target location ]
{\label{tightprob}\includegraphics[trim=.1in .02in .27in
.025in,clip=true,width=.31 \columnwidth ]{tightprob.png}}
\hspace{2 pt}
\subfloat[ EID for target location for the PDF in (a)]
{\label{FItight}\includegraphics[trim=-.0in .02in .4in
.025in,clip=true,width=.31 \columnwidth
]{tightfisher.png}}
\hspace{2 pt}
\subfloat[EID for target
location and radius. ]{\label{FIrad}
\includegraphics[trim=-0.in .02in .4in
.065in,clip=true,width=.31\columnwidth ]{radiusfisher.png}}
\vspace{-5 pt}\\
\subfloat[ A higher-variance PDF of 2D target location.] {\label{smearprob}\includegraphics[trim=-.1in
.02in .3in .025in,clip=true,width=.34 \columnwidth
]{smearprob.png}} \quad
\subfloat[ EID map for the PDF in
(d) ]{\label{FIsmear}\includegraphics[trim=-.1in .02in .37in
.025in,clip=true,width=0.33\columnwidth ]{smearfisher.png}
}\vspace{-0 pt}
\caption{
The EID is dependent on the measurement model and the current
probabilistic estimate.
Figures \ref{FItight}, \ref{FIrad}, \ref{FIsmear} show examples of the
EID for different PDFs and estimation tasks for the SensorPod
measurement model. For \ref{FIrad}, the projection of the corresponding PDF (defined in three-dimensions) onto the 2D
location space would be similar to (a). The EID is calculated according to Eq. \eqref{eideq}.
In all cases, calculation of the EID produces a map over the search
domain, regardless of the estimation task.
}
\label{FIsmaps} \vspace{-10pt}
\end{figure}
\begin{figure*}[!t] \vspace{-10pt} \centering \subfloat[For estimation of
target location in 1D (Sections \ref{POC1D} and \ref{IC}), the target
object (green) was placed at a fixed distance of $y= 0.2$ m from the
SensorPod line of motion, and the distractor (pink) at $y_d= 0.25$ m.
] {\label{1dtanks} \includegraphics[width=.62 \columnwidth
]{Figure5p41.pdf}} \quad \subfloat[ Expected voltage
measurement over 1D sensor state for the target (pink) and distractor
(green) objects alone, and when both target and distractor are present
(black). ] {\label{fig:voltagetrace} \includegraphics[width=.7
\columnwidth ]{pertplot.pdf}}\quad \subfloat[Example of
tank configuration for 2D localization of two targets. For all trials,
SensorPod and object locations are measured from the center of the
tank.\vspace{-0 in }] {\label{2dtanks}\includegraphics[width=.62
\columnwidth ]{Figure5p6.pdf}}
\caption{
Targets were placed below the robot's plane of motion to prevent
collisions. The orientation of the robot is held constant. The
voltage sensors sample at 100 Hz, with an assumed standard deviation
of $100~\mu$V for the measurement noise, the experimentally
observed noise level of the SensorPod sensors. \vspace{-10 pt}}
\label{fig:tanksetup}
\vspace{-0cm}
\end{figure*}
For the single target case, we maintain a joint probability
distribution for parameters describing the same target as they are
likely to be highly correlated. In order to make the problem of finding
an arbitrary number of objects tractable, we assume that the parameters
describing different targets are independent and that a general
additive measurement model may be used, similar to \cite{Leun06a,
thrun05p, Wong05}. Although the voltage perturbations from multiple
objects in an electric field do not generally add linearly, we make the
assumption that the expected measurement for multiple objects can be
approximated by summing the expected measurement for individual
objects, which simplifies calculations in Appendix
\ref{eediappendix}.\footnote{Additional experimental work (not
shown) demonstrated that at a minimum of 6 cm separation between
objects, there is no measurable error using this approximation; in
the experimental and simulated trials, we use a minimum separation
of 12 cm.} While the computation of Fisher Information and the
likelihood function used for the Bayesian update depend on the
assumptions mentioned above, the ergodic optimal control calculations
do not, and only depend on the existence of an EID map.
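A sketch of this additive approximation, with a hypothetical single-object perturbation that follows the $r^3$ (radius) and $d^{-4}$ (distance) scaling of a small sphere in the field:

```python
def perturbation(x, target):
    # Hypothetical single-object voltage perturbation: grows as r^3 and
    # falls off as the fourth power of distance from a spherical target.
    cx, cy, r = target
    d2 = (x[0] - cx) ** 2 + (x[1] - cy) ** 2
    return r ** 3 / (1e-6 + d2 ** 2)

def expected_measurement(x, targets):
    # Additive approximation: sum the single-object perturbations.
    # Accurate in our tests for targets separated by at least ~6 cm.
    return sum(perturbation(x, t) for t in targets)

targets = [(0.2, 0.1, 0.0125), (-0.3, 0.0, 0.0125)]  # (x, y, radius), meters
v = expected_measurement((0.0, 0.0), targets)
print(v > perturbation((0.0, 0.0), targets[0]))
```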
The SensorPod is attached to a 4-DOF gantry system, which allows use of
a kinematic model of the SensorPod in Eq. \eqref{Jdis2}, i.e. the
equations of motion for the experimental system are
$\dot{\bm x}(t)=\bm u(t)$, where $\bm x$ is the SensorPod position in
1D (Sections \ref{POC1D} and \ref{IC}) or 2D (Sections \ref{POC2D},
\ref {POCmultiple}, and \ref{2dtargetradius}). The kinematic model and
2D search space also enable comparison with other search methods;
however, it should be noted that EEDI is applicable to dynamic,
nonlinear systems as well, as will be demonstrated in simulation in
Section \ref{dynamics}.
Ergodic trajectory optimization, presented in Section
\ref{ergodicitydiscussion}, calculates a trajectory for a fixed-length
time horizon $T$, assuming that the belief, and therefore the EID map,
remains stationary over the course of that time horizon. In the
following experiments, this means that each iteration of the closed
loop algorithm illustrated in Fig. \ref{flow} involves calculating a
trajectory for a fixed time horizon, executing that trajectory in its
entirety, and using the series of collected measurements to update the
EID map before calculating the subsequent trajectory. The complete
search trajectory, from initialization until termination, is therefore
comprised of a series of individual trajectories of length $T$, where
the belief and EID are updated in between (this is also true for the
alternative strategies used for comparison in Section \ref{results}).
The EID map could alternatively be updated and the ergodic trajectory
re-planned after each measurement or subset of measurements, in a more
traditional receding horizon fashion, or the time horizon (for planning
and updating) could be optimized.
\subsection{Performance assessment}
In the experiments in Section \ref{results}, we assess performance using
\textbf{\emph{time to completion}} and \textbf{\emph{success rate}}.
Time to completion refers to the time elapsed before the termination
criterion is reached, and a successful estimate obtained. We present
results for time until completion as the ``slowdown factor.'' The
\textbf{\emph{slowdown factor}} is a normalization based on minimum time
until completion for a particular set of experiments or simulation. For
a trial to be considered successful, the mean of the estimate must be
within a specified range of the true target location, and in Section
\ref{POCmultiple}, the number of targets found must be correct. The
tolerance used on the distance of the estimated parameter mean to the
true parameter values was 1 cm for the 1D estimation experiments and 2
cm for 2D experiments. In both cases this distance was more than twice
the standard deviation used for our termination criterion.
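Both measures reduce to simple bookkeeping; a sketch (the 2 cm tolerance is the one used for the 2D experiments):

```python
import math

def successful(est_mean, true_params, n_found, n_true, tol=0.02):
    # Success: estimated mean within tol of the true parameters, and the
    # number of targets found is correct.
    return math.dist(est_mean, true_params) <= tol and n_found == n_true

def slowdown_factors(times):
    # Normalize each completion time by the minimum over the set of trials.
    t_min = min(times)
    return [t / t_min for t in times]

assert successful((0.10, 0.21), (0.10, 0.20), n_found=1, n_true=1)
sf = slowdown_factors([15.2, 21.3, 31.9])
print([round(f, 1) for f in sf])
```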
A maximum run-time was enforced in all cases (100 seconds for 1D and
1000 seconds for 2D experiments). For simple experimental scenarios,
e.g. estimation of the location of a single target in 2D (Section
\ref{POC2D}), these time limits were longer than the time to completion
for all algorithms in simulation. Additional motivation for limiting the
run time was the constraints of the physical experiment, and the observation
that when algorithms failed they tended to fail in such a way that the
estimate variance never fell below a certain threshold (excluding the
random walk controller), and the success criteria listed above could not
be applied.
\section{Trial scenarios \& results}
\label{results}
Experiments were designed to determine whether the EEDI algorithm
performs at least as well as several alternative choices of controllers
in estimating of the location of stationary target(s), and whether there
were scenarios where EEDI outperforms these alternative controllers,
e.g. in the presence of distractor objects or as the number of targets
increases. Experiments in Sections \ref{POC1D} through
\ref{2dtargetradius} are performed using the kinematically controlled
SensorPod robot and simulation, and these results are summarized in
Section \ref{expsummary}. In Section \ref{dynamics}, we transition to
simulation-only trials to demonstrate successful closed-loop estimation
of target location, but compare trajectories and performance using three
models of the robot; the kinematic model of the experimental system, a
kinematic unicycle model, and a dynamic unicycle model.
\label{altalgs}
In Sections \ref{POC1D} through \ref{2dtargetradius} we compare the performance
of EEDI to the following three implementations of information maximizing controllers and a random walk controller:
\begin{enumerate}[I.]
\item \textbf{Information Gradient Ascent Controller (IGA)} The IGA
controller drives the robot in the direction of the gradient of the
EID at a fixed velocity of 4 cm/s, inspired by
controllers used in \cite{Grocholsky06, kreucher2007, lu11,
Bourgault02i}.
\item \textbf{Information Maximization Controller (IM)} The SensorPod is
controlled to the location of the EID maximum, at a constant velocity
for time $T$, similar to \cite{Li05,liao04,Wong05,VanderHook2012}.
\item \textbf{Greedy Expected Entropy Reduction (gEER)} At each
iteration, fifty locations are randomly selected, within a fixed
distance of the current position. The SensorPod is controlled to the
location that maximizes the expected change in entropy, integrated
over the time horizon $T$.\footnote{The expected entropy reduction is
$H(\theta)-E[H(\theta) | V^+(t)]$ where
$H(\theta)=-\int p(\theta) \log p(\theta)d\theta$ is the entropy of
the unknown parameter $\theta$ \cite{thrun05p,Tisd09a} and $V^+(t)$
is the expected measurement, calculated for each candidate
trajectory $x^+(t)$, the current estimate $p(\theta)$, and the
measurement model.} This approach is similar to the method of
choosing control actions in \cite{Fox98, kreucher05s,
Feder99, souza2014}.
\item \textbf{Random Walk (RW)} The SensorPod executes a randomly
chosen, constant velocity trajectory from the current sensor position
for time $T$, similar to \cite{Solb08a}.
\end{enumerate}
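Of Algorithms I-IV, gEER is closest in spirit to our approach. Its candidate-selection step can be sketched over a discretized 1D belief; the measurement model and all numeric values here are hypothetical:

```python
import math
import random

random.seed(1)

thetas = [i / 100.0 for i in range(101)]       # discretized parameter grid
belief = [1.0 / len(thetas)] * len(thetas)     # current estimate p(theta)
sigma = 0.1

def upsilon(x, theta):
    # Hypothetical measurement model.
    return math.exp(-((x - theta) ** 2) / 0.05)

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(x, p):
    # Entropy after an update with the expected measurement V+ at x,
    # computed from the current belief (a common simplification).
    v = sum(q * upsilon(x, t) for q, t in zip(p, thetas))
    post = [q * math.exp(-(v - upsilon(x, t)) ** 2 / (2 * sigma ** 2))
            for q, t in zip(p, thetas)]
    s = sum(post)
    return entropy([q / s for q in post])

# gEER: sample candidate locations near the current position and move to
# the one maximizing H(theta) - E[H(theta) | V+].
current_x, reach = 0.5, 0.2
candidates = [current_x + random.uniform(-reach, reach) for _ in range(50)]
best = max(candidates,
           key=lambda c: entropy(belief) - expected_posterior_entropy(c, belief))
print(0.3 <= best <= 0.7)
```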
The planning horizon $T$ was the same for all controllers, so that the
same number of measurements is collected.
Alternative algorithms, for example a greedy gradient-based controller
(IGA) or a random walk (RW), produce control signals with less
computational overhead than the EEDI algorithm, because EEDI involves
solving a continuous trajectory optimization problem and evaluating an
additional measure on ergodicity. In the next section we demonstrate
several scenarios in which this additional computational cost is
justified, because estimation with the less expensive control
algorithms is likely to fail or to suffer significantly in terms of
performance. Additionally, the alternative algorithms, while appropriate
for the kinematically-controlled SensorPod robot, do not generalize in
an obvious way to nonlinear dynamic models. This is one of our reasons
for desiring a measure of nonmyopic search that can be expressed as an
objective function (i.e. ergodicity). Given an objective, optimal
control is a convenient means by which one makes dynamically dissimilar
systems behave similarly to each other according to a metric of
behavior. In the case of exploration, the measure is of coverage
relative to the EID---however it is constructed.
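In the spectral form common in the ergodic-coverage literature, ergodicity compares Fourier coefficients of the trajectory's empirical time-average with those of the target (EID) distribution, using Sobolev weights that emphasize low spatial frequencies. A 1D sketch (illustrative; the exact basis and normalization used in Section \ref{ergodicitydiscussion} may differ):

```python
import math

# Spectral ergodic metric on [0, L]: compare Fourier coefficients of the
# trajectory's empirical distribution with those of the target EID.
L, K = 1.0, 15

def fk(k, s):                                  # cosine basis on [0, L]
    return math.cos(k * math.pi * s / L)

def ergodicity(traj, eid_pdf, n_grid=200):
    xs = [(i + 0.5) / n_grid * L for i in range(n_grid)]
    w = L / n_grid
    eps = 0.0
    for k in range(K + 1):
        ck = sum(fk(k, x) for x in traj) / len(traj)       # trajectory avg
        phik = sum(fk(k, x) * eid_pdf(x) * w for x in xs)  # EID coefficient
        lam = (1.0 + (k * math.pi / L) ** 2) ** -1         # Sobolev weight
        eps += lam * (ck - phik) ** 2
    return eps

uniform = lambda x: 1.0
sweep = [(i + 0.5) / 100 for i in range(100)]  # uniform sweep of the domain
parked = [0.5] * 100                           # sensor parked at one point
print(ergodicity(sweep, uniform) < ergodicity(parked, uniform))
```

A uniform sweep is nearly ergodic with respect to a uniform EID, while a parked sensor is penalized, which is the coverage behavior the metric is meant to capture.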
\subsection{Performance comparison for 1D target estimation in the
presence of an unmodeled distractor}\label{POC1D}
In this section, the robot motion is constrained to a single dimension,
and the estimation objective is the 1D location $\theta$ of a single,
stationary target with known radius. A {\it distractor object} is placed
in the tank, as an unmodeled disturbance, in addition to the (modeled)
{\it target object}. Both the target and the distractor were identical
non-conductive, 2.5 cm diameter spheres, placed at different fixed
distances from the SensorPod's line of motion (see Fig. \ref{1dtanks}).
The voltage signal from the distractor object is similar but not
identical to that of the target (see Fig. \ref{fig:voltagetrace}).
Placing the distractor object further from the SensorPod line of motion
results in decreased magnitude and rate of change of the voltage trace.
Introducing an unmodeled distractor even in a one-dimensional sensing
task was enough to illustrate differences in the performance of the EEDI
Algorithm and Algorithms I-IV.
\begin{figure}[t]
\centering
\vspace{-.0cm}
\subfloat[Simulation ]
{\label{FIG10_2}\includegraphics[trim=.58in
.1in .5in
.09in,clip=true,width=0.5\columnwidth ]{sim1D.pdf}}
\subfloat[ Experiment ] {\label{FIG10_1}\includegraphics[trim=.32in
.1in .3in
.09in,clip=true,width=0.5\columnwidth ]{exp1D.pdf}}
\caption{Examples of closed-loop optimally ergodic search in
simulation and experiment. The EID is shown as a density plot in
blue, and the search trajectory in red. The belief and trajectory are
recalculated every second in simulation and every 8 seconds in
experiment.\vspace{-10 pt}}
\label{1dtraj1}
\vspace{-0cm}
\end{figure}
\begin{table}[t]
\caption{
Performance measures for estimation of single target location in 1D for 100 simulated and 10 experimental trials. Results for time until completion (slowdown factor) are only shown for successful trials. Slowdown factor of 1 corresponds to 15.2 s in experiment, 7.6 s in simulation.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline Description & EEDI & gEER & IM & IGA & RW \\
\hline \hline
Exp. Success \% & 100 &50 & 60 & 50 & 80 \\
Sim. Success \% &100 & 60 & 71 & 66 & 99 \\\hline
Exp. Slowdown Factor & 1 &1.4 &2.1 & 2.7 &2.7 \\
Sim. Slowdown Factor & 1 &2.1 &2.1 & 2.3 & 6.3\\ \hline
\end{tabular}
\end{center}
\label{tableBIG} \vspace{-15 pt}
\end{table}
\begin{figure*}[t]\vspace{-0 pt}
\centering
\includegraphics[
width=1\textwidth ]{modifiedevol.png}\vspace{-15pt}
\caption{A progression of the estimate of the two-dimensional target
location using the EEDI algorithm. As the algorithm progresses,
collected measurements evolve the estimate from a uniform
distribution over the workspace (top-leftmost figure), to a
concentrated distribution at the correct location. At each interval,
the EID is calculated from the updated estimate, which is then used
to calculate an ergodic search trajectory. \vspace{-10
pt}}\label{2dtraj}
\end{figure*}
We performed 100 trials in simulation and 10 in experiment, with the
target position held constant and the distractor's position along the
SensorPod's line of motion randomized.\footnote{ The only additional
consideration in the experimental scenario was that the tank walls
and the water surface have a non-negligible effect on measurements.
We compensate for this by collecting measurements on a fine grid in
an empty tank, and subtracting these measurements at each measurement
point during estimation.} The results for success rate and average
slowdown factor (for successful trials), averaged over all trials in
simulation and experiment, are summarized in Table \ref{tableBIG}. The
slowdown factor is the average time until completion, normalized by the
minimum average time until completion in experiment or simulation.
Results presented in Table \ref{tableBIG} demonstrate that the EEDI
algorithm localizes the target successfully 100\% of the time, and does
so more quickly than Algorithms I-IV.
Differences in time to completion between experimental and simulated
trials are due to experimental velocity constraints. In simulation, the
time horizon used for trajectory optimization, and therefore between
PDF updates, was $T=1$ second. A longer ($T=8$ seconds) time
horizon was used for experimental trajectory optimization, avoiding the
need to incorporate fluid dynamics in the measurement model; at higher
velocities the water surface is sufficiently disturbed to cause
variations in the volume of fluid the field is propagating through,
causing unmodeled variability in sensor measurements.
Figure \ref{1dtraj1} shows experimental and simulated examples of
closed-loop one-dimensional trajectories generated using the EEDI
algorithm. Given no prior information (a uniform belief), the ergodic
trajectory optimization initially produces uniform-sweep-like behavior.
In the experimental trial shown in Fig. \ref{FIG10_1}, the termination
criterion on the variance of the PDF is reached in only two iterations
of the EEDI algorithm, a result of the longer time horizon and the
resulting higher-density measurements. The distributed sampling nature of the EEDI
algorithm can be better observed in the simulated example shown in Fig.
\ref{FIG10_2}, where shorter time horizons and therefore more sparse
sampling over the initial sweep require more iterations of shorter
trajectories. As the EID evolves in Fig. \ref{FIG10_2}, the shape of the
sensor trajectory changes to reflect the distribution of information.
For example, the sensor visits the local information maximum resulting
from voltage perturbations due to the target and the local information
maximum due to the distractor between 1 and 4 seconds. Experimental
results for this trial were presented in \cite{silverman13}.
\begin{table}[t]\vspace{-0 pt}
\caption{Performance measures for estimation of
single target location in 2D for 10 simulated and 10 experimental
trials. Results for time until completion (slowdown factor) are only shown for successful trials. Slowdown factor of 1 corresponds to 64 s in experiment, 65.2 s in simulation.}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Description & EEDI &gEER&RW \\
\hline
\hline
Exp. Success \% & 100 & 90 & 90 \\
Sim. Success \% &100 & 100 & 100 \\
\hline
Exp. Slowdown Factor& 1 &1.2 & 2.9\\
Sim. Slowdown Factor & 1 &1.1 & 2.6\\
\hline
\end{tabular}
\end{center}\vspace{-15 pt}
\label{table2DND}
\end{table}
\subsection{Performance comparison for estimating the 2D location of
a single target} \label{POC2D}
In this section, the robot is allowed to move through a
2D workspace and the objective is to compare the performance of EEDI
to gEER and RW for 2D, stationary target localization, i.e.
$\bm \theta=(\theta_x,\theta_y)$. No distractor object was present as
the difference in performance between algorithms was notable even
without introducing a distractor object. Fig. \ref{2dtanks} shows an
example tank configuration for multiple target estimation in 2D. We
omit comparison to IGA and IM for 2D experiments; RW is the simplest
controller to calculate and resulted in high success percentage for
1D trials, and gEER, with performance similar to IGA and IM on
average in 1D trials, is qualitatively similar to our approach and
more commonly used.
Ten trials were run for each of the EEDI, gEER, and RW algorithms, both
in experiment and simulation, with the target location randomly chosen.
Figure \ref{2dtraj} shows the convergence of the belief at 10 second
intervals ($T=10$), as well as the corresponding EID and ergodic
trajectory. The performance measures for experimental and simulated
trials using the EEDI, gEER, and RW algorithms are shown in Table
\ref{table2DND}. In simulation, all three algorithms have 100\% success
rate, while the gEER and RW controllers have a 10\% lower success rate
in the experimental scenario. The gEER controller requires roughly
10-20\% more time to meet the estimation criteria, whereas the RW
controller requires about 2-3 times more time. As mentioned in the
previous section, although gEER performs well in this scenario, it did
not perform as well with distractors.
\subsection{Performance comparison for estimating the 2D location of
multiple targets} \label{POCmultiple} Having demonstrated that the
EEDI algorithm modestly outperforms gEER and drastically outperforms RW
(in terms of time) for localizing a single, stationary target in 2D, we
next sought to compare EEDI performance localizing multiple targets (see
Fig. \ref{2dtanks}). We compare the EEDI algorithm to the gEER and RW
controllers, again leaving out IM and IGA because of their poor
performance in Section \ref{POC1D}. We performed localization estimation
for scenarios where there were either 0, 1, 2, or 3 targets in the tank,
all 2.5 cm diameter. We conducted 5 trials in simulation and experiment,
for each number of targets (with all locations selected randomly).
Figure \ref{multplots} shows the percentage of successful trials and
average slowdown factor as a function of the number of targets in the
tank. The slowdown factor is calculated by normalizing average time
until completion by the minimum average time until completion for all
algorithms and all target numbers.
In the experimental scenario, Fig. \ref{multplotsa}, the EEDI
algorithm had a higher success rate than both the gEER and RW
controllers for higher numbers of objects. The slowdown factor
using the EEDI algorithm was very similar to the gEER algorithm for
0-2 objects (the gEER controller never successfully localized 3
objects), and much shorter than the RW controller. In simulation, Fig.
\ref{multplotsb}, the success rate of the EEDI algorithm matched that
of the RW; however, the RW slowdown factor was much greater.
\begin{figure}[!t]\vspace{-15 pt}
\centering
\subfloat[Experiment] {\label{multplotsa}\includegraphics[trim=.05in
.051in .05in
.1in,clip=true,width=0.5\columnwidth ]{multexp.png}}
\subfloat[Simulation] {\label{multplotsb}\includegraphics[trim=.05in
.051in .05in
.1in,clip=true,width=0.5\columnwidth ]{multsim.png}}
\caption{Performance measures for estimation of multiple target
locations in 2D for five experimental and
five simulated trials. Slowdown factor of 1 corresponds to 50 seconds in simulation, 60 seconds in experiment.\vspace{-00 pt}}
\label{multplots}
\vspace{-0cm}
\end{figure}
\begin{figure}[t]
\begin{center}
\vspace{-0 in}
\includegraphics[width=.7\columnwidth ]{radplotsucc.png}
\end{center}
\caption{Success rate for estimation of location and radius, as a function of target radius, for simulated trials only. \vspace{-10 pt} }\label{figss}
\end{figure}
\subsection{Performance degradation with signal to noise ratio}\label{2dtargetradius}
The next trial is used to demonstrate an extension of the EEDI Algorithm
to non-location parameters, and to examine performance degradation as a
function of the signal to noise ratio. As mentioned, the EEDI algorithm
can also be used to determine additional parameters characterizing
target shape, material properties, etc. The only requirement is that
there be a measurement model dependent on---and differentiable with
respect to---that parameter. Parameters are incorporated into the
algorithm in the same way as the parameters for the spatial location of
a target (see Appendix \ref{Fisher Information}). We therefore
demonstrate effectiveness of the EEDI algorithm for estimation of target
radius as well as 2D target location. We estimated target location and
radius for ten different radii varying between 0.5 cm to 1.5 cm. Five
trials were performed for each target radius, with randomly chosen
target locations. By varying the radius of the target, which for our
sensor results in scaling the signal to noise ratio,\footnote{The signal
drops approximately with the fourth power of the distance from a
spherical target, and increases with the third power of target radius
\cite{Nelson2006}} we are able to observe relative performance of
several comparison algorithms as the signal to noise ratio drops off.
Trials were performed in simulation only. Figure \ref{figss} shows the
mean success rate of the five simulated trials as a function of target
radius. For EEDI, gEER, and RW the success rate decreased as the radius
decreased. This is expected, as the magnitude of the voltage
perturbation, and therefore the signal to noise ratio, scales with
$r^3$. For objects with $r<0.9$ cm, the peak of the expected voltage
perturbation is less than the standard deviation of the sensor noise.
Nevertheless, the EEDI Algorithm had a higher success rate than gEER and
RW for radii between 0.5 cm and 1 cm.
\begin{figure}[t]
\centering
\vspace{-0pt}
\subfloat[\vspace{-10 pt}] {\label{edittext1Da}\includegraphics[width=0.7\columnwidth ]{ICplotsucc.png}}\\
\subfloat[] {\label{edittext1Db}\includegraphics[width=0.7\columnwidth ]{ICplotstime.png}}
\caption{Performance measures for estimation of single target location
in 1D are shown for the EEDI algorithm and Algorithms I-IV. Results
of 110 simulated trials are shown for each algorithm; for each of 11
target locations, 10 simulated trials were performed with the 1D
distractor object location randomized (a fixed distance from the
SensorPod line of motion was maintained). A slowdown factor of 1
corresponds to 5.61 seconds; slowdown factor is not shown for target
distances with less than 10\% success rate.\vspace{-10 pt}}
\label{edittext1D}
\vspace{-0cm}
\end{figure}
\subsection{Comparison of sensitivity to
initial conditions }\label{IC}
Finally, we use the one-dimensional estimation scenario (the same as
that in Section \ref{POC1D}) to illustrate the relative sensitivity of
the EEDI algorithm and Algorithms I-IV to the initial conditions of the
sensor with respect to locations of the target and an unmodeled
disturbance. This captures the propensity of different controllers to
become stuck in local minima resulting from the presence of a distractor
object which produces a measurement similar but not identical to the
target.
We executed a total of 110 simulated trials for each algorithm: ten
trials for each of 11 equally spaced target locations. For each
target location, the distractor location was randomized, with a minimum
distance of 25 centimeters distance from the target (along the SensorPod
line of motion, to prevent electrosensory occlusion). 110 trials allowed
significant separation of the results from different controllers. For
all trials, the SensorPod position was initialized to $(x,y) = 0$.
Figure \ref{edittext1D} shows the performance measures for the EEDI
algorithm and Algorithms I-IV. The slowdown factor is calculated by
normalizing average time
until completion by the minimum average time over all algorithms and all
target locations. When the target was located near the SensorPod initial
position, EEDI, gEER, IGA, and RW perform comparably in terms of success
percentage and time, with the exception being the RW controller, which
is predictably slower. Success rate drops off using gEER and IGA for
target positions further from the SensorPod initial position. Note that
IM performs poorly if the target is located exactly at the robot start
position, due to the nonlinear characteristics of electrolocation. A 0 V
measurement would be observed for a target located at the sensor
position or anywhere sufficiently far from the sensor; this means that
the initial step of the IM algorithm has a high probability of driving
the sensor to a position far from the target. EEDI, on the other hand,
localized the target in essentially constant time and with 0\% failure
rate regardless of location. The RW algorithm performs as well as the
EEDI algorithm in terms of success rate, but is approximately eight
times slower.
\subsection{Summary of experimental results}\label{expsummary}
In Sections \ref{POC1D} and \ref{POC2D}, we provide examples of
successful estimation trajectories for the EEDI algorithm. In the
two-dimensional estimation problem in Section \ref{POC2D}, we observe
that success rate and time until completion are comparable for the
EEDI and gEER algorithms (with time being much longer for the
random walk controller). While this scenario illustrates that our
algorithm performs at least as well as a greedy algorithm in a simple
setting, and more efficiently than a random controller, the real
benefit of the EEDI algorithm emerges when the robot is faced
with more difficult estimation scenarios. Experiments in Section
\ref{IC} showed that the EEDI algorithm was robust with respect to
initial conditions (i.e. whether or not the sensor happens to start out
closer to the distractor or target object) where Algorithms I-IV are
sensitive. For Algorithms I-IV, the further the target was from the
initial SensorPod position, the more likely the estimation was to fail
or converge slowly due to local information maxima caused by the
distractor. Similarly, when the estimation objective was target
localization for varying numbers of targets in Section \ref{POCmultiple}
(a scenario where many local information maxima are expected), the
success rate of the EEDI algorithm was higher than that of expected
entropy reduction, and its completion time was shorter than that of the
random walk, as the number of targets increased. Lastly, the success rate of the EEDI
algorithm degraded the least quickly as the signal to noise ratio
declined. In addition to outperforming alternative algorithms in the
scenarios described, the ergodic trajectory optimization framework
enables calculation of search trajectories for nonlinear,
dynamically-constrained systems.
\subsection{Comparison of different dynamic models}\label{dynamics}
One of the benefits of ergodic optimal control is that the control
design does not change when we switch from a kinematic robot model to a
dynamic robot model. While the physical SensorPod robot is controlled
kinematically due to the gantry system, we can simulate nonlinear and
dynamic models to see how dynamics might influence information gathering
during untethered movement for future SensorPod iterations. We simulate
automated target localization using the EEDI algorithm for the SensorPod
robot using three different models for the robot dynamics. All
parameters in the ergodic optimal control algorithm are exactly the same
in all three cases: the weights on minimizing control effort vs.
maximizing ergodicity, in Eq. \eqref{Jdis2}, were set to $\gamma = 20$,
$R = 0.01\,\mathbb I$ (where $\mathbb I$ is the $2 \times 2$ identity
matrix), and the planning time horizon was $T=10$ seconds. In all three cases
below, the measurement model was identical and defined relative to the
X-Y position of the robot, although the system state differs. The only
changes in the implementation are the robot's state and equations of
motion for the three systems, defined below.
\begin{figure}[!t]
\centering
\vspace{-0pt}
\subfloat[Linear, kinematic system \vspace{-0 pt}] {\label{simkinematic}\includegraphics[trim=.15in .0in .2in
.0in,clip=true,width=\columnwidth ]{Kinematic3.png}}\\
\subfloat[Kinematic unicycle model (nonlinear, kinematic system)] {\label{simnonlinear}\includegraphics[trim=.15in .0in .2in
.0in,clip=true,width=\columnwidth ]{KinematicCar3.png}}\\
\subfloat[Dynamic unicycle model (nonlinear, dynamic system)\vspace{-
0 pt}] {\label{simdynamic}\includegraphics[trim=.15in .0in .2in
.0in,clip=true,width=\columnwidth ]{Dynamic3.png}}
\caption{A progression of the estimate of the two-dimensional target
location using the EEDI algorithm, in simulation, for three
different systems performing the same task. As the algorithm
progresses, collected measurements evolve the estimate (heatmap)
from a uniform distribution over the workspace (top-leftmost figure
in each (a),(b),(c)), to a concentrated distribution at the correct
location. At each interval, the EID (grayscale) is calculated from
the updated estimate, which is then used to calculate an ergodic
search trajectory (green).\vspace{-0 pt}}
\label{simdiffsystems}
\vspace{-0cm}
\end{figure}
\begin{figure}[!t]
\centering
\vspace{-0pt}
\includegraphics[width=.7\columnwidth ]{covarplot3.png}
\caption{The trace of the covariance of the two-dimensional target
location estimate is plotted as a function of time. We observe
similar overall convergence behavior for all three systems for this
particular set of initial conditions and weighted objective function.
The covariance is updated after executing each 10-second long
trajectory. \vspace{-10 pt}}
\label{systemconvergenceplot}
\vspace{-0cm}
\end{figure}
\subsubsection{Linear kinematic system}
The state is $\bm x(t) = (x(t),y(t))$ where $ x(t)$ and $y(t)$ are Cartesian
coordinates, and the equations of motion are $\dot{\bm{x}}(t)=\bm u(t)$. The initial conditions were $\bm x(0) = (0.1, 0.1)$.
\subsubsection{Nonlinear kinematic system}
We use the standard kinematic unicycle model. The state is
$ \bm x(t)=(x(t),y(t),\theta(t))$ where $x(t)$ and $y(t)$ are Cartesian
coordinates and $\theta(t)$ is a heading angle, measured from the $x$
axis in the global frame. The control $ \bm u(t)=(v(t), \omega(t))$
consists of a forward velocity $v(t)$ and angular velocity $\omega(t)$.
The equations of motion are \begin{align}\label{dyneq}
\dot{\bm{x}}(t)=\begin{bmatrix}
\dot x(t) \\
\dot y(t) \\
\dot \theta(t)
\end{bmatrix}=\begin{bmatrix} v(t) \cos\theta(t) \\ v(t)\sin\theta(t) \\ \omega(t)
\end{bmatrix}.
\end{align}
The initial conditions were $\bm x(0) = (0.1, 0.1, 0)$.
\subsubsection{Nonlinear dynamic system}
We use a dynamic variation on the unicycle model. In this case the state
is $\bm x(t)=(x(t),y(t),\theta(t),v(t),\omega(t))$ where
$x,y,\theta,v,\omega$ are the same as in the kinematic unicycle model.
The control inputs are $ \bm u(t) = (a(t),\alpha(t))$, with the
equations of motion \begin{align}\label{dyneq2}
\dot{\boldsymbol{x}}(t)=\begin{bmatrix}
\dot x(t) \\
\dot y(t) \\
\dot \theta(t)\\
\dot v(t)\\
\dot \omega(t)
\end{bmatrix}=\begin{bmatrix}
v(t) \cos\theta(t) \\
v(t) \sin\theta(t) \\
\omega(t)\\
\tfrac{1}{2} a(t)\\
\alpha(t)
\end{bmatrix}.
\end{align}
The initial conditions were $\bm x(0) = (0.1, 0.1, 0, 0, 0)$.
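For concreteness, the three sets of equations of motion above can be integrated forward with simple Euler steps. The sketch below is illustrative only: the step size, horizon, and constant control inputs are our own arbitrary choices (the EEDI implementation computes controls by trajectory optimization, not fixed inputs), but it shows how the three state representations share the same two-dimensional control interface.

```python
import math

def step_linear(x, u, dt):
    # Single integrator: xdot = u, state (x, y)
    return (x[0] + dt * u[0], x[1] + dt * u[1])

def step_kinematic_unicycle(x, u, dt):
    # State (x, y, theta); controls (v, omega)
    px, py, th = x
    v, w = u
    return (px + dt * v * math.cos(th),
            py + dt * v * math.sin(th),
            th + dt * w)

def step_dynamic_unicycle(x, u, dt):
    # State (x, y, theta, v, omega); controls (a, alpha),
    # with vdot = a/2 and omegadot = alpha as in the text.
    px, py, th, v, w = x
    a, al = u
    return (px + dt * v * math.cos(th),
            py + dt * v * math.sin(th),
            th + dt * w,
            v + dt * 0.5 * a,
            w + dt * al)

def simulate(step, x0, u, dt=0.01, T=10.0):
    x = x0
    for _ in range(int(round(T / dt))):
        x = step(x, u, dt)
    return x

if __name__ == "__main__":
    print(simulate(step_linear, (0.1, 0.1), (0.05, 0.02)))
    print(simulate(step_kinematic_unicycle, (0.1, 0.1, 0.0), (0.05, 0.1)))
    print(simulate(step_dynamic_unicycle, (0.1, 0.1, 0.0, 0.0, 0.0), (0.05, 0.0)))
```

Note that only the stepping function changes between systems; this mirrors how the EEDI algorithm is reused unchanged across the three dynamics models.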
Figure \ref{simdiffsystems} illustrates the progression of the
EEDI algorithm for static, single target localization for all
three systems. In all cases, we use a finite time horizon of
$T=10$ seconds for trajectory optimization, and the PDF is
initialized to a uniform distribution. While the types of
trajectories produced are qualitatively different because of the
different dynamic constraints, we observe similar convergence
behavior for all three systems for this particular set of initial
conditions and weights in the objective function. In Fig.
\ref{systemconvergenceplot}, the trace of the estimate covariance
is plotted as a function of EEDI iterations.
\section{Conclusion}
\label{conclusion}
We present a receding horizon control algorithm for active estimation
using mobile sensors. The measurement model and belief on the estimates
are used to create a spatial map of expected information gain. We
implement our algorithm on a robot that uses a bio-inspired sensing
approach called electrolocation \cite{Neve13a}. Ergodic trajectory
optimization with respect to the expected information distribution, as
opposed to information maximization, is shown to outperform alternative
information maximization, entropy minimization, and random walk
controllers in scenarios where the signal-to-noise ratio is low or in the
presence of disturbances.
One major advantage of ergodic trajectory optimization is that the
formulation is suitable for systems with linear or nonlinear, kinematic
or dynamic motion constraints, as shown in Section \ref{dynamics}.
Additionally, the method does not formally rely on discretization of the
search space, the action space, or the belief space. Although numerical
integration schemes are used in solving differential equations or
updating the belief, discretization is an implementation decision as
opposed to a part of the problem statement or its solution. Another
benefit is that neither assuming information submodularity \cite{Sim05,
Singh2009,Hollinger2014} nor selecting waypoints \cite{zhang09B,
zhang09} is required to distribute measurements among different
regions of high expected information when planning over a long time
horizon. Using ergodicity as an objective also means that the algorithm
is suitable for both coverage \cite{Acar2003, choset2001} and ``hotspot''
sampling, without modification. If the information density is very
concentrated, the optimally ergodic trajectory will be similar to an
information maximizing solution.\footnote{Note that this would only
happen for measurement models that cause the EID to converge to a
low-variance, unimodal distribution that approximates a delta function
(where the equivalence between an information maximizing solution and
an ergodic solution follows directly from their definitions); this
does not happen in the examples shown in Section \ref{results}.
Because of the highly nonlinear measurement model, the EID converges
to a multimodal density function, as shown in Fig.~\ref{FItight}.} On
the other hand, if the information density is diffuse (or the planning
time horizon very long), the optimally ergodic solution will approximate
a coverage solution. In Figs. \ref{2dtraj} and \ref{simdiffsystems},
coverage-like solutions are observed for the initial, nearly-uniform
belief; although the belief converges, the EID does not converge to a
unimodal distribution due to nonlinearities in the measurement model.
This paper deals exclusively with finding information
about a finite set of stationary targets. However, ergodic search
generalizes to both time-varying systems as well as estimation of a
continuum of targets (e.g., fields \cite{Cao2013, bender2013}) in a
reasonably straightforward fashion. Field exploration can be achieved by
using an appropriate choice of measurement model and belief update in
the EID calculation \cite{Cao2013, Singh2009, Hoang2014, bender2013,
low2008, souza2014}. Time can be incorporated into the measurement model
describing not just \emph{where} information about a parameter might be
obtained, but also \emph{when}---by extending the state in Section
\ref{trajopt} to use time as a state.
The formulation of ergodic exploration provided in this paper also
assumes that the dynamics are deterministic. However, the determinism
restriction primarily makes calculations and exposition simpler. Adding
stochastic process noise to the model can be achieved by replacing the
deterministic, finite-dimensional equations of motion with the
Fokker-Planck equations \cite{Chirikjian2009} for the nonlinear
stochastic flow, without changing the mathematical formulation of
ergodic control. Moreover, stochastic flows can be efficiently computed
\cite{zhou2003, Wang2002} for a wide variety of robotic problems. Even
when they cannot be, a wide class of stochastic optimal control problems
are easily computable \cite{todorov2005,horowitz2014}, though for
different objectives than ergodicity. Although it will be easier in
some cases than others, extending ergodic control to uncertain
stochastic processes may therefore be approached rather procedurally.
Extending ergodic control to more general uncertain (non-stochastic)
systems, such as robust control strategies \cite{zhou1998}, would
substantially complicate matters; that more challenging generalization
is a promising avenue of future research.
In addition to the various limiting assumptions mentioned in Sections
\ref{ergoassumptions} and \ref{expassumptions} in constructing the EEDI
algorithm for target localization, one of the major limitations of the
current formulation is computational expense, which stems both from
the need to calculate a map of the expected information
density over the workspace in order to formulate the ergodic objective
function, and the need to calculate trajectories over a finite time
horizon. The projection-based trajectory optimization involves solving a
set of differential equations, which scale quadratically with the state.
This is not necessarily a problem for applications where offline control
calculations are acceptable, or in a receding horizon framework that
uses efficient numerical methods. To that end, preliminary work has
explored solving a discrete version of ergodic trajectory optimization
using variational integrators \cite{Prabhakar15}. Nevertheless, for
applications that have a linear measurement model, trivial dynamics, and
a simple environment, standard strategies like gradient-based approaches
that only locally approximate the expected information
\cite{Grocholsky06, kreucher2007, lu11, Bourgault02i} would be effective
and much more computationally efficient. The advantage of using ergodic
trajectory optimization is that it is possible to formulate and solve
exploration tasks regardless of whether the environment is simple or the
measurement model linear, and to perform robustly when these ``nice''
conditions cannot be guaranteed, as in the experimental work featured in
this paper.
\section{Introduction}
The ``Klein's Paradox'' is a counter-intuitive relativistic phenomenon related to scattering theory for high-barrier (or equivalently low-well) potentials for the Dirac equation. When an electron approaches a barrier, its wave function can be split into two parts: the reflected one and the transmitted one. In the non-relativistic setting, it is well known that the transmitted wave function decays exponentially depending on the height of the potential, see \cite{thaller2005advanced} and the references therein. In the case of the Dirac equation it was observed, in \cite{klein1929reflexion} for the first time, that the transmitted wave function depends weakly on the height of the barrier, and that the barrier becomes almost transparent for very high barriers. This means that outside the barrier the wave function behaves like an electronic solution and inside the barrier it behaves like a positronic one, violating the principle of conservation of charge. This incongruence comes from the fact that, in the Dirac equation, the behaviour of electrons and positrons is described by different components of the same spinor wave function, see \cite{katsnelson2006chiral}. Roughly speaking, this contradiction derives from the fact that, even if a very high barrier is reflective for electrons, it is attractive for positrons.
From a mathematical perspective, the problem appears when approximating the Dirac operator coupled with a $\delta$-shell potential by the corresponding operator using local potentials with shrinking support.
The idea of coupling Hamiltonians with singular potentials supported on subsets of lower dimension with respect to the ambient space (commonly called {\em singular perturbations}) is quite classic in quantum mechanics. One important example is the model of a particle in a one-dimensional lattice, which analyses the evolution of an electron on a straight line perturbed by a potential caused by ions in the periodic structure of the crystal that create an electromagnetic field. In 1931, Kronig and Penney \cite{kronig1931quantum} idealized this system: in their model the electron is free to move in regions of the whole space separated by periodic barriers which are zero everywhere except at a single point, where they take infinite value. In modern language, this corresponds to a $\delta$-point potential. For the Schr\"odinger operator, this problem is described in the manuscript \cite{albeverio2012solvable} for finite and infinite $\delta$-point interactions and in \cite{exner2007leaky} for singular potentials supported on hypersurfaces. The reader may look at \cite{ditexnseb, amv1, amv2} and the references therein for the case of the Dirac operator, and to \cite{posilicano1} for a much more general scenario.
Nevertheless, one has to keep in mind that, even if this kind of model is easier to understand mathematically, since the analysis can be reduced to an algebraic problem, it is an ideal model that cannot be physically reproduced. This is the reason why it is interesting to approximate such operators by more regular ones. For instance, in one dimension, if $V\in C^\infty_c({\mathbb R})$ then
\begin{equation}
V_\epsilon(t):=\textstyle{\frac{1}{\epsilon}\,V\big(\frac{t}{\epsilon}\big)
\to(\int V)}\delta_0\quad\text{when }\epsilon\to0
\end{equation}
in the sense of distributions, where $\delta_0$ denotes the Dirac measure at the origin.
In \cite{albeverio2012solvable} it is proved that
$\Delta+V_\epsilon\to\Delta+(\int V)\delta_0$ in the norm resolvent sense when $\epsilon\to0$, and in \cite{approximation} this result is generalized to higher dimensions for singular perturbations on general smooth hypersurfaces.
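The distributional convergence $V_\epsilon\to(\int V)\delta_0$ can be checked numerically: by the substitution $s=t/\epsilon$, $\int V_\epsilon(t)\phi(t)\,dt=\int V(s)\phi(\epsilon s)\,ds\to(\int V)\phi(0)$ for smooth $\phi$. The sketch below uses an arbitrary illustrative bump $V$ and test function $\phi$ (not drawn from the papers cited) and a midpoint quadrature.

```python
import numpy as np

# Check that int V_eps(t) phi(t) dt -> (int V) * phi(0) as eps -> 0,
# using the substitution s = t/eps. Illustrative choices of V and phi.
n = 200000
h = 2.0 / n
s = -1.0 + (np.arange(n) + 0.5) * h           # midpoint nodes; supp V = [-1, 1]

V = lambda x: np.clip(1.0 - x**2, 0.0, None)  # int V = 4/3
phi = lambda t: np.cos(3.0 * t) + t           # smooth test function

limit = np.sum(V(s)) * h * phi(0.0)           # (int V) * phi(0)

def pairing(eps):
    # int V_eps(t) phi(t) dt = int V(s) phi(eps*s) ds
    return np.sum(V(s) * phi(eps * s)) * h

for eps in (1.0, 0.1, 0.01):
    print(eps, abs(pairing(eps) - limit))     # error shrinks as eps -> 0
```

For this even $V$ the first-order term in $\phi(\epsilon s)-\phi(0)$ integrates to zero, so the error decays like $\epsilon^2$.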
This kind of result does not hold for the Dirac operator. In fact, in \cite{sebaklein} it is proved that, in the $1$-dimensional case, the convergence holds in the norm resolvent sense but the coupling constant depends non-linearly on the potential $V$, unlike in the case of Schr\"odinger operators. This non-linear phenomenon, which may also occur in higher dimensions, is a consequence of the fact that, in a sense, the free Dirac operator is critical with respect to the set where the $\delta$-shell interaction is performed, unlike the Laplacian (the Dirac/Laplace operator is a first/second order differential operator, respectively, and the set where the interaction is performed has codimension $1$ with respect to the ambient space).
The present paper is devoted to the study of the $3$-dimensional case, where we investigate whether it is possible to obtain the same results as in one dimension. We anticipate that, for $\delta$-shell interactions on bounded smooth hypersurfaces, we get the same non-linear phenomenon for the coupling constant, but we are only able to show convergence in the strong resolvent sense.
Given $m\geq0$, the free Dirac operator in ${\mathbb R}^3$ is defined by
\begin{equation}
H:=-i\alpha\cdot\nabla+m\beta,
\end{equation}
where $\alpha=(\alpha_1,\alpha_2,\alpha_3)$,
\begin{equation}
\alpha_j=\left(\begin{array}{cc}
0& {\sigma}_j\\
{\sigma}_j&0
\end{array}\right)\quad\text{for }j=1,2,3,\quad \beta=\left(\begin{array}{cc}
\mathbb{I}_2&0\\
0&-\mathbb{I}_2
\end{array}\right),\quad \mathbb{I}_2:=\left(
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right),
\end{equation}
\begin{equation}\label{paulimatrices}
\text{and}\quad{\sigma}_1 =\left(
\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right),\quad {\sigma}_2=\left(
\begin{array}{cc}
0 & -i\\
i & 0
\end{array}
\right),\quad{\sigma}_3=\left(
\begin{array}{cc}
1 & 0\\
0 & -1
\end{array}\right)
\end{equation}
is the family of \textit{Pauli matrices}. It is well known that $H$ is self-adjoint on the Sobolev space $H^1({\R}^3)^4=:D(H)$, see \cite[Theorem 1.1]{thaller}. Throughout this article we assume that $m>0$.
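These matrices satisfy the standard Dirac anticommutation relations $\alpha_i\alpha_j+\alpha_j\alpha_i=2\delta_{ij}\mathbb{I}_4$, $\alpha_j\beta+\beta\alpha_j=0$ and $\beta^2=\mathbb{I}_4$, which is what makes $H^2=-\Delta+m^2$ hold componentwise. A short numerical verification:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def off_diag(s):
    # alpha_j = [[0, sigma_j], [sigma_j, 0]]
    return np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]])

alpha = [off_diag(s) for s in (s1, s2, s3)]
beta = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])

def anticomm(A, B):
    return A @ B + B @ A

I4 = np.eye(4)
for i in range(3):
    for j in range(3):
        target = 2 * I4 if i == j else np.zeros((4, 4))
        assert np.allclose(anticomm(alpha[i], alpha[j]), target)
    assert np.allclose(anticomm(alpha[i], beta), np.zeros((4, 4)))
assert np.allclose(beta @ beta, I4)
print("Dirac algebra relations verified")
```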
In the sequel $\Omega\subset{\R}^3$ denotes a bounded $C^2$ domain and $\Sigma:=\partial\Omega$ denotes its boundary. By a $C^2$ domain we mean the following: for each point $Q\in\S$ there exist
a ball $B\subset{\mathbb R}^3$ centered at $Q$, a $C^2$ function
$\psi:{\mathbb R}^{2}\to{\mathbb R}$ and a coordinate system $\{(x,x_3):\,x\in{\mathbb R}^{2},\,x_3\in{\mathbb R}\}$ so that, with respect to this coordinate system, $Q=(0,0)$ and
\begin{equation}
\begin{split}
B\cap\Omega=B\cap\{(x,x_3):\,x_3>\psi(x)\},\\
B\cap\S=B\cap\{(x,x_3):\,x_3=\psi(x)\}.
\end{split}
\end{equation}
By compactness, one can find a finite covering of $\S$ made of such coordinate systems, thus the Lipschitz constant of those $\psi$ can be taken uniformly bounded on $\S$.
Set
$\Omega_\epsilon:=\{x \in{\R}^3 : \,d(x,\Sigma)<{\epsilon}\}$
for $\epsilon>0$. Following \cite[Appendix B]{approximation}, there exists $\eta>0$ small enough depending on $\S$ so that for every $0<\epsilon\leq\eta$ one can parametrize $\Omega_\epsilon$ as
\begin{equation}\label{C^2 domain properties}
\Omega_\epsilon =\{x_\Sigma+t \nu (x_\Sigma): \,x_\Sigma\in \Sigma,\,t\in(-\epsilon,\epsilon)\},
\end{equation}
where $\nu (x_\Sigma)$ denotes the outward (with respect to $\Omega$) unit normal vector field on $\Sigma$ evaluated at $x_\Sigma$. This parametrization is a bijective correspondence between $\Omega_\epsilon$ and $\S\times(-\epsilon,\epsilon)$, it can be understood as {\em tangential} and {\em normal coordinates}. For $t\in\left[-\eta,\eta\right]$, we set
\begin{equation}\label{C^2 domain properties2}
\Sigma_t:=\{x_\Sigma+t \nu (x_\Sigma): \,x_\Sigma\in \Sigma\}.
\end{equation}
In particular, $\Sigma_t=\partial\Omega_t\setminus\Omega$ if $t>0$, $\Sigma_t=\partial\Omega_{|t|}\cap\Omega$ if $t<0$ and $\Sigma_0=\Sigma$. Let $\upsigma_t$ denote the surface measure on $\Sigma_t$ and, for simplicity of notation, we set $\upsigma:=\upsigma_0$, the surface measure on $\S$.
Given $V\in L^\infty({\mathbb R})$ with ${\rm supp} V\subset[-\eta,\eta]$ and $0<\epsilon\leq\eta$ define
\begin{equation}
V_\epsilon(t):=\frac{\eta}{\epsilon}\,V\Big(\frac{\eta t}{\epsilon}\Big)
\end{equation}
and, for $x\in{\R}^3$,
\begin{equation}\label{def bigV}
\mathbf{V}_{\!\epsilon}(x):=
\begin{cases}
V_\epsilon (t) & \mbox{if } x\in\Omega_\epsilon,\text{ where }x=x_\Sigma+t\nu (x_\Sigma)\text{ for a unique }(x_\Sigma,t)\in\Sigma\times(-\epsilon,\epsilon),\\
0 & \mbox{if } x\not\in\Omega_\epsilon.
\end{cases}
\end{equation}
Finally, set
\begin{equation}\label{eq u,v}
\begin{split}
\mathbf{u}_\epsilon:=|\mathbf{V}_{\!\epsilon}|^{1/2},&\quad
\mathbf{v}_{\!\epsilon}:=\mathop{\textrm{sign}}(\mathbf{V}_{\!\epsilon})|\mathbf{V}_{\!\epsilon}|^{1/2},\\
u(t):=|\eta V (\eta t)|^{1/2},&\quad v(t):=\mathop{\textrm{sign}}(V(\eta t))u(t).
\end{split}
\end{equation}
Note that $\mathbf{u}_\epsilon,\mathbf{v}_{\!\epsilon}\in L^\infty({\R}^3)$ are supported in $\overline{\Omega_\epsilon}$ and $u,v\in L^\infty({\mathbb R})$ are supported in $[-1,1]$.
\begin{definition}\label{deltasmall}
Given $\eta,\,\delta>0$, we say that $V\in L^\infty({\mathbb R})$ is $(\delta,\eta)$-small if
\begin{equation}
{\rm supp} V\subset[-\eta,\eta]\quad\text{and}\quad\|V\|_{L^\infty({\mathbb R})}\leq\frac{\delta}{\eta}.
\end{equation}
\end{definition}
Observe that if $V$ is $(\delta,\eta)$-small then
$\|V\|_{L^1({\mathbb R})}\leq2\delta$, this is the reason why we call it a ``small'' potential.
In this article we study the asymptotic behaviour, in a strong resolvent sense, of the couplings of the free Dirac operator with electrostatic and Lorentz scalar short-range potentials of the form
\begin{equation}\label{correc1}
H+\mathbf{V}_{\!\epsilon}\qquad\text{and}\qquad
H+\beta \mathbf{V}_{\!\epsilon},
\end{equation}
respectively, where ${V}_{\!\epsilon}$ is given by \eqref{def bigV} for some $(\delta,\eta)$-small $V$ with $\delta$ and $\eta$ small enough only depending on $\S$.
By \cite[Theorem 4.2]{thaller}, both couplings in \eqref{correc1} are self-adjoint operators on $H^1({\R}^3)^4$.
Given $\eta>0$ small enough so that \eqref{C^2 domain properties} holds, and given $u$ and $v$ as in \eqref{eq u,v} for some $V\in{L^\infty({\mathbb R})}$ with ${\rm supp} V\subset[-\eta,\eta]$, set
\begin{equation}\label{correc3}
\mathcal{K}_V f(t):=\frac{i}{2}\int_{\mathbb R} u(t)\mathop{\textrm{sign}}(t-s)v(s)f(s)\,ds
\quad\text{ for $f\in L^1_{loc}({\mathbb R})$}.
\end{equation}
The main result in this article reads as follows.
\begin{theorem}\label{Main theorem}
There exist $\eta_0,\,\delta>0$ small enough only depending on $\S$ such that, for any $0<\eta\leq\eta_0$ and $(\delta,\eta)$-small $V$,
\begin{align}\label{main eq}
&H+\mathbf{V}_{\!\epsilon}\to H+\lambda_e\delta_\Sigma\quad\text{in the strong resolvent sense when $\epsilon\to0$},\\\label{main eq 2}
&H+\beta\mathbf{V}_{\!\epsilon}\to H+\lambda_s\beta\,\delta_\Sigma\quad\text{in the strong resolvent sense when $\epsilon\to0$},
\end{align}
where
\begin{align}\label{def lambda elec}
\lambda_e &:=\textstyle{\int_{\mathbb R}}v(t)\,((1-\mathcal{K}_V^2)^{-1}u)(t)\,dt\in{\mathbb R}, \\\label{def lambda scalar}
\lambda_s&:=\textstyle{\int_{\mathbb R}}v(t)\,((1+\mathcal{K}_V^2)^{-1}u)(t)\,dt\in{\mathbb R}
\end{align}
and $H+\lambda_e\delta_\Sigma$ and $H+\lambda_s\beta\,\delta_\Sigma$ are the electrostatic and Lorentz scalar shell interactions given by \eqref{eq defi electro} and \eqref{eq defi scalar}, respectively.
\end{theorem}
To define $\lambda_e$ in \eqref{def lambda elec} and $\lambda_s$ in \eqref{def lambda scalar}, the invertibility of $1\pm\mathcal{K}_V^2$ is required. However, since $\mathcal{K}_V$ is a Hilbert-Schmidt operator, we know that
$\|\mathcal{K}_V\|_{L^2({\mathbb R})\to L^2({\mathbb R})}$ is controlled by the norm of its kernel in $L^2({\mathbb R}\times{\mathbb R})$, which is exactly
$\|u\|_{L^2({\mathbb R})}\|v\|_{L^2({\mathbb R})}=\|V\|_{L^1({\mathbb R})}\leq 2\delta<1$, assuming that $\delta<1/2$ and that $V$ is $(\delta,\eta)$-small with $\eta\leq\eta_0$. We must stress that the way to construct $\lambda_e$ and $\lambda_s$ is the same as in the one dimensional case, see \cite[Theorem 1]{sebaklein}.
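The Hilbert--Schmidt bound can be illustrated numerically by discretizing the kernel $k(t,s)=\frac{i}{2}u(t)\mathop{\textrm{sign}}(t-s)v(s)$ with a midpoint rule. The sample well $V$ below is an arbitrary choice for illustration; the assertion checks the chain $\|\mathcal K_V\|_{op}\leq\|\mathcal K_V\|_{HS}\leq\|V\|_{L^1}$ in the discrete setting.

```python
import numpy as np

# Midpoint discretization of k(t,s) = (i/2) u(t) sign(t-s) v(s) on [-1,1].
n = 1000
h = 2.0 / n
t = -1.0 + (np.arange(n) + 0.5) * h
eta = 0.4
V = lambda x: np.where(np.abs(x) < eta, np.cos(np.pi * x / (2 * eta)), 0.0)

Vt = V(eta * t)                     # V evaluated at the rescaled nodes
u = np.sqrt(np.abs(eta * Vt))
v = np.sign(Vt) * u
K = 0.5j * (u[:, None] * np.sign(t[:, None] - t[None, :]) * v[None, :]) * h

op_norm = np.linalg.norm(K, 2)      # largest singular value
hs_norm = np.linalg.norm(K, 'fro')  # discrete Hilbert-Schmidt norm
L1 = eta * h * np.sum(np.abs(Vt))   # ||V||_{L^1} after rescaling

print(op_norm, hs_norm, L1)
assert op_norm <= hs_norm <= L1
```

In fact the kernel bound gives $\|\mathcal K_V\|_{HS}\leq\frac12\|u\|_{L^2}\|v\|_{L^2}=\frac12\|V\|_{L^1}$, so the estimate quoted above is a (safe) overestimate.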
From \Cref{Main theorem} we deduce that if $a\in\sigma(H+\lambda_e\delta_{\Sigma})$, where $\sigma(\cdot)$ denotes the spectrum, then there exists a sequence $\seq{a_\epsilon}$ such that $a_\epsilon\in \sigma(H+\mathbf{V}_{\!\epsilon})$ and $a_\epsilon\to a$ when $\epsilon\to0$. Contrary to what happens when norm resolvent convergence holds, the converse spectral implication may fail. That is, if $a_\epsilon \to a$ with $a_\epsilon\in \sigma(H+\mathbf{V}_{\!\epsilon})$, it may occur that $a\notin\sigma(H+\lambda_e\delta_{\Sigma})$. The same happens in the Lorentz scalar case.
We should highlight that the kind of tools we used to prove \Cref{Main theorem} suggests that the norm resolvent convergence may not hold in general. Nevertheless, if $\Sigma$ is a sphere, the converse spectral implication does hold. This means that, in the limit, no element of the spectrum is lost for electrostatic and Lorentz scalar spherical $\delta$-shell interactions, see \cite{sphericalnotes}.
The non-linear behaviour of the limiting coupling constant with respect to the approximating potentials mentioned in the first paragraphs of the introduction is depicted by \eqref{def lambda elec} and \eqref{def lambda scalar}; the reader may compare this to the analogous result \cite[Theorem 1.1]{approximation} in the non-relativistic scenario.
However, unlike in \cite[Theorem 1.1]{approximation}, in Theorem \ref{Main theorem} we demand a smallness assumption on the potential, the $(\delta,\eta)$-smallness from Definition \ref{deltasmall}.
We use this assumption in Corollary \ref{convergence main} below, where the strong convergence of some inverse operators $(1+B_\epsilon(a))^{-1}$ when $\epsilon\to0$ is shown. The proof of Theorem \ref{Main theorem} follows the strategy of \cite[Theorem 1.1]{approximation}, but dealing with the Dirac operator instead of the Laplacian makes a big difference at this point. In the non-relativistic scenario, the fundamental solution of $-\Delta+a^2$ in ${\R}^3$ for $a>0$ has exponential decay at infinity and behaves like $1/|x|$ near the origin, which is locally integrable in ${\mathbb R}^2$ and thus its integral tends to zero as we integrate on shrinking balls in ${\mathbb R}^2$ centered at the origin. These facts are used in \cite{approximation} to show that their corresponding $(1+B_\epsilon(a))^{-1}$ can be uniformly bounded in $\epsilon$ just by taking $a$ big enough. In our situation, the fundamental solution of
$H-a$ in ${\R}^3$ can still be taken with exponential decay at infinity for $a\in{\mathbb C}\setminus{\mathbb R}$, but it is not locally absolutely integrable in ${\mathbb R}^2$. Actually, its most singular part behaves like $x/|x|^3$ near the origin, and thus it yields a singular integral operator in ${\mathbb R}^2$. This means that the contribution near the origin cannot be disregarded as in \cite{approximation} just by shrinking the domain of integration and taking $a\in{\mathbb C}\setminus{\mathbb R}$ big enough; something else is required. We impose smallness on $V$ to obtain smallness on $B_\epsilon(a)$ and ensure the uniform invertibility of $1+B_\epsilon(a)$ with respect to $\epsilon$; this is the only point where the $(\delta,\eta)$-smallness is used.
Let $\eta_0,\,\delta>0$ be as in Theorem \ref{Main theorem}. Take $0<\eta\leq\eta_0$ and $V=\frac{\tau}{2} \chi_{(-\eta,\eta)}$ for some $\tau\in{\mathbb R}$ such that $0<|\tau|\eta\leq2\delta$. Then, arguing as in \cite[Remark 1]{sebaklein}, one gets that
\begin{equation}
\int_{\mathbb R}\!v\,(1-\mathcal{K}_V^2)^{-1}u=\sum_{n=0}^\infty\int_{\mathbb R}\!v\, \mathcal{K}_V^{2n}u=2\tan\Big(\frac{\tau\eta}{2}\Big).
\end{equation}
Since $V$ is $(\delta,\eta)$-small,
using \eqref{def lambda elec} and \eqref{main eq} we obtain that \begin{equation}H+\mathbf{V}_{\!\epsilon}\to H+2\tan(\textstyle{\frac{\tau\eta}{2}})\delta_\Sigma\quad\text{ in the strong resolvent sense when $\epsilon\to0$,}
\end{equation}
analogously to {\cite[Remark 1]{sebaklein}}.
Similarly, one can check that $\int\!v\,(1+\mathcal{K}_V^2)^{-1}u=2\tanh(\textstyle{\frac{\tau\eta}{2}})$.
Then, \eqref{def lambda scalar} and \eqref{main eq 2} yield
\begin{equation}H+\beta\,\mathbf{V}_{\!\epsilon}\to H+2\tanh(\textstyle{\frac{\tau\eta}{2}})\beta\delta_\Sigma\quad\text{ in the strong resolvent sense when $\epsilon\to0$.}
\end{equation}
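The closed-form constants $2\tan(\tau\eta/2)$ and $2\tanh(\tau\eta/2)$ can be verified numerically by discretizing $\mathcal{K}_V$ on $[-1,1]$. The sketch below is our own midpoint-rule discretization (grid size and the values of $\tau$, $\eta$ are arbitrary illustrative choices, not part of the proof); it uses that $\mathcal{K}_V=iA$ with $A$ real, so $\mathcal{K}_V^2=-A^2$.

```python
import numpy as np

# For V = (tau/2) * chi_{(-eta,eta)}: u = sqrt(eta*|tau|/2) and
# v = sign(tau) * u on (-1, 1).
tau, eta = 1.0, 0.5
n = 2000
h = 2.0 / n
t = -1.0 + (np.arange(n) + 0.5) * h
u = np.full(n, np.sqrt(eta * abs(tau) / 2.0))
v = np.sign(tau) * u

# Real part A of the kernel: K_V = i*A, so K_V^2 = -A^2 (a real operator).
A = 0.5 * (u[:, None] * np.sign(t[:, None] - t[None, :]) * v[None, :]) * h
K2 = -(A @ A)

lam_e = h * v @ np.linalg.solve(np.eye(n) - K2, u)
lam_s = h * v @ np.linalg.solve(np.eye(n) + K2, u)

print(lam_e, 2.0 * np.tan(tau * eta / 2.0))   # both approx 0.5107
print(lam_s, 2.0 * np.tanh(tau * eta / 2.0))  # both approx 0.4898
```

Since $\|A\|\leq\|V\|_{L^1}/2=\tau\eta/2<1$ here, both $1\mp\mathcal{K}_V^2$ are safely invertible in the discretization as well.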
Regarding the structure of the paper, Section \ref{s preli} is devoted to the preliminaries, which refer to basic rudiments with a geometric measure theory flavour and spectral properties of the short range and shell interactions appearing in Theorem \ref{Main theorem}.
In Section \ref{s main deco} we present the first main step to prove Theorem \ref{Main theorem}, a decomposition of the resolvent of the approximating interaction into three concrete operators. This type of decomposition, which is made through a scaling operator, already appears in \cite{approximation, sebaklein}. Section \ref{s main deco} also contains some auxiliary results concerning these three operators, whose proofs are carried out later on, and the proof of Theorem \ref{Main theorem}, see Section \ref{s2 ss1}. Sections \ref{ss C}, \ref{ss B}, \ref{ss A} and \ref{s proof corol}
are devoted to prove all those auxiliary results presented in Section \ref{s main deco}.
\section*{Acknowledgement}
We would like to thank Luis Vega for the enlightening discussions. Both authors were partially supported by the ERC Advanced Grant 669689 HADE (European Research Council).
Mas was also supported by the {\em Juan de la Cierva} program JCI2012-14073 and the project MTM2014-52402 (MINECO, Gobierno de Espa\~na).
Pizzichillo was also supported by the MINECO project MTM2014-53145-P, by the Basque Government through the BERC 2014-2017 program and by the Spanish Ministry of Economy and Competitiveness MINECO: BCAM Severo Ochoa accreditation SEV-2013-0323.
\section{Preliminaries}\label{s preli}
As usual, in the sequel the letter `$C$' (or `$c$') stands
for some constant which may change its value at different
occurrences. We will also make use of constants with subscripts, both to highlight the dependence on some other parameters and to stress that they retain their value from one equation to another. The precise meaning of the subscripts will be clear from the context in each situation.
\subsection{Geometric and measure theoretic considerations}\label{s1 ss1}
\mbox{}
In this section we recall some geometric and measure theoretic properties of $\Sigma$ and the domains presented in \eqref{C^2 domain properties}. At the end, we provide some growth estimates of the measures associated to the layers introduced in \eqref{C^2 domain properties2}.
The following definition and propositions correspond to Definition 2.2 and Propositions 2.4 and 2.6 in \cite{approximation}, respectively. The reader should look at \cite{approximation} for the details.
\begin{definition}[Weingarten map]\label{defi weingarten}
Let $\Sigma$ be parametrized by the family $\{\varphi_i,U_i,V_i\}_{i\in I}$, that is, $I$ is a finite set, $U_i\subset{\mathbb R}^2$, $V_i\subset{\R}^3$, $\S\subset\cup_{i\in I}V_i$ and $\varphi_i(U_i)=V_i\cap\S$ for all $i\in I$. For \[x=\varphi_i(u)\in \Sigma\cap V_i\] with $u\in U_i$, $i\in I$, one defines the Weingarten map $W(x): T_x\to T_x$, where $T_x$ denotes the tangent space of $\S$ at $x$, as the linear operator acting on the basis vectors $\{\partial_j\varphi_i(u)\}_{j=1,2}$ of $T_x$ as
\[
W(x)\partial_j\varphi_i(u):=-\partial_j\nu(\varphi_i(u)).
\]
\end{definition}
\begin{proposition}\label{weingarten map}
The Weingarten map $W(x)$ is symmetric with respect to the inner product induced by the first fundamental form and its eigenvalues are uniformly bounded for all $x\in\Sigma$.
\end{proposition}
Given $0<\epsilon\leq\eta$ and $\Omega_\epsilon$ as in \eqref{C^2 domain properties}, let $i_\epsilon: \Sigma\times(-\epsilon,\epsilon)\to \Omega_\epsilon$ be the bijection defined by
\begin{equation}\label{i epsilon}
i_\epsilon(x_\Sigma,t):=x_\Sigma+t \nu (x_\Sigma).
\end{equation}
For future purposes, we also introduce the projection $P_\Sigma:\Omega_\epsilon \to \Sigma$ given by
\begin{equation}\label{P Sigma}
P_\Sigma (x_\S+t\nu(x_\S)):=x_\S.
\end{equation}
For $1\leq p<+\infty$, let $L^p(\Omega_\epsilon)$ and $L^p(\Sigma\times(-1,1))$ be the Banach spaces endowed with the norms
\begin{equation}\label{eqn:coaeraqqq}
\|f\|_{L^p(\Omega_\epsilon)}^p
:=\int_{\Omega_\epsilon}|f|^p\,d\mathcal{L},\qquad
\|f\|_{L^p(\Sigma\times(-1,1))}^p
:=\int_{-1}^1\int_{\Sigma}|f|^p\,d\upsigma\,dt,
\end{equation}
respectively, where $\mathcal{L}$ denotes the Lebesgue measure in ${\mathbb R}^3$. The Banach spaces corresponding to the endpoint case $p=+\infty$ are defined, as usual, in terms of essential suprema with respect to the measures associated to $\Omega_\epsilon$ and $\Sigma\times(-1,1)$ in \eqref{eqn:coaeraqqq}, respectively.
\begin{proposition}\label{prop:coarea}
If $\eta>0$ is small enough, there exist $0<c_1,c_2<+\infty$ such that
\[
c_1\|f\|_{L^1(\Omega_\epsilon)}\leq\|f \circ i_\epsilon\|_{L^1(\Sigma\times (-\epsilon,\epsilon))}\leq c_2\|f\|_{L^1(\Omega_\epsilon)}\quad\text{for all }f \in L^1(\Omega_\epsilon),\, 0<\epsilon\leq\eta.
\]
Moreover, if $W$ denotes the Weingarten map associated to $\Sigma$ from {\em Definition \ref{defi weingarten}},
\begin{equation}\label{eqn:coaera}
\int_{\Omega_\epsilon}f(x)\,dx=\int_{-\epsilon}^\epsilon\int_{\Sigma} f(x_\Sigma+t\nu(x_\Sigma))\det(1-tW(x_\Sigma))\,d\upsigma(x_\Sigma)\,dt\quad\text{for all }f \in L^1(\Omega_\epsilon).
\end{equation}
\end{proposition}
The eigenvalues of the Weingarten map $W(x)$ are the principal curvatures of $\S$ on $x\in\S$, and they are independent of the parametrization of $\S$. Therefore, the term $\det(1-tW(x_\Sigma))$ in \eqref{eqn:coaera} is also independent of the parametrization of $\S$.
\begin{remark}
Let $h:\Omega_\epsilon\to(-\epsilon,\epsilon)$ be defined by
$h(x_\Sigma+t\nu(x_\Sigma)):=t$. Then $|\nabla h|=1$ in $\Omega_\epsilon$, so the coarea formula (see \cite[Remark 2.94]{ambrosiofuscopallara}, for example) gives
\begin{equation}\label{eqn:coaera3}
\int_{\Omega_\epsilon}f(x)\,dx=\int_{-\epsilon}^\epsilon\int_{\Sigma_t} f(x)\,d\upsigma_t(x)\,dt\quad\text{for all }f \in L^1(\Omega_\epsilon).
\end{equation}
In view of \eqref{eqn:coaera}, one deduces that
\begin{equation}\label{eqn:coaera2}
\int_{\Sigma_t} f\,d\upsigma_t
=\int_{\Sigma}f(x_\Sigma+t\nu(x_\Sigma))\det(1-tW(x_\Sigma))\,d\upsigma(x_\Sigma)
\end{equation}
for all $t\in(-\epsilon,\epsilon)$ and all $f \in L^1(\Sigma_t)$.
\end{remark}
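For $\Sigma$ a sphere of radius $R$ with outward normal, the identity \eqref{eqn:coaera} can be checked by hand: with the sign convention of Definition \ref{defi weingarten} both principal curvatures equal $-1/R$, so $\det(1-tW)=(1+t/R)^2$, and $\Sigma_t$ is the concentric sphere of radius $R+t$. A numerical sanity check with $f\equiv1$ (all parameter values below are illustrative):

```python
import numpy as np

# Coarea identity for the spherical shell Omega_eps = {R-eps < |x| < R+eps}.
R, eps = 2.0, 0.3
n = 100000
h = 2.0 * eps / n
t = -eps + (np.arange(n) + 0.5) * h   # midpoint nodes in (-eps, eps)

# Right-hand side: integrate the layer areas 4*pi*R^2*(1 + t/R)^2 over t.
rhs = np.sum(4.0 * np.pi * R**2 * (1.0 + t / R) ** 2) * h

# Left-hand side: the Lebesgue volume of the shell.
lhs = 4.0 * np.pi / 3.0 * ((R + eps) ** 3 - (R - eps) ** 3)

print(lhs, rhs)   # the two agree up to quadrature error
```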
In the following lemma we give uniform growth estimates on the measures $\upsigma_t$, for $t\in[-\eta,\eta]$, that exhibit their 2-dimensional nature. These estimates will be used many times in the sequel, mostly for the case of $\upsigma$.
\begin{lemma}\label{2d AD regularity}
If $\eta>0$ is small enough, there exist $c_1,c_2>0$ such that
\begin{eqnarray}\label{sigma_t in Delta}
& &\upsigma_t(B_r(x))\leq c_1 r^2\quad \text{for all }x\in {\R}^3,\, r>0,\, t\in[-\eta,\eta],\label{sigma_t in Sigma2}\\
& &\upsigma_t(B_r(x))\geq c_2{r^2}\quad \text{for all }x\in \Sigma_t,\,0<r<2{\rm diam}(\Omega_\eta),\,t\in[-\eta,\eta],\label{sigma_t in Sigma}
\end{eqnarray}
where $B_r(x)$ denotes the ball of radius $r$ centred at $x$.
\end{lemma}
\begin{proof}
We first prove \eqref{sigma_t in Delta}. Let $r_0>0$ be a constant small enough to be fixed later on.
If $r\geq r_0$, then
\[
\upsigma_t(B_r(x))\leq \max_{t\in[-\eta,\eta]}\upsigma_t({\R}^3)\leq C=\frac{C}{r_0^2}\,r_0^2\leq C_0r^2,
\]
where $C_0:=C/{r_0^2}>0$ only depends on $r_0$ and $\eta$. Therefore, we can assume that $r< r_0$.
Let us see that we can also suppose that $x\in\Sigma_t$.
In fact, if $\eta$ and $r_0$ are small enough and $0<r<r_0$, given $x\in{\R}^3$ one can always find $\tilde{x}\in \Sigma_t$ such that $\upsigma_t(B_r(x))\leq 2\upsigma_t(B_{{r}}(\tilde{x}))$ (if $x\in\Omega_\eta$ just take $\tilde{x}=P_\S x+t\nu(P_\S x)$).
Then if \eqref{sigma_t in Delta} holds for $\tilde{x}$, one gets
$\upsigma_t(B_r(x))\leq2\upsigma_t(B_{{r}}(\tilde{x}))\leq C r^2,$ as desired.
Thus, it is enough to prove \eqref{sigma_t in Delta} for $x\in \Sigma_t$ and $r<r_0$. If $r_0$ and $\eta$ are small enough, covering $\S_t$ by local charts we can find an open and bounded set $V_{t,r}\subset {\mathbb R}^2$ and a $C^1$ diffeomorphism $\varphi_t:{\mathbb R}^2\to \varphi_t({\mathbb R}^2)\subset{\R}^3$ such that $\varphi_t(V_{t,r})=\Sigma_t\cap B_r(x)$. By means of a rotation if necessary, we can further assume that $\varphi_t$ is of the form $\varphi_t(y')=(y',T_t(y'))$, i.e. $\varphi_t$ is the graph of a $C^1$ function $T_t:{\mathbb R}^2\to {\mathbb R}$, and that
$\max_{t\in [-\eta,\eta]}\|\nabla T_t\|_\infty\leq C$ (this follows from the regularity of $\S$).
Then, if $x'\in V_{t,r}$ is such that $\varphi_t(x')=x$, for any $y'\in V_{t,r}$ we get
\[
r^2\geq|\varphi_t(y')-\varphi_t(x')|^2\geq |y'-x'|^2,
\]
which means that $V_{t,r}\subset\{y'\in{\mathbb R}^2:\,|x'-y'|<r\}=:B'\subset{\mathbb R}^2$.
Denoting by $\mathcal{H}^2$ the 2-dimensional Hausdorff measure, from \cite[Theorem 7.5]{mattila} we get
\[
\upsigma_t(B_r(x))=\mathcal{H}^2(\varphi_t(V_{t,r}))\leq \mathcal{H}^2(\varphi_t(B')) \leq \|\nabla\varphi_t\|_\infty^2 \mathcal{H}^2(B')\leq C r^2
\]
for all $t\in[-\eta,\eta]$,
so \eqref{sigma_t in Delta} is finally proved.
Let us now deal with \eqref{sigma_t in Sigma}. Given $r_0>0$, by the regularity and boundedness of $\Sigma$ it is clear that
$\inf_{t\in[-\eta,\eta],\,x\in\Sigma_t}\upsigma_t(B_{r_0}(x))\geq C>0$. As before, for any $r_0\leq r<2{\rm diam}(\Omega_\eta)$ we easily see that
\begin{equation}\upsigma_t(B_r(x))\geq \upsigma_t(B_{r_0}(x))\geq C=\frac{C}{4{\rm diam}(\Omega_\eta)^2}\,4{\rm diam}(\Omega_\eta)^2\geq C_1r^2,\end{equation}
where $C_1:={C}/{4{\rm diam}(\Omega_\eta)^2}>0$ only depends on $r_0$ and $\eta$. Hence \eqref{sigma_t in Sigma} is proved for all $r_0\leq r<2{\rm diam}(\Omega_\eta)$.
The case $0<r<r_0$ is treated, as before, using the local parametrization of $\S_t$ around $x$ by the graph of a function. Taking $\eta$ and $r_0$ small enough, we may assume the existence of $V_{t,r}$ and $\varphi_t$ as above, so let us set $\varphi_t(x')=x$ for some $x'\in V_{t,r}$. The fact that $\varphi_t$ is of the form $\varphi_t(y')=(y',T_t(y'))$ and that $\varphi_t(V_{t,r})=\Sigma_t\cap B_r(x)$ implies that $B'':=\{y'\in{\mathbb R}^2:\,|x'-y'|<C_2r\}\subset V_{t,r}$ for some $C_2>0$ small enough only depending on $\max_{t\in [-\eta,\eta]}\|\nabla T_t\|_\infty$, which is finite by assumption. Then, we easily see that
\begin{equation}\upsigma_t(B_r(x))=\upsigma_t(\varphi_t(V_{t,r}))
\geq \upsigma_t(\varphi_t(B''))=\int_{B''}\sqrt{1+|\nabla T_t(y')|^2}\,dy'
\geq\int_{B''}dy'=Cr^2,\end{equation}
where $C>0$ only depends on $C_2$. The lemma is finally proved.
\end{proof}
\subsection{Shell interactions for Dirac operators}\label{section shell interactions}\label{s1 ss2}
\mbox{}
In this section we briefly recall some useful tools concerning the $\delta$-shell interactions studied in \cite{amv1,amv2}. We refer the reader to \cite[Section 2 and Section 5]{amv2} for the details.
Let $a\in {\mathbb C}$. A fundamental solution of $H-a$ is given by
\begin{equation}
\phi^a (x)=\frac{e^{-\sqrt{m^2-a^2}|x|}}{4\pi|x|}\Big(a+m\beta +\Big(1+\sqrt{m^2-a^2}|x|\Big)\,i\alpha\cdot\frac{x}{|x|^2}\Big)\quad \text{for }x\in{\R}^3\setminus\{0\},
\end{equation}
where $\sqrt{m^2-a^2}$ is chosen with positive real part whenever $a\in({\mathbb C}\setminus{\mathbb R})\cup\big((-m,m)\times\{0\}\big)$. To guarantee the exponential decay of $\phi^a$ at $\infty$, from now on we assume that $a\in({\mathbb C}\setminus{\mathbb R})\cup\big((-m,m)\times\{0\}\big)$.
Given $G\in L^2({\R}^3)^4$ and $g\in L^2(\upsigma)^4$ we define
\begin{equation}\label{defi Phia}
\Phi^a(G,g)(x):=\int_{{\R}^3}\phi^a (x-y)\, G(y)\,dy
+\int_{\Sigma}\phi^a (x-y)g(y)\,d\upsigma(y)
\quad\text{for }x\in{\R}^3\setminus\Sigma.
\end{equation}
Then, $\Phi^a:L^2({\R}^3)^4\times L^2(\upsigma)^4\to L^2({\R}^3)^4$ is linear and bounded and $\Phi^a(G,0)\in H^1({\R}^3)^4$.
We also set
\begin{equation}
\Phi^a_\upsigma G:=\operatorname{tr}_{\upsigma}(\Phi^a(G,0))\in L^2(\upsigma)^4,
\end{equation}
being $\operatorname{tr}_{\upsigma}$ the trace operator on $\Sigma$.
Finally, given $x\in\Sigma$ we define
\begin{equation}
C_\upsigma^a g(x)
:=\lim_{\epsilon\searrow 0}\int_{\Sigma\cap\{|x-y|>\epsilon\}}\phi^a (x-y) g(y)\,d\upsigma(y)
\quad\text{and}\quad C^a_{\pm}g(x):=\lim_{\Omega_\pm\ni y \overset{nt}{\to}x}\Phi^a(0,g)(y),
\end{equation}
where $\Omega_\pm\ni y \overset{nt}{\to}x$ means that $y$ tends to $x$ non-tangentially from the interior/exterior of $\Omega$, respectively, i.e. $\Omega_+:=\Omega$ and $\Omega_-:={\R}^3\setminus\overline{\Omega}$. The operators $C_\upsigma^a$ and $C^a_\pm$ are linear and bounded in $L^2(\upsigma)^4$. Moreover, the following Plemelj--Sokhotski jump formulae hold:
\begin{equation}\label{Plemelj jump formulae}
C^a_\pm=\mp \frac{i}{2}(\alpha\cdot\nu)+C^a_\upsigma.
\end{equation}
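For later use, it may help to record two immediate consequences of the jump formulae, obtained by subtracting and adding the two sign choices (a routine verification):

```latex
% Subtracting and adding the two signs in \eqref{Plemelj jump formulae}:
\[
C^a_+ - C^a_- = -i(\alpha\cdot\nu),
\qquad
C^a_+ + C^a_- = 2\,C^a_\upsigma .
\]
% The first identity encodes the jump of \Phi^a(0,g) across \Sigma, while the
% second says that C^a_\upsigma is the average of the two boundary values.
```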
Let $\lambda_e\in{\mathbb R}$. Using $\Phi^a$, we define the electrostatic $\delta$-shell interaction appearing in Theorem \ref{Main theorem} as follows:
\begin{equation}\label{eq defi electro}
\begin{split}
&D(H+\lambda_e\delta_\Sigma):=\{\Phi^0(G,g): \,G\in L^2({\R}^3)^4,\, g \in L^2(\upsigma)^4,\, \lambda_e\Phi^0_\upsigma G=-(1+\lambda_e C^0_\upsigma)g\},\\
&(H+\lambda_e\delta_\Sigma)\varphi:=H\varphi+\lambda_e\frac{\varphi_++\varphi_-}{2}\,\upsigma\quad
\text{for }\varphi\in D(H+\lambda_e\delta_\Sigma),
\end{split}
\end{equation}
where $H\varphi$ on the right-hand side of the second equation in \eqref{eq defi electro} is understood in the sense of distributions and $\varphi_\pm$ denote the boundary traces of $\varphi$ when one approaches $\Sigma$ from $\Omega_\pm$. In particular, one has $(H+\lambda_e\delta_\Sigma)\varphi=G\in L^2({\R}^3)^4$ for all $\varphi=\Phi^0(G,g)\in D(H+\lambda_e\delta_\Sigma)$.
We should mention that one recovers the free Dirac operator in $H^1({\R}^3)^4$ when $\lambda_e=0$.
From \cite[Section 3.1]{amv2} we know that $H+\lambda_e\delta_\Sigma$ is self-adjoint for all $\lambda_e\neq \pm 2$. Besides, if $\lambda_e\neq0$,
given $a\in(-m,m)$ and $\varphi=\Phi^0(G,g)\in D(H+\lambda_e\delta_\Sigma)$,
\begin{equation}\label{BS principle}
(H+\lambda_e\delta_\Sigma-a)\varphi=0\quad\text{if and only if}\quad(\textstyle{\frac{1}{\lambda_e}}+C^a_\upsigma)g=0.
\end{equation}
This corresponds to the Birman--Schwinger principle in the electrostatic $\delta$-shell interaction setting. The case $\lambda_e=0$ is excluded because it corresponds to the free Dirac operator, which is well known to have no pure point spectrum. Moreover, the relation \eqref{BS principle} can be easily extended to the case of
$a\in({\mathbb C}\setminus{\mathbb R})\cup\big((-m,m)\times\{0\}\big)$ (one still has exponential decay of a fundamental solution of $H-a$).
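For the reader's convenience, here is a hedged sketch of how \eqref{BS principle} arises; the complete argument is in \cite{amv2}. If $(H+\lambda_e\delta_\Sigma-a)\varphi=0$ with $\varphi=\Phi^0(G,g)$, the same manipulations that lead to \eqref{Phi0(G,g)=Phia(F,g)} (applied with $F=0$) give $\varphi=\Phi^0(G,g)=\Phi^a(0,g)$, and then:

```latex
% Taking non-tangential traces and using the jump formulae
% \eqref{Plemelj jump formulae} on both sides of \Phi^0(G,g)=\Phi^a(0,g):
\[
\Phi^0_\upsigma G+C^0_\upsigma g=C^a_\upsigma g .
\]
% Plugging this into the domain condition
% \lambda_e\Phi^0_\upsigma G=-(1+\lambda_e C^0_\upsigma)g yields
\[
\lambda_e\big(C^a_\upsigma-C^0_\upsigma\big)g=-g-\lambda_e C^0_\upsigma g,
\quad\text{that is,}\quad
\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big)g=0 .
\]
```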
In the same vein, given $\lambda_s\in{\mathbb R}$, we define the Lorentz scalar $\delta$-shell interaction as follows:
\begin{equation}\label{eq defi scalar}
\begin{split}
&D(H+\lambda_s\beta\,\delta_\Sigma):=\{\Phi^0(G,g): \,G\in L^2({\R}^3)^4,\, g \in L^2(\upsigma)^4,\, \lambda_s\Phi^0_\upsigma G=-(\beta+\lambda_s C^0_\upsigma)g\},\\
&(H+\lambda_s\beta\,\delta_\Sigma)\varphi
:=H\varphi+\lambda_s\beta\,\frac{\varphi_++\varphi_-}{2}\,\upsigma\quad
\text{for }\varphi\in D(H+\lambda_s\beta\,\delta_\Sigma).
\end{split}
\end{equation}
From \cite[Section 5.1]{amv2} we know that $H+\lambda_s\beta\,\delta_\Sigma$ is self-adjoint for all $\lambda_s\in {\mathbb R} $. Besides,
given $\lambda_s\neq0$, $a\in({\mathbb C}\setminus{\mathbb R})\cup\big((-m,m)\times\{0\}\big)$ and $\varphi=\Phi^0(G,g)\in D(H+\lambda_s\beta\,\delta_\Sigma)$, arguing as in \eqref{BS principle} one gets
\begin{equation}\label{BS principle scalar}
(H+\lambda_s\beta\,\delta_\Sigma-a)\varphi=0\quad\text{if and only if}\quad(\textstyle{\frac{\beta}{\lambda_s}}+C^a_\upsigma)g=0.
\end{equation}
The following lemma describes the resolvent operator of the $\delta$-shell interactions presented in \eqref{eq defi electro} and \eqref{eq defi scalar}.
\begin{lemma}
Given $\lambda_e,\,\lambda_s\in{\mathbb R}$ with $\lambda_e\neq \pm 2$, $a\in{\mathbb C}\setminus{\mathbb R}$ and $F\in L^2({\R}^3)^4$, the following identities hold:
\begin{align}\label{resolvent H+lambda delta}
&(H+\lambda_e\delta_\Sigma-a)^{-1}F=(H-a)^{-1}F-\lambda_e\Phi^a\big(0,\left(1+\lambda_e C^a_\upsigma\right)^{-1}\Phi^a_\upsigma F\big),\\\label{resolvent H+lambda beta delta}
&(H+\lambda_s\beta\,\delta_\Sigma-a)^{-1}F=(H-a)^{-1}F-\lambda_s\Phi^a\big(0,\left(\beta+\lambda_s C^a_\upsigma\right)^{-1}\Phi^a_\upsigma F\big).
\end{align}
\end{lemma}
\begin{proof}
We will only show \eqref{resolvent H+lambda delta}, the proof of \eqref{resolvent H+lambda beta delta} is analogous.
Since $H+\lambda_e\delta_\Sigma$ is self-adjoint for $\lambda_e\neq \pm 2$, $(H+\lambda_e\delta_\Sigma-a)^{-1}$ is well-defined and bounded in $L^2({\R}^3)^4$. For $\lambda_e=0$ there is nothing to prove, so we assume $\lambda_e\neq 0$.
Let $\varphi=\Phi^0(G,g)\in D(H+\lambda_e\delta_\Sigma)$ as in \eqref{eq defi electro} and $F=(H+\lambda_e\delta_\Sigma-a)\varphi\in L^2({\R}^3)^4$. Then,
\begin{equation}\label{F-G=a Phi(G+g)}
F=(H+\lambda_e\delta_\Sigma-a)\Phi^0(G,g)=G-a\Phi^0(G,g).
\end{equation}
If we apply $H$ on both sides of \eqref{F-G=a Phi(G+g)} and we use that $H\Phi^0(G,g)=G+g\upsigma$ in the sense of distributions, we get $HF=HG-a(G+g\upsigma)$, that is, $(H-a)G=(H-a)F+aF+ag\upsigma$.
Convolving with $\phi^a$ the left and right hand sides of this last equation, we obtain
$G=F+a\Phi^a(F,0)+a\Phi^a(0,g)$, thus $G-F=a\Phi^a(F,g)$. This, combined with \eqref{F-G=a Phi(G+g)}, yields
\begin{equation}\label{Phi0(G,g)=Phia(F,g)}
\Phi^0(G,g)=\Phi^a(F,g).
\end{equation}
Therefore, taking non-tangential boundary values on $\Sigma$ from inside/outside of $\Omega$ in \eqref{Phi0(G,g)=Phia(F,g)} we obtain \begin{equation}
\Phi^0_\upsigma G+C^0_\pm g=\Phi^a_\upsigma F +C^a_\pm g.\end{equation}
Since $\Phi^0(G,g)\in D(H+\lambda_e\delta_\Sigma)$, thanks to \eqref{eq defi electro} and \eqref{Plemelj jump formulae} we conclude that
\begin{equation}\label{resolvent eq1}
\Phi_\upsigma^a F=-\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big) g.
\end{equation}
Since $a\in{\mathbb C}\setminus{\mathbb R}$ and $H+\lambda_e\delta_\Sigma$ is self-adjoint for $\lambda_e\neq\pm2$, by \eqref{BS principle} we see that $\text{Kernel}(\frac{1}{\lambda_e}+C^a_\upsigma)=\{0\}$. Moreover, using the ideas of the proof of \cite[Lemma 3.7]{amv1} and that $\lambda_e\neq \pm 2$, one can show that
$\frac{1}{\lambda_e}+C^a_\upsigma$ has closed range.
Finally, since we are taking the square root so that
\begin{equation}
\overline{\sqrt{m^2-a^2}}=\sqrt{m^2-\bar{a}^2},
\end{equation}
following \cite[Lemma 3.1]{amv1} we see that $\overline{(\phi^a)^t}(x)=\phi^{\bar{a}}(-x)$. Here, $(\phi^a)^t$ denotes the transpose matrix of $\phi^a$. Thus we conclude that $(\text{Range}(\frac{1}{\lambda_e}+C^a_\upsigma))^{\perp}=\text{Kernel}(\frac{1}{\lambda_e}+C^{\bar{a}}_\upsigma)=\{0\}$, and so $\frac{1}{\lambda_e}+C^a_\upsigma$ is invertible. Then, by \eqref{resolvent eq1}, we obtain
\begin{equation}\label{resolvent eq2}
g=-\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big)^{-1}\Phi_\upsigma^a F.
\end{equation}
Thanks to \eqref{Phi0(G,g)=Phia(F,g)} and \eqref{resolvent eq2}, we finally get
\begin{align}
(H+\lambda_e\delta_\Sigma-a)^{-1}F&=\varphi=\Phi^0(G,g)=\Phi^a(F,g)
=\Phi^a\Big( F,-\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big)^{-1}\Phi_\upsigma^a F\Big)\\
&=\Phi^a(F,0)-\lambda_e\Phi^a\big(0,\left(1+\lambda_e C^a_\upsigma\right)^{-1}\Phi^a_\upsigma F\big),
\end{align}
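The last equality above uses only the elementary operator identity, valid for $\lambda_e\neq0$:

```latex
% Factor out 1/\lambda_e inside the inverse:
\[
\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big)^{-1}
=\Big(\frac{1}{\lambda_e}\big(1+\lambda_e C^a_\upsigma\big)\Big)^{-1}
=\lambda_e\big(1+\lambda_e C^a_\upsigma\big)^{-1},
\]
% together with the linearity of \Phi^a in its two arguments:
\[
\Phi^a\Big(F,-\Big(\frac{1}{\lambda_e}+C^a_\upsigma\Big)^{-1}\Phi^a_\upsigma F\Big)
=\Phi^a(F,0)-\lambda_e\Phi^a\big(0,(1+\lambda_e C^a_\upsigma)^{-1}\Phi^a_\upsigma F\big).
\]
```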
and the lemma follows because $\Phi^a(\cdot,0)=(H-a)^{-1}$ as a bounded operator in $L^2({\R}^3)^4$.
\end{proof}
\subsection{Coupling the free Dirac operator with short range potentials as in \eqref{correc1}}\label{ss coupling Ve}
\mbox{}
Given $\mathbf{V}_{\!\epsilon}$ as in \eqref{def bigV}, set
\begin{equation}
H^e_\epsilon:=H+\mathbf{V}_{\!\epsilon}\qquad\text{and}\qquad
H^s_\epsilon:=H+\beta \mathbf{V}_{\!\epsilon}.
\end{equation}
Recall that these operators are self-adjoint on $H^1({\R}^3)^4$. In the following, we give the resolvent formulae for $H^e_\epsilon$ and $H^s_\epsilon$.
Throughout this section we make an abuse of notation. Recall that, given $G\in L^2({\R}^3)^4$ and $g\in L^2(\upsigma)^4$, in \eqref{defi Phia} we already defined $\Phi^a(G,g)$. However, now we make the identification $\Phi^a(\cdot)\equiv\Phi^a(\cdot,0)$, that is, in this section we identify $\Phi^a$ with an operator acting on $L^2({\R}^3)^4$ by always assuming that the second entry of $\Phi^a$ vanishes. Besides, in this section we use the symbol $\sigma(\cdot)$ to denote the spectrum of an operator; the reader should not confuse it with the symbol $\upsigma$ for the surface measure on $\S$.
\begin{proposition}\label{propo 28}
Let $\mathbf{u}_\epsilon$ and $\mathbf{v}_{\!\epsilon}$ be as in \eqref{eq u,v}. Then,
\begin{enumerate}[label=$(\roman*)$]
\item $a\in\rho(H^e_\epsilon)$ if and only if $-1\in\rho(\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$, where $\rho(\cdot)$ denotes the resolvent set,
\item $a\in\sigma_{pp}(H^e_\epsilon)$ if and only if $-1\in \sigma_{pp}(\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$, where $\sigma_{pp}(\cdot)$ denotes the pure point spectrum. Moreover, the multiplicity of $a$ as eigenvalue of $H^e_\epsilon$ coincides with the multiplicity of $-1$ as eigenvalue of $\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}$.
\end{enumerate}
Furthermore, the following resolvent formula holds:
\begin{equation}\label{Birman Shwinger}
(H^e_\epsilon-a)^{-1}=\Phi^a - \Phi^a \mathbf{v}_{\!\epsilon}\left(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a.
\end{equation}
\end{proposition}
\begin{proof}
To prove $(i)$ and $(ii)$ it is enough to verify that the assumptions of \cite[Lemma 1]{konnokuroda} are satisfied. That is, we just need to show that $a\in\sigma_{pp}(H^e_\epsilon)$ if and only if $-1\in \sigma_{pp}(\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$ and that there exists $a \in \rho(H^e_\epsilon)$ such that $-1\in \rho(\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$.
Assume that $a\in\sigma_{pp}(H^e_\epsilon)$. Then $(H+\mathbf{V}_{\!\epsilon}-a)F=0$ for some $F\in L^2({\R}^3)^4$ with $F\not\equiv 0$, so $(H-a)F=-\mathbf{V}_{\!\epsilon} F$. Using that $\sigma(H)=\sigma_{ess}(H)$, where $\sigma_{ess}(\cdot)$ denotes the essential spectrum, it is not hard to show that indeed $\mathbf{V}_{\!\epsilon} F\not\equiv0$. Since $\mathbf{V}_{\!\epsilon}=\mathbf{v}_{\!\epsilon} \mathbf{u}_\epsilon$, by setting $G=\mathbf{u}_\epsilon F\in L^2({\R}^3)^4$ we get that $G\not\equiv0$ and
\begin{equation}\label{(H-a)F=ve G}
(H-a)F=-\mathbf{v}_{\!\epsilon} G.
\end{equation}
From \cite[Theorem 4.7]{thaller} we know that $\sigma_{ess}(H+\mathbf{V}_{\!\epsilon})=\sigma_{ess}(H)=\sigma(H)$. Since $\sigma(H^e_\epsilon)$ is the disjoint union of the pure point spectrum and the essential spectrum, we deduce that $\sigma_{pp}(H^e_\epsilon)\subset\rho(H)$, which means that $(H-a)^{-1}=\Phi^a$ is a bounded operator on $L^2({\R}^3)^4$. By \eqref{(H-a)F=ve G},
$F=-\Phi^a \mathbf{v}_{\!\epsilon} G$. If we multiply both sides of this last equation by $\mathbf{u}_\epsilon$ we obtain $G=\mathbf{u}_\epsilon F=-\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon} G$, so $-1\in \sigma_{pp}(\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$ as desired.
Conversely, assume now that there exists a nontrivial $G\in L^2({\R}^3)^4$ such that $\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon} G=-G$. If we take $F=\Phi^a\mathbf{v}_{\!\epsilon} G\in L^2({\R}^3)^4$, we easily see that $F\not\equiv 0$ and $\mathbf{V}_{\!\epsilon} F=-(H-a)F$, which means that $a$ is an eigenvalue of $H^e_\epsilon$.
To conclude the first part of the proof, it remains to show that there exists $a \in \rho(H^e_\epsilon)$ such that $-1 \in \rho(\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon})$. By \cite[Theorem 4.23]{thaller} we know that $\sigma_{pp}(H^e_\epsilon)$ is a finite sequence contained in $(-m,m)$, so we can choose $a\in (-m,m)\cap \rho(H^e_\epsilon)$. Moreover, by \cite[Lemma 2]{sebaabsorption},
$\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon}$ is a compact operator. Then, by Fredholm's alternative, either $-1\in \sigma_{pp}(\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon})$ or $-1 \in\rho(\mathbf{u}_\epsilon \Phi^a \mathbf{v}_{\!\epsilon})$. But we can discard the first option, otherwise $a\in\sigma_{pp}(H^e_\epsilon)$, in contradiction with $a\in \rho(H^e_\epsilon)$.
Let us now prove \eqref{Birman Shwinger}. Writing $\mathbf{V}_{\!\epsilon}=\mathbf{v}_{\!\epsilon}\mathbf{u}_\epsilon$ and using that $(H-a)^{-1}=\Phi^a$, we have
\begin{align}
(H_\epsilon^e&-a)\big(\Phi^a - \Phi^a \mathbf{v}_{\!\epsilon}(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})^{-1}\mathbf{u}_\epsilon \Phi^a\big)\\
&= 1-\mathbf{v}_{\!\epsilon}\left(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a+\mathbf{v}_{\!\epsilon}\mathbf{u}_\epsilon\Phi^a
-\mathbf{v}_{\!\epsilon}(-1+1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})\left(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a\\
&=1-\mathbf{v}_{\!\epsilon}\left(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a+\mathbf{v}_{\!\epsilon}\mathbf{u}_\epsilon\Phi^a+\mathbf{v}_{\!\epsilon}\left(1+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a-\mathbf{v}_{\!\epsilon}\mathbf{u}_\epsilon\Phi^a=1,
\end{align}
as desired. This completes the proof of the proposition.
\end{proof}
The following result can be proved in the same way; we leave the details to the reader.
\begin{proposition}\label{propo 28 scalar}
Let $\mathbf{u}_\epsilon$ and $\mathbf{v}_{\!\epsilon}$ be as in \eqref{eq u,v}. Then,
\begin{enumerate}[label=$(\roman*)$]
\item $a\in\rho(H^s_\epsilon)$ if and only if $-1\in\rho(\beta\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$,
\item $a\in\sigma_{pp}(H^s_\epsilon)$ if and only if $-1\in \sigma_{pp}(\beta \mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon})$. Moreover, the multiplicity of $a$ as eigenvalue of $H^s_\epsilon$ coincides with the multiplicity of $-1$ as eigenvalue of $\beta\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}$.
\end{enumerate}
Furthermore, the following resolvent formula holds:
\begin{equation}\label{Birman Shwinger scalar}
(H^s_\epsilon-a)^{-1}=\Phi^a - \Phi^a \mathbf{v}_{\!\epsilon}\left(\beta+\mathbf{u}_\epsilon\Phi^a\mathbf{v}_{\!\epsilon}\right)^{-1}\mathbf{u}_\epsilon \Phi^a.
\end{equation}
\end{proposition}
\section{The main decomposition and the proof of Theorem \ref{Main theorem}}\label{s main deco}
Following the ideas in \cite{sebaklein,approximation}, the first key step to prove Theorem \ref{Main theorem} is to decompose
$(H^e_\epsilon -a)^{-1}$ and $(H^s_\epsilon -a)^{-1}$, using a scaling operator, in terms of the operators $A_\epsilon(a)$, $B_\epsilon(a)$ and $C_\epsilon(a)$ introduced below (see Lemma \ref{lem mel}).
Let $\eta_0>0$ be some constant small enough to be fixed later on. In particular, we take $\eta_0$ so that \eqref{C^2 domain properties} holds for all $0<\epsilon\leq\eta_0$. Given $0<\epsilon\leq\eta_0$, define
\begin{align}
&\mathcal{I}_\epsilon:L^2(\Sigma\times (-\epsilon,\epsilon))^4\to L^2(\Omega_\epsilon)^4
\quad\text{by}\quad (\mathcal{I}_\epsilon f)(x_\S+t\nu(x_\S)):=f(x_\S,t),\\
&\mathcal{S}_\epsilon:L^2(\Sigma\times (-1,1))^4\to L^2(\Sigma\times (-\epsilon,\epsilon))^4
\quad\text{by}\quad (\mathcal{S}_\epsilon g)(x_\S,t):=\frac{1}{\sqrt{\epsilon}}\,g\Big(x_\S,\frac{t}{\epsilon}\Big).
\end{align}
Thanks to the regularity of $\Sigma$, $\mathcal{I}_\epsilon$ is well-defined, bounded and invertible for all $0<\epsilon\leq\eta_0$ if $\eta_0$ is small enough. Note also that $\mathcal{S}_\epsilon$ is a unitary and invertible operator.
Let $0<\eta\leq\eta_0$, $V\in L^\infty({\mathbb R})$ with ${\rm supp} V\subset[-\eta,\eta]$ and $u,v\in L^\infty({\mathbb R})$ be the functions with support in $[-1,1]$ introduced in \eqref{eq u,v}, that is,
\begin{equation}\label{correc6}
u(t):=|\eta V (\eta t)|^{1/2}\quad\text{and}\quad v(t):=\mathop{\textrm{sign}}(V(\eta t))u(t).
\end{equation}
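As a concrete illustration (the constant value $c$ below is an assumption made only for this example), take $V=c\,\chi_{[-\eta,\eta]}$ with $c\neq0$, which indeed satisfies ${\rm supp}\,V\subset[-\eta,\eta]$. Then \eqref{correc6} gives:

```latex
% Constant potential (illustration only): V = c on [-eta,eta], zero elsewhere.
\[
u(t)=\sqrt{\eta|c|}\;\chi_{[-1,1]}(t),
\qquad
v(t)=\mathop{\textrm{sign}}(c)\,u(t),
\]
% so that the product vu recovers the rescaled potential and its total mass:
\[
v(t)u(t)=\eta\,c\,\chi_{[-1,1]}(t)
\quad\text{and}\quad
\int_{\mathbb R}v\,u=2\eta c=\int_{\mathbb R}V .
\]
```

In general, $v(t)u(t)=\eta V(\eta t)$, so the pair $(u,v)$ is a pointwise factorization of the rescaled potential.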
Using the notation related to \eqref{eqn:coaera}, for $0<\epsilon\leq\eta_0$ we consider the integral operators
\begin{equation}\label{ABC espacios}
\begin{split}
&A_\epsilon(a):L^2(\Sigma\times(-1,1))^4\to L^2({\R}^3)^4,\\
&B_\epsilon(a):L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4,\\
&C_\epsilon(a):L^2({\R}^3)^4\to L^2(\Sigma\times(-1,1))^4
\end{split}
\end{equation}
defined by
\begin{equation}\label{ABCepsilon}
\begin{split}
&(A_\epsilon(a)g)(x):=\int_{-1}^1\int_\Sigma\phi^a(x-y_\S - \epsilon s \nu (y_\S))v(s) \det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,\\
&(B_\epsilon (a)g)(x_\S ,t):= u(t)\int_{-1}^1\int_\S\phi^a (x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))v(s)\\
&\hskip200pt \times \det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,\\
&(C_\epsilon(a)g)(x_\S,t):=u(t)\int_{{\R}^3}\phi^a(x_\S+\epsilon t\nu(x_\S)-y)g(y)\,dy.
\end{split}
\end{equation}
Recall that, given $F\in L^2({\R}^3)^4$ and $f\in L^2(\upsigma)^4$, in \eqref{defi Phia} we defined $\Phi^a(F,f)$. However, in Section \ref{ss coupling Ve} we made the identification $\Phi^a(\cdot)\equiv\Phi^a(\cdot,0)$, which enabled us to write $(H-a)^{-1}=\Phi^a$.
Here, and in the sequel, we recover the initial definition for $\Phi^a$ given in \eqref{defi Phia} and we assume that $a\in{\mathbb C}\setminus{\mathbb R}$; now we must write $(H-a)^{-1}=\Phi^a(\cdot,0)$, which is a bounded operator in $L^2({\R}^3)^4$.
Proceeding as in the proof of \cite[Lemma 3.2]{approximation}, one can show the following result.
\begin{lemma}\label{lem mel}
The following operator identities hold for all $0<\epsilon\leq\eta$:
\begin{equation}\label{correc2}
\begin{split}
&A_\epsilon(a)=\Phi^a(\cdot,0)\mathbf{v}_{\!\epsilon}\,\mathcal{I}_\epsilon\,\mathcal{S}_\epsilon,\\
&B_\epsilon(a) =\mathcal{S}_\epsilon^{-1}\mathcal{I}_\epsilon^{-1} \mathbf{u}_\epsilon \,\Phi^a(\cdot,0) \mathbf{v}_{\!\epsilon}\,\mathcal{I}_\epsilon\,\mathcal{S}_\epsilon,\\
&C_\epsilon(a)=\mathcal{S}_\epsilon^{-1}\mathcal{I}_\epsilon^{-1} \mathbf{u}_\epsilon\, \Phi^a(\cdot,0).
\end{split}
\end{equation}
Moreover, the following resolvent formulae hold:
\begin{align}\label{resolvent formula 2}
&(H^e_\epsilon -a)^{-1}
=(H-a)^{-1}+A_\epsilon(a)\big(1+B_\epsilon(a)\big)^{-1}C_\epsilon(a),\\\label{resolvent formula 2scalar}
&(H^s_\epsilon -a)^{-1}
=(H-a)^{-1}
+A_\epsilon(a)\big(\beta+B_\epsilon(a)\big)^{-1}C_\epsilon(a).
\end{align}
\end{lemma}
In \eqref{correc2}, $A_\epsilon(a)=\Phi^a(\cdot,0)\mathbf{v}_{\!\epsilon}\,\mathcal{I}_\epsilon\,\mathcal{S}_\epsilon$ means that
$A_\epsilon(a)g=\Phi^a(\mathbf{v}_{\!\epsilon}\,\mathcal{I}_\epsilon
\,\mathcal{S}_\epsilon\, g,0)$ for all $g\in L^2(\Sigma\times(-1,1))^4$, and similarly for $B_\epsilon(a)$ and $C_\epsilon(a)$.
Since $\mathcal{I}_\epsilon$ and $\mathcal{S}_\epsilon$ are bounded and invertible, $V\in L^\infty({\mathbb R})$ is supported in $[-\eta,\eta]$ and $\Phi^a(\cdot,0)$ is bounded by assumption, from \eqref{correc2} we deduce that $A_\epsilon(a)$, $B_\epsilon(a)$ and $C_\epsilon(a)$ are well-defined and bounded, so \eqref{ABC espacios} is fully justified.
Once \eqref{correc2} is proved, the resolvent formulae \eqref{resolvent formula 2} and \eqref{resolvent formula 2scalar} follow from \eqref{Birman Shwinger} and \eqref{Birman Shwinger scalar}, respectively. We stress that, in \eqref{Birman Shwinger} and \eqref{Birman Shwinger scalar}, there is the abuse of notation in the definition of $\Phi^a$ commented before.
Lemma \ref{lem mel} connects $(H^e_\epsilon -a)^{-1}$ and $(H^s_\epsilon -a)^{-1}$ to $A_\epsilon(a)$, $B_\epsilon(a)$ and $C_\epsilon(a)$. When $\epsilon\to0$, the limits of the former are likewise connected to the limits of the latter. We now introduce the limit operators of $A_\epsilon(a)$, $B_\epsilon(a)$ and $C_\epsilon(a)$ as $\epsilon\to0$.
Let
\begin{equation}\label{ABC espacios2}
\begin{split}
&A_0(a) : L^2(\Sigma\times (-1,1))^4\to L^2({\R}^3)^4,\\
&B_0(a) : L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4,\\
&B': L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4,\\
&C_0(a):L^2({\R}^3)^4\to L^2(\Sigma\times (-1,1))^4
\end{split}
\end{equation}
be the operators given by
\begin{equation}\label{limit operators defi}
\begin{split}
&(A_0(a) g)(x):= \int_{-1}^1 \int_\Sigma\phi^a(x-y_\Sigma)v(s)g(y_\Sigma,s)\,d\upsigma(y_\Sigma)\,ds,\\
&(B_0(a) g)(x_\S,t):=\lim_{\epsilon\to 0}u(t)\int_{-1}^1\int_{|x_\S-y_\S|>\epsilon}\phi^a (x_\S -y_\S )v(s) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,\\
&(B'g)(x_\S,t):=(\alpha\cdot \nu(x_\S))\,\frac{i}{2}\,u(t)\int_{-1}^1 \mathop{\textrm{sign}}(t-s)v(s) g(x_\S,s)\,ds,\\
&(C_0(a) g)(x_\Sigma,t):=u(t)\int_{{\R}^3}\phi^a(x_\Sigma-y)g(y)\,dy.
\end{split}
\end{equation}
The next theorem corresponds to the core of this article. Its proof is quite technical and is carried out in Sections \ref{ss C}, \ref{ss B} and \ref{ss A}.
We also postpone the proof of \eqref{ABC espacios2} to those sections, where each operator is studied in detail. In any case, the boundedness of $B'$ is immediate.
\begin{theorem}\label{conv AB th}
The following convergences of operators hold in the strong sense:
\begin{eqnarray}
&&A_\epsilon(a)\to A_0(a)\quad\text{when }\epsilon\to0,\label{convergence A}\\
&&B_\epsilon(a)\to B_0(a)+B'\quad\text{when }\epsilon\to0,\label{conv B th}
\\
&&C_\epsilon(a)\to C_0(a)\quad\text{when }\epsilon\to0.\label{convergence C}
\end{eqnarray}
\end{theorem}
The proof of the following corollary is also postponed to Section \ref{s proof corol}. It combines Theorem \ref{conv AB th}, \eqref{resolvent formula 2} and \eqref{resolvent formula 2scalar}, but it requires some fine estimates developed in Sections \ref{ss C}, \ref{ss B} and \ref{ss A}.
\begin{corollary}\label{convergence main}
There exist $\eta_0,\,\delta>0$ small enough only depending on $\S$ such that, for any $a\in{\mathbb C}\setminus{\mathbb R}$ with $|a|\leq1$, $0<\eta\leq\eta_0$ and $(\delta,\eta)$-small $V$ (see {\em Definition \ref{deltasmall}}), the following convergences of operators hold in the strong sense:
\begin{align}
&(H+\mathbf{V}_{\!\epsilon}-a)^{-1}\to
(H-a)^{-1}+A_0(a)\big(1+B_0(a)+B'\big)^{-1}C_0(a)\quad\text{when }\epsilon\to0,\\
&(H+\beta\mathbf{V}_{\!\epsilon}-a)^{-1}\to
(H-a)^{-1}+A_0(a)\big(\beta+B_0(a)+B'\big)^{-1}C_0(a)\quad\text{when }\epsilon\to0.
\end{align}
In particular, $\big(1+B_0(a)+B'\big)^{-1}$ and $\big(\beta+B_0(a)+B'\big)^{-1}$ are well-defined bounded operators in $L^2(\Sigma\times(-1,1))^4$.
\end{corollary}
\subsection{Proof of Theorem \ref{Main theorem}}\label{s2 ss1}
\mbox{}
Thanks to \cite[Theorem VIII.19]{reedsimon1}, to prove the theorem it is enough to show that, for some $a\in{\mathbb C}\setminus{\mathbb R}$, the following convergences of operators hold in the strong sense:
\begin{align}\label{main eq*1}
&(H+\mathbf{V}_{\!\epsilon}-a)^{-1}\to(H+\lambda_e\delta_\Sigma-a)^{-1}\quad\text{when }\epsilon\to0,\\\label{main eq*2}
&(H+\beta\mathbf{V}_{\!\epsilon}-a)^{-1}\to(H+\lambda_s\beta\delta_\Sigma-a)^{-1}\quad\text{when }\epsilon\to0.
\end{align}
Thus, from now on, we fix $a\in{\mathbb C}\setminus{\mathbb R}$ with $|a|\leq1$.
We introduce the operators \begin{equation}\widehat{V}: L^2(\Sigma\times (-1,1))^4\to L^2(\Sigma)^4\quad\text{and}\quad\widehat{U}: L^2(\Sigma)^4\to L^2(\Sigma\times (-1,1))^4\end{equation} given by
\[
\widehat{V}f(x_\Sigma):=\int_{-1}^1 v(s)\,f(x_\Sigma , s) \, ds
\quad\text{and}\quad
\widehat{U}f(x_\Sigma , t):=u(t)\,f(x_\Sigma).
\]
Observe that, by Fubini's theorem,
\begin{equation}\label{ABC_0 aa}
A_0(a) = \Phi^a(0,\cdot)\widehat{V},\qquad
B_0(a)=\widehat{U}{C^a_\upsigma}\widehat{V},\qquad
C_0(a)=\widehat{U}\Phi^a_\upsigma.
\end{equation}
Hence, from \Cref{convergence main} and \eqref{ABC_0 aa} we deduce that, in the strong sense,
\begin{align}\label{eq final}
&(H+\mathbf{V}_{\!\epsilon}-a)^{-1}\to (H-a)^{-1}+\Phi^a(0,\cdot) \widehat{V}\big(1+\widehat{U}C_\upsigma^a\widehat{V}+B'\big)^{-1}
\widehat{U}\Phi^a_\upsigma\quad\text{when }\epsilon\to0,\\ \label{eq final'}
&(H+\beta\mathbf{V}_{\!\epsilon}-a)^{-1}\to (H-a)^{-1}+\Phi^a(0,\cdot) \widehat{V}\big(\beta+\widehat{U}C_\upsigma^a
\widehat{V}+B'\big)^{-1}\widehat{U}\Phi^a_\upsigma\quad\text{when }\epsilon\to0.
\end{align}
For convenience of notation, set
\begin{equation}
\widetilde{\mathcal{K}}g(x_\S,t):=\mathcal{K}_V(g(x_\S,\cdot))(t)\quad\text{ for $g\in L^2(\S\times(-1,1))$,}
\end{equation}
where $\mathcal{K}_V$ is as in \eqref{correc3}. Then, we get
\begin{equation}
1+ B'=\mathbb{I}_4+(\alpha\cdot\nu) \widetilde{\mathcal{K}}\mathbb{I}_4=\left(\begin{matrix}
\mathbb{I}_2 & (\sigma \cdot\nu) \widetilde{\mathcal{K}}\mathbb{I}_2\\
(\sigma \cdot\nu) \widetilde{\mathcal{K}}\mathbb{I}_2 & \mathbb{I}_2
\end{matrix}\right).
\end{equation}
Here, $\sigma:=(\sigma_1,\sigma_2,\sigma_3)$ (see \eqref{paulimatrices}), $\mathbb{I}_4$ denotes the $4\times4$ identity matrix and $\widetilde{\mathcal{K}}\mathbb{I}_4$ denotes the diagonal $4\times4$ operator matrix whose nontrivial entries are $\widetilde{\mathcal{K}}$, and analogously for $\widetilde{\mathcal{K}}\mathbb{I}_2$.
Since the operators that compose the matrix $1+B'$ commute, if we set $\mathcal{K}:=\widetilde{\mathcal{K}}\mathbb{I}_4$, we get
\begin{equation}\label{correc4}
\begin{split}
(1+B')^{-1}&=(1-\widetilde{\mathcal{K}}^2)^{-1}\otimes\left(\begin{matrix}
\mathbb{I}_2 & -(\sigma \cdot\nu) \widetilde{\mathcal{K}}\mathbb{I}_2\\
-(\sigma \cdot\nu) \widetilde{\mathcal{K}}\mathbb{I}_2 & \mathbb{I}_2
\end{matrix}\right)\\
&=(1-\mathcal{K}^2)^{-1}-(\alpha\cdot \nu) (1-\mathcal{K}^2)^{-1} \mathcal{K}.
\end{split}
\end{equation}
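As a consistency check (a routine verification), one can multiply $1+B'=1+(\alpha\cdot\nu)\mathcal{K}$ by the right-hand side of \eqref{correc4}, using that $(\alpha\cdot\nu)^2=1$ and that $\mathcal{K}$, acting only in the $t$-variable, commutes with multiplication by $\alpha\cdot\nu(x_\S)$:

```latex
% Multiplying out, the two cross terms in (alpha.nu) cancel, and
% ((alpha.nu)K)((alpha.nu)K) = K^2:
\[
\big(1+(\alpha\cdot\nu)\mathcal{K}\big)
\Big((1-\mathcal{K}^2)^{-1}-(\alpha\cdot\nu)(1-\mathcal{K}^2)^{-1}\mathcal{K}\Big)
=(1-\mathcal{K}^2)^{-1}-\mathcal{K}^2(1-\mathcal{K}^2)^{-1}=1 .
\]
```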
With this at hand, we can compute
\begin{equation}\label{final eq1}
\begin{split}
(1+\widehat{U}C^a_\upsigma \widehat{V}+B')^{-1}
&=\Big(1+(1+B')^{-1}\widehat{U}C_\upsigma^a\widehat{V}\Big)^{-1}(1+B')^{-1}\\
&=\Big(1+(1-\mathcal{K}^2)^{-1}
\widehat{U}C_\upsigma^a\widehat{V}-(\alpha\cdot\nu)
(1-\mathcal{K}^2)^{-1}\mathcal{K} \widehat{U}C_\upsigma^a\widehat{V}\Big)^{-1}\\
&\hskip120pt\circ\Big((1-\mathcal{K}^2)^{-1}-(\alpha \cdot\nu)
(1-\mathcal{K}^2)^{-1} \mathcal{K}\Big).
\end{split}
\end{equation}
Note that
\begin{equation}
\begin{split}
\widehat{V}\Big(1+(1-\mathcal{K}^2)^{-1}\widehat{U}C^a_\upsigma&
\widehat{V}-(\alpha\cdot\nu) (1-\mathcal{K}^2)^{-1}\mathcal{K} \widehat{U}C^a_\upsigma\widehat{V}\Big)\\
&=\Big(1+\widehat{V}(1-\mathcal{K}^2)^{-1}\widehat{U} C^a_\upsigma-(\alpha\cdot\nu) \widehat{V}(1-\mathcal{K}^2)^{-1}\mathcal{K}\widehat{U}C_\upsigma^a\Big){\widehat{V}},
\end{split}
\end{equation}
which obviously yields
\begin{equation}\label{final eq2}
\begin{split}
\widehat{V}\Big(1+(1-\mathcal{K}^2)^{-1}\widehat{U}C^a_\upsigma&
\widehat{V}-(\alpha\cdot\nu) (1-\mathcal{K}^2)^{-1}\mathcal{K} \widehat{U}C^a_\upsigma\widehat{V}\Big)^{-1}\\
&=\Big(1+\widehat{V}(1-\mathcal{K}^2)^{-1}\widehat{U} C^a_\upsigma-(\alpha\cdot\nu) \widehat{V}(1-\mathcal{K}^2)^{-1}\mathcal{K}\widehat{U}C_\upsigma^a\Big)^{-1}{\widehat{V}}.
\end{split}
\end{equation}
Besides, by the definition of $\mathcal{K}_V$ in \eqref{correc3}, we see that
\begin{equation}\label{final eq3}
\begin{split}
\widehat{V}(1-\mathcal{K}^2)^{-1}\widehat{U}&
=\Big({\int_{\mathbb R}\!v\,(1-\mathcal{K}_V^2)^{-1}u}\Big)\mathbb{I}_4,\\
\widehat{V}(1-\mathcal{K}^2)^{-1}\mathcal{K}\widehat{U}&
=\Big({\int_{\mathbb R}\!v\,(1-\mathcal{K}_V^2)^{-1}\mathcal{K}_V u}\Big)\mathbb{I}_4.
\end{split}
\end{equation}
From \eqref{def lambda elec} in Theorem \ref{Main theorem}, $\lambda_e=\int_{\mathbb R}\!v\,(1-\mathcal{K}_V^2)^{-1}u$. Observe also that $\int_{\mathbb R}\!v\,(1-\mathcal{K}_V^2)^{-1}\mathcal{K}_V u=0$. Hence, combining \eqref{final eq2} and \eqref{final eq3} we have that
\begin{equation}\label{final eq4}
\widehat{V}\Big(1+(1-\mathcal{K}^2)^{-1}
\widehat{U}C^a_\upsigma\widehat{V}-(\alpha\cdot\nu) (1-\mathcal{K}^2)^{-1}\mathcal{K} \widehat{U}C^a_\upsigma\widehat{V}\Big)^{-1}=(1+\lambda_e C_\sigma^a)^{-1}\widehat{V}.
\end{equation}
Then, from \eqref{final eq1}, \eqref{final eq4} and \eqref{final eq3}, we finally get
\[
\Phi^a(0,\cdot)\widehat{V}(1+\widehat{U}C_\upsigma^a\widehat{V}+B')^{-1}\widehat{U}\Phi^a_\upsigma
= \Phi^a(0,\cdot)(1+\lambda_e C_\upsigma^a)^{-1} \lambda_e \Phi^a_\upsigma.
\]
This last identity combined with \eqref{eq final} and \eqref{resolvent H+lambda delta} yields \eqref{main eq*1}.
The proof of \eqref{main eq*2} follows along the same lines. Similarly to \eqref{correc4},
\begin{equation}
(\beta+B')^{-1}=(1+\mathcal{K}^2)^{-1}\beta+(\alpha\cdot\nu)(1+\mathcal{K}^2)^{-1}\mathcal{K}.
\end{equation}
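A quick way to check the form of this inverse is to square $\beta+B'$: since $\beta$ anticommutes with $\alpha\cdot\nu$ while $\mathcal{K}$ commutes with both,
\begin{equation*}
(\beta+B')^2=\beta^2+\beta(\alpha\cdot\nu)\mathcal{K}+(\alpha\cdot\nu)\mathcal{K}\beta+(\alpha\cdot\nu)^2\mathcal{K}^2
=1+\mathcal{K}^2,
\end{equation*}
so $(\beta+B')^{-1}=(1+\mathcal{K}^2)^{-1}(\beta+B')$.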
One can then make the computations analogous to \eqref{final eq1}, \eqref{final eq2}, \eqref{final eq3} and \eqref{final eq4}. Since $\lambda_s=\int_{\mathbb R}\!v\,(1+\mathcal{K}_V^2)^{-1}u$, we now get
\[
\Phi^a(0,\cdot)\widehat{V}(\beta+\widehat{U}C_\upsigma^a\widehat{V}+B')^{-1}\widehat{U}\Phi^a_\upsigma
= \Phi^a(0,\cdot)(\beta+\lambda_s C_\upsigma^a)^{-1} \lambda_s \Phi^a_\upsigma.
\]
From this, \eqref{eq final'} and \eqref{resolvent H+lambda beta delta} we obtain \eqref{main eq*2}. This finishes the proof of \Cref{Main theorem}, except for the boundedness stated in \eqref{ABC espacios2}, the proof of Corollary \ref{convergence main} in Section \ref{s proof corol}, and Theorem \ref{conv AB th}, whose proof is split as follows: \eqref{convergence A} in Section \ref{ss A}, \eqref{conv B th} in Section \ref{ss B} and \eqref{convergence C} in Section \ref{ss C}.
\section{Proof of \eqref{convergence C}: $C_\epsilon (a)\to C_0(a)$ in the strong sense when $\epsilon\to0$} \label{ss C}
Recall from \eqref{ABCepsilon} and \eqref{limit operators defi} that $C_\epsilon(a)$ with $0<\epsilon\leq\eta_0$ and $C_0(a)$ are defined by
\begin{equation}
\begin{split}
&(C_\epsilon(a)g)(x_\S,t)=u(t)\int_{{\R}^3}\phi^a(x_\S+\epsilon t\nu(x_\S)-y)g(y)\,dy,\\
&(C_0(a)g)(x_\S,t)=u(t)\int_{{\R}^3}\phi^a(x_\S-y)g(y)\,dy.
\end{split}
\end{equation}
Let us first show that $C_\epsilon(a)$ is bounded from $L^2({\R}^3)^4$ to $L^2(\Sigma\times(-1,1))^4$ with norm uniformly bounded for $0\leq\epsilon\leq\eta_0$.
For this purpose, we write
\begin{equation}\label{trace Sobolev 1}
(C_\epsilon(a)g)(x_\S,t)=u(t)(\phi^a*g)(x_\S+\epsilon t\nu(x_\S)),
\end{equation}
where $\phi^a*g$ denotes the convolution of the matrix-valued function $\phi^a$ with the vector-valued function $g\in L^2({\R}^3)^4$.
Since we are assuming that $a\in{\mathbb C}\setminus{\mathbb R}$ and, in the definition of $\phi^a$, we are taking $\sqrt{m^2-a^2}$ with positive real part, the same arguments as the ones in the proof of \cite[Lemma 2.8]{amv1} (essentially Plancherel's theorem) show that
\begin{equation}\|\phi^a*g\|_{H^1({\R}^3)^4}
\leq C\|g\|_{L^2({\R}^3)^4}\quad\text{for all }g\in L^2({\R}^3)^4,\end{equation}
where $C>0$ only depends on $a$. Besides, thanks to the $C^2$ regularity of $\S$, if $\eta_0$ is small enough it is not hard to show that the Sobolev trace inequality from $H^1({\R}^3)^4$ to $L^2(\S_{\epsilon t})^4$ holds for all $0\leq\epsilon\leq\eta_0$ and $t\in[-1,1]$ with a constant only depending on $\eta_0$ (and $\S$, of course). Combining these two facts, we obtain that
\begin{equation}\label{trace Sobolev}
\|\phi^a*g\|_{L^2(\S_{\epsilon t})^4}
\leq C\|g\|_{L^2({\R}^3)^4}\quad\text{for all $g\in L^2({\R}^3)^4$, $0\leq\epsilon\leq\eta_0$ and $t\in[-1,1]$}.
\end{equation}
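For completeness, let us sketch the Plancherel computation behind the $H^1$ bound above, assuming the standard formula (used, e.g., in \cite{amv1}) for the Fourier transform of the fundamental solution $\phi^a$ of the free Dirac operator minus $a$:
\begin{equation*}
\widehat{\phi^a}(\xi)=\frac{\alpha\cdot\xi+m\beta+a\,\mathbb{I}_4}{|\xi|^2+m^2-a^2}\,,
\qquad\text{hence}\quad
|\widehat{\phi^a}(\xi)|\leq\frac{C_a}{1+|\xi|}\quad\text{for all }\xi\in{\R}^3,
\end{equation*}
since $a\in{\mathbb C}\setminus{\mathbb R}$ keeps the denominator away from zero. Then, by Plancherel's theorem,
\begin{equation*}
\|\phi^a*g\|^2_{H^1({\R}^3)^4}
=\int_{{\R}^3}(1+|\xi|^2)\,\big|\widehat{\phi^a}(\xi)\widehat{g}(\xi)\big|^2\,d\xi
\leq C\|g\|^2_{L^2({\R}^3)^4}.
\end{equation*}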
By Proposition \ref{weingarten map}, if $\eta_0$ is small enough there exists $C>0$ such that
\begin{equation}\label{trace Sobolev 2}
C^{-1}\leq\det(1-\epsilon t W(P_\S x))\leq C\quad\text{for all $0<\epsilon\leq\eta_0$, $t\in(-1,1)$ and $x\in\S_{\epsilon t}$}.
\end{equation}
Therefore, an application of \eqref{trace Sobolev 1}, \eqref{eqn:coaera2}, \eqref{trace Sobolev 2} and \eqref{trace Sobolev} finally yields
\begin{equation}
\begin{split}
\|C_\epsilon(a)g\|^2_{L^2(\Sigma\times(-1,1))^4}
&=\int_{-1}^1\int_\S\big|u(t)(\phi^a*g)(x_\S+\epsilon t\nu(x_\S))\big|^2d\upsigma(x_\S)\,dt\\
&\leq\|u\|_{L^\infty({\mathbb R})}^2\int_{-1}^1\int_{\S_{\epsilon t}}
\big|\det(1-\epsilon t W(P_\S x))^{-1/2}(\phi^a*g)(x)\big|^2d\upsigma_{\epsilon t}(x)\,dt\\
&\leq C\|u\|_{L^\infty({\mathbb R})}^2\int_{-1}^1
\|\phi^a*g\|_{L^2(\S_{\epsilon t})^4}^2\,dt
\leq C\|u\|_{L^\infty({\mathbb R})}^2
\|g\|_{L^2({\R}^3)^4}^2.
\end{split}
\end{equation}
That is, if $\eta_0$ is small enough there exists $C_1>0$ only depending on $\eta_0$ and $a$ such that
\begin{equation}\label{unif estimate Cepsilon}
\|C_\epsilon(a)\|_{L^2({\R}^3)^4\to L^2(\Sigma\times(-1,1))^4}
\leq C_1\|u\|_{L^\infty({\mathbb R})}
\quad\text{for all $0\leq\epsilon\leq\eta_0$.}
\end{equation}
In particular, the boundedness stated in \eqref{ABC espacios2} holds for $C_0(a)$.
In order to prove the strong convergence of $C_\epsilon(a)$ to $C_0(a)$ when $\epsilon\to0$, fix $g\in L^2({\R}^3)^4$. We must show that, given $\delta>0$, there exists $\epsilon_0>0$ such that
\begin{equation}\label{case C eq0}
\|C_\epsilon(a)g-C_0(a)g\|_{L^2(\Sigma\times(-1,1))^4}
\leq\delta\quad\text{for all }0\leq\epsilon\leq\epsilon_0.
\end{equation}
For every $0<d\leq\eta_0$, using \eqref{unif estimate Cepsilon} we can estimate
\begin{equation}\label{case C eq1}
\begin{split}
\|C_\epsilon(a)g-&C_0(a)g\|_{L^2(\Sigma\times(-1,1))^4}\\
&\leq\|C_\epsilon(a)(\chi_{\Omega_d}g)\|_{L^2(\Sigma\times(-1,1))^4}
+\|C_0(a)(\chi_{\Omega_d}g)\|_{L^2(\Sigma\times(-1,1))^4}\\
&\quad+\|(C_\epsilon(a)-C_0(a))(\chi_{{\R}^3\setminus\Omega_d}g)\|_{L^2(\Sigma\times(-1,1))^4}\\
&\leq 2C_1\|u\|_{L^\infty({\mathbb R})}\|\chi_{\Omega_d}g\|_{L^2({\R}^3)^4}
+\|(C_\epsilon(a)-C_0(a))(\chi_{{\R}^3\setminus\Omega_d}g)\|_{L^2(\Sigma\times(-1,1))^4}.
\end{split}
\end{equation}
On one hand, since $g\in L^2({\R}^3)^4$ and $\mathcal{L}(\S)=0$ ($\mathcal{L}$ denotes the Lebesgue measure in ${\mathbb R}^3$), we can take $d>0$ small enough so that
\begin{equation}\label{case C eq2}
\|\chi_{\Omega_d}g\|_{L^2({\R}^3)^4}\leq\frac{\delta}{4C_1\|u\|_{L^\infty({\mathbb R})}}.
\end{equation}
On the other hand, note that
\begin{equation}\label{case C eq2*}
|(x_\S+\epsilon t\nu(x_\S))-x_\S|=\epsilon |t||\nu(x_\S)|
\leq\epsilon\leq\frac{d}{2}=\frac{1}{2}\,{\rm dist}(\S,{\R}^3\setminus\Omega_d)
\leq\frac{1}{2}\,|x_\S-y|
\end{equation}
for all $0\leq\epsilon\leq\frac{d}{2}$, $t\in(-1,1)$, $x_\S\in\S$ and $y\in{\R}^3\setminus\Omega_d$.
As we said before, we are assuming that $a\in{\mathbb C}\setminus{\mathbb R}$ and, in the definition of $\phi^a$, we are taking $\sqrt{m^2-a^2}$ with positive real part, so the components of $\phi^a(x)$ decay exponentially as $|x|\to\infty$. In particular, there exist $C,r>0$ only depending on $a$ such that
\begin{equation}\label{Horm est*}
\begin{split}
&|\partial\phi^a(x)|
\leq Ce^{-r|x|}\quad\text{for all }|x|\geq 1,\\
&|\partial\phi^a(x)|
\leq C|x|^{-3}\quad\text{for all }0<|x|<1,
\end{split}
\end{equation}
where by the left hand side in \eqref{Horm est*} we mean the absolute value of any derivative of any component of the matrix $\phi^a(x)$. Therefore, using the mean value theorem, \eqref{Horm est*} and \eqref{case C eq2*}, we see that there exists $C_{a,d}>0$ only depending on $a$ and $d$ such that
\begin{equation}
|\phi^a(x_\S+\epsilon t\nu(x_\S)-y)-\phi^a(x_\S-y)\big|
\leq C_{a,d}\,\frac{\epsilon}{|x_\S-y|^3}
\end{equation}
for all $0\leq\epsilon\leq\frac{d}{2}$, $t\in(-1,1)$, $x_\S\in\S$ and $y\in{\R}^3\setminus\Omega_d$. Hence, we can easily estimate
\begin{equation}
\begin{split}
|(C_\epsilon(a)-&C_0(a))(\chi_{{\R}^3\setminus\Omega_d}g)(x_\S,t)|\\
&\leq\|u\|_{L^\infty({\mathbb R})}\int_{{\R}^3\setminus\Omega_d}
\big|\phi^a(x_\S+\epsilon t\nu(x_\S)-y)-\phi^a(x_\S-y)\big||g(y)|\,dy\\
&\leq C_{a,d}\|u\|_{L^\infty({\mathbb R})}\int_{{\R}^3\setminus\Omega_d}
\frac{\epsilon|g(y)|}{|x_\S-y|^3}\,dy\\
&\leq C_{a,d}\,\epsilon\|u\|_{L^\infty({\mathbb R})}\Big(\int_{{\R}^3\setminus B_{d}(x_\S)}
\frac{dy}{|x_\S-y|^6}\Big)^{1/2}
\|g\|_{L^2({\R}^3)^4}
\leq C'_{a,d}\,\epsilon\|u\|_{L^\infty({\mathbb R})}\|g\|_{L^2({\R}^3)^4},
\end{split}
\end{equation}
where $C'_{a,d}>0$ only depends on $a$ and $d$. Then,
\begin{equation}\label{case C eq3}
\|(C_\epsilon(a)-C_0(a))(\chi_{{\R}^3\setminus\Omega_d}g)\|_{L^2(\Sigma\times(-1,1))^4}
\leq C'_{a,d}\,\epsilon\|u\|_{L^\infty({\mathbb R})}\|g\|_{L^2({\R}^3)^4}
\end{equation}
for a possibly bigger constant $C'_{a,d}>0$.
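For concreteness, the tail integral used above can be computed explicitly in polar coordinates centred at $x_\S$:
\begin{equation*}
\int_{{\R}^3\setminus B_{d}(x_\S)}\frac{dy}{|x_\S-y|^6}
=4\pi\int_{d}^{+\infty}r^{-4}\,dr=\frac{4\pi}{3d^3},
\end{equation*}
which shows that $C'_{a,d}$ can be taken of order $C_{a,d}\,d^{-3/2}$.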
With these ingredients, the proof of \eqref{case C eq0} is straightforward. Given $\delta>0$, take $d>0$ small enough so that \eqref{case C eq2} holds. For this fixed $d$, take
\begin{equation}
\epsilon_0=\min\bigg\{\frac{\delta}{2C'_{a,d}\|u\|_{L^\infty({\mathbb R})}\|g\|_{L^2({\R}^3)^4}},\frac{d}{2}\bigg\}.
\end{equation}
Then, \eqref{case C eq0} follows from \eqref{case C eq1}, \eqref{case C eq2} and \eqref{case C eq3}. In conclusion, we have shown that
\begin{equation}\label{0001}
\lim_{\epsilon\to 0}\|(C_\epsilon(a)-C_0(a))g\|_{L^2(\Sigma\times(-1,1))^4}=0\quad\text{for all }g\in L^2({\R}^3)^4,
\end{equation}
which is \eqref{convergence C}.
\section{Proof of \eqref{conv B th}: $B_\epsilon (a)\to B_0(a)+B'$ in the strong sense when $\epsilon\to0$} \label{ss B}
Recall from \eqref{ABCepsilon} and \eqref{limit operators defi} that $B_\epsilon(a)$ with $0<\epsilon\leq\eta_0$, $B_0(a)$ and $B'$ are defined by
\begin{equation}
\begin{split}
&(B_\epsilon (a)g)(x_\S ,t)= u(t)\int_{-1}^1\int_\S\phi^a (x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))v(s)\\
&\hskip200pt \times \det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,\\
&(B_0(a) g)(x_\S,t)=\lim_{\epsilon\to 0}u(t)\int_{-1}^1\int_{|x_\S-y_\S|>\epsilon}\phi^a (x_\S -y_\S )v(s) g(y_\S ,s)\,ds\,d\upsigma (y_\S),\\
&(B'g)(x_\S,t)=(\alpha\cdot \nu(x_\S))\,\frac{i}{2}\,u(t)\int_{-1}^1 \mathop{\textrm{sign}}(t-s)v(s) g(x_\S,s)\,ds.
\end{split}
\end{equation}
We already know that $B_\epsilon(a)$ and $B'$ are bounded in $L^2(\Sigma\times(-1,1))^4$. Let us postpone to Section \ref{meB} the proof of the boundedness of $B_0(a)$ stated in \eqref{ABC espacios2}.
The first step to prove \eqref{conv B th} is to decompose $\phi^a $ as in {\cite[Lemma 3.2]{amv2}}, that is,
\begin{equation}\label{eqn:break phi}
\begin{split}
\phi^a(x)&=\frac{e^{-\sqrt{m^2-a^2}|x|}}{4\pi|x|}\Big(a+m\beta +\sqrt{m^2-a^2}\,i\alpha\cdot\frac{x}{|x|}\Big)\\
&\quad+\frac{e^{-\sqrt{m^2-a^2}|x|}-1}{4 \pi}\,i\alpha\cdot\frac{x}{|x|^3}+\frac{i}{4\pi}\,\alpha\cdot\frac{x}{|x|^3}
=:\omega^a_1(x)+\omega^a_2(x)+\omega_3(x).
\end{split}
\end{equation}
Then we can write
\begin{equation}\label{eqn:break phi2}
\begin{split}
&B_\epsilon (a)=B_{\epsilon,\omega_1^a}+B_{\epsilon,\omega_2^a}+B_{\epsilon,\omega_3},\\
&B_0 (a)=B_{0,\omega_1^a}+B_{0,\omega_2^a}+B_{0,\omega_3},
\end{split}
\end{equation}
where $B_{\epsilon,\omega_1^a}$, $B_{\epsilon,\omega_2^a}$ and $B_{\epsilon,\omega_3}$ are defined as $B_\epsilon(a)$ but replacing $\phi^a$ by $\omega_1^a$, $\omega_2^a$ and $\omega_3$, respectively, and analogously for the case of $B_0(a)$.
For $j=1,2$, we see that $|\omega_j^a(x)|= O(|x|^{-1})$ and
$|\partial\omega_j^a(x)|= O(|x|^{-2})$ as $|x|\to 0$, with the understanding that $|\omega_j^a(x)|$ means the absolute value of any component of the matrix $\omega_j^a(x)$ and $|\partial\omega_j^a(x)|$ means the absolute value of any first order derivative of any component of $\omega_j^a(x)$. Therefore, the integrals defining $B_{\epsilon,\omega_j^a}$ and $B_{0,\omega_j^a}$ are of fractional type for $j=1,2$ (recall Lemma \ref{2d AD regularity}) and they are taken over bounded sets, so the strong convergence follows by standard methods.
However, one can also follow the arguments in the proof of {\cite[Lemma 3.4]{approximation}} to show, for $j=1,2$, the convergence of $B_{\epsilon,\omega_j^a}$ to $B_{0,\omega_j^a}$ in the norm sense when $\epsilon\to0$, that is,
\begin{equation}\label{0002}
\lim_{\epsilon\to 0}\|B_{\epsilon,\omega_j^a}-B_{0,\omega_j^a}\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}=0\quad\text{for } j=1,2.
\end{equation}
A comment is in order. Since the integrals involved in \eqref{0002} are taken over $\S\times(-1,1)$, which is bounded, the exponential decay at infinity from {\cite[Proposition A.1]{approximation}} is not necessary in the setting of \eqref{conv B th}; hence the local estimate of $|\omega_j^a(x)|$ and $|\partial\omega_j^a(x)|$ near the origin is enough to adapt the proof of {\cite[Lemma 3.4]{approximation}} to get \eqref{0002}.
Thanks to \eqref{eqn:break phi2} and \eqref{0002}, to prove \eqref{conv B th} we only need to show that
$B_{\epsilon,\omega_3}\to B_{0,\omega_3}+B'$ in the strong sense when $\epsilon\to0$. This will be done in two main steps. First, we will show that
\begin{equation}\label{point limit}
\lim_{\epsilon\to0}B_{\epsilon,\omega_3}g(x_\S,t)
=B_{0,\omega_3}g(x_\S,t)
+B'g(x_\S,t)\quad\text{for almost all }(x_\S,t)\in\S\times(-1,1)
\end{equation}
and all $g\in L^\infty(\S\times(-1,1))^4$ such that
$\sup_{|t|<1}|g(x_\S,t)-g(y_\S,t)|\leq C|x_\S-y_\S|$ for all $x_\S,\,y_\S\in\S$ and some $C>0$ which may depend on $g$. This is done in Section \ref{pointwise B}. Then, for a general $g\in L^2(\S\times(-1,1))^4$, we will estimate $|B_{\epsilon,\omega_3}g(x_\S,t)|$ in terms of some bounded maximal operators that will allow us to prove the pointwise limit \eqref{point limit} for almost every $(x_\S,t)\in\S\times(-1,1)$ and the desired strong convergence of $B_{\epsilon,\omega_3}$ to $B_{0,\omega_3}+B'$, see Section \ref{meB}.
\subsection{The pointwise limit of $B_{\epsilon,\omega_3}g(x_\S,t)\text{ when }\epsilon\to0$ for $g$ in a dense subspace of $L^2(\S\times(-1,1))^4$}\label{pointwise B}
\mbox{}
Observe that the function $u$ in front of the definitions of $B_{\epsilon,\omega_3}$, $B_{0,\omega_3}$ and $B'$ does not affect the validity of the limit in \eqref{point limit}, so we can assume without loss of generality that $u\equiv1$ in $(-1,1)$.
We are going to prove \eqref{point limit} by showing the pointwise limit component by component, that is, we are going to work in $L^\infty(\S\times(-1,1))$ instead of $L^\infty(\S\times(-1,1))^4$. In order to do so, we need to introduce some definitions. Set
\begin{equation}\label{CZ kernel1}
k(x):=\frac{x}{4\pi |x|^3}\quad\text{ for $x\in{\R}^3\setminus\{0\}$.}
\end{equation}
Given $t\in(-1,1)$ and $0<\epsilon\leq\eta_0$ with $\eta_0$ small enough and $f\in L^\infty(\S\times(-1,1))$ such that
$\sup_{|t|<1}|f(x_\S,t)-f(y_\S,t)|\leq C|x_\S-y_\S|$ for all $x_\S,\,y_\S\in\S$ and some $C>0$, we define
\begin{equation}
T_t^\epsilon f(x_\Sigma):=\int_{-1}^1\int_\Sigma k (x_\Sigma+\epsilon t\nu(x_\Sigma)-y_\Sigma-\epsilon s \nu(y_\Sigma))f(y_\Sigma,s)\det(1-\epsilon sW(y_\Sigma))\,d\upsigma(y_\Sigma)\,ds.
\end{equation}
By \eqref{eqn:coaera2},
\begin{equation}\label{eqn:det t_eps}
\begin{split}
T_t^\epsilon f(x_\Sigma)=\int_{-1}^1\int_{\Sigma_{\epsilon s}} k (x_{\epsilon t} - y_{\epsilon s})f(P_\Sigma y_{\epsilon s},s)\,d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds,
\end{split}
\end{equation}
where $x_{\epsilon t}:=x_\Sigma+\epsilon t\nu(x_\Sigma)$, $y_{\epsilon s}:=y_\Sigma+\epsilon s\nu(y_\Sigma)$ and $P_\Sigma$ is given by \eqref{P Sigma}. We also set
\begin{equation}
\begin{split}
T_t f(x_\Sigma):=\lim_{\delta\to 0}\int_{-1}^1\!\int_{|x_\Sigma-y_\Sigma|>\delta}\!\!k(x_\Sigma-y_\Sigma)f(y_\Sigma ,s)\,d\upsigma(y_\Sigma)\,ds+\frac{\nu(x_\S)}{2}\int_{-1}^1\!\mathop{\textrm{sign}}(t-s) f(x_\Sigma,s)\,ds.
\end{split}
\end{equation}
We are going to prove that
\begin{equation}\label{eqn:t eps to t t}
\lim_{\epsilon\to 0} T^\epsilon_t f(x_\Sigma)=T_tf(x_\Sigma)
\end{equation}
for almost all $(x_\Sigma,t)\in\S\times(-1,1)$. Once this is proved, it is not hard to get \eqref{point limit}. Indeed, note that $k=(k_1,k_2,k_3)$ with $k_j(x):=\frac{x_j}{4\pi |x|^3}$ being the scalar components of the vector kernel $k(x)$. Thus, we can write
\begin{equation}T_t^\epsilon f(x_\Sigma)=\big((T_t^\epsilon f(x_\Sigma))_1,(T_t^\epsilon f(x_\Sigma))_2,(T_t^\epsilon f(x_\Sigma))_3\big),\end{equation}
where each $(T_t^\epsilon f(x_\Sigma))_j$ is defined as in \eqref{eqn:det t_eps} but replacing $k$ by $k_j$. Then, \eqref{eqn:t eps to t t} holds if and only if $(T^\epsilon_t f(x_\Sigma))_j\to(T_tf(x_\Sigma))_j$ when $\epsilon\to0$ for $j=1,2,3.$ From these limits, if we let $f(y_\S ,s)$ in the definitions of $T_t^\epsilon f$ and $T_tf$ be the different components of $v(s)g(y_\S ,s)$, we easily deduce \eqref{point limit}. Thus, it suffices to prove \eqref{eqn:t eps to t t}.
The proof of \eqref{eqn:t eps to t t} follows the strategy of the proof of {\cite[Proposition 3.30]{mitrea}}. Set
\begin{equation}E(x):=-\frac{1}{4\pi |x|}\quad\text{for $x\in{\R}^3\setminus\{0\}$,}\end{equation} the fundamental solution of the Laplace operator in ${\R}^3$. Note that $\nabla E=k=(k_1,k_2,k_3).$ In particular, if we set $\nu=(\nu_1,\nu_2,\nu_3)$ and $x=(x_1,x_2,x_3)$, for $x\in{\R}^3$ and $y\in\S$ with $x\neq y$ we can decompose
\begin{equation}\label{desc K}
\begin{split}
k_j(x&-y)=\partial_{x_j} E(x-y)=|\nu(y)|^2\,\partial_{x_j} E(x-y)\\
&=\sum_n \nu_n(y)^2\partial_{x_j} E(x-y)+\sum_n \nu_j(y)\nu_n(y)\partial_{x_n} E(x-y)-\sum_n \nu_j(y)\nu_n(y)\partial_{x_n}E(x-y)\\
&=\nu_j(y)\sum_n\partial_{x_n}E(x-y)\nu_n(y)+\sum_n\Big( \nu_n(y)\partial_{x_j}E(x-y)-\nu_j(y)\partial_{x_n}E(x-y)\Big)\nu_n(y)\\
&=\nu_j(y)\nabla_{\nu(y)}E(x-y)+\sum_n \nabla^{j,n}_{\nu(y)}E(x-y)\nu_n(y),
\end{split}
\end{equation}
where we have taken
\begin{equation}\label{defi deriva}
\begin{split}
&\nabla_{\nu(y)}E(x-y):=\sum_n \nu_n(y)\partial_{x_n}E(x-y)=\nabla_{\!x} E(x-y)\cdot\nu(y),\\
&\nabla^{j,n}_{\nu(y)}E(x-y):= \nu_n(y)\partial_{x_j}E(x-y)-\nu_j(y)\partial_{x_n}E(x-y).
\end{split}
\end{equation}
For $j,\,n\in\{1,2,3\}$ we define
\begin{equation}\label{defi deriva2}
\begin{split}
&T^\epsilon_\nu f(x_\S,t):= \int_{-1}^1\int_{\Sigma_{\epsilon s}} \nabla_{\nu_{\epsilon s}(y_{\epsilon s})}E (x_{\epsilon t} - y_{\epsilon s})f( P_\S y_{\epsilon s},s)\,d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds,\\
&T^\epsilon_{j,n} f(x_\S,t):=\int_{-1}^1\int_{\Sigma_{\epsilon s}} \nabla^{j,n}_{\nu_{\epsilon s}(y_{\epsilon s})}E (x_{\epsilon t} - y_{\epsilon s})f(P_\S y_{\epsilon s},s)\,d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds,
\end{split}
\end{equation}
where $\nu_{\epsilon s}(y_{\epsilon s}):=\nu(y_\S)$ is a normal vector field to $\Sigma_{\epsilon s}$. Besides, the terms $\nabla_{\nu_{\epsilon s}(y_{\epsilon s})}E (x_{\epsilon t} - y_{\epsilon s})$ and $\nabla^{j,n}_{\nu_{\epsilon s}(y_{\epsilon s})}E (x_{\epsilon t} - y_{\epsilon s})$ in \eqref{defi deriva2} are defined as in \eqref{defi deriva} with the obvious replacements.
Given $f\in L^\infty(\S\times(-1,1))$ such that
$\sup_{|t|<1}|f(x_\S,t)-f(y_\S,t)|\leq C|x_\S-y_\S|$ for all $x_\S,\,y_\S\in\S$ and some $C>0$, by \eqref{desc K} we see that
\begin{equation}\label{eqn: def fk}
(T_t^\epsilon f(x_\Sigma))_j=T^\epsilon_\nu h_j(x_\Sigma,t)+\sum_n T_{j,n}^\epsilon h_n(x_\Sigma,t),
\end{equation}
where $h_n(P_\S y_{\epsilon s},s):= (\nu_{\epsilon s}(y_{\epsilon s}))_n\, f(P_\Sigma y_{\epsilon s},s)$ for $n=1,2,3$. We are going to prove that
\begin{align}\label{eqn:tnu convergence}
\lim_{\epsilon\to 0} T^\epsilon_\nu h_j (x_\Sigma,t)
&= \lim_{\delta \to 0} \int_{-1}^1 \int_{|x_\Sigma-y_\Sigma|>\delta}\nabla_{\nu(y_\Sigma)}E(x_\Sigma-y_\Sigma) h_j (y_\Sigma,s)\,d\upsigma(y_\Sigma)\,ds\\
&\quad+\frac{1}{2}\int_{-1}^1\mathop{\textrm{sign}}(t-s)h_j(x_\Sigma,s)\,ds,\notag\\
\label{eqn:tjk convergence}
\lim_{\epsilon\to 0} T_{j,n}^\epsilon h_n(x_\Sigma,t) &=\lim_{\delta \to 0} \int_{-1}^1 \int_{|x_\Sigma-y_\Sigma|>\delta}\nabla^{j,n}_{\nu(y_\Sigma)}E(x_\Sigma-y_\Sigma) h_n (y_\Sigma,s)\,d\upsigma(y_\Sigma)\,ds
\end{align}
for $n=1,2,3$. Then, combining \eqref{eqn: def fk}, \eqref{eqn:tnu convergence} and \eqref{eqn:tjk convergence}, we obtain \eqref{eqn:t eps to t t}. Therefore, it is enough to show \eqref{eqn:tnu convergence} and \eqref{eqn:tjk convergence}.
We first deal with \eqref{eqn:tnu convergence}. Remember that $\nabla E=k$ so, given $\delta>0$, from \eqref{defi deriva} and \eqref{defi deriva2} we can split
\begin{equation}
\begin{split}
T_\nu^\epsilon h_j(x_\S,t)
&=\int_{-1}^1 \int_{|x_{\epsilon s}-y_{\epsilon s}|>\delta} k (x_{\epsilon t} - y_{\epsilon s})\cdot \nu_{\epsilon s}(y_{\epsilon s})\, h_j ( P_\S y_{\epsilon s},s)\,d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds\\
&\quad +\int_{-1}^1 \int_{|x_{\epsilon s}-y_{\epsilon s}|\leq\delta}\!\! k (x_{\epsilon t} - y_{\epsilon s})\cdot \nu_{\epsilon s}(y_{\epsilon s}) \Big(h_j ( P_\S y_{\epsilon s},s) - h_j(P_\S x_{\epsilon s},s)\Big)d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds\\
&\quad+\int_{-1}^1 h_j(P_\S x_{\epsilon s},s) \int_{|x_{\epsilon s}-y_{\epsilon s}|\leq\delta} k (x_{\epsilon t} - y_{\epsilon s})\cdot \nu_{\epsilon s}(y_{\epsilon s})\,d\upsigma_{\epsilon s}(y_{\epsilon s})\,ds\\
&=:\mathscr{A}_{\epsilon,\delta}+\mathscr{B}_{\epsilon,\delta}+\mathscr{C}_{\epsilon,\delta},
\end{split}
\end{equation}
and we easily see that
\begin{equation}\label{T split}
\lim_{\epsilon\to 0}T^\epsilon_{\nu}h_j(x_\S,t)=\lim_{\delta\to 0}\,\lim_{\epsilon\to 0}\big(\mathscr{A}_{\epsilon,\delta}+\mathscr{B}_{\epsilon,\delta}+\mathscr{C}_{\epsilon,\delta}\big).
\end{equation}
We study the three terms on the right hand side of \eqref{T split} separately.
For the case of $\mathscr{A}_{\epsilon,\delta}$, note that $k\in C^\infty({\mathbb R}^3\setminus B_\delta(0))^3$ and it has polynomial decay at $\infty$, so
\begin{equation}|k(x)|+|\partial k(x)|\leq C<+\infty\quad\text{for all $x\in{\mathbb R}^3\setminus B_\delta(0)$,}\end{equation} where $C>0$ only depends on $\delta$, and $\partial k$ denotes any first order derivative of any component of $k$. Moreover, $h_j$ is bounded on $\S\times(-1,1)$
and $\Sigma$ is bounded and of class $C^2$. Therefore, for fixed $\delta >0$, the uniform boundedness of the integrand combined with the regularity of $k$ and $\S$ and the dominated convergence theorem yields
\begin{equation}\label{AAA}
\lim_{\epsilon\to 0}\mathscr{A}_{\epsilon,\delta}=\int_{-1}^1 \int_{|x_\S-y_\S|>\delta} k (x_{\Sigma} - y_{\Sigma})\cdot \nu(y_{\Sigma})\, h_j ( y_{\Sigma},s)\,d\upsigma(y_{\Sigma})\,ds.
\end{equation}
Then, if we let $\delta \to 0$, from \eqref{AAA} we get the first term on the right hand side of \eqref{eqn:tnu convergence}.
Recall that the function $h_j$ appearing in $\mathscr{B}_{\epsilon,\delta}$ is constructed from the one in \eqref{point limit} using $v$ (see below \eqref{eqn:t eps to t t}) and $\nu_{\epsilon s}$ (see below \eqref{eqn: def fk}). Hence $h_j\in L^\infty(\S\times(-1,1))$ and
$\sup_{|t|<1}|h_j(x_\S,t)-h_j(y_\S,t)|\leq C|x_\S-y_\S|$ for all $x_\S,\,y_\S\in\S$ and some $C>0$. Thus, if $\eta_0$ and $\delta$ are small enough, by the mean value theorem there exists $C>0$ such that
\begin{equation}\label{eqn:estimatek}
\begin{split}
\big|k (x_{\epsilon t} - y_{\epsilon s})\cdot \nu_{\epsilon s}(y_{\epsilon s})( h_j ( P_\S y_{\epsilon s},s) -h_j(P_\S x_{\epsilon s},s))\big|
\leq C\frac{|P_\S y_{\epsilon s}-P_\S x_{\epsilon s}|}{|x_{\epsilon t}-y_{\epsilon s}|^2}
\leq \frac{C}{|y_{\epsilon s}-x_{\epsilon s}|}
\end{split}
\end{equation}
for all $0\leq\epsilon\leq\eta_0$ and $|x_{\epsilon s}-y_{\epsilon s}|\leq\delta$. In the last inequality in \eqref{eqn:estimatek} we used that $P_\S$ is Lipschitz on $\Omega_{\eta_0}$ and that $|x_{\epsilon s}-y_{\epsilon s}|\leq C|x_{\epsilon t}-y_{\epsilon s}|$ if $|x_{\epsilon s}-y_{\epsilon s}|\leq\delta$ and $\delta$ is small enough (due to the regularity of $\S$).
From the local integrability of the right hand side of \eqref{eqn:estimatek} with respect to $\upsigma_{\epsilon s}$ (see Lemma \ref{2d AD regularity}) and standard arguments, we easily deduce the existence of $C_\delta>0$ such that
$\sup_{0\leq\epsilon\leq\eta_0}|\mathscr{B}_{\epsilon, \delta}|\leq C_\delta$ and $C_\delta\to 0$ when $\delta \to 0$, see \cite[equation (A.7)]{approximation} for a similar argument.
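To make this explicit, splitting the domain into dyadic annuli and using the uniform $2$-dimensional growth $\upsigma_{\epsilon s}(B_r(x))\leq Cr^2$ (the kind of estimate provided by Lemma \ref{2d AD regularity}) gives
\begin{equation*}
\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq\delta}\frac{d\upsigma_{\epsilon s}(y_{\epsilon s})}{|x_{\epsilon s}-y_{\epsilon s}|}
\leq\sum_{j\geq0}\,\frac{\upsigma_{\epsilon s}\big(B_{2^{-j}\delta}(x_{\epsilon s})\big)}{2^{-j-1}\delta}
\leq C\delta\sum_{j\geq0}2^{-j}\leq C\delta,
\end{equation*}
which combined with \eqref{eqn:estimatek} yields $\sup_{0\leq\epsilon\leq\eta_0}|\mathscr{B}_{\epsilon,\delta}|\leq C\delta$, so one can take $C_\delta$ of order $\delta$.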
Then, we conclude
\begin{equation}\label{BBB}
\Big|\lim_{\delta\to 0}\lim_{\epsilon\to 0} \mathscr{B}_{\epsilon,\delta}\Big|
\leq \lim_{\delta\to 0}\sup_{0\leq\epsilon\leq\eta_0}|\mathscr{B}_{\epsilon, \delta}|\leq\lim_{\delta\to 0}C_\delta=0.
\end{equation}
Let us finally focus on $\mathscr{C}_{\epsilon,\delta}$.
Since $k=\nabla E$, from \eqref{defi deriva} we get
\[
\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq \delta}k(x_{\epsilon t}-y_{\epsilon s})\cdot\nu_{\epsilon s}(y_{\epsilon s})\,d\upsigma_{\epsilon s}(y_{\epsilon s})=\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq \delta}\nabla_{\nu_{\epsilon s}(y_{\epsilon s})}E(x_{\epsilon t}-y_{\epsilon s})\,d\upsigma_{\epsilon s}(y_{\epsilon s}).
\]
Consider the set
\[
D_\delta^\epsilon(t,s):=\begin{cases}
B_\delta(x_{\epsilon s})\setminus \overline{\Omega({\epsilon,s})} &\text{if } t\leq s,\\
B_\delta(x_{\epsilon s})\cap \Omega({\epsilon, s}) &\text{if } t>s,
\end{cases}
\]
where $\Omega({\epsilon, s})$ denotes the bounded connected component of ${\R}^3\setminus\S_{\epsilon s}$ that contains $\Omega$ if $s\geq 0$ and that is included in $\Omega$ if $s<0$.
\begin{figure}[!h]
\stackunder[5pt]{\includegraphics[scale=0.55]{dra_t_ma_s.eps}}
{$D_\delta^\epsilon(t,s)$ in the case $t>s>0$,}
\stackunder[5pt]{\includegraphics[scale=0.55]{dra_t_min_s.eps}}
{$D_\delta^\epsilon(t,s)$ in the case $s>t>0$.}
\caption{The set $D_\delta^{\epsilon}(t,s)$.}
\label{figura}
\end{figure}
Set $E_x(y):=E(x-y)$ for $x,\,y\in{\R}^3$ with $x\neq y$. Then $\Delta E_{x_{\epsilon t}}=0$ in $D_\delta^\epsilon(t,s)$ and $\nabla E_{x_{\epsilon t}}(y)=-\nabla E(x_{\epsilon t}-y)$.
If $\nu_{\partial D_\delta^\epsilon(t,s)}$ denotes the normal vector field on $\partial{D_\delta^\epsilon(t,s)}$ pointing outside $D_\delta^\epsilon(t,s)$, by the divergence theorem,
\begin{equation}\label{eqn:K sign*}
\begin{split}
0&=\int_{D_\delta^\epsilon(t,s)}\Delta E_{x_{\epsilon t}}(y)\,dy
=-\int_{\partial D_\delta^\epsilon(t,s)}\nabla E(x_{\epsilon t}-y)\cdot\nu_{\partial D_\delta^\epsilon(t,s)}(y)\,d\mathcal{H}^2(y)\\
&=-\mathop{\textrm{sign}} (t-s)\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq\delta}\nabla_{\nu_{\epsilon s}(y_{\epsilon s})} E(x_{\epsilon t}-y_{\epsilon s}) \,d\upsigma_{\epsilon s}(y_{\epsilon s})\\
&\quad-\int_{\{y\in{\R}^3:\,|x_{\epsilon s}-y|=\delta\}\cap A^\epsilon_{t,s}}\nabla E(x_{\epsilon t}-y)\cdot\frac{y-x_{\epsilon s}}{|y-x_{\epsilon s}|}\,d\mathcal{H}^2(y),
\end{split}
\end{equation}
where
\begin{equation}
\text{$A^\epsilon_{t,s}:={\R}^3\setminus\overline{\Omega(\epsilon, s)}$ if $t\leq s$\qquad and \qquad$A^\epsilon_{t,s}:=\Omega(\epsilon, s)$ if $t>s$.}
\end{equation}
Remember also that $\mathcal{H}^2$ denotes the 2-dimensional Hausdorff measure.
Since $\nabla E=k$, from \eqref{eqn:K sign*} and \eqref{defi deriva} we deduce that
\begin{equation}\label{eqn:K sign}
\begin{split}
\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq\delta}k(x_{\epsilon t}-y_{\epsilon s})&\cdot\nu_{\epsilon s}(y_{\epsilon s}) \,d\upsigma_{\epsilon s}(y_{\epsilon s})\\
&=\mathop{\textrm{sign}} (t-s)\int_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}
k(x_{\epsilon t }- y)
\cdot\frac{x_{\epsilon s}-y}{|x_{\epsilon s}-y|}\,d\mathcal{H}^2(y).
\end{split}
\end{equation}
Note that $x_{\epsilon t}\not\in D_\delta^\epsilon(t,s)$ by construction, see Figure \ref{figura}. Moreover, by the regularity of $\S$, given $\delta>0$ small enough we can find $\epsilon_0>0$ so that
$|x_{\epsilon t}-y|\geq \delta /2$ for all $0<\epsilon\leq\epsilon_0$, $s,t \in [-1,1]$ and $y\in \partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}$. In particular,
\begin{equation}\label{correc5}
|k(x_{\epsilon t}-y)|\leq C<+\infty\qquad
\text{for all }y\in \partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s},
\end{equation}
where $C$ only depends on $\delta$ and $\epsilon_0$.
Then,
\begin{equation}\label{lim Kepsi}
\begin{split}
\chi_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}(y)\,k(x_{\epsilon t}-y&)\cdot\frac{x_{\epsilon s}-y}{|x_{\epsilon s}-y|}\,d\mathcal{H}^2(y)
\\
&=\chi_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}(y)\,\frac{x_{\epsilon t}-y}{4\pi|x_{\epsilon t}-y|^3}\cdot\frac{x_{\epsilon s}-y}{|x_{\epsilon s}-y|}\,d\mathcal{H}^2(y)\\
&\to \frac{\chi_{\partial B_\delta(x_{\S})\cap D({t,s})}(y)}{4\pi |x_\S-y|^2}\,d\mathcal{H}^2(y)\quad\text{when }\epsilon\to 0,
\end{split}
\end{equation}
where
\begin{equation}
\text{$D(t,s):={\R}^3\setminus\overline{\Omega}$ if $t\leq s$\qquad and \qquad$D(t,s):=\Omega$ if $t> s$.}
\end{equation}
The limit in \eqref{lim Kepsi} refers to weak-$*$ convergence of finite Borel measures in ${\mathbb R}^3$ (acting on the variable $y$).
Using \eqref{lim Kepsi}, the uniform estimate \eqref{correc5}, the boundedness of $h_j$ and the dominated convergence theorem, we see that
\begin{equation}
\begin{split}
\lim_{\epsilon\to 0}\int_{-1}^1\mathop{\textrm{sign}} (t-s)h_j(x_\S,&s)
\int_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}
k(x_{\epsilon t }- y)\cdot\frac{x_{\epsilon s}-y}{|x_{\epsilon s}-y|}\,d\mathcal{H}^2(y)\,ds\\
&=\int_{-1}^1\mathop{\textrm{sign}}(t-s)h_j(x_\S,s)
\int_{\partial B_\delta(x_{\S})\cap D(t,s)}
\frac{1}{4\pi|x_\S-y|^2}\,d\mathcal{H}^2(y)\,ds\\
&=\int_{-1}^1\mathop{\textrm{sign}}(t-s)h_j(x_\S,s)
\frac{\mathcal{H}^2\big(\partial B_\delta(x_\Sigma)\cap D(t,s)\big)}
{\mathcal{H}^2(\partial B_\delta(x_\Sigma))}\,ds.
\end{split}
\end{equation}
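The factor $1/2$ below arises precisely from the limit $\delta\to0$ of the last quotient: since $\S$ is of class $C^2$, near $x_\Sigma$ it is tangent to a plane that splits small spheres centred at $x_\Sigma$ into two pieces of asymptotically equal area, that is,
\begin{equation*}
\lim_{\delta\to0}\frac{\mathcal{H}^2\big(\partial B_\delta(x_\Sigma)\cap D(t,s)\big)}{\mathcal{H}^2(\partial B_\delta(x_\Sigma))}
=\lim_{\delta\to0}\frac{\mathcal{H}^2\big(\partial B_\delta(x_\Sigma)\cap D(t,s)\big)}{4\pi\delta^2}=\frac{1}{2}
\quad\text{for all }x_\Sigma\in\S\text{ and }t\neq s.
\end{equation*}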
Then, using the regularity of $\S$ and the dominated convergence theorem once again, we get
\begin{equation}\label{lim Kepsi2}
\begin{split}
\lim_{\delta\to 0}\lim_{\epsilon\to 0}\int_{-1}^1\mathop{\textrm{sign}} (t-s)h_j(x_\S,s)\int_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}
k(x_{\epsilon t }&- y)\cdot\frac{x_{\epsilon s}-y}{|x_{\epsilon s}-y|}\,d\mathcal{H}^2(y)\,ds\\
&=\frac{1}{2}\int_{-1}^1\mathop{\textrm{sign}}(t-s)h_j(x_\S,s)\,ds.
\end{split}
\end{equation}
By \eqref{eqn:K sign}, \eqref{lim Kepsi2} and the definition of $\mathscr{C}_{\epsilon,\delta}$ before \eqref{T split}, we get
\begin{equation}\label{CCC}
\lim_{\delta\to 0}\lim_{\epsilon\to 0}\mathscr{C}_{\epsilon,\delta}=\frac{1}{2}\int_{-1}^1\mathop{\textrm{sign}}(t-s)h_j(x_\Sigma,s)\,ds.
\end{equation}
The proof of \eqref{eqn:tnu convergence} is a straightforward combination of \eqref{T split}, \eqref{AAA}, \eqref{BBB} and \eqref{CCC}.
To prove \eqref{eqn:tjk convergence} we use the same approach as in the proof of \eqref{eqn:tnu convergence}, that is, we split
\begin{equation}T_{j,n}^\epsilon h_n(x_\S,t)=:\mathscr{A}_{\epsilon,\delta}+\mathscr{B}_{\epsilon,\delta}+\mathscr{C}_{\epsilon,\delta}\end{equation} as above \eqref{T split}. The first two terms can be treated analogously and one gets the desired result; the details are left to the reader.
To estimate $\mathscr{C}_{\epsilon,\delta}$ we use the notation introduced before. Recall that $E_{x_{\epsilon t}}$ is smooth in $\overline{D_\delta^\epsilon(t,s)}$ (assuming $t\neq s$) and $k(x_{\epsilon t}-y)=\nabla E(x_{\epsilon t}-y)=-\nabla E_{x_{\epsilon t}}(y)$. So, by the divergence theorem (see also \eqref{defi deriva}),
\begin{equation}\label{aux1}
\begin{split}
\int_{\partial D_\delta^\epsilon(t,s)}&\nabla^{j,n}_{\nu_{\partial D_\delta^\epsilon(t,s)}(y)}E(x_{\epsilon t}-y)
\,d\mathcal{H}^2(y)\\
&=\int_{\partial D_\delta^\epsilon(t,s)}\!\!\Big( (\nu_{\partial D_\delta^\epsilon(t,s)}(y))_n\partial_{x_j}E(x_{\epsilon t}-y)
-(\nu_{\partial D_\delta^\epsilon(t,s)}(y))_j\partial_{x_n}E(x_{\epsilon t}-y)\Big)d\mathcal{H}^2(y)\\
&=\int_{D_\delta^\epsilon(t,s)} \big(\partial_{y_j}\partial_{y_n} E_{x_{\epsilon t}}-\partial_{y_n} \partial_{y_j}E_{x_{\epsilon t}}\big)(y)\,dy=0.
\end{split}
\end{equation}
Since $\partial D_\delta^\epsilon(t,s)=(B_\delta(x_{\epsilon s})\cap \Sigma_{\epsilon s})
\cup (\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s})$, from \eqref{aux1} we have
\begin{equation}\Big|\int_{|x_{\epsilon s}-y_{\epsilon s}|\leq \delta}\!\!\nabla^{j,n}_{\nu_{\epsilon s}(y_{\epsilon s})} E(x_{\epsilon t}-y_{\epsilon s})\,d\upsigma_{\epsilon s}(y_{\epsilon s})\Big|
=\Big|\int_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}\!\!\nabla^{j,n}_{\nu_{\partial D_\delta^\epsilon(t,s)}(y)} E(x_{\epsilon t}-y)\,d\mathcal{H}^2(y)\Big|.
\end{equation}
Observe that
\begin{equation}\label{aux2}
\begin{split}
&\chi_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}(y)\,\nabla^{j,n}_{\nu_{\partial D_\delta^\epsilon(t,s)}(y)} E(x_{\epsilon t}-y)\,d\mathcal{H}^2(y)\\
&\qquad=\chi_{\partial B_\delta(x_{\epsilon s})\cap A^\epsilon_{t,s}}(y)
\Big((\nu_{\partial D_\delta^\epsilon(t,s)}(y))_j\partial_{y_n}\!E_{x_{\epsilon t}}(y)
-(\nu_{\partial D_\delta^\epsilon(t,s)}(y))_n\partial_{y_j}\!E_{x_{\epsilon t}}(y)\Big)\,d\mathcal{H}^2(y)\\
&\qquad\to\chi_{\partial B_\delta(x_{\S})\cap D(t,s)}(y)
\Big(\frac{(y-x_\S)_j}{|y-x_\S|}\partial_{y_n}\!E_{x_{\S}}(y)
-\frac{(y-x_\S)_n}{|y-x_\S|}\partial_{y_j}\!E_{x_{\S}}(y)\Big)\,d\mathcal{H}^2(y)=0
\end{split}
\end{equation}
when $\epsilon\to0$. The limit measure in \eqref{aux2} vanishes because its density is a tangential derivative on $\partial B_\delta(x_{\S})$ of $E_{x_\S}$, and $E_{x_\S}$ is constant on $\partial B_\delta (x_\S)$ since it only depends on $|x_\S-y|$.
Therefore, arguing as in the proof of \eqref{eqn:tnu convergence} but replacing \eqref{lim Kepsi} by \eqref{aux2}, we conclude that, in this case, \begin{equation}\lim_{\delta\to 0}\lim_{\epsilon\to 0}\mathscr{C}_{\epsilon,\delta}=0.\end{equation}
This yields \eqref{eqn:tjk convergence} and concludes the proof of \eqref{point limit}.
\subsection{A pointwise estimate of $|B_{\epsilon,\omega_3}g(x_\S,t)|$ by maximal operators}\label{meB}
\mbox{}
We begin this section by setting
\begin{equation}\label{CZ kernel2}
\text{$k(x):=\frac{x_j}{4\pi|x|^3}$\quad for $j=1,2,3$, $x=(x_1,x_2,x_3)\in{\R}^3\setminus\{0\}$.}
\end{equation}
In \eqref{CZ kernel1} we already introduced a kernel $k$ which, in fact, corresponds to the vectorial version of the ones introduced in \eqref{CZ kernel2}. So, by an abuse of notation, throughout this section we mean by $k(x)$ any of the components of the kernel given in \eqref{CZ kernel1}.
Note that $k(-x)=-k(x)$ for all $x\in{\R}^3\setminus\{0\}$ and, besides,
there exists $C>0$ such that
\begin{equation}\label{Horm est}
\begin{split}
&|k(x-y)|\leq \frac{C}{|x-y|^2}\quad
\text{for all $x,y\in{\R}^3$ such that $|x-y|>0$,}\\
&|k(z-y)-k(x-y)|\leq C\frac{|z-x|}{|x-y|^3}\quad
\text{for all $x,y,z\in{\R}^3$ with $0<|z-x|\leq\frac{1}{2}|x-y|$.}
\end{split}
\end{equation}
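For the reader's convenience, let us sketch the standard computation behind the second estimate in \eqref{Horm est}. Differentiating the kernel from \eqref{CZ kernel2} we get, for $i=1,2,3$ (here $\delta_{ij}$ denotes the Kronecker delta),
\begin{equation}
\partial_{x_i}k(x)=\frac{\delta_{ij}}{4\pi|x|^3}-\frac{3x_jx_i}{4\pi|x|^5},
\quad\text{so}\quad|\nabla k(x)|\leq\frac{C}{|x|^3}\quad\text{for all }x\neq0.
\end{equation}
If $0<|z-x|\leq\frac{1}{2}|x-y|$, every point $w$ in the segment joining $x-y$ and $z-y$ satisfies $|w|\geq|x-y|-|z-x|\geq\frac{1}{2}|x-y|$, hence the mean value theorem yields
\begin{equation}
|k(z-y)-k(x-y)|\leq|z-x|\sup_{w}|\nabla k(w)|\leq C\frac{|z-x|}{|x-y|^3}.
\end{equation}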
As in Section \ref{pointwise B}, we are going to work componentwise. More precisely, in order to deal with the different components of $B_{\epsilon,\omega_3}g(x_\S,t)$ for $g\in L^2(\Sigma\times(-1,1))^4$, we are going to study the following scalar version. Given $0<\epsilon\leq\eta_0$, $g\in L^2(\Sigma\times(-1,1))$ and $(x_\S,t)\in\Sigma\times(-1,1)$, define
\begin{equation}\label{witb epsilon}
\begin{split}
&\widetilde{B}_\epsilon g(x_\S,t):= u(t)\int_{-1}^1\int_\S k(x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))\\
&\hskip170pt \times v(s)\det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,
\end{split}
\end{equation}
where $u$ and $v$ are as in \eqref{correc6} for some $0<\eta\leq\eta_0$.
It is clear that pointwise estimates of $|\widetilde{B}_\epsilon g(x_\S,t)|$ for a given $g\in L^2(\Sigma\times(-1,1))$ directly transfer to pointwise estimates of $|B_{\epsilon,\omega_3}h(x_\S,t)|$ for a given $h\in L^2(\Sigma\times(-1,1))^4$, so we are reduced to estimating
$|\widetilde{B}_\epsilon g(x_\S,t)|$ for $g\in L^2(\Sigma\times(-1,1))$.
A key ingredient for finding such pointwise estimates is to relate $\widetilde{B}_\epsilon$ to the Hardy-Littlewood maximal operator and to some maximal singular integral operators from Calder\'on-Zygmund theory. The
Hardy-Littlewood maximal operator is given by
\begin{equation}\label{max hardy}
M_*f(x_\S):=\sup_{\delta>0}\frac{1}{\upsigma(B_\delta(x_\S))}\int_{B_\delta(x_\S)}|f|\,d\upsigma,
\quad\text{$M_*:L^2(\Sigma)\to L^2(\Sigma)$ bounded,}
\end{equation}
see \cite[2.19 Theorem]{mattila} for a proof of the boundedness.
The above mentioned maximal singular integral operators are
\begin{equation}\label{max sio}
T_{*}f(x_\S):=\sup_{\delta>0}\Big|\int_{|x_\S-y_\S|>\delta}k(x_\S-y_\S)f(y_\S)\,d\upsigma(y_\S)\Big|,
\quad\text{$T_*:L^2(\Sigma)\to L^2(\Sigma)$ bounded,}
\end{equation}
see \cite[Proposition 4 bis]{David} for a proof of the boundedness.
We also introduce some integral versions of these maximal operators to connect them to the space $L^2(\Sigma\times(-1,1))$. Set
\begin{equation}\label{max hardy sio}
\begin{split}
&\widetilde{M}_*g(x_\S):=\Big(\int_{-1}^1 M_*(g(\cdot,s))(x_\S)^2\,ds\Big)^{1/2},
\quad\text{$\widetilde{M}_*:L^2(\Sigma\times(-1,1))\to L^2(\Sigma)$ bounded},\\
&\widetilde{T}_{*}g(x_\S):=\int_{-1}^1 T_*(g(\cdot,s))(x_\S)\,ds,
\quad\text{$\widetilde{T}_*:L^2(\Sigma\times(-1,1))\to L^2(\Sigma)$ bounded}.
\end{split}
\end{equation}
Indeed, by Fubini's theorem and \eqref{max hardy},
\begin{equation}
\begin{split}
\|\widetilde{M}_*g\|_{L^2(\Sigma)}^2
&=\int_\S\int_{-1}^1 M_*(g(\cdot,s))(x_\S)^2\,ds\,d\upsigma(x_\S)
=\int_{-1}^1 \|M_*(g(\cdot,s))\|_{L^2(\S)}^2\,ds\\
&\leq C \int_{-1}^1 \|g(\cdot,s)\|_{L^2(\S)}^2\,ds
= C\|g\|_{L^2(\S\times(-1,1))}^2.
\end{split}
\end{equation}
By the Cauchy-Schwarz inequality, Fubini's theorem and \eqref{max sio}, we also see that $\widetilde{T}_*$ is bounded, so \eqref{max hardy sio} is fully justified.
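In detail, since the interval $(-1,1)$ has length $2$,
\begin{equation}
\begin{split}
\|\widetilde{T}_*g\|_{L^2(\Sigma)}^2
&=\int_\S\Big(\int_{-1}^1 T_*(g(\cdot,s))(x_\S)\,ds\Big)^2d\upsigma(x_\S)
\leq2\int_\S\int_{-1}^1 T_*(g(\cdot,s))(x_\S)^2\,ds\,d\upsigma(x_\S)\\
&=2\int_{-1}^1\|T_*(g(\cdot,s))\|_{L^2(\S)}^2\,ds
\leq C\int_{-1}^1\|g(\cdot,s)\|_{L^2(\S)}^2\,ds
=C\|g\|_{L^2(\S\times(-1,1))}^2.
\end{split}
\end{equation}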
Let us focus for a moment on the boundedness of $B_0(a)$ stated in \eqref{ABC espacios2}. The fact that, for $g\in L^2(\S\times(-1,1))^4$, the limit in the definition of $(B_{0}(a)g)(x_\S,t)$ exists for almost every $(x_\S,t)\in\S\times(-1,1)$ is a consequence of the decomposition (see \eqref{eqn:break phi})
\begin{equation}
\phi^a=\omega_{1}^a+\omega_{2}^a+\omega_{3},
\end{equation}
the integrals of fractional type on bounded sets in the case of $\omega_{1}^a$ and $\omega_{2}^a$ and, for $\omega_3$, that
\begin{equation}\label{correc7}
\lim_{\epsilon\to0}\int_{|x_\S-y_\S|>\epsilon}k(x_\S-y_\S)f(y_\S)\,d\upsigma(y_\S)\quad\text{exists for $\upsigma$-almost every $x_\S\in\S$}
\end{equation}
if $f\in L^2(\Sigma)$
(see \cite[20.27 Theorem]{mattila} for a proof) and that
\begin{equation}
\int_{-1}^1v(s) g(\cdot,s)\,ds
\in L^2(\Sigma)^4.
\end{equation}
Of course, \eqref{correc7} directly applies to $B_{0,\omega_3}$ (see \eqref{eqn:break phi2} for the definition).
From the boundedness of $\widetilde{T}_*$ and working component by component, we easily see that $B_{0,\omega_3}$ is bounded in $L^2(\S\times(-1,1))^4$. By the comments regarding $B_{0,\omega_1^a}$ and $B_{0,\omega_2^a}$ from the paragraph which contains \eqref{0002}, we also get that $B_{0}(a)$ is bounded in $L^2(\S\times(-1,1))^4$, which gives \eqref{ABC espacios2} in this case.
With the maximal operators at hand, we proceed to estimate
$|\widetilde{B}_\epsilon g(x_\S,t)|$ pointwise for $g\in L^2(\Sigma\times(-1,1))$.
Set
\begin{equation}\label{witb 0bis}
g_\epsilon(y_\S,s):=v(s)\det(1-\epsilon s W(y_\S)) g(y_\S ,s).
\end{equation}
Then, since the eigenvalues of $W$ are uniformly bounded by Proposition \ref{weingarten map}, there exists $C>0$ only depending on $\eta_0$ such that
\begin{equation}\label{witb 0}
|g_\epsilon(y_\S,s)|
\leq C\|v\|_{L^\infty({\mathbb R})}|g(y_\S,s)|
\quad\text{for all }0<\epsilon\leq\eta_0,\, (y_\S,s)\in\S\times(-1,1).
\end{equation}
Besides, the regularity and boundedness of $\S$ imply the existence of $L>0$ such that
\begin{equation}\label{witb 00}
|\nu(x_\S)-\nu(y_\S)|\leq L|x_\S-y_\S|\quad\text{for all } x_\S,y_\S\in\S.\end{equation}
We make the following splitting of
$\widetilde{B}_\epsilon g(x_\S,t)$ (see \eqref{witb epsilon} for the definition):
\begin{equation}\label{witb 1}
\begin{split}
\widetilde{B}_\epsilon g(x_\S,t)\!
&=u(t)\int_{-1}^1\int_{|x_\S-y_\S|\leq4\epsilon|t-s|} k(x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds\\
&\quad+u(t)\int_{-1}^1\int_{|x_\S-y_\S|>4\epsilon|t-s|}
\Big(k(x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))\\
&\hskip110pt -k(x_\S + \epsilon s \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))\Big)
g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds\\
&\quad+u(t)\int_{-1}^1\int_{|x_\S-y_\S|>4\epsilon|t-s|}
\Big(k(x_\S + \epsilon s(\nu(x_\S)-\nu (y_\S)) -y_\S)
-k(x_\S-y_\S)\Big)\\
&\hskip268pt\times g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds\\
&\quad+u(t)\int_{-1}^1\int_{|x_\S-y_\S|>4\epsilon|t-s|}
k(x_\S-y_\S)g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds\\
&=:\widetilde{B}_{\epsilon,1}g(x_\S,t)+\widetilde{B}_{\epsilon,2}g(x_\S,t)
+\widetilde{B}_{\epsilon,3}g(x_\S,t)+\widetilde{B}_{\epsilon,4}g(x_\S,t).
\end{split}
\end{equation}
We are going to estimate the four terms on the right hand side of \eqref{witb 1} separately.
Concerning $\widetilde{B}_{\epsilon,1}g(x_\S,t)$, note that
\begin{equation}\epsilon|t-s|={\rm dist}(x_\S + \epsilon t \nu (x_\S),\S_{\epsilon s})
\leq|x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S)|\end{equation}
for all $(y_\S,s)\in\Sigma\times(-1,1)$, thus
$|k(x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))|
\leq\frac{1}{\epsilon^2|t-s|^2}$ by \eqref{Horm est}, and then
\begin{equation}\label{witb 2}
\begin{split}
|\widetilde{B}_{\epsilon,1} g(x_\S,t)|
&\leq \|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\frac{1}{\epsilon^2|t-s|^2}
\int_{|x_\S-y_\S|\leq4\epsilon|t-s|}|g_\epsilon(y_\S,s)|\,d\upsigma (y_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1 M_*(g_\epsilon(\cdot,s))(x_\S)\,ds
\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S),
\end{split}
\end{equation}
where we used the Cauchy-Schwarz inequality and \eqref{witb 0} in the last inequality above.
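Let us also make explicit the second inequality in \eqref{witb 2}, which relies on the upper regularity of $\upsigma$: since $\S$ is a bounded smooth surface, there exists $C_\S>0$ such that $\upsigma(B_r(x_\S))\leq C_\S r^2$ for all $x_\S\in\S$ and $r>0$. Hence, for $t\neq s$,
\begin{equation}
\frac{1}{\epsilon^2|t-s|^2}\int_{|x_\S-y_\S|\leq4\epsilon|t-s|}|g_\epsilon(y_\S,s)|\,d\upsigma (y_\S)
\leq\frac{16\,C_\S}{\upsigma(B_{4\epsilon|t-s|}(x_\S))}\int_{B_{4\epsilon|t-s|}(x_\S)}|g_\epsilon(\cdot,s)|\,d\upsigma
\leq CM_*(g_\epsilon(\cdot,s))(x_\S).
\end{equation}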
For the case of $\widetilde{B}_{\epsilon,2}g(x_\S,t)$, we split the integral over $\S$ into dyadic annuli as follows. Set
\begin{equation}\label{witb 6}
\begin{split}
N:=\Big[\Big|\log_2\Big(\frac{{\rm diam}(\Omega_{\eta_0})}{\epsilon|t-s|}\Big)\Big|\Big]+1
\end{split}
\end{equation}
for $t\neq s$, where $[\,\cdot\,]$ denotes the integer part. Then, $2^N\epsilon|t-s|>{\rm diam}(\Omega_{\eta_0})$ and
\begin{equation}\label{witb 4}
\begin{split}
|\widetilde{B}_{\epsilon,2}g(x_\S,t)|
&\leq\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\sum_{n=2}^N
\int_{2^{n+1}\epsilon|t-s|\geq|x_\S-y_\S|>2^n\epsilon|t-s|}
\cdots\,\,d\upsigma (y_\S)\,ds,
\end{split}
\end{equation}
where ``$\cdots$'' means
$
\big|k(x_\S + \epsilon t \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))
-k(x_\S + \epsilon s \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))\big|
|g_\epsilon(y_\S,s)|.
$
By \eqref{witb 00},
\begin{equation}
\begin{split}
(1-\eta_0 L)|x_\S-y_\S|
&\leq|x_\S-y_\S|-\eta_0|\nu(x_\S)-\nu(y_\S)|\\
&\leq|x_\S+\epsilon s\nu(x_\S)-y_\S-\epsilon s\nu(y_\S)|\\
&\leq|x_\S-y_\S|+\eta_0|\nu(x_\S)-\nu(y_\S)|\leq(1+\eta_0 L)|x_\S-y_\S|,
\end{split}
\end{equation}
thus if we take $\eta_0\leq\frac{1}{2L}$ we get
\begin{equation}\label{witb 3}
\frac{1}{2}|x_\S-y_\S|
\leq|x_\S+\epsilon s\nu(x_\S)-y_\S-\epsilon s\nu(y_\S)|
\leq2|x_\S-y_\S|.
\end{equation}
Besides, for $2^{n+1}\epsilon|t-s|\geq|x_\S-y_\S|>2^n\epsilon|t-s|$, using \eqref{witb 3} we see that
\begin{equation}\label{witb 5}
\begin{split}
|x_\S + \epsilon t \nu (x_\S) -(x_\S + \epsilon s \nu (x_\S))|
&=\epsilon|t-s|<2^{-n}|x_\S-y_\S|\\
&\leq 2^{-n+1}|x_\S+\epsilon s\nu(x_\S)-y_\S-\epsilon s\nu(y_\S)|\\
&\leq\frac{1}{2}|x_\S+\epsilon s\nu(x_\S)-y_\S-\epsilon s\nu(y_\S)|
\end{split}
\end{equation}
for all $n=2,\ldots,N$. Therefore, combining \eqref{witb 5}, \eqref{Horm est} and \eqref{witb 3} we finally get
\begin{equation}
\begin{split}
\big|k(x_\S + \epsilon t \nu (x_\S) &-y_\S -\epsilon s \nu (y_\S))
-k(x_\S + \epsilon s \nu (x_\S) -y_\S -\epsilon s \nu (y_\S))\big|\\
&\leq C\frac{|x_\S + \epsilon t \nu (x_\S)-(x_\S + \epsilon s \nu (x_\S))|}
{|x_\S+\epsilon s\nu(x_\S)-y_\S-\epsilon s\nu(y_\S)|^3}
\leq\frac{C\epsilon|t-s|}{|x_\S-y_\S|^3}
<\frac{C}{2^{3n}\epsilon^2|t-s|^2}
\end{split}
\end{equation}
for all $s,t\in(-1,1)$, $0<\epsilon\leq\eta_0$, $n=2,\ldots,N$ and
$2^{n+1}\epsilon|t-s|\geq|x_\S-y_\S|>2^n\epsilon|t-s|$. Plugging this estimate into \eqref{witb 4} we obtain
\begin{equation}\label{witb 11}
\begin{split}
|\widetilde{B}_{\epsilon,2}g(x_\S,&t)|
\leq C\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\sum_{n=2}^N
\int_{2^{n+1}\epsilon|t-s|\geq|x_\S-y_\S|>2^n\epsilon|t-s|}
\frac{|g_\epsilon(y_\S,s)|}{2^{3n}\epsilon^2|t-s|^2}\,d\upsigma (y_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\sum_{n=2}^N\frac{1}{2^n}
\int_{|x_\S-y_\S|\leq2^{n+1}\epsilon|t-s|}
\frac{|g_\epsilon(y_\S,s)|}{(2^{n+1}\epsilon|t-s|)^2}\,d\upsigma (y_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\sum_{n=2}^\infty\!\frac{1}{2^n}
\int_{-1}^1M_*(g_\epsilon(\cdot,s))(x_\S)\,ds
\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S),
\end{split}
\end{equation}
where we used the Cauchy-Schwarz inequality and \eqref{witb 0} in the last inequality above.
Let us deal now with $\widetilde{B}_{\epsilon,3}g(x_\S,t)$. Since $0<\epsilon\leq\eta_0$ and $s\in(-1,1)$, if we take $\eta_0\leq\frac{1}{2L}$ as before, from \eqref{witb 00} we see that
\begin{equation}
\begin{split}
\big|\big(x_\S + \epsilon s(\nu(x_\S)-\nu (y_\S))\big)-x_\S\big|
=\epsilon |s||\nu(x_\S)-\nu (y_\S)|\leq\frac{1}{2}|x_\S-y_\S|,
\end{split}
\end{equation}
and then, by \eqref{Horm est},
\begin{equation}\label{witb 7}
\begin{split}
\big|k(x_\S + \epsilon s(\nu(x_\S)-\nu (y_\S)) -y_\S)
-k(x_\S-y_\S)\big|
\leq C\frac{\epsilon |s||\nu(x_\S)-\nu (y_\S)|}{|x_\S-y_\S|^3}
\leq \frac{C\epsilon}{|x_\S-y_\S|^2}.
\end{split}
\end{equation}
Splitting the integral which defines $\widetilde{B}_{\epsilon,3}g(x_\S,t)$ into dyadic annuli as in \eqref{witb 4}, and using \eqref{witb 7}, \eqref{witb 0} and \eqref{witb 6}, we get
\begin{equation}\label{witb 8}
\begin{split}
|\widetilde{B}_{\epsilon,3}g(x_\S,t)|
&\leq C\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\sum_{n=2}^N
\epsilon\int_{2^{n+1}\epsilon|t-s|\geq|x_\S-y_\S|>2^n\epsilon|t-s|}
\frac{|g_\epsilon(y_\S,s)|}{|x_\S-y_\S|^2}
\,d\upsigma (y_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\int_{-1}^1\epsilon\sum_{n=2}^N
M_*(g_\epsilon(\cdot,s))(x_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\int_{-1}^1\epsilon\,\Big|\log_2\Big(\frac{{\rm diam}(\Omega_{\eta_0})}{\epsilon|t-s|}\Big)\Big|
M_*(g(\cdot,s))(x_\S)\,ds.
\end{split}
\end{equation}
Note that
\begin{equation}\epsilon\,\Big|\log_2\Big(\frac{{\rm diam}(\Omega_{\eta_0})}{\epsilon|t-s|}\Big)\Big|
\leq \epsilon\big(C+|\log_2\epsilon|+|\log_2|t-s||\big)
\leq C\big(1+|\log_2|t-s||\big)\end{equation}
for all $0<\epsilon\leq\eta_0$,
where $C>0$ only depends on $\eta_0$. Hence, from \eqref{witb 8} and the Cauchy-Schwarz inequality, we obtain
\begin{equation}\label{witb 9}
\begin{split}
|\widetilde{B}_{\epsilon,3}g(x_\S,t)|
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\int_{-1}^1\big(1+|\log_2|t-s||\big)
M_*(g(\cdot,s))(x_\S)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\Big(\int_{-1}^1\big(1+|\log_2|t-s||\big)^2\,ds\Big)^{1/2}
\widetilde{M}_*g(x_\S)\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S),
\end{split}
\end{equation}
where we also used that $t\in(-1,1)$, so
$\int_{-1}^1\big(1+|\log_2|t-s||\big)^2\,ds\leq
C\big(1+\int_{0}^2|\log_2r|^2\,dr\big)<+\infty$, in the last inequality above.
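In fact, the last integral can be computed explicitly: using the antiderivative $\int(\ln r)^2\,dr=r\big((\ln r)^2-2\ln r+2\big)$ and that $r\big((\ln r)^2-2\ln r+2\big)\to0$ as $r\to0^+$, we get
\begin{equation}
\int_{0}^2|\log_2r|^2\,dr
=\frac{1}{(\ln2)^2}\int_0^2(\ln r)^2\,dr
=\frac{2\big((\ln2)^2-2\ln2+2\big)}{(\ln2)^2}<+\infty.
\end{equation}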
The term $|\widetilde{B}_{\epsilon,4}g(x_\S,t)|$ can be estimated using the maximal operator $\widetilde{T}_*$ as follows. Let $\lambda_1(y_\S)$ and $\lambda_2(y_\S)$ denote the eigenvalues of the Weingarten map $W(y_\S)$. By definition,
\begin{equation}
\begin{split}
g_\epsilon(y_\S,s)&=v(s)\det(1-\epsilon s W(y_\S)) g(y_\S ,s)\\
&=v(s)\big(1+\epsilon^2s^2\lambda_1(y_\S)\lambda_2(y_\S)-\epsilon s\lambda_1(y_\S)-\epsilon s\lambda_2(y_\S)\big) g(y_\S ,s).
\end{split}
\end{equation}
Therefore, the triangle inequality yields
\begin{equation}\label{witb 10}
\begin{split}
|\widetilde{B}_{\epsilon,4}&g(x_\S,t)|\leq \|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\int_{-1}^1\Big(T_*(g(\cdot,s))(x_\S)
+\eta_0^2T_*(\lambda_1\lambda_2g(\cdot,s))(x_\S)\\
&\hskip165pt+\eta_0 T_*(\lambda_1g(\cdot,s))(x_\S)
+\eta_0 T_*(\lambda_2g(\cdot,s))(x_\S)\Big)\,ds\\
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\big(\widetilde{T}_*g(x_\S)+\widetilde{T}_*(\lambda_1\lambda_2g)(x_\S)
+\widetilde{T}_*(\lambda_1g)(x_\S)+\widetilde{T}_*(\lambda_2g)(x_\S)\big).
\end{split}
\end{equation}
Combining \eqref{witb 1}, \eqref{witb 2}, \eqref{witb 11}, \eqref{witb 9} and \eqref{witb 10} and taking the supremum on $\epsilon$ we finally get that
\begin{equation}\label{witb 12}
\begin{split}
\sup_{0<\epsilon\leq\eta_0}|\widetilde{B}_\epsilon g(x_\S,t)|
&\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\big(\widetilde{M}_*g(x_\S)+\widetilde{T}_*g(x_\S)\\
&\hskip90pt+\widetilde{T}_*(\lambda_1\lambda_2g)(x_\S)
+\widetilde{T}_*(\lambda_1g)(x_\S)+\widetilde{T}_*(\lambda_2g)(x_\S)\big),
\end{split}
\end{equation}
where $C>0$ only depends on $\eta_0$. Define
\begin{equation}
\widetilde{B}_*g(x_\S,t):=\sup_{0<\epsilon\leq\eta_0}|\widetilde{B}_\epsilon g(x_\S,t)|\quad\text{ for $(x_\S,t)\in\S\times(-1,1)$.}
\end{equation}
Then, from \eqref{witb 12}, the boundedness of $\widetilde{M}_*$ and $\widetilde{T}_*$ from $L^2(\Sigma\times(-1,1))$ to $L^2(\Sigma)$ (see \eqref{max hardy sio}) and the fact that
$\|\lambda_1\|_{L^\infty(\S)}$ and
$\|\lambda_2\|_{L^\infty(\S)}$ are finite by Proposition \ref{weingarten map}, we easily conclude that there exists $C>0$ only depending on $\eta_0$ such that
\begin{equation}\label{witb 13}
\begin{split}
\|\widetilde{B}_*g\|_{L^2(\S\times(-1,1))}
\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))}.
\end{split}
\end{equation}
\subsection{$B_{\epsilon,\omega_3}\to B_{0,\omega_3}+B'$ in the strong sense when $\epsilon\to0$ and conclusion of the proof of \eqref{conv B th}}
\label{cpB}
\mbox{}
To begin this section, we present a standard result in harmonic analysis about the almost everywhere existence of the limit of a family of operators applied to a fixed function, and about its convergence in the strong sense. General statements can be found in
\cite[Theorem 2.2 and the remark below it]{duoandikoetxea} and
\cite[Proposition 6.2]{torchinsky}, for example.
For the sake of completeness, here we present a concrete version with its proof.
\begin{lemma}\label{Calderon lemma}
Let $b\in{\mathbb N}$ and $(X,\mu_X)$ and $(Y,\mu_Y)$ be two Borel measure spaces. Let $\{W_\epsilon\}_{0<\epsilon\leq\eta_0}$ be a family of bounded linear operators from $L^2(\mu_X)^b$ to $L^2(\mu_Y)^b$
such that, if
\begin{equation}
W_*g(y):=\sup_{0<\epsilon\leq\eta_0}|W_\epsilon g(y)|
\quad\text{for $g\in L^2(\mu_X)^b$ and $y\in Y$,}
\end{equation}
then $W_*:L^2(\mu_X)^b\to L^2(\mu_Y)$ is a bounded sublinear operator.
Suppose that for any $g\in S$, where $S\subset L^2(\mu_X)^b$ is a dense subspace, $\lim_{\epsilon\to0}W_\epsilon g(y)$ exists for $\mu_Y$-a.e. $y\in Y$. Then, for any $g\in L^2(\mu_X)^b$, $\lim_{\epsilon\to0}W_\epsilon g(y)$ exists for $\mu_Y$-a.e. $y\in Y$ and
\begin{equation}\label{abstract strong conv}
\lim_{\epsilon\to 0}\big\|W_\epsilon g-\lim_{\delta\to0}W_\delta g\big\|_{ L^2(\mu_Y)^b}=0.
\end{equation}
In particular, $\lim_{\epsilon\to0}W_\epsilon$ defines a bounded operator from $L^2(\mu_X)^b$ to $L^2(\mu_Y)^b$.
\end{lemma}
\begin{proof}
We start by proving that, for any $g\in L^2(\mu_X)^b$, $\lim_{\epsilon\to0}W_\epsilon g(y)$ exists for $\mu_Y$-a.e. $y\in Y$. Take $g_k\in S$ such that $\|g_k-g\|_{L^2(\mu_X)^b}\to0$ as $k\to\infty$, and fix $\lambda>0$. Since $\lim_{\epsilon\to0}W_\epsilon g_k(y)$ exists for $\mu_Y$-a.e. $y\in Y$, Chebyshev's inequality yields
\begin{equation}
\begin{split}
\mu_Y\Big(\Big\{y\in Y:&\,\Big|\limsup_{\epsilon\to0}W_\epsilon g(y)-\liminf_{\epsilon\to0}W_\epsilon g(y)\Big|>\lambda\Big\}\Big)\\
&\leq\mu_Y\Big(\Big\{y\in Y:\,\Big|\limsup_{\epsilon\to0}
W_\epsilon (g-g_k)(y)\Big|
+\Big|\liminf_{\epsilon\to0}W_\epsilon (g_k-g)(y)\Big|>\lambda\Big\}\Big)\\
&\leq\mu_Y(\{y\in Y:\,2W_* (g-g_k)(y)>\lambda\})\\
&\leq\frac{4}{\lambda^2}\,\|W_* (g-g_k)\|^2_{L^2(\mu_Y)}
\leq\frac{C}{\lambda^2}\,\|g-g_k\|^2_{L^2(\mu_X)^b}.
\end{split}
\end{equation}
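The first inclusion above follows from the pointwise bound, valid for $\mu_Y$-a.e. $y\in Y$,
\begin{equation}
\Big|\limsup_{\epsilon\to0}W_\epsilon g(y)-\liminf_{\epsilon\to0}W_\epsilon g(y)\Big|
\leq\Big|\limsup_{\epsilon\to0}W_\epsilon (g-g_k)(y)\Big|
+\Big|\liminf_{\epsilon\to0}W_\epsilon (g_k-g)(y)\Big|,
\end{equation}
which is obtained by writing $W_\epsilon g=W_\epsilon(g-g_k)+W_\epsilon g_k$ and using that the contribution of $\lim_{\epsilon\to0}W_\epsilon g_k(y)$ cancels when subtracting the $\liminf$ from the $\limsup$.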
Letting $k\to\infty$ we deduce that
\begin{equation}\mu_Y\Big(\Big\{y\in Y:\,\Big|\limsup_{\epsilon\to0}W_\epsilon g(y)-\liminf_{\epsilon\to0}W_\epsilon g(y)\Big|>\lambda\Big\}\Big)=0.\end{equation} Since this holds for all $\lambda>0$, we finally get that $\lim_{\epsilon\to0}W_\epsilon g(y)$ exists $\mu_Y$-a.e.
Note that $|W_\epsilon g(y)-\lim_{\delta\to0}W_\delta g(y)|
\leq 2W_*g(y)$ and $W_*g\in L^2(\mu_Y)$. Thus, \eqref{abstract strong conv} follows by the dominated convergence theorem. The last statement in the lemma is also a consequence of the boundedness of $W_*$.
\end{proof}
Thanks to Lemma \ref{Calderon lemma} and the results in Sections \ref{pointwise B} and \ref{meB}, we are ready to conclude the proof of
\eqref{conv B th}.
As we said before \eqref{point limit}, to obtain \eqref{conv B th} we only need to show that
$B_{\epsilon,\omega_3}\to B_{0,\omega_3}+B'$ in the strong sense when $\epsilon\to0$. From \eqref{point limit}, we know that
\begin{equation}
\lim_{\epsilon\to0}B_{\epsilon,\omega_3}g(x_\S,t)
=B_{0,\omega_3}g(x_\S,t)
+B'g(x_\S,t)\quad\text{for almost all }(x_\S,t)\in\S\times(-1,1)
\end{equation}
and all $g\in L^\infty(\S\times(-1,1))^4$ such that
$\sup_{|t|<1}|g(x_\S,t)-g(y_\S,t)|\leq C_g|x_\S-y_\S|$ for all $x_\S,\,y_\S\in\S$ and some constant $C_g>0$ depending on $g$. Note also that this set of functions $g$ is dense in $L^2(\S\times(-1,1))^4$. Besides, thanks to \eqref{witb 13} we see that, if $\eta_0>0$ is small enough and we set
\begin{equation}
B_{*,\omega_3}g(x_\S,t):=\sup_{0<\epsilon\leq\eta_0}
|B_{\epsilon,\omega_3}g(x_\S,t)|\quad\text{ for $(x_\S,t)\in\S\times(-1,1)$,}
\end{equation}
then there exists $C>0$ only depending on $\eta_0$ such that
\begin{equation}\label{remark eq1_}
\begin{split}
\|B_{*,\omega_3}g\|_{L^2(\S\times(-1,1))}
\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))^4}.
\end{split}
\end{equation}
Therefore, from Lemma \ref{Calderon lemma} we get that, for any $g\in L^2(\S\times(-1,1))^4$,
the pointwise limit $\lim_{\epsilon\to0}B_{\epsilon,\omega_3}g(x_\S,t)$ exists for almost every $(x_\S,t)\in\S\times(-1,1)$. Recall also that
$B_{0,\omega_3}+B'$ is bounded in $L^2(\S\times(-1,1))^4$ (see the comment before \eqref{witb 0bis} for $B_{0,\omega_3}$, the case of $B'$ is trivial),
so one can easily adapt the proof of Lemma \ref{Calderon lemma} to also show that, for any $g\in L^2(\S\times(-1,1))^4$,
\begin{equation}
\lim_{\epsilon\to0}B_{\epsilon,\omega_3}g(x_\S,t)
=B_{0,\omega_3}g(x_\S,t)
+B'g(x_\S,t)\quad\text{for almost all }(x_\S,t)\in\S\times(-1,1).
\end{equation}
Finally, \eqref{abstract strong conv} in Lemma \ref{Calderon lemma} yields
\begin{equation}
\lim_{\epsilon\to 0}\|(B_{\epsilon,\omega_3} -B_{0,\omega_3}-B')g\|_{L^2(\S\times(-1,1))^4}=0\quad\text{for all }g\in L^2(\S\times(-1,1))^4,
\end{equation}
which is the required strong convergence of $B_{\epsilon,\omega_3}$ to
$B_{0,\omega_3}+B'$. This finishes the proof of \eqref{conv B th}.
\section{Proof of \eqref{convergence A}: $A_\epsilon (a)\to A_0(a)$ in the strong sense when $\epsilon\to0$}\label{ss A}
Recall from \eqref{ABCepsilon} and \eqref{limit operators defi} that $A_\epsilon(a)$ with $0<\epsilon\leq\eta_0$ and $A_0(a)$ are defined by
\begin{equation}
\begin{split}
&(A_\epsilon(a)g)(x)=\int_{-1}^1\int_\Sigma\phi^a(x-y_\S - \epsilon s \nu (y_\S))v(s) \det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,\\
&(A_0(a)g)(x)=\int_{-1}^1\int_\Sigma\phi^a(x-y_\S)v(s)g(y_\S ,s)\,d\upsigma (y_\S)\,ds.
\end{split}
\end{equation}
We already know that $A_\epsilon(a)$ is bounded from $L^2(\Sigma\times(-1,1))^4$ to $L^2({\R}^3)^4$. To show the boundedness of $A_0(a)$ (and conclude the proof of \eqref{ABC espacios2}) just note that, by Fubini's theorem, for every $x\in{\R}^3\setminus\S$ we have
\begin{equation}
\begin{split}
(A_0(a)g)(x)=\int_\Sigma\phi^a(x-y_\S)\Big(\int_{-1}^1v(s)g(y_\S ,s)\,ds\Big)d\upsigma (y_\S),
\end{split}
\end{equation}
and $\int_{-1}^1v(s)g(\cdot,s)\,ds\in L^2(\S)^4$ if $g\in L^2(\S\times(-1,1))^4$. Since $a\in{\mathbb C}\setminus{\mathbb R}$,
\cite[Lemma 2.1]{amv1} shows that $A_0(a)$ is bounded from $L^2(\Sigma\times(-1,1))^4$ to $L^2({\R}^3)^4$.
We begin the proof of \eqref{convergence A} by splitting
\begin{equation}\label{0002A000}
A_\epsilon(a)g
=\chi_{{\R}^3\setminus\Omega_{\eta_0}}A_\epsilon(a)g
+\chi_{\Omega_{\eta_0}}A_\epsilon(a)g.
\end{equation}
Let us first treat the case of $\chi_{{\R}^3\setminus\Omega_{\eta_0}}A_\epsilon(a)$. As we said before, since $a\in {\mathbb C}\setminus{\mathbb R}$, the components of $\phi^a(x)$ decay exponentially as $|x|\to\infty$. In particular, there exist $C,r>0$ only depending on $a$ and $\eta_0$ such that
\begin{equation}\label{A strong convergence 1}
|\phi^a(x)|,|\partial\phi^a(x)|
\leq Ce^{-r|x|}\quad\text{for all }|x|\geq \frac{\eta_0}{2},
\end{equation}
where the left hand side of \eqref{A strong convergence 1} means the absolute value of any component of the matrix $\phi^a(x)$ and of any first order derivative of it, respectively.
Note that $\eta_0={\rm dist}({\R}^3\setminus\Omega_{\eta_0},\S)$. Hence, if $x\in{\R}^3\setminus\Omega_{\eta_0}$, $y_\S\in\S$, $0\leq\epsilon\leq\frac{\eta_0}{2}$ and $s\in(-1,1)$ then, for any $0\leq q\leq1$,
\begin{equation}\label{A strong convergence 2}
\begin{split}
|q(x-y_\S - \epsilon s \nu (y_\S))+(1-q)&(x-y_\S)|
=|x-y_\S-q\epsilon s \nu (y_\S)|\\
&\geq|x-y_\S|-q\epsilon |s|
\geq|x-y_\S|-\frac{\eta_0}{2}
\geq\frac{|x-y_\S|}{2}\geq\frac{\eta_0}{2}.
\end{split}
\end{equation}
Thus \eqref{A strong convergence 1} applies to
$[x,y_\S]_q:=q(x-y_\S - \epsilon s \nu (y_\S))+(1-q)(x-y_\S)$, and a combination of the mean value theorem and \eqref{A strong convergence 2} gives
\begin{equation}\label{correc9}
\begin{split}
|\phi^a(x-y_\S - \epsilon s \nu (y_\S))-\phi^a(x-y_\S)|
\leq\epsilon\max_{0\leq q\leq1}|\partial\phi^a([x,y_\S]_q)|
\leq C\epsilon e^{-\frac{r}{2}|x-y_\S|}.
\end{split}
\end{equation}
Set
$\widetilde{g_\epsilon}(y_\S,s):=\det(1-\epsilon s W(y_\S)) g(y_\S ,s).$
On the one hand, from \eqref{correc9}, Proposition \ref{weingarten map} and the Cauchy-Schwarz inequality, we get that
\begin{equation}
\begin{split}
\chi_{{\R}^3\setminus\Omega_{\eta_0}}(x)|
(A_\epsilon(a)&g)(x)-(A_0(a)\widetilde{g_\epsilon})(x)|\\
&\leq C\|v\|_{L^\infty({\mathbb R})}\chi_{{\R}^3\setminus\Omega_{\eta_0}}(x)\int_{-1}^1\int_\Sigma\epsilon e^{-\frac{r}{2}|x-y_\S|}|\widetilde{g_\epsilon}(y_\S ,s)|\,d\upsigma (y_\S)\,ds\\
&\leq C\epsilon\|v\|_{L^\infty({\mathbb R})}\|\widetilde{g_\epsilon}\|_{L^2(\S\times(-1,1))^4}
\chi_{{\R}^3\setminus\Omega_{\eta_0}}(x)
\Big(\int_\Sigma e^{-r|x-y_\S|}\,d\upsigma (y_\S)\Big)^{1/2}\\
&\leq C\epsilon\|v\|_{L^\infty({\mathbb R})}\|g\|_{L^2(\S\times(-1,1))^4}\xi(x),
\end{split}
\end{equation}
where
\begin{equation}
\xi(x):=\chi_{{\R}^3\setminus\Omega_{\eta_0}}(x)
\Big(\int_\Sigma e^{-r|x-y_\S|}\,d\upsigma (y_\S)\Big)^{1/2}.
\end{equation}
Since $\xi\in L^2({\R}^3)$ (because $\upsigma(\S)<+\infty$), we deduce that
\begin{equation}\label{correc10}
\begin{split}
\|\chi_{{\R}^3\setminus\Omega_{\eta_0}}(A_\epsilon(a)g-A_0(a)\widetilde{g_\epsilon})\|_{L^2({\R}^3)^4}
\leq C\epsilon\|v\|_{L^\infty({\mathbb R})}\|g\|_{L^2(\S\times(-1,1))^4}.
\end{split}
\end{equation}
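Here, the membership $\xi\in L^2({\R}^3)$ can be quantified: by Fubini's theorem and the elementary identity $\int_{{\R}^3}e^{-r|x|}\,dx=\frac{8\pi}{r^3}$,
\begin{equation}
\|\xi\|_{L^2({\R}^3)}^2
\leq\int_\Sigma\int_{{\R}^3}e^{-r|x-y_\S|}\,dx\,d\upsigma(y_\S)
=\frac{8\pi}{r^3}\,\upsigma(\Sigma)<+\infty.
\end{equation}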
On the other hand, by Proposition \ref{weingarten map} we have that
\begin{equation}
\begin{split}
|\widetilde{g_\epsilon}(y_\S,s)-{g}(y_\S,s)|
=\big|\!\det(1-\epsilon s W(y_\S))-1\big| |g(y_\S ,s)|
\leq C\epsilon |g(y_\S ,s)|.
\end{split}
\end{equation}
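For the reader's convenience, let us justify the last inequality. Expanding the determinant in terms of the eigenvalues $\lambda_1(y_\S)$ and $\lambda_2(y_\S)$ of $W(y_\S)$ and using that $|s|<1$ and $0<\epsilon\leq\eta_0$, we get
\begin{equation}
\big|\!\det(1-\epsilon s W(y_\S))-1\big|
=\big|\epsilon^2s^2\lambda_1(y_\S)\lambda_2(y_\S)
-\epsilon s\big(\lambda_1(y_\S)+\lambda_2(y_\S)\big)\big|
\leq C\epsilon,
\end{equation}
where $C:=\eta_0\|\lambda_1\lambda_2\|_{L^\infty(\S)}+\|\lambda_1\|_{L^\infty(\S)}+\|\lambda_2\|_{L^\infty(\S)}$ is finite by Proposition \ref{weingarten map}.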
This, together with the fact that $A_0(a)$ is bounded from $L^2(\Sigma\times(-1,1))^4$ to $L^2({\R}^3)^4$ (see above \eqref{0002A000}), implies that
\begin{equation}\label{correc11}
\begin{split}
\|\chi_{{\R}^3\setminus\Omega_{\eta_0}}A_0(a)
(\widetilde{g_\epsilon}-g)\|_{L^2({\R}^3)^4}
&\leq C\|v\|_{L^\infty({\mathbb R})}
\|\widetilde{g_\epsilon}-g\|_{L^2(\Sigma\times(-1,1))^4}\\
&\leq C\epsilon\|v\|_{L^\infty({\mathbb R})}\|g\|_{L^2(\S\times(-1,1))^4}.
\end{split}
\end{equation}
Using the triangle inequality, \eqref{correc10} and \eqref{correc11}, we finally get that
\begin{equation}\label{A exponential_}
\begin{split}
\|\chi_{{\R}^3\setminus\Omega_{\eta_0}}(A_\epsilon(a)-A_0(a))g\|_{L^2({\R}^3)^4}
\leq C\epsilon\|v\|_{L^\infty({\mathbb R})}\|g\|_{L^2(\S\times(-1,1))^4}
\end{split}
\end{equation}
for all $0\leq\epsilon\leq\frac{\eta_0}{2}$, where $C>0$ only depends on $a$ and $\eta_0$. In particular, this implies that
\begin{equation}\label{A exponential}
\lim_{\epsilon\to0}\|\chi_{{\R}^3\setminus\Omega_{\eta_0}}(A_\epsilon(a)-A_0(a))
\|_{L^2(\Sigma\times(-1,1))^4\to L^2({\R}^3)^4}=0.
\end{equation}
Let us deal now with $\chi_{\Omega_{\eta_0}}A_\epsilon(a)$. Consider the decomposition of $\phi^a$ given by \eqref{eqn:break phi}.
Then, as in \eqref{eqn:break phi2}, we write
\begin{equation}\label{0002A0}
\begin{split}
&A_\epsilon (a)=A_{\epsilon,\omega_1^a}+A_{\epsilon,\omega_2^a}+A_{\epsilon,\omega_3},\\
&A_0 (a)=A_{0,\omega_1^a}+A_{0,\omega_2^a}+A_{0,\omega_3},
\end{split}
\end{equation}
where $A_{\epsilon,\omega_1^a}$, $A_{\epsilon,\omega_2^a}$ and $A_{\epsilon,\omega_3}$ are defined as $A_\epsilon(a)$ but replacing $\phi^a$ by $\omega_1^a$, $\omega_2^a$ and $\omega_3$, respectively, and analogously for the case of $A_0(a)$.
For $j=1,2$, the arguments used to show \eqref{0002} in the case of
$B_{\epsilon,\omega_j^a}$ also apply to $\chi_{\Omega_{\eta_0}}A_{\epsilon,\omega_j^a}$, thus we now get
\begin{equation}\label{0002A}
\lim_{\epsilon\to 0}\|\chi_{\Omega_{\eta_0}}(A_{\epsilon,\omega_j^a}-A_{0,\omega_j^a})\|_{L^2(\Sigma\times(-1,1))^4\to
L^2({\R}^3)^4}=0\quad\text{for } j=1,2.
\end{equation}
It only remains to show the strong convergence of $\chi_{\Omega_{\eta_0}}A_{\epsilon,\omega_3}$. This case is treated similarly to what we did in Sections \ref{pointwise B}, \ref{meB} and \ref{cpB}, as follows.
\subsection{The pointwise limit of $A_{\epsilon,\omega_3}g(x)$ when $\epsilon\to0$ for $g\in L^2(\S\times(-1,1))^4$}\label{pointwise A}
\mbox{}
This case is much simpler than the one in Section \ref{pointwise B}.
Given a fixed $x\in{\R}^3\setminus{\Sigma}$, we can always find $\delta_x,C_x>0$ such that
\begin{equation}|x-y_\Sigma-\epsilon s \nu(y_\Sigma)|\geq C_x\quad\text{for all $y_\Sigma\in \Sigma$, $s\in (-1,1)$ and $0\leq\epsilon\leq\delta_x$.}\end{equation}
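For instance, one can take $\delta_x=C_x:=\frac{1}{2}\,{\rm dist}(x,\Sigma)$: for all $y_\Sigma\in\Sigma$, $s\in(-1,1)$ and $0\leq\epsilon\leq\delta_x$,
\begin{equation}
|x-y_\Sigma-\epsilon s \nu(y_\Sigma)|
\geq|x-y_\Sigma|-\epsilon|s|
\geq{\rm dist}(x,\Sigma)-\delta_x
=\tfrac{1}{2}\,{\rm dist}(x,\Sigma)=C_x.
\end{equation}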
In particular, for fixed $x\in{\R}^3\setminus{\Sigma}$ we have $|\omega_3(x-y_\Sigma-\epsilon s \nu(y_\Sigma))|\leq C$ uniformly on $y_\Sigma\in \Sigma$, $s\in (-1,1)$ and $0\leq\epsilon\leq\delta_x$, where $C>0$ depends on $x$. By Proposition \ref{weingarten map} and the dominated convergence theorem, given $g\in L^2(\S\times(-1,1))^4$, we have
\begin{equation}\label{0003A}
\lim_{\epsilon\to0}A_{\epsilon,\omega_3}g(x)
=A_{0,\omega_3}g(x)\quad\text{for $\mathcal{L}$-a.e. }x\in{\R}^3,
\end{equation}
where $\mathcal{L}$ denotes the Lebesgue measure in ${\mathbb R}^3$.
\subsection{A pointwise estimate of $\chi_{\Omega_{\eta_0}}(x)|
A_{\epsilon,\omega_3}g(x)|$ by maximal operators}\label{meA}
\mbox{}
Given $0\leq\epsilon\leq\frac{\eta_0}{4}$, we divide the study of $\chi_{\Omega_{\eta_0}}(x)A_{\epsilon,\omega_3}g(x)$ into two different cases, i.e. $x\in\Omega_{\eta_0}\setminus\Omega_{4\epsilon}$ and $x\in\Omega_{4\epsilon}$. As we did in Section \ref{meB}, we are going to work componentwise, that is, we consider ${\mathbb C}$-valued functions instead of ${\mathbb C}^4$-valued functions.
With this in mind, for $g\in L^2(\S\times(-1,1))$ we set
\begin{equation}
\begin{split}
\widetilde{A}_\epsilon g(x):=\int_{-1}^1\int_\Sigma k(x-y_\S - \epsilon s \nu (y_\S))v(s) \det(1-\epsilon s W(y_\S)) g(y_\S ,s)\,d\upsigma (y_\S)\,ds,
\end{split}
\end{equation}
where $k$ is given by \eqref{CZ kernel2}.
In what follows, we can always assume that $x\in{\R}^3\setminus{\Sigma}$ because $\mathcal{L}(\S)=0$. If $x\in\Omega_{4\epsilon}$, we can write $x=x_\S+\epsilon t\nu(x_\S)$ for some $t\in(-4,4)$, and then $\widetilde{A}_\epsilon g(x)$ coincides with $\widetilde{B}_\epsilon g(x_\S,t)$ (see \eqref{witb epsilon}) except for the term $u(t)$. Therefore, one can carry out all the arguments involved in the estimate of $\widetilde{B}_\epsilon g(x_\S,t)$ (that is, from \eqref{witb epsilon} to \eqref{witb 13}) with minor modifications to get the following result:
define
\begin{equation}\label{A strong convergence maximal estimate 1}
\widetilde{A}_*g(x_\S,t):=\sup_{0<\epsilon\leq\eta_0/4}|\widetilde{A}_\epsilon g(x_\S+\epsilon t\nu(x_\S))|\quad\text{ for $(x_\S,t)\in\S\times(-4,4)$}.
\end{equation}
Then, if $\eta_0$ is small enough, there exists $C>0$ only depending on $\eta_0$ such that
\begin{equation}\label{A strong convergence maximal estimate}
\begin{split}
\big\|\sup_{|t|< 4}\widetilde{A}_*g(\cdot,t)\big\|_{L^2(\S)}
\leq C\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))}\quad\text{for all $g\in L^2(\S\times(-1,1))$.}
\end{split}
\end{equation}
For the proof of \eqref{A strong convergence maximal estimate}, a remark is in order. The fact that in the present situation $t\in(-4,4)$ instead of $t\in(-1,1)$ (as in the definition of $\widetilde{B}_\epsilon g(x_\S,t)$ in \eqref{witb epsilon}) only affects the arguments used to get \eqref{witb 12} at the comment just below \eqref{witb 9}. Now one should use that $\int_0^5|\log_2 r|^2\,dr<+\infty$ to prove the estimate analogous to \eqref{witb 9} and to derive the counterpart of \eqref{witb 12}, that is,
\begin{equation}
\begin{split}
\widetilde{A}_* g(x_\S,t)
&\leq C\|v\|_{L^\infty({\mathbb R})}
\big(\widetilde{M}_*g(x_\S)+\widetilde{T}_*g(x_\S)
+\widetilde{T}_*(\lambda_1\lambda_2g)(x_\S)
+\widetilde{T}_*(\lambda_1g)(x_\S)+\widetilde{T}_*(\lambda_2g)(x_\S)\big)
\end{split}
\end{equation}
for all $(x_\S,t)\in\S\times(-4,4)$, where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the Weingarten map. Combining this estimate (whose right-hand side is independent of $t\in(-4,4)$), the boundedness of $\widetilde{M}_*$ and $\widetilde{T}_*$ from $L^2(\Sigma\times(-1,1))$ to $L^2(\Sigma)$ (see \eqref{max hardy sio}) and Proposition \ref{weingarten map}, we get \eqref{A strong convergence maximal estimate}.
Finally, thanks to \eqref{A strong convergence maximal estimate 1}, \eqref{eqn:coaera}, Proposition \ref{weingarten map} and \eqref{A strong convergence maximal estimate}, for $\eta_0$ small enough we conclude
\begin{equation}\label{cecece 1}
\begin{split}
\big\|\sup_{0\leq\epsilon\leq\eta_0/4}\chi_{\Omega_{4\epsilon}}
|\widetilde{A}_\epsilon g|\big\|_{L^2({\R}^3)}
&\leq\big\|\sup_{|t|<4}\widetilde{A}_*g(P_\S\cdot,t)\big\|_{L^2(\Omega_{\eta_0})}\\
&\leq C\big\|\sup_{|t|< 4}\widetilde{A}_*g(\cdot,t)\big\|_{L^2(\S)}
\leq C\|v\|_{L^\infty({\mathbb R})}\|g\|_{L^2(\S\times(-1,1))}.
\end{split}
\end{equation}
We now focus on $\chi_{\Omega_{\eta_0}\setminus\Omega_{4\epsilon}}
\widetilde{A}_\epsilon$ for $0\leq\epsilon\leq\frac{\eta_0}{4}$.
Similarly to what we did in \eqref{witb 1}, we set
\begin{equation}
g_\epsilon(y_\S,s):=v(s)\det(1-\epsilon s W(y_\S)) g(y_\S ,s)\qquad\text{(see \eqref{witb 0bis})}
\end{equation}
and we split
$\widetilde{A}_\epsilon g(x)=\widetilde{A}_{\epsilon,1} g(x)+\widetilde{A}_{\epsilon,2} g(x)
+\widetilde{A}_{\epsilon,3} g(x)+\widetilde{A}_{\epsilon,4} g(x)$, where
\begin{equation}
\begin{split}
&\widetilde{A}_{\epsilon,1} g(x):=\int_{-1}^1\int_{\S}
\big(k(x-y_\S - \epsilon s \nu (y_\S))-k(x-y_\S)\big)
g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds,\\
&\widetilde{A}_{\epsilon,2} g(x):=\int_{-1}^1\int_{|x_\S-y_\S|\leq4{\rm dist}(x,\S)}
k(x-y_\S)g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds,\\
&\widetilde{A}_{\epsilon,3} g(x):=\int_{-1}^1\int_{|x_\S-y_\S|>4{\rm dist}(x,\S)}
\big(k(x-y_\S)-k(x_\S-y_\S)\big)g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds,\\
&\widetilde{A}_{\epsilon,4} g(x):=\int_{-1}^1\int_{|x_\S-y_\S|>4{\rm dist}(x,\S)}
k(x_\S-y_\S)g_\epsilon(y_\S,s)\,d\upsigma (y_\S)\,ds.
\end{split}
\end{equation}
From now on we assume $x\in\Omega_{\eta_0}\setminus\Omega_{4\epsilon}$ and, as always, $y_\S\in\S$. Note that
\begin{equation}|(y_\S - \epsilon s \nu (y_\S))-y_\S|\leq\epsilon
\leq\frac{1}{4}\,{\rm dist}(x,\S)\leq\frac{1}{4}\,|x-y_\S|,\end{equation}
so \eqref{Horm est} gives
$|k(x-y_\S - \epsilon s \nu (y_\S))-k(x-y_\S)|\leq C\epsilon{|x-y_\S|^{-3}}.$
Furthermore, we have that $|x-y_\S|\geq C|x_\S-y_\S|$ for all $y_\S\in\S$ and some $C>0$ only depending on $\eta_0$.
We can split the integral on $\S$ which defines $\widetilde{A}_{\epsilon,1} g(x)$ in dyadic annuli as we did in \eqref{witb 4} (see also \eqref{witb 11})
to obtain
\begin{equation}\label{dedede 1}
\begin{split}
|&\widetilde{A}_{\epsilon,1} g(x)|
\leq
C\int_{-1}^1\int_{|x_\S-y_\S|<{\rm dist}(x,\S)}
\frac{\epsilon|g_\epsilon(y_\S,s)|}{{\rm dist}(x,\S)^3}\,d\upsigma (y_\S)\,ds\\
&\qquad\qquad\quad+C\int_{-1}^1\sum_{n=0}^\infty
\int_{2^{n}{\rm dist}(x,\S)<|x_\S-y_\S|\leq2^{n+1}{\rm dist}(x,\S)}
\frac{\epsilon|g_\epsilon(y_\S,s)|}{|x-y_\S|^3}\,d\upsigma (y_\S)\,ds\\
&\,\,\leq C\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S)+
C\int_{-1}^1\sum_{n=0}^\infty\frac{1}{2^n}
\int_{|x_\S-y_\S|\leq2^{n+1}{\rm dist}(x,\S)}
\frac{|g_\epsilon(y_\S,s)|}{(2^{n}{\rm dist}(x,\S))^2}\,d\upsigma (y_\S)\,ds\\
&\,\,\leq C\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S)
+C\sum_{n=0}^\infty\!\frac{1}{2^n}
\int_{-1}^1M_*(g_\epsilon(\cdot,s))(x_\S)\,ds
\leq C\|v\|_{L^\infty({\mathbb R})}
\widetilde{M}_*g(x_\S).
\end{split}
\end{equation}
Using that $|k(x-y_\S)|\leq C|x-y_\S|^{-2}\leq C{\rm dist}(x,\S)^{-2}$ by \eqref{Horm est}, it is easy to show that
\begin{equation}\label{dedede 2}
|\widetilde{A}_{\epsilon,2} g(x)|\leq C\|v\|_{L^\infty({\mathbb R})}\widetilde{M}_*g(x_\S).
\end{equation}
Since ${\rm dist}(x,\S)=|x-x_\S|$, the same arguments as in \eqref{dedede 1} yield
\begin{equation}\label{dedede 3}
|\widetilde{A}_{\epsilon,3} g(x)|\leq C\|v\|_{L^\infty({\mathbb R})}\widetilde{M}_*g(x_\S).
\end{equation}
Finally, the same arguments as in \eqref{witb 10} show that
\begin{equation}\label{dedede 4}
\begin{split}
|\widetilde{A}_{\epsilon,4}g(x)|
\leq C\|v\|_{L^\infty({\mathbb R})}
\big(\widetilde{T}_*g(x_\S)+\widetilde{T}_*(\lambda_1\lambda_2g)(x_\S)
+\widetilde{T}_*(\lambda_1g)(x_\S)+\widetilde{T}_*(\lambda_2g)(x_\S)\big).
\end{split}
\end{equation}
Therefore, thanks to \eqref{dedede 1}, \eqref{dedede 2}, \eqref{dedede 3} and \eqref{dedede 4} we conclude that
\begin{equation}
\begin{split}
\sup_{0\leq\epsilon\leq{\eta_0/4}}
\chi_{\Omega_{\eta_0}\setminus\Omega_{4\epsilon}}(x)
|\widetilde{A}_\epsilon g(x)|
&\leq C\|v\|_{L^\infty({\mathbb R})}
\big(\widetilde{M}_*g(x_\S)+\widetilde{T}_*g(x_\S)\\
&\quad+\widetilde{T}_*(\lambda_1\lambda_2g)(x_\S)
+\widetilde{T}_*(\lambda_1g)(x_\S)+\widetilde{T}_*(\lambda_2g)(x_\S)\big),
\end{split}
\end{equation}
and then, similarly to what we did in \eqref{cecece 1}, a combination of \eqref{max hardy sio} and Proposition \ref{weingarten map} gives
\begin{equation}\label{cecece 2}
\begin{split}
\big\|\sup_{0\leq\epsilon\leq{\eta_0/4}}
\chi_{\Omega_{\eta_0}\setminus\Omega_{4\epsilon}}
|\widetilde{A}_\epsilon g|\big\|_{L^2({\R}^3)}
&\leq C\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))}.
\end{split}
\end{equation}
Finally, combining \eqref{cecece 1} and \eqref{cecece 2} we get that, if $\eta_0>0$ is small enough, then
\begin{equation}\label{wita **}
\begin{split}
\big\|\sup_{0\leq\epsilon\leq{\eta_0/4}}
\chi_{\Omega_{\eta_0}}|\widetilde{A}_\epsilon g|\big\|_{L^2({\R}^3)}
&\leq C\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))},
\end{split}
\end{equation}
where $C>0$ only depends on $\eta_0$.
\subsection{$A_{\epsilon,\omega_3}\to A_{0,\omega_3}$ in the strong sense when $\epsilon\to0$ and conclusion of the proof of \eqref{convergence A}}
\mbox{}
It only remains to put all the pieces together. Although the proof follows more or less the same lines as the one in Section \ref{cpB}, in this case things are easier. Namely, now we don't need to appeal to Lemma \ref{Calderon lemma} because the dominated convergence theorem suffices (the developments in Section \ref{pointwise A} hold for all $g\in L^2(\S\times(-1,1))^4$, not only for a dense subspace like in Section \ref{pointwise B}).
Working component by component and using \eqref{wita **} we see that, if we set
\begin{equation}A_{*,\omega_3}g(x):=
\sup_{0\leq\epsilon\leq{\eta_0/4}}
|A_{\epsilon,\omega_3}g(x)|\quad\text{ for $x\in{\R}^3\setminus\S$,}\end{equation}
then there exists $C>0$ only depending on $\eta_0>0$ (being $\eta_0$ small enough) such that
\begin{equation}\label{___}
\begin{split}
\|\chi_{\Omega_{\eta_0}}A_{*,\omega_3}g\|_{L^2({\R}^3)^4}
\leq C\|v\|_{L^\infty({\mathbb R})}
\|g\|_{L^2(\S\times(-1,1))^4}.
\end{split}
\end{equation}
Moreover, given $g\in L^2(\S\times(-1,1))^4$, in \eqref{0003A} we showed that $\lim_{\epsilon\to 0}A_{\epsilon,\omega_3}g(x)=A_{0,\omega_3}g(x)$ for $\mathcal{L}$-a.e. $x\in{\R}^3$. Thus \eqref{___} and the dominated convergence theorem show that
\begin{equation}\label{0002A00}
\lim_{\epsilon\to0}\|\chi_{\Omega_{\eta_0}}(A_{\epsilon,\omega_3}-A_{0,\omega_3})g\|_{L^2({\R}^3)^4}=0.
\end{equation}
Then, combining \eqref{0002A000}, \eqref{0002A0}, \eqref{A exponential}, \eqref{0002A} and \eqref{0002A00}, we conclude that
\begin{equation}
\begin{split}
\lim_{\epsilon\to0}\|(A_\epsilon(a)-A_0(a))g\|_{L^2({\R}^3)^4}^2
\leq\lim_{\epsilon\to0}\Big(&
\|\chi_{{\R}^3\setminus\Omega_{\eta_0}}(A_\epsilon(a)-A_0(a))g\|_{L^2({\R}^3)^4}^2\\
&+\|\chi_{\Omega_{\eta_0}}(A_{\epsilon,\omega_1^a}-A_{0,\omega_1^a})g\|_{L^2({\R}^3)^4}^2\\
&+\|\chi_{\Omega_{\eta_0}}(A_{\epsilon,\omega_2^a}-A_{0,\omega_2^a})g\|_{L^2({\R}^3)^4}^2\\
&+\|\chi_{\Omega_{\eta_0}}(A_{\epsilon,\omega_3}-A_{0,\omega_3})g\|_{L^2({\R}^3)^4}^2\Big)
=0
\end{split}
\end{equation}
for all $g\in L^2(\S\times(-1,1))^4$. This is precisely \eqref{convergence A}.
\section{Proof of Corollary \ref{convergence main}}\label{s proof corol}
We first prove an auxiliary result.
\begin{lemma}\label{REM}
Let $a\in{\mathbb C}\setminus{\mathbb R}$ and $\eta_0>0$ be such that \eqref{C^2 domain properties} holds for all $0<\epsilon\leq\eta_0$. If $\eta_0$ is small enough, then for any $0<\eta\leq\eta_0$ and $V\in L^\infty({\mathbb R})$ with ${\rm supp} V\subset[-\eta,\eta]$ we have that
\begin{equation}
\begin{split}
&\|A_\epsilon(a)\|_{L^2(\Sigma\times (-1,1))^4\to L^2({\R}^3)^4},\\
&\|B_\epsilon(a)\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4},\\
&\|C_\epsilon(a)\|_{L^2({\R}^3)^4\to L^2(\Sigma\times (-1,1))^4}
\end{split}
\end{equation}
are uniformly bounded for all $0\leq\epsilon\leq\eta_0$, with bounds that only depend on $a$, $\eta_0$ and $V$.
Furthermore, if $\eta_0$ is small enough there exists $\delta>0$ only depending on $\eta_0$ such that
\begin{equation}\label{proof colo 4}
\|B_\epsilon(a)\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}\leq\frac{1}{3}
\end{equation}
for all $|a|\leq1$, $0\leq\epsilon\leq{\eta_0}$, $0<\eta\leq{\eta_0}$ and all $(\delta,\eta)$-small $V$.
\end{lemma}
\begin{proof}
The first statement in the lemma comes as a byproduct of the developments carried out in Sections \ref{ss C}, \ref{ss B} and \ref{ss A};
see \eqref{unif estimate Cepsilon} for the case of $C_\epsilon(a)$, \eqref{remark eq1_} and the paragraph which contains \eqref{0002} for $B_\epsilon(a)$, and \eqref{A exponential_}, \eqref{0002A} and \eqref{___} for $A_\epsilon(a)$.
We should stress that these developments are valid for any $V\in L^\infty({\mathbb R})$ with ${\rm supp} V\subset[-\eta,\eta]$, where $0<\eta\leq{\eta_0}$, hence the $(\delta,\eta)$-smallness assumption on $V$ in Theorem \ref{Main theorem} is only required to prove the explicit bound in the second part of the lemma, which will yield the strong convergence of $(1+B_\epsilon(a))^{-1}$ and $(\beta+B_\epsilon(a))^{-1}$ to $(1+B_0(a)+B')^{-1}$ and $(\beta+B_0(a)+B')^{-1}$, respectively, in Corollary \ref{convergence main}.
Recall the decomposition
\begin{equation}\label{correc8}
B_\epsilon (a)=B_{\epsilon,\omega_1^a}+B_{\epsilon,\omega_2^a}+B_{\epsilon,\omega_3}
\end{equation}
given by \eqref{eqn:break phi2}. Thanks to \eqref{remark eq1_}, there exists $C_0>0$ only depending on $\eta_0$ such that
\begin{equation}\label{proof colo 1}
\begin{split}
\|B_{\epsilon,\omega_3}\|_{L^2(\S\times(-1,1))^4
\to L^2(\S\times(-1,1))^4}
\leq C_0\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\quad\text{for all }0<\epsilon\leq\eta_0.
\end{split}
\end{equation}
The comments in the paragraph which contains \eqref{0002} and an inspection of the proof of \cite[Lemma 3.4]{approximation} show that
there also exists $C_1>0$ only depending on $\eta_0$ such that, for any $|a|\leq1$ and $j=1,2$,
\begin{equation}\label{proof colo 2}
\begin{split}
\|B_{\epsilon,\omega_j^a}\|_{L^2(\S\times(-1,1))^4
\to L^2(\S\times(-1,1))^4}
\leq C_1\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\quad\text{for all }0<\epsilon\leq\eta_0.
\end{split}
\end{equation}
Note that the kernel defining $B_{\epsilon,\omega_2^a}$ is given by
\begin{equation}\omega_2^a(x)=\frac{e^{-\sqrt{m^2-a^2}|x|}-1}{4 \pi}\,i\alpha\cdot\frac{x}{|x|^3},\quad\text{so
$|\omega_2^a(x)|=O\Big(\frac{\sqrt{|m^2-a^2|}}{|x|}\Big)$
for $|x|\to0$.}\end{equation}
Therefore, the kernel is of fractional type with respect to $\upsigma$, but the estimate blows up as $|a|\to\infty$. This is the reason why we restrict ourselves to $|a|\leq1$ in \eqref{proof colo 2}, where we have a uniform bound with respect to $a$. However, for proving Theorem \ref{Main theorem}, one fixed $a\in{\mathbb C}\setminus{\mathbb R}$ suffices, say $a=i$ (see \eqref{main eq*1} and \eqref{main eq*2}).
From \eqref{correc8}, \eqref{proof colo 1} and \eqref{proof colo 2}, we derive that
\begin{equation}\label{proof colo 3}
\begin{split}
\|B_{\epsilon}(a)\|_{L^2(\S\times(-1,1))^4
\to L^2(\S\times(-1,1))^4}
\leq (C_0+2C_1)\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
\quad\text{for all }0<\epsilon\leq\eta_0.
\end{split}
\end{equation}
If $V$ is $(\delta,\eta)$-small (see Definition \ref{deltasmall}) then
$\|V\|_{L^\infty({\mathbb R})}\leq\frac{\delta}{\eta}$, so \eqref{eq u,v} yields
\begin{equation}
\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}
=\eta\|V\|_{L^\infty({\mathbb R})}\leq\delta.
\end{equation}
Taking $\delta>0$ small enough so that
$(C_0+2C_1)\delta\leq\frac{1}{3}$, from \eqref{proof colo 3} we finally get \eqref{proof colo 4} for all $0<\epsilon\leq\eta_0$. The case of $B_0(a)$ follows similarly, just recall the paragraph previous to \eqref{witb 0bis} taking into account that the dependence of the norm of $B_0(a)$ with respect to $\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}$ is the same as in the case of $0<\epsilon\leq\eta_0$.
\end{proof}
\subsection{Proof of Corollary \ref{convergence main}}
\mbox{}
We are going to prove the corollary for $(H+\mathbf{V}_{\!\epsilon}-a)^{-1}$; the case of $(H+\beta\mathbf{V}_{\!\epsilon}-a)^{-1}$ follows by the same arguments. Let $\eta_0,\,\delta>0$ be as in Lemma \ref{REM} and take $a\in{\mathbb C}\setminus{\mathbb R}$ with $|a|\leq 1$. It is straightforward to show that
\begin{equation}\|B'\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}
\leq C\|u\|_{L^\infty({\mathbb R})}\|v\|_{L^\infty({\mathbb R})}\end{equation}
for some $C>0$ only depending on $\S$.
Using \eqref{eq u,v}, we can take a smaller $\delta>0$ so that, for any $(\delta,\eta)$-small $V$ with $0<\eta\leq\eta_0$,
\begin{equation}
\|B'\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}\leq C\delta\leq\frac{1}{3}.
\end{equation}
Then, from this and \eqref{proof colo 4} in Lemma \ref{REM} (with $\epsilon=0$) we deduce that
\begin{equation}
\begin{split}
\|(1+B_0(a)+B')g\|_{L^2(\Sigma\times(-1,1))^4}
&\geq\|g\|_{L^2(\Sigma\times(-1,1))^4}
-\|(B_0(a)+B')g\|_{L^2(\Sigma\times(-1,1))^4}\\
&\geq\frac{1}{3}\|g\|_{L^2(\Sigma\times(-1,1))^4}
\end{split}
\end{equation}
for all $g\in L^2(\Sigma\times(-1,1))^4$. Therefore, $1+B_0(a)+B'$ is invertible and
\begin{equation}\label{opop eq1}
\|(1+B_0(a)+B')^{-1}\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}\leq 3.
\end{equation}
This justifies the last comment in the corollary.
Similar considerations also apply to $1+B_\epsilon(a)$, so in this case we deduce that
\begin{equation}\label{opop eq2}
\|(1+B_\epsilon(a))^{-1}\|_{L^2(\Sigma\times(-1,1))^4\to L^2(\Sigma\times(-1,1))^4}\leq \frac{3}{2}
\end{equation}
for all $0<\epsilon\leq\eta_0$.
Note also that
\begin{equation}\label{opop eq3}
\begin{split}
(1+B_\epsilon(a))^{-1}-(1&+B_0(a)+B')^{-1}\\
&=(1+B_\epsilon(a))^{-1}(B_0(a)+B'-B_\epsilon(a))
(1+B_0(a)+B')^{-1}.
\end{split}
\end{equation}
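The resolvent identity above, together with the Neumann-series bound behind \eqref{opop eq2}, can be illustrated on a finite-dimensional toy model. The following sketch uses random matrices of norm $1/3$ as stand-ins for the operators (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
I = np.eye(n)

def with_norm(A, r):
    # rescale A so that its operator (spectral) norm equals r
    return A * (r / np.linalg.norm(A, 2))

# stand-ins for B_eps(a) and B_0(a) + B', both of norm 1/3
B_eps = with_norm(rng.standard_normal((n, n)), 1 / 3)
B_0p = with_norm(rng.standard_normal((n, n)), 1 / 3)

inv_eps = np.linalg.inv(I + B_eps)
inv_0p = np.linalg.inv(I + B_0p)

# second resolvent identity:
# (1+B_eps)^{-1} - (1+B_0+B')^{-1}
#     = (1+B_eps)^{-1} (B_0 + B' - B_eps) (1+B_0+B')^{-1}
lhs = inv_eps - inv_0p
rhs = inv_eps @ (B_0p - B_eps) @ inv_0p
assert np.allclose(lhs, rhs)

# Neumann bound: ||(1+B)^{-1}|| <= 1/(1 - ||B||) = 3/2 when ||B|| = 1/3
assert np.linalg.norm(inv_eps, 2) <= 1.5 + 1e-9
```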
Given $g\in L^2(\Sigma\times(-1,1))^4$, set
$f=(1+B_0(a)+B')^{-1}g\in L^2(\Sigma\times(-1,1))^4$. Then,
by \eqref{opop eq3} and \eqref{opop eq2}, we see that
\begin{equation}\label{opop eq4}
\begin{split}
\big\|\big((1+B_\epsilon(a))^{-1}-(1+&B_0(a)+B')^{-1}\big)g\big\|_{L^2(\Sigma\times(-1,1))^4}\\
&= \|(1+B_\epsilon(a))^{-1}(B_0(a)+B'-B_\epsilon(a))
f\|_{L^2(\Sigma\times(-1,1))^4}\\
&\leq \frac{3}{2}\,\|(B_0(a)+B'-B_\epsilon(a))
f\|_{L^2(\Sigma\times(-1,1))^4}.
\end{split}
\end{equation}
By \eqref{conv B th} in Theorem \ref{conv AB th}, the right hand side of \eqref{opop eq4} converges to zero when $\epsilon\to0$. Therefore, we deduce that $(1+B_\epsilon(a))^{-1}$ converges strongly to $(1+B_0(a)+B')^{-1}$ when $\epsilon\to0$. Since the composition of strongly convergent operators is strongly convergent, using \eqref{resolvent formula 2} and Theorem \ref{conv AB th}, we finally obtain the desired strong convergence
\begin{equation}(H+\mathbf{V}_{\!\epsilon}-a)^{-1}\to
(H-a)^{-1}+A_0(a)\big(1+B_0(a)+B'\big)^{-1}C_0(a)\quad\text{when }\epsilon\to0.\end{equation} Corollary \ref{convergence main} is finally proved.
\printbibliography
\end{document}
Today String Theory is a promising candidate for the quantum theory of gravity. Since it requires more than four dimensions for its internal consistency, there has been growing interest in higher dimensional solutions and, in particular, in higher dimensional black holes (see e.g. \cite{Emparan:2008eg}).
The higher-dimensional generalizations of the stationary axisymmetric Kerr black holes were found by Myers and Perry \cite{Myers:1986un},
who already anticipated the existence of higher-dimensional black rings.
In 2001 Emparan and Reall \cite{Emparan:2001wn} then found such black rings in five dimensions. They rotate in the direction of the ring and possess a horizon topology of $S^1 \times S^2$.
The phase diagram of these black rings together with the Myers-Perry black holes showed that the uniqueness of 4-dimensional vacuum black holes does not generalize to higher dimensions.
In Myers-Perry black hole spacetimes the geodesic equations are separable \cite{Kubiznak:2006kt,Page:2006ka,Frolov:2006pe}.
However, the geodesic equations of five dimensional black rings do not seem to be separable, in general. Nevertheless, it is possible to separate the equations of motion on the rotational axis (which is actually a plane), in the equatorial plane and in the case $E=m=0$ (which is only possible in the ergosphere) \cite{Hoskisson:2007zk,Durkee:2008an}.
Hoskisson \cite{Hoskisson:2007zk} studied the geodesic motion of a singly spinning black ring, discussed the separability of the Hamilton-Jacobi equation and provided numerical solutions. He analyzed the motion on the rotational axis and in the equatorial plane numerically in detail. Some aspects of null geodesics in the equatorial plane were studied by Elvang, Emparan and Virmani \cite{Elvang:2006dd}.
The Pomeransky-Sen'kov doubly spinning black ring \cite{Pomeransky:2006bd} was studied by Durkee \cite{Durkee:2008an}. He showed that it is possible to separate the Hamilton-Jacobi equation of the doubly spinning black ring in the case $E=m=0$ and analyzed the effective potential on the two axes (planes) of rotation and in the case $E=m=0$. The zero energy null geodesics of the singly spinning dipole black ring were analyzed by Armas \cite{Armas:2010pw}. In \cite{Igata:2010ye} Igata, Ishihara and Takamori concentrated on stable bound orbits in the singly spinning black ring spacetime, which they found numerically on and near the rotational axis.
So far the equations of motion for test particles in black ring spacetimes have only been solved numerically, and no analytic solutions have been given.
The first to solve the geodesic equations in a black hole spacetime analytically was Hagihara \cite{Hagihara:1931}. He presented the solution of the geodesic equation for test particles in the Schwarzschild spacetime in terms of the elliptic Weierstra{\ss} $\wp$ function.
When adding the cosmological constant to the Schwarzschild metric one encounters hyperelliptic curves in the geodesic equations. The equations of motion in Schwarzschild-(anti) de Sitter spacetimes in four dimensions were solved in \cite{Hackmann:2008zz}. Analytical solutions of the geodesic equations were also found in higher dimensional Myers-Perry spacetime \cite{Enolski:2010if} as well as in higher dimensional Schwarzschild, Schwarzschild-(anti) de Sitter, Reissner-Nordstr\"om and Reissner-Nordstr\"om-(anti) de Sitter spacetimes \cite{Hackmann:2008tu}.
The mathematical method is based on the Jacobi inversion problem. The solution can be found if the problem is restricted to the Theta-divisor, the set of zeros of the theta function. Enolski, Pronine and Richter developed this method in 2003 to solve the problem of the double pendulum \cite{Enolski:2003}.
In this paper we present analytical solutions of the equations of motion of a singly spinning black ring. In the case $E=m=0$ and in the equatorial plane the equations are of elliptic type; on the rotational axis, however, the geodesic equations are of hyperelliptic type.
\section{Singly Spinning Black Ring Spacetime}
The singly spinning black ring solution can be written in the form \cite{Durkee:2008an}
\begin{equation}
\mathrm{d}s^2 = -\frac{H(y)}{H(x)}(\mathrm{d}t+\Omega_\psi \mathrm{d}\psi)^2 + \frac{R^2H(x)}{(x-y)^2} \left[ \frac{G(x)}{H(x)}\mathrm{d}\phi^2 + \frac{\mathrm{d}x^2}{G(x)} - \frac{G(y)}{H(y)}\mathrm{d}\psi^2 - \frac{\mathrm{d}y^2}{G(y)} \right] .
\end{equation}
The metric is given in toroidal coordinates (see Fig.~\ref{pic:ringcoord}), where $-1 \leq x \leq 1$, $-\infty < y \leq -1$ and $-\infty < t < \infty$. $\phi$ and $\psi$ are $2\pi$-periodic. The metric functions are
\begin{equation}
\begin{split}
G(x) &= (1-x^2)(1+\lambda x), \\
H(x) &= 1+2x\lambda + \lambda ^2 , \\
\Omega _\psi \mathrm{d}\psi &= -CR\frac{1+y}{H(y)} \mathrm{d}\psi , \quad \mathrm{where} \quad C^2\equiv 2\lambda ^2\frac{(1+\lambda)^3}{1-\lambda}.
\end{split}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{ringcoord.eps}
\caption{Toroidal coordinates (or ring coordinates) on a cross section at constant angles $\phi$ and $\psi$. Solid circles correspond to $y=\mathrm{const.}$ and dashed circles correspond to $x=\mathrm{const.}$ \cite{Emparan:2006mm}}
\label{pic:ringcoord}
\end{figure}
The parameters $\lambda$ and $R$ describe the shape, mass and angular momentum of the ring. $\lambda$ lies in the range $0\leq \lambda <1$ to ensure the black ring is balanced.
A spacelike curvature singularity is located at $y=-\infty$. The metric has a coordinate singularity at $G(y)=0$, so the event horizon lies at $y_h=-\frac{1}{\lambda}$. The ergosphere of the singly spinning black ring is determined by $H(y)=0$, which is at $y_e=-\frac{1+\lambda^2}{2\lambda}$. Since we have $y_h<y_e<-1$, it is clear that an ergoregion does exist. The topology of the event horizon and the ergosphere is $S^1\times S^2$.\\
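As a quick numerical sanity check of the formulas above (a sketch; the sample values of $\lambda$ are arbitrary), one can verify that $y_h$ and $y_e$ are indeed roots of $G$ and $H$, and that the ordering $y_h<y_e<-1$ holds for balanced rings:

```python
import numpy as np

def G(y, lam):
    return (1 - y**2) * (1 + lam * y)

def H(y, lam):
    return 1 + 2 * y * lam + lam**2

for lam in (0.1, 0.5, 0.9):            # balanced rings: 0 < lambda < 1
    y_h = -1 / lam                      # horizon: root of G(y) with y < -1
    y_e = -(1 + lam**2) / (2 * lam)     # ergosurface: root of H(y)
    assert abs(G(y_h, lam)) < 1e-10
    assert abs(H(y_e, lam)) < 1e-12
    assert y_h < y_e < -1               # ergoregion lies outside the horizon
```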
The inverse metric is
\begin{equation}
\left(\frac{\partial}{\partial s}\right) ^2 = -\frac{H(x)}{H(y)} \left(\frac{\partial}{\partial t}\right) ^2 + \frac{(x-y)^2}{R^2H(x)}\left[ G(x) \left(\frac{\partial}{\partial x}\right) ^2 - G(y) \left(\frac{\partial}{\partial y}\right) ^2 + \frac{H(x)}{G(x)}\left(\frac{\partial}{\partial \phi}\right) ^2 - \frac{H(y)}{G(y)}\left( \frac{\partial}{\partial \psi} -\Omega_\psi \frac{\partial}{\partial t} \right)^2 \right] \, .
\end{equation}
The singly spinning black ring metric and its Hamiltonian $\mathscr{H} = \frac{1}{2}g^{ab}p_ap_b$ do not depend on the coordinates $t$, $\phi$ and $\psi$, so we have three conserved momenta $p_a=g_{ab}\dot{x}^b$ with the associated Killing vector fields $\partial / \partial t$, $\partial / \partial \phi$ and $\partial / \partial \psi$. A dot denotes the derivative with respect to an affine parameter $\tau$.
\begin{eqnarray}
-p_t &=& \frac{H(y)}{H(x)} (\dot{t}+\Omega_\psi\dot{\psi}) \equiv E \label{eqn:t-impuls}\\
p_\phi &=& \frac{R^2G(x)}{(x-y)^2}\dot{\phi} \equiv \Phi \label{eqn:phi-impuls}\\
p_\psi &=& -\Omega _\psi E - \frac{R^2H(x)G(y)}{H(y)(x-y)^2}\dot{\psi} \equiv \Psi \, .\label{eqn:psi-impuls}
\end{eqnarray}
$E$ is the energy, $\Phi$ and $\Psi$ are the angular momenta in $\phi$- and $\psi$-direction. The conjugate momenta in $x$- and $y$-direction are:
\begin{eqnarray}
p_x &=& \frac{R^2H(x)}{(x-y)^2G(x)}\dot{x} \label{eqn:x-impuls}\\
p_y &=& -\frac{R^2H(x)}{(x-y)^2G(y)} \dot{y}\label{eqn:y-impuls}
\end{eqnarray}
To obtain the equations of motion for a particle in the singly spinning black ring spacetime we need the Hamilton-Jacobi equation:
\begin{equation}
\frac{\partial S}{\partial \tau} + \mathscr{H} \left( x^a, \frac{\partial S}{\partial x^b}\right) = 0 \, .
\label{eqn:hamilton-jacobi}
\end{equation}
We already have three constants of motion ($E$, $\Phi$ and $\Psi$) and the mass shell condition $g^{ab}p_a p_b=-m^2$ gives us a fourth, so we can make the ansatz
\begin{equation}
S(\tau, t, x, y, \phi, \psi) = \frac{1}{2}m^2\tau -Et +\Phi\phi +\Psi\psi + S_x(x)+S_y(y) .
\label{eqn:s-ansatz}
\end{equation}
Inserting this ansatz into (\ref{eqn:hamilton-jacobi}) gives
\begin{equation}
0 = m^2 - \frac{H(x)}{H(y)}E^2 + \frac{(x-y)^2}{R^2H(x)}\left[ G(x) \left(\frac{\partial S}{\partial x}\right)^2 -G(y) \left(\frac{\partial S}{\partial y}\right)^2 + \frac{H(x)}{G(x)}\Phi^2 - \frac{H(y)}{G(y)}(\Psi + \Omega_\psi E)^2\right] \, .
\label{eqn:hjd-sring}
\end{equation}
The Hamilton-Jacobi equation does not seem to be separable in general. However, it is possible to separate the equation in the special case $E=m=0$. These zero energy null geodesics are only realisable in the ergoregion.
We can also obtain equations of motion for geodesics on the $\phi$- and $\psi$-axis by setting $x=\pm 1$ ($\phi$-axis) or $y=-1$ ($\psi$-axis). The plane $x=\pm 1$, which is called the $\phi$-axis, is the equatorial plane of the black ring. The plane $y=-1$, which is called the $\psi$-axis, corresponds to the rotational axis of the singly spinning black ring.\\
In the next sections we will study these special cases and solve the corresponding equations of motion analytically.
\section{Null Geodesics in the Ergosphere}
For $E=m=0$ it is possible to separate the Hamilton-Jacobi equation:
\begin{equation}
G(x) \left(\frac{\partial S}{\partial x}\right)^2 + \frac{H(x)}{G(x)}\Phi^2 = G(y) \left(\frac{\partial S}{\partial y}\right)^2 + \frac{H(y)}{G(y)}\Psi^2 \,.
\label{eqn:ham-jac-sing}
\end{equation}
With a separation constant $c$, the equation (\ref{eqn:ham-jac-sing}) splits into two:
\begin{eqnarray}
G^2(x)\left(\frac{\partial S}{\partial x}\right)^2 &=& cG(x) -\Phi^2H(x) := X(x) \qquad \mathrm{and} \qquad \\
G^2(y)\left(\frac{\partial S}{\partial y}\right)^2 &=& cG(y) -\Psi^2H(y) := Y(y),
\end{eqnarray}
so that
\begin{equation}
S =\Phi\phi +\Psi\psi + \int \! \sqrt{X(x)} \, \mathrm{d}x + \int \! \sqrt{Y(y)} \, \mathrm{d}y .
\end{equation}
Using $p_a = \frac{\partial S}{\partial x^a}$ and (\ref{eqn:t-impuls})-(\ref{eqn:y-impuls}) the separated Hamilton-Jacobi equation gives the equations of motion:
\begin{eqnarray}
\frac{\mathrm{d} x}{\mathrm{d} \gamma} &=& \sqrt{X(x)} \label{eqn:sing-x-gleichung} \\
\frac{\mathrm{d} y}{\mathrm{d} \gamma} &=& -\sqrt{Y(y)} \label{eqn:sing-y-gleichung} \\
\frac{\mathrm{d} \phi}{\mathrm{d} \gamma} &=& \frac{H(x)\Phi}{G(x)} \label{eqn:sing-phi-gleichung} \\
\frac{\mathrm{d} \psi}{\mathrm{d} \gamma} &=& -\frac{H(y)\Psi}{G(y)} \label{eqn:sing-psi-gleichung} \\
\frac{\mathrm{d} t}{\mathrm{d} \gamma} &=& -\frac{CR(1+y)\Psi}{G(y)} \label{eqn:sing-t-gleichung}
\end{eqnarray}
where we have introduced the Mino-time \cite{Mino:2003yg} $\mathrm{d}\gamma= \frac{(x-y)^2}{R^2H(x)} \mathrm{d}\tau$.
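By construction, both sides of the separated Hamilton-Jacobi equation reduce to the separation constant $c$, since $G(\partial S/\partial x)^2+\frac{H}{G}\Phi^2=(X+\Phi^2H)/G=c$ and likewise in $y$. This can be verified numerically (a sketch; the parameter values are arbitrary samples, and the $y$-points are taken between the horizon at $y=-2$ and $y=-1$):

```python
import numpy as np

lam, c, Phi, Psi = 0.5, 1.0, 0.5, 5.0   # arbitrary sample values

def G(z): return (1 - z**2) * (1 + lam * z)
def H(z): return 1 + 2 * z * lam + lam**2

def X(x): return c * G(x) - Phi**2 * H(x)
def Y(y): return c * G(y) - Psi**2 * H(y)

# G (dS/dx)^2 + (H/G) Phi^2 = (X + Phi^2 H)/G = c, and likewise in y
for x in np.linspace(-0.9, 0.9, 7):
    assert np.isclose(X(x) / G(x) + H(x) * Phi**2 / G(x), c)
for y in np.linspace(-1.9, -1.2, 7):    # between the horizon (y=-2) and y=-1
    assert np.isclose(Y(y) / G(y) + H(y) * Psi**2 / G(y), c)
```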
\subsection{Classification of geodesics}
Equations (\ref{eqn:sing-x-gleichung}) and (\ref{eqn:sing-y-gleichung}) can be written as
\begin{eqnarray}
\left( \frac{\text{d}x}{\text{d}\gamma} \right) ^2 + U(x) &=& 0 \qquad \mathrm{where} \qquad U(x)=\Phi^2H(x)-cG(x) \qquad \mathrm{and} \\
\left( \frac{\text{d}y}{\text{d}\gamma} \right) ^2 + V(y) &=& 0 \qquad \mathrm{where} \qquad V(y)=\Psi^2H(y)-cG(y).
\end{eqnarray}
$U(x)$ and $V(y)$ can be regarded as effective potentials (see \cite{Durkee:2008an}). To get real solutions for the $x$- and $y$-equation the effective potentials have to be negative. So $c\geq 0$ is required, because $H(x)\geq 0$ for $0\leq \lambda <1$ and $-1\leq x \leq 1$. The zeros of the effective potentials, and hence of $X$ and $Y$, mark the turning points of the motion of light or a test particle (in this case we only have light since $m=0$). A good way to determine the number of zeros is to use parametric diagrams. Figure \ref{pic:sing-parameter} shows a parametric $\Phi$-$\lambda$-$c$-diagram for $U(x)$. It turns out that $X(x)$ has either two real zeros in the allowed range of $x$ or none. One zero is always negative, while the other can be positive or negative.
If $X(x)$ has two zeros, the $x$-motion takes place between these two values; if $X(x)$ has a single zero at $x=0$, the $x$-motion stays constant.
$Y(y)$ and accordingly $V(y)$ determine the type of the orbit. If $\lambda =0$ and $c\geq\Psi^2$, $Y(y)$ has no real zeros. Otherwise $Y(y)$ always has one real zero in the allowed range of $y$. That means the only possible orbit is a Terminating Orbit (TO), where light crosses the horizon and falls into the singularity.
See figure \ref{pic:sing-potential} for examples of the effective potentials.
\begin{figure}
\centering
\includegraphics[width=8cm]{parameterplot3d.eps}
\caption{Three dimensional parametric $c$-$\lambda$-$\Phi$-diagram for the singly spinning black ring in the case $E=m=0$. Inside the structure $X(x)$ has two real zeros, outside the structure it has no real zeros in the allowed range of $x$.}
\label{pic:sing-parameter}
\end{figure}
\begin{figure}
\centering
\subfigure[Effective potential $U(x)$ for $\Phi =0.5$. There is one positive and one negative zero.]{
\includegraphics[width=4.7cm]{pot_u-lambda0.5-phi0.5.eps}
\label{pic:sing-potential1}
}
\subfigure[Effective potential $U(x)$ for $\Phi =0.9$. There are two negative zeros.]{
\includegraphics[width=4.7cm]{pot_u-lambda0.5-phi0.9.eps}
\label{pic:sing-potential2}
}
\subfigure[Effective potential $V(y)$ for $\Psi =5$. The point indicates the position of the turning point and the red horizontal dashed line shows the range of a terminating orbit. The horizon is marked by a vertical dashed line.]{
\includegraphics[width=4.7cm]{pot_v-lambda0.5-psi5.eps}
\label{pic:sing-potential3}
}
\caption{$c=1$, $\lambda=0.5$: Effective potentials for the singly spinning black ring in the case $E=m=0$.}
\label{pic:sing-potential}
\end{figure}
\subsection{Solution of the $x$-equation}
\label{sec:ergo-xsol}
Equation (\ref{eqn:sing-x-gleichung}) can be written as
\begin{equation}
\left(\frac{\mathrm{d}x}{\mathrm{d}\gamma} \right) ^2 = X(x) = b_{x,3}x^3 + b_{x,2}x^2 + b_{x,1}x + b_{x,0},
\end{equation}
where $X$ is a polynomial of third order with the coefficients
\begin{eqnarray}
b_{x,3} &=& -c\lambda \nonumber\\
b_{x,2} &=& -c \nonumber\\
b_{x,1} &=& \lambda (c-2\Phi^2) \nonumber\\
b_{x,0} &=& c-\Phi^2(1+\lambda^2) .
\end{eqnarray}
The substitution $x=\frac{1}{b_{x,3}}\left( 4v-\frac{b_{x,2}}{3}\right) $ transforms the polynomial into the standard Weierstra{\ss} form
\begin{equation}
\left( \frac{\mathrm{d}v}{\mathrm{d}\gamma} \right) ^ 2= 4v^3 - g_{x,2} v -g_{x,3} :=P_{x,3}(v) ,
\label{eqn:sing-weierstrass-form}
\end{equation}
where
\begin{equation}
g_{x,2}=\frac{b_{x,2}^2}{12} - \frac{b_{x,1} b_{x,3}}{4} \qquad \mathrm{and} \qquad
g_{x,3}=\frac{b_{x,1} b_{x,2} b_{x,3}}{48}-\frac{b_{x,0} b_{x,3}^2}{16}-\frac{b_{x,2}^3}{216} \ .
\end{equation}
Equation (\ref{eqn:sing-weierstrass-form}) is of elliptic type and is solved by the Weierstra{\ss} elliptic function \cite{Markushevich:1967}
\begin{equation}
v(\gamma )=\wp (\gamma - \gamma '_{\rm in},g_{x,2},g_{x,3}) \, ,
\end{equation}
where $\gamma' _{\rm in} = \gamma _{\rm in} + \int _{v_{x,\rm in}}^\infty \! \frac{\mathrm{d}v'}{\sqrt{4v'^3 - g_{x,2} v' -g_{x,3}}}$ and $v_{x,\rm in}=\frac{1}{4} \left( b_{x,3}x_{\rm in}+\frac{b_{x,2}}{3}\right) $.
Then the solution of (\ref{eqn:sing-x-gleichung}) takes the form
\begin{equation}
x (\gamma)=\frac{1}{b_{x,3}}\left[ 4\wp (\gamma - \gamma '_{\rm in},g_{x,2},g_{x,3}) -\frac{b_{x,2}}{3} \right] .
\end{equation}
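The substitution leading to the Weierstra{\ss} form can be checked numerically: since $\frac{\mathrm{d}x}{\mathrm{d}\gamma}=\frac{4}{b_{x,3}}\frac{\mathrm{d}v}{\mathrm{d}\gamma}$, the quantity $\frac{b_{x,3}^2}{16}X(x(v))$ must equal the cubic $4v^3-g_{x,2}v-g_{x,3}$ identically. A short sketch with the parameters $c=1$, $\lambda=0.5$, $\Phi=0.5$ of the figures:

```python
import numpy as np

lam, c, Phi = 0.5, 1.0, 0.5
b3, b2 = -c * lam, -c
b1, b0 = lam * (c - 2 * Phi**2), c - Phi**2 * (1 + lam**2)

# Weierstrass invariants of the cubic
g2 = b2**2 / 12 - b1 * b3 / 4
g3 = b1 * b2 * b3 / 48 - b0 * b3**2 / 16 - b2**3 / 216

X = lambda x: b3 * x**3 + b2 * x**2 + b1 * x + b0
for v in np.linspace(-1.0, 1.0, 9):
    x = (4 * v - b2 / 3) / b3
    # dx/dgamma = (4/b3) dv/dgamma, so (dv/dgamma)^2 = (b3/4)^2 X(x)
    assert abs(b3**2 / 16 * X(x) - (4 * v**3 - g2 * v - g3)) < 1e-12
print(g2, g3)
```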
\subsection{Solution of the $y$-equation}
Equation (\ref{eqn:sing-y-gleichung}) can be written as
\begin{equation}
\left(\frac{\mathrm{d}y}{\mathrm{d}\gamma} \right) ^2 =Y(y) = b_{y,3}y^3 + b_{y,2}y^2 + b_{y,1}y + b_{y,0},
\end{equation}
where $Y$ is a polynomial of third order with the coefficients
\begin{eqnarray}
b_{y,3} &=& -c\lambda \nonumber\\
b_{y,2} &=& -c \nonumber\\
b_{y,1} &=& \lambda (c-2\Psi^2) \nonumber\\
b_{y,0} &=& c-\Psi^2(1+\lambda^2) .
\end{eqnarray}
The problem can be solved analogously to the $x$-equation. Here the solution is
\begin{equation}
y (\gamma)=\frac{1}{b_{y,3}}\left[ 4\wp (\gamma - \gamma ''_{\rm in},g_{y,2},g_{y,3}) -\frac{b_{y,2}}{3} \right] ,
\end{equation}
where $\gamma'' _{\rm in} = \gamma _{\rm in} - \int _{v_{y,\rm in}}^\infty \! \frac{\mathrm{d}v'}{\sqrt{4v'^3 - g_{y,2} v' -g_{y,3}}} $ and $v_{y,\rm in}=\frac{1}{4} \left( b_{y,3}y_{\rm in}+\frac{b_{y,2}}{3}\right) $.
\subsection{Solution of the $\phi$-equation}
\label{sec:ergo-phisol}
Using (\ref{eqn:sing-x-gleichung}) the equation (\ref{eqn:sing-phi-gleichung}) becomes
\begin{equation}
\mathrm{d}\phi = \frac{H(x)\Phi}{G(x)}\frac{\mathrm{d}x}{\sqrt{X(x)}} \qquad \mathrm{or}
\end{equation}
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int_{x_{\rm in}}^x \! \frac{H(x')}{G(x')} \, \frac{\mathrm{d}x'}{\sqrt{X(x')}} \, .
\end{equation}
We substitute $x=\frac{1}{b_{x,3}}\left( 4u -\frac{b_{x,2}}{3}\right)$ to transform $X(x)$ into the Weierstra{\ss} form $P_{x,3}$ (see (\ref{eqn:sing-weierstrass-form})):
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int_{u_{\rm in}}^u \! \frac{H\left( \frac{1}{b_{x,3}}\left( 4u' -\frac{b_{x,2}}{3}\right)\right)}{G\left( \frac{1}{b_{x,3}}\left( 4u' -\frac{b_{x,2}}{3}\right)\right)} \, \frac{\mathrm{d}u'}{\sqrt{P_{x,3}(u')}}
\label{eqn:sing-Ix}
\end{equation}
In terms of the new variable $u$, $G$ has the zeros $p_{1,2}=\pm \frac{b_{x,3}}{4}+\frac{b_{x,2}}{12}$ and $p_3=-\frac{b_{x,3}}{4\lambda}+\frac{b_{x,2}}{12}$.
We next apply a partial fractions decomposition upon equation (\ref{eqn:sing-Ix}):
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int^u_{u_{\rm in}} \sum^3_{j=1}\frac{H_j}{u'-p_j} \frac{\mathrm{d}u'}{\sqrt{P_{x,3}(u')}}
\label{eqn:sing-Ix-partial}
\end{equation}
The $H_j$ are constants which arise from the partial fractions decomposition and depend on the parameters of the metric and the test particle. Then we substitute $u = \wp (v, g_{x,2}, g_{x,3})$ with $\wp^\prime(v)=\sqrt{4 \wp^3(v)-g_{x,2}\wp(v)-g_{x,3}}$. Equation (\ref{eqn:sing-Ix-partial}) now simplifies to
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int^v_{v_{\rm in}} \sum^3_{j=1}\frac{H_j}{\wp(v')-p_j} \, \mathrm{d}v'
\end{equation}
with $v=v(\gamma)=\gamma-\gamma^\prime_{\rm in}$ and $v_{\rm in}=v(\gamma_{\rm in})$.
After solving the integrals of the third kind (see e.g. \cite{Enolski:2011id}), the final solution reads
\begin{equation}
\phi (\gamma) = \Phi \sum^3_{j=1} \frac{H_j}{\wp^\prime_x(v_{j})}\Biggl( 2\zeta_x(v_{j})(v-v_{\rm in}) + \log\frac{\sigma_x(v-v_{j})}{\sigma_x(v_{\rm in}-v_{j})} - \log\frac{\sigma_x(v+v_{j})}{\sigma_x(v_{\rm in}+v_{j})} \Biggr) + \phi _{\rm in}
\end{equation}
with $p_j=\wp(v_j)$. The index $x$ refers to the Weierstra{\ss}-functions with respect to the parameters $g_{x,2}$ and $g_{x,3}$.
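The structure of the partial-fraction constants can be illustrated with a small numerical sketch. Working in the original variable $x$ instead of $u$ (so the residues below differ from the $H_j$ above by the constant factors of the substitution), and assuming the metric functions $G(x)=(1-x^2)(1+\lambda x)$ and $H(x)=1+\lambda^2+2\lambda x$ (inferred here from the coefficients $b_{x,i}$, since their definitions are not repeated in this section), the decomposition of $H/G$ is just a residue computation:

```python
import numpy as np

lam = 0.5
H = lambda x: 1 + lam**2 + 2 * lam * x
G = lambda x: (1 - x**2) * (1 + lam * x)
dG = lambda x: -2 * x * (1 + lam * x) + lam * (1 - x**2)  # G'(x)

poles = np.array([1.0, -1.0, -1.0 / lam])   # simple zeros of G
res = H(poles) / dG(poles)                  # residues of H/G at its poles

# the residue sum reproduces H/G away from the poles
for x in (0.3, -0.4, 2.5):
    assert abs(np.sum(res / (x - poles)) - H(x) / G(x)) < 1e-12
print(res)
```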
\subsection{Solution of the $\psi$-equation}
\label{sec:ergo-psisol}
Using (\ref{eqn:sing-y-gleichung}) equation (\ref{eqn:sing-psi-gleichung}) becomes
\begin{equation}
\mathrm{d}\psi = \frac{H(y)\Psi}{G(y)}\frac{\mathrm{d}y}{\sqrt{Y(y)}} \qquad \mathrm{or}
\end{equation}
\begin{equation}
\psi - \psi_{\rm in} = \Psi \int_{y_{\rm in}}^y \! \frac{H(y')}{G(y')} \, \frac{\mathrm{d}y'}{\sqrt{Y(y')}} \, .
\end{equation}
The $\psi$-equation can be solved analogously to the $\phi$-equation. With $v=v(\gamma)=\gamma-\gamma''_{\rm in}$, $v_{\rm in}=v(\gamma_{\rm in})$ and $p_j=\wp(v_j)$ the solution is
\begin{equation}
\psi (\gamma) = \Psi \sum^3_{j=1} \frac{K_j}{\wp^\prime_y(v_{j})}\Biggl( 2\zeta_y(v_{j})(v-v_{\rm in}) + \log\frac{\sigma_y(v-v_{j})}{\sigma_y(v_{\rm in}-v_{j})} - \log\frac{\sigma_y(v+v_{j})}{\sigma_y(v_{\rm in}+v_{j})} \Biggr) + \psi _{\rm in} \, .
\end{equation}
The $K_j$ are constants which arise from the partial fractions decomposition and depend on the parameters of the metric and the test particle. The index $y$ refers to the Weierstra{\ss}-functions with respect to the parameters $g_{y,2}$ and $g_{y,3}$.
\subsection{Solution of the $t$-equation}
Using (\ref{eqn:sing-y-gleichung}) we can write (\ref{eqn:sing-t-gleichung}) as
\begin{equation}
\mathrm{d}t = CR\Psi\frac{1+y}{G(y)} \frac{\mathrm{d}y}{\sqrt{Y(y)}} = CR\Psi\frac{1}{(1-y)(1+\lambda y)} \frac{\mathrm{d}y}{\sqrt{Y(y)}} \qquad \mathrm{or}
\end{equation}
\begin{equation}
t - t_{\rm in} = CR\Psi \int_{y_{\rm in}}^y \! \frac{1}{(1-y')(1+\lambda y')} \, \frac{\mathrm{d}y'}{\sqrt{Y(y')}} \, .
\label{eqn:sing-em0-tint}
\end{equation}
The $t$-equation can be solved in an analogous way to the $\phi$- and $\psi$-equation. We substitute $y=\frac{1}{b_{y,3}}\left( 4u -\frac{b_{y,2}}{3}\right)$. The integral (\ref{eqn:sing-em0-tint}) is of the third kind and has the poles $p_1=\frac{b_{y,3}}{4}+\frac{b_{y,2}}{12}$ and $p_2=-\frac{b_{y,3}}{4\lambda}+\frac{b_{y,2}}{12}$ (with respect to $u$). Then we apply a partial fractions decomposition, in which the constants $M_j$ arise, and substitute again $u = \wp (v, g_{y,2}, g_{y,3})$. After solving the resulting elliptic integrals of the third kind, the solution of (\ref{eqn:sing-t-gleichung}) reads
\begin{equation}
t (\gamma) = CR\Psi \sum^2_{j=1} \frac{M_j}{\wp^\prime_y(v_{j})}\Biggl( 2\zeta_y(v_{j})(v-v_{\rm in}) + \log\frac{\sigma_y(v-v_{j})}{\sigma_y(v_{\rm in}-v_{j})} - \log\frac{\sigma_y(v+v_{j})}{\sigma_y(v_{\rm in}+v_{j})} \Biggr) + t _{\rm in} \, .
\end{equation}
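The partial fractions step can be checked directly. In the original variable $y$, before the substitution to $u$ (so the constants below are related to, but not identical with, the $M_j$ above), the decomposition reads $\frac{1}{(1-y)(1+\lambda y)}=\frac{1}{1+\lambda}\frac{1}{1-y}+\frac{\lambda}{1+\lambda}\frac{1}{1+\lambda y}$, as a quick numerical sketch confirms:

```python
lam = 0.5
A = 1.0 / (1.0 + lam)        # coefficient of 1/(1 - y)
B = lam / (1.0 + lam)        # coefficient of 1/(1 + lam*y)
for y in (-3.0, -1.5, 0.5):  # sample points away from the poles
    lhs = 1.0 / ((1.0 - y) * (1.0 + lam * y))
    assert abs(A / (1.0 - y) + B / (1.0 + lam * y) - lhs) < 1e-12
print(A, B)
```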
\subsection{The Orbits}
\label{sec:em0-orbit}
In the ergosphere of a singly spinning black ring only TOs with $E=m=0$ are possible. Figure \ref{pic:sing-to} shows a TO in the $x$-$y$-plane, plotted in Cartesian coordinates ($a$, $b$).
To change from the ring coordinates to the polar coordinates ($\rho$, $\theta$) the transformation
\begin{equation}
\rho=\frac{R\sqrt{y^2-x^2}}{x-y}\, ,\quad \tan \theta =\sqrt{\frac{y^2-1}{1-x^2}}
\end{equation}
is used. Then conventional Cartesian coordinates take the form
\begin{equation}
a = \rho\sin\theta \, ,\quad b = \rho\cos\theta
\end{equation}
(see \cite{Hoskisson:2007zk} or \cite{Lim:2008}). The singularity of the black ring is located at $a=\pm R$, $b=0$, i.e.\ at $a=\pm 1$, $b=0$ in units where $R=1$.
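For plotting, the transformation chain from ring coordinates to ($a$, $b$) can be coded up directly (a short sketch with $R=1$ as in the figures). Sending $y\to-\infty$ confirms that the ring singularity is approached at $a=\pm R$, $b=0$:

```python
import numpy as np

def ring_to_cartesian(x, y, R=1.0):
    """Ring coordinates (-1 <= x <= 1, y <= -1) -> plot coordinates (a, b)."""
    rho = R * np.sqrt(y**2 - x**2) / (x - y)
    theta = np.arctan(np.sqrt((y**2 - 1.0) / (1.0 - x**2)))
    return rho * np.sin(theta), rho * np.cos(theta)

# approaching the ring singularity: y -> -infinity gives (a, b) -> (R, 0)
a, b = ring_to_cartesian(0.3, -1.0e8)
print(a, b)
```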
\begin{figure}[h!]
\centering
\includegraphics[width=10cm]{sing-to.eps}
\caption{$c=1$, $\lambda=0.5$, $\Phi=0.5$ and $\Psi=5$\newline
$a$-$b$-plot of a TO for the singly spinning black ring in the case $E=m=0$. The black dashed circles are the event horizon and the red dotted circles denote the ergosphere.}
\label{pic:sing-to}
\end{figure}
\section{Geodesics on the rotational axis}
The surface $y=-1$ is the axis of rotation of the singly spinning black ring. Here the Hamilton-Jacobi equation depends on the coordinate $x$ only. We set $y=-1$, $\Psi =0$ and $p_y=\frac{\partial S}{\partial y}=0$ in the Hamilton-Jacobi equation (\ref{eqn:hjd-sring}):
\begin{equation}
0 = m^2 - \frac{H(x)}{(1-\lambda)^2}E^2+\frac{(x+1)^2}{R^2H(x)}\left[ G(x) \left(\frac{\partial S}{\partial x}\right) ^2 + \frac{H(x)}{G(x)}\Phi^2\right] \, .
\end{equation}
This can be rearranged to
\begin{equation}
\left(\frac{\partial S}{\partial x}\right) ^2 = \frac{R^2H(x)}{(x+1)^2G(x)}\left[\frac{H(x)}{(1-\lambda)^2}E^2-m^2 \right] - \frac{H(x)}{G^2(x)}\Phi^2 := X_S .
\end{equation}
Then we have
\begin{equation}
S=\frac{1}{2}m^2\tau -Et+\Phi\phi + \int\! \sqrt{X_S} \, \mathrm{d}x \, .
\end{equation}
Now we set the partial derivatives of $S$ with respect to the constants $m^2$, $E$ and $\Phi$ to zero in order to obtain the equations of motion.
With the Mino-time \cite{Mino:2003yg} $\mathrm{d}\gamma=\frac{x+1}{R^2H(x)}\mathrm{d}\tau$ the equations of motion take the form
\begin{eqnarray}
\frac{\mathrm{d}x}{\mathrm{d}\gamma} &=& \left\lbrace R^2H(x)G(x)\left( H(x)\frac{E^2}{(1-\lambda)^2}-m^2 \right) -\Phi^2 H(x) (x+1)^2 \right\rbrace ^{1/2} \nonumber\\
&:=& \sqrt{X(x)} \label{eqn:sing-psi-x-gleichung} \, ,\\
\frac{\mathrm{d}\phi}{\mathrm{d}\gamma} &=& \Phi\frac{(x+1) H(x)}{G(x)} \label{eqn:sing-psi-phi-gleichung} \, ,\\
\frac{\mathrm{d}t}{\mathrm{d}\gamma} &=&\frac{R^2EH^2(x)}{(x+1)(1-\lambda)^2} \label{eqn:sing-psi-t-gleichung} \, ,
\end{eqnarray}
where $X(x)$ is a polynomial of fifth order.
\subsection{Classification of geodesics}
From (\ref{eqn:sing-psi-x-gleichung}) we can read off the effective potential consisting of two parts $U_+(x)$ and $U_-(x)$ (to be consistent with the effective potential on the equatorial plane later on):
\begin{equation}
X=a(x)(E-U_+)(E-U_-) \, .
\end{equation}
Since $X(x)$ can be written as $X(x)=a(x)E^2+b(x)$ the effective potential takes the form
\begin{equation}
U_\pm (x) = \pm \sqrt{-\frac{b(x)}{a(x)}},
\end{equation}
where $a(x)=\frac{R^2H^2(x)G(x)}{(1-\lambda)^2}$ and $b(x)=-R^2H(x)G(x)m^2-\Phi^2H(x)(x+1)^2$.
Figure \ref{pic:s-psi-orbits1} shows the effective potential for the motion on the $\psi$ axis. $U_+$ is plotted in red (solid) while $U_-$ is plotted in blue (dotted). The grey area between the two parts of the potential is a forbidden zone where no motion is possible because $X(x)$ becomes negative there. $U_+$ and $U_-$ are symmetric ($U_+=-U_-$) and meet at the horizon.
$x=-1$ is always a zero of $X(x)$, but since the point $x=-1$, $y=-1$ corresponds to infinity in Cartesian coordinates, it is not a real turning point of the test particle. If $\Phi=0$ then $X(x)$ has the zeros $x=-1$ and $x=+1$ and possibly a third zero between $-1$ and $+1$. The coordinate range of $x$ ($-1\leq x\leq +1$) only covers the space from infinity ($-1$) to the center of the black ring ($+1$). Since there is no potential barrier at $x=+1$ for $\Phi=0$, light and test particles with the right amount of energy cross the center of the black ring and continue their orbit on the other side of the black ring.
If $\Phi=0$ then no turning point or a single turning point exists. If $|\Phi|>0$ there is a potential barrier which prevents the geodesics from reaching $x=+1$, and one or two turning points exist. For larger $\lambda$ and $|\Phi|$ the potential can develop local extrema, which lead to three turning points.
Possible orbits are Bound Orbits (BO), where light or test particles circle the black ring, and Escape Orbits (EO), where light or test particles approach the black ring, turn around at a certain point and escape the gravitational field.
There are five different types of orbits (see table \ref{tab:s-psi-typen-orbits1}).
\begin{itemize}
\item Type A:\\
$X(x)$ has no zero in the range $-1<x<1$. EOs without a turning point exist. The orbit crosses the equatorial plane ($x=+1$) and reaches infinity ($x=-1$ and $y=-1$).
\item Type B:\\
$X(x)$ has one zero in the range $-1<x<1$. BOs with a turning point on each side of the ring exist, so that the orbit crosses the equatorial plane ($x=+1$).
\item Type C:\\
$X(x)$ has one zero in the range $-1<x<1$. EOs with a turning point exist.
\item Type D:\\
$X(x)$ has two zeros in the range $-1<x<1$. BOs which do not cross the equatorial plane exist.
\item Type E:\\
$X(x)$ has three zeros in the range $-1<x<1$. BOs which do not cross the equatorial plane and EOs exist.
\end{itemize}
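The orbit types can be read off numerically by counting sign changes of $X(x)$. The sketch below assumes the metric functions $G(x)=(1-x^2)(1+\lambda x)$ and $H(x)=1+\lambda^2+2\lambda x$ (inferred from the structure of the equations, since their definitions are not repeated in this section) and uses the parameters of the orbit figures \ref{pic:sing-psi-bo} and \ref{pic:sing-psi-eo}:

```python
import numpy as np

R, m, lam, Phi = 1.0, 1.0, 0.4, 1.0     # parameters of the orbit figures
H = lambda x: 1 + lam**2 + 2 * lam * x  # assumed metric functions (see text)
G = lambda x: (1 - x**2) * (1 + lam * x)

def X(x, E):
    # right-hand side of the x-equation on the rotational axis
    return (R**2 * H(x) * G(x) * (H(x) * E**2 / (1 - lam)**2 - m**2)
            - Phi**2 * H(x) * (x + 1)**2)

def turning_points(E, n=20001):
    """Zeros of X in (-1, 1), located via sign changes on a fine grid."""
    x = np.linspace(-1 + 1e-6, 1 - 1e-6, n)
    s = np.sign(X(x, E))
    return x[:-1][s[:-1] * s[1:] < 0]

print(len(turning_points(0.9)))  # 2 turning points -> bound orbit (type D)
print(len(turning_points(2.0)))  # 1 turning point  -> escape orbit (type C)
```

Varying $E$ between these values locates the transition energies separating the orbit types of table \ref{tab:s-psi-typen-orbits1}.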
\begin{figure}[h]
\centering
\subfigure[$R=1$, $m=1$, $\lambda=0.4$ and $\Phi=0$ \newline Examples of orbits of type A and B. There is either no turning point or a single turning point.]{
\includegraphics[width=4.5cm]{s-psi-energie1.eps}
}
\subfigure[$R=1$, $m=1$, $\lambda=0.4$ and $\Phi=1$ \newline Examples of orbits of type C and D. If $|\Phi|>0$ there is a potential barrier which prevents the geodesics from reaching $x=+1$. There are one or two turning points.]{
\includegraphics[width=4.5cm]{s-psi-energie2.eps}
}
\subfigure[$R=1$, $m=1$, $\lambda=0.8$ and $\Phi=8$ \newline Example of an orbit of type E. The potential can have local extrema which lead to three turning points.]{
\includegraphics[width=4.5cm]{s-psi-energie3.eps}
}
\caption{Effective potentials $U_+(x)$ (red, solid) and $U_-(x)$ (blue, dotted) on the $\psi$ axis of the ring. The grey area is a forbidden zone, where no motion is possible. Green dashed lines represent energies and green points mark the turning points.}
\label{pic:s-psi-orbits1}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{|lcll|}\hline
type & zeros & range of $x$ & orbit \\
\hline\hline
A & 0 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{->|}(-2.5,0)(3,0)
\psline[linewidth=1.2pt]{-}(-2.5,0)(3,0)
\end{pspicture}
& EO
\\ \hline
B & 1 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{->|}(-2.5,0)(3,0)
\psline[linewidth=1.2pt]{*-}(-1.5,0)(3,0)
\end{pspicture}
& BO
\\ \hline
C & 1 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{->|}(-2.5,0)(3,0)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(2,0)
\end{pspicture}
& EO
\\ \hline
D & 2 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{->|}(-2.5,0)(3,0)
\psline[linewidth=1.2pt]{*-*}(-1,0)(2,0)
\end{pspicture}
& BO
\\ \hline
E & 3 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{->|}(-2.5,0)(3,0)
\psline[linewidth=1.2pt]{*-*}(-0,0)(2,0)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(-1.0,0)
\end{pspicture}
& EO, BO
\\ \hline\hline
\end{tabular}
\caption{Types of orbits of light and particles in the singly spinning black ring spacetime for $y=-1$, $\Psi =0$. The thick lines represent the range of the orbits. The turning points are shown by thick dots. The number of zeros in the table is the number of zeros in the range $-1<x<1$. }
\label{tab:s-psi-typen-orbits1}
\end{center}
\end{table}
\subsection{Solution of the $x$-equation}
\label{sec:sing-x-solution}
Equation (\ref{eqn:sing-psi-x-gleichung}) can be written as
\begin{equation}
\left( \frac{\mathrm{d}x}{\mathrm{d}\gamma}\right)^2 = X(x) = a_{x,5} x^5 + a_{x,4} x^4 + a_{x,3} x^3 + a_{x,2} x^2 + a_{x,1} x + a_{x,0},
\label{eqn:sing-psi-x-quadrgleichung}
\end{equation}
where $X(x)$ is a polynomial of fifth order with the coefficients
\begin{eqnarray}
a_{x,5} &=& \frac{-4 R^2 \lambda^3 E^2}{(1-\lambda)^2} \nonumber\\
a_{x,4} &=& \frac{[-2R^2\lambda-R^2 (1+\lambda^2)\lambda]2\lambda E^2}{(1-\lambda)^2 } - 2R^2\lambda^2 \left( \frac{(1+\lambda^2)}{(1-\lambda)^2}E^2 -m^2 \right) \nonumber\\
a_{x,3} &=& -2\Phi^2\lambda + \frac{[2R^2\lambda^2-R^2(1+\lambda^2)]2\lambda E^2}{(1-\lambda)^2} + (-2R^2\lambda-R^2(1+\lambda^2)\lambda) \left( \frac{(1+\lambda^2)}{(1-\lambda)^2}E^2 -m^2 \right) \nonumber\\
a_{x,2} &=& -4 \Phi^2 \lambda-\Phi^2 (1+\lambda^2) + \frac{[2R^2\lambda+R^2(1+\lambda^2)\lambda]2\lambda E^2}{(1-\lambda)^2} + (2R^2\lambda^2-R^2(1+\lambda^2)) \left( \frac{(1+\lambda^2)}{(1-\lambda)^2}E^2 -m^2 \right) \nonumber\\
a_{x,1} &=& -2\Phi^2\lambda-2\Phi^2(1+\lambda^2) + \frac{2R^2(1+\lambda^2)\lambda E^2}{(1-\lambda)^2} + (2R^2\lambda + R^2(1+\lambda^2)\lambda ) \left( \frac{(1+\lambda^2)}{(1-\lambda)^2}E^2 -m^2 \right) \nonumber\\
a_{x,0} &=& -\Phi^2(1+\lambda^2) + R^2(1+\lambda^2)\left( \frac{(1+\lambda^2)}{(1-\lambda)^2}E^2 -m^2 \right) .
\label{eqn:sing-xkoeff}
\end{eqnarray}
A separation of variables gives the hyperelliptic integral
\begin{equation}
\gamma - \gamma_{\rm in} = \int _{x_{\rm in}}^x \! \frac{\mathrm{d}x'}{\sqrt{X(x')}}.
\label{eqn:sing-psi-x-intgleichung}
\end{equation}
A canonical basis of holomorphic ($\mathrm{d}u_i$) and meromorphic ($\mathrm{d}r_i$) differentials associated with the hyperelliptic curve $w^2=X(x)$ is given by (see \cite{Enolski:2011id} or \cite{Hackmann:2008tu})
\begin{equation}
\mathrm{d}u_1 := \frac{\mathrm{d}x}{\sqrt{X(x)}}, \qquad \mathrm{d}u_2 := \frac{x\mathrm{d}x}{\sqrt{X(x)}},
\label{eqn:sing-dz}
\end{equation}
\begin{equation}
\mathrm{d}r_1 := (3a_{x,5}x^3+2a_{x,4}x^2+a_{x,3}x)\frac{\mathrm{d}x}{4\sqrt{X(x)}}, \qquad \mathrm{d}r_2 := a_{x,5}x^2\frac{\mathrm{d}x}{4\sqrt{X(x)}} \, .
\label{eqn:sing-dr}
\end{equation}
Furthermore we introduce the holomorphic and meromorphic period matrices $(2\omega, 2\omega ')$ and $(2\eta, 2\eta ')$:
\begin{equation}
\begin{split}
2\omega_{ij} := \oint_{a_j} \mathrm{d}u_i, \qquad 2\omega'_{ij} := \oint_{b_j} \mathrm{d}u_i, \\
2\eta_{ij} := -\oint_{a_j} \mathrm{d}r_i, \qquad 2\eta'_{ij} := -\oint_{b_j} \mathrm{d}r_i ,
\end{split}
\end{equation}
with $i,j = 1,2$, where $\{ a_1,a_2;b_1,b_2 \}$ is the canonical basis of closed paths.
The normalized holomorphic differentials are
\begin{equation}
\mathrm{d}\boldsymbol{v} := (2\omega)^{-1}\mathrm{d}\boldsymbol{u}, \qquad \mathrm{d}\boldsymbol{u}=
\left(
\begin{array}{c}
\mathrm{d}u_1\\
\mathrm{d}u_2\\
\end{array}
\right) .
\end{equation}
The solution of equation (\ref{eqn:sing-psi-x-quadrgleichung}) is extensively discussed in \cite{Hackmann:2008zz,Enolski:2010if,Hackmann:2008tu,Enolski:2011id}, and is given by the derivatives $\sigma_i$ of the Kleinian sigma function $\sigma (\boldsymbol{u}) = k e^{-(1/2)\boldsymbol{u}^t\eta\omega^{-1}\boldsymbol{u}} \vartheta ((2\omega)^{-1}\boldsymbol{u} + \boldsymbol{K}_{x_{\rm in}};\tau)$:
\begin{equation}
x(\gamma)=-\frac{\sigma_1(\boldsymbol{\gamma}_\Theta)}{\sigma_2(\boldsymbol{\gamma}_\Theta)} \, ,
\end{equation}
where
\begin{equation}
\boldsymbol{\gamma}_\Theta :=
\left(
\begin{array}{c}
\gamma - \gamma_{\rm in}' \\
\gamma_2
\end{array}
\right).
\end{equation}
The constant $\gamma_{\rm in}' = \gamma_{\rm in} +\int_{x_{\rm in}}^{\infty}\! \mathrm{d}u_1$ depends on $ \gamma_{\rm in}$ and $x_{\rm in}$ only. $\gamma_2$ is defined by the vanishing condition of the Kleinian sigma function $\sigma (\boldsymbol{\gamma}_\Theta) = 0$ so that $(2\omega )^{-1}\boldsymbol{\gamma}_\Theta$ is an element of the theta divisor $\Theta_{ \boldsymbol{K_\infty}}$ (the set of zeros of the theta function) where
\begin{equation}
\boldsymbol{K}_\infty = \tau
\left(
\begin{array}{c}
1/2\\
1/2\\
\end{array}
\right) +
\left(
\begin{array}{c}
0\\
1/2\\
\end{array}
\right)
\end{equation}
is the vector of Riemann constants and $\tau$ is the Riemann period matrix defined as $\tau := \omega ^{-1}\omega'$.
\subsection{Solution of the $\phi$-equation}
\label{sec:sing-psi-phi-lösung}
With (\ref{eqn:sing-psi-x-gleichung}) equation (\ref{eqn:sing-psi-phi-gleichung}) yields
\begin{equation}
\mathrm{d}\phi = \Phi \frac{(x+1)H(x)}{G(x)} \frac{\mathrm{d}x}{\sqrt{X(x)}} = \Phi \frac{H(x)}{(1-x)(1+\lambda x)} \frac{\mathrm{d}x}{\sqrt{X(x)}}
\end{equation}
or
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int _{x_{\rm in}}^x \! \frac{H(x')}{(1-x')(1+\lambda x')} \, \frac{\mathrm{d}x'}{\sqrt{X(x')}} \, .
\label{eqn:psi-phiint}
\end{equation}
The integral (\ref{eqn:psi-phiint}) has poles at $p_1=1$ and $p_2=-\frac{1}{\lambda}$.
Now we apply a partial fractions decomposition upon (\ref{eqn:psi-phiint}):
\begin{equation}
\phi - \phi_{\rm in} = \Phi \int _{x_{\rm in}}^x \! \sum _{i=1}^2 \frac{K_i}{x'-p_i} \, \frac{\mathrm{d}x'}{\sqrt{X(x')}} \, ,
\end{equation}
where $K_i=\mp\lambda -1$ are constants which arise from the partial fractions decomposition.
The differentials in the equation above are of the third kind and can be solved with the help of the following equation (see \cite{Enolski:2011id}):
\begin{align}\begin{split}
W\int_{P'}^P\frac{1}{x-Z}\frac{\mathrm{d}x}{w} = & 2\int_{P'}^P \mathrm{d}\boldsymbol{u}^T(x,y) \left[ \boldsymbol{\zeta} \left( \int_{(e_2,0)}^{(Z,W)} \mathrm{d} \boldsymbol{u} + \boldsymbol{K}_\infty \right) - 2( \boldsymbol{\eta}^{\prime}\boldsymbol{\varepsilon}^\prime + \boldsymbol{\eta}\boldsymbol{\varepsilon} ) - \frac12 \boldsymbol{\mathfrak{Z}}(Z,W) \right]\\
& +\ln \frac{\sigma\left(\int_{\infty}^P \mathrm{d}\boldsymbol{u}- \int_{(e_2,0)}^{(Z,W)} \mathrm{d}\boldsymbol{u} - \boldsymbol{K}_\infty \right)}{\sigma\left(\int_{\infty}^P \mathrm{d}\boldsymbol{u}+ \int_{(e_2,0)}^{(Z,W)} \mathrm{d}\boldsymbol{u} - \boldsymbol{K}_\infty \right)}
- \mathrm{ln} \frac{\sigma\left(\int_{\infty}^{P'} \mathrm{d}\boldsymbol{u} - \int_{(e_2,0)}^{(Z,W)} \mathrm{d}\boldsymbol{u} - \boldsymbol{K}_\infty \right)}{\sigma\left(\int_{\infty}^{P'} \mathrm{d}\boldsymbol{u} + \int_{(e_2,0)}^{(Z,W)} \mathrm{d}\boldsymbol{u} - \boldsymbol{K}_\infty \right)}.\end{split} \label{main1-2}
\end{align}
$P$ and $P'$ are points on the hyperelliptic curve, $Z$ is a pole, $W=w(Z)$ and $w^2=X(x)$. The zeros $e_i$ of $w^2(x)$ are the branch points of the curve $w^2$. $\mathrm{d}\boldsymbol{u}$ is the vector of the holomorphic differentials of the first kind $\mathrm{d}u_i=\frac{x^{i-1}}{w}\mathrm{d}x$ with $i=1,...,g$. $\zeta$ and $\sigma$ are Kleinian functions and $\boldsymbol{K}_\infty$ is the vector of Riemann constants.
The vector $\boldsymbol{\mathfrak A}_i$ identified with each branch point $e_i$ is defined as \cite{Enolski:2011id}
\begin{equation}
\boldsymbol{\mathfrak{A}}_i=\int_{\infty}^{(e_i,0)} \mathrm{d}\boldsymbol{u}= 2\omega \boldsymbol{\varepsilon}_i+2\omega' \boldsymbol{\varepsilon}_i', \quad i=1,\ldots,6 \,,
\label{eqn:characteristics}
\end{equation}
with the vectors $\boldsymbol{\varepsilon}_i$ and $\boldsymbol{\varepsilon}_i'$ whose entries $\varepsilon_{i,j}$, $\varepsilon'_{i,j}$ are $\frac{1}{2}$ or $0$ for every $i=1,\ldots,6$, $j=1,2$. The matrix
\begin{equation}
[\boldsymbol{u}_i] = \left[
\begin{array}{c}
\boldsymbol{\varepsilon}_i'\\
\boldsymbol{\varepsilon}_i
\end{array}
\right]
\end{equation}
is called the characteristic of a branch point $e_i$.\\
The $g$th component (in this case genus $g=2$) of the vector $\boldsymbol{\mathfrak{Z}}(Z,W)$ is $\mathfrak{Z}_g(Z,W)=0$ and for $1\leq j<g$ we have
\begin{equation}
\mathfrak{Z}_j(Z,W)=\frac{W}{\prod_{k=2}^{g} (Z-e_{2k})}\sum_{k=0}^{g-j-1}(-1)^{g-k+j+1}Z^kS_{g-k-j-1}(\boldsymbol{e}) \, .
\end{equation}
The $S_k(\boldsymbol{e})$ are elementary symmetric functions of order $k$ built on $g-1$ branch points $e_4,\ldots, e_{2g}$: $S_0=1$, $S_1=e_4+\ldots+e_{2g}$, etc.\\
Then the solution of the $\phi$-equation reads
\begin{equation}
\begin{split}
\phi &=\phi_{\rm in} + \Phi \sum _{i=1}^2 K_i \left[ \frac{2}{W_i} \left(\int _{x_{\rm in}}^x d\boldsymbol{u}\right)^T \left( \boldsymbol{\zeta} \left( \int_{(e_2,0)}^{(p_i,W_i)} \mathrm{d} \boldsymbol{u} + \boldsymbol{K}_\infty \right) - 2( \boldsymbol{\eta}^{\prime}\boldsymbol{\varepsilon}^\prime + \boldsymbol{\eta}\boldsymbol{\varepsilon} ) - \frac12 \boldsymbol{\mathfrak{Z}}(p_i,W_i) \right) \right. \\
& \left. + \ln\frac{\sigma\left( W^2(x) \right)}{\sigma\left( W^1(x) \right)}
- \ln \frac{\sigma\left( W^2(x_{\rm in}) \right)}{\sigma\left( W^1(x_{\rm in}) \right)} \right]
\end{split}
\end{equation}
where $W_i=\sqrt{X(p_i)}$ and $W^{1,2}(x) = \int^{x}_{\infty}{d\boldsymbol{u}} \pm \int_{(e_2,0)}^{(p_i,W_i)} \mathrm{d} \boldsymbol{u} - \boldsymbol{K}_\infty $.
\subsection{Solution of the $t$-equation}
With (\ref{eqn:sing-psi-x-gleichung}) equation (\ref{eqn:sing-psi-t-gleichung}) yields
\begin{equation}
\frac{\mathrm{d}t}{\mathrm{d}\gamma} =\frac{R^2EH^2(x)}{(x+1)(1-\lambda)^2}
\end{equation}
or
\begin{equation}
t - t_{\rm in} = \frac{R^2E}{(1-\lambda )^2} \int _{x_{\rm in}}^x \! \frac{H^2(x')}{(x'+1)} \, \frac{\mathrm{d}x'}{\sqrt{X(x')}} \,.
\label{eqn:psi-tint}
\end{equation}
Next we apply a partial fractions decomposition upon (\ref{eqn:psi-tint}), in which the constants $K_i$ arise:
\begin{equation}
t - t_{\rm in} = \frac{R^2E}{(1-\lambda )^2} \int _{x_{\rm in}}^x \! \left( \frac{K_1}{x'+1} + K_2 + K_3 x' \right) \, \frac{\mathrm{d}x'}{\sqrt{X(x')}}
\label{eqn:sing-t-pbz}
\end{equation}
First we solve the holomorphic integrals $\int _{x_{\rm in}}^x\! \frac{x'^i}{\sqrt{X(x')}}\,\mathrm{d}x'$ for $i=0,1$.
We introduce a variable $v$ so that $v-v_{0} = \int _{x_{\rm in}}^x\! \frac{\mathrm{d}x'}{\sqrt{X(x')}}$. The inversion of this integral yields $x(v)=-\frac{\sigma_{1}(\boldsymbol{u})}{\sigma_{2}(\boldsymbol{u})}$ (see section \ref{sec:sing-x-solution} or \cite{Enolski:2011id}), where
\begin{equation}
\boldsymbol{u}=\boldsymbol{\mathfrak A}_i+\left(
\begin{array}{c}
v - v_0 \\
f_1(v - v_0)
\end{array} \right) ,\quad f_1(0)=0 \, .
\end{equation}
The function $f_1(v - v_0)$ can be found from the condition $\sigma(\boldsymbol{u})=0$.
Equation (\ref{eqn:sing-t-pbz}) now reads
\begin{equation}
t - t_{\rm in} = \frac{R^2E}{(1-\lambda )^2} \left[ \int _{x_{\rm in}}^x \! \frac{K_1}{x'+1} \, \frac{\mathrm{d}x'}{\sqrt{X(x')}} + K_2(v-v_0) + K_3 f_1(v-v_0) \right] \, .
\end{equation}
The remaining differential is of the third kind. Its solution is presented in (\ref{main1-2}). Then the solution of the $t$-equation (\ref{eqn:sing-psi-t-gleichung}) is
\begin{equation}
\begin{split}
t &=t_{\rm in} + \frac{R^2E}{(1-\lambda )^2} \left\lbrace K_1 \left[ \frac{2}{\sqrt{X(-1)}} \left(\int _{x_{\rm in}}^x d\boldsymbol{u}\right)^T \left( \boldsymbol{\zeta} \left( \int_{(e_2,0)}^{(-1,\sqrt{X(-1)})} \mathrm{d} \boldsymbol{u} + \boldsymbol{K}_\infty \right) - 2( \boldsymbol{\eta}^{\prime}\boldsymbol{\varepsilon}^\prime + \boldsymbol{\eta}\boldsymbol{\varepsilon} ) \right. \right. \right.\\
& \left. \left. \left. - \frac12 \boldsymbol{\mathfrak{Z}}(-1,\sqrt{X(-1)}) \right) + \ln\frac{\sigma\left( W^2(x) \right)}{\sigma\left( W^1(x) \right)}
- \ln \frac{\sigma\left( W^2(x_{\rm in}) \right)}{\sigma\left( W^1(x_{\rm in}) \right)} \right] + K_2(v - v_0) + K_3f_1(v - v_0) \right\rbrace \\
\end{split}
\end{equation}
where $W^{1,2}(x) = \int^{x}_{\infty}{d\boldsymbol{u}} \pm \int_{(e_2,0)}^{(-1,\sqrt{X(-1)})} \mathrm{d} \boldsymbol{u} - \boldsymbol{K}_\infty $.
\subsection{The orbits}
\label{sec:psi-axis-orbits}
On the rotational axis of the singly spinning black ring bound orbits and escape orbits are possible. The orbits either move around the ring or pass directly through the center of the ring. Figures \ref{pic:sing-psi-bo} and \ref{pic:sing-psi-bo2} show bound orbits and figures \ref{pic:sing-psi-eo} and \ref{pic:sing-psi-eo2} show escape orbits in $a$-$b$-coordinates (see section \ref{sec:em0-orbit}), together with the corresponding solution $x(\gamma)$. Since the orbits on the rotational axis appear as straight lines in the $x$-$y$-plane, we will also show them in the $x$-$\phi$-plane, where we will use the coordinates $r_1$ and $\phi$.
One can think of ring coordinates as two pairs of polar coordinates
\begin{equation}
\begin{array}{l}
x_1=r_1 \sin(\phi)\\
x_2=r_1 \cos(\phi)
\end{array}
\quad\text{and}\quad
\begin{array}{l}
x_3=r_2 \sin(\psi)\\
x_4=r_2 \cos(\psi)
\end{array}
\end{equation}
where
\begin{equation}
r_1=R\frac{\sqrt{1-x^2}}{x-y} \quad\text{and}\quad r_2=R\frac{\sqrt{y^2-1}}{x-y}
\end{equation}
(see \cite{Hoskisson:2007zk,Emparan:2006mm}).
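The two polar radii are not independent: they combine into the radius $\rho$ of section \ref{sec:em0-orbit} via $r_1^2+r_2^2=\rho^2$, as a one-line numerical check confirms:

```python
import numpy as np

R, x, y = 1.0, 0.4, -2.0   # sample point with -1 <= x <= 1 and y <= -1
r1 = R * np.sqrt(1 - x**2) / (x - y)
r2 = R * np.sqrt(y**2 - 1) / (x - y)
rho = R * np.sqrt(y**2 - x**2) / (x - y)
assert abs(r1**2 + r2**2 - rho**2) < 1e-12

print(r1, r2, rho)
```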
If $\psi$ is constant, the horizon of the black ring consists of two $S^2$ spheres. If we look at the rotational axis where $y=-1$, the coordinates $x_1$ and $x_2$ describe the plane between these two spheres, so the horizon cannot be seen in this plane.
If $\phi$ is constant, the horizon has $S^1\times S^1$ topology. So if $x=\pm1$ the coordinates $x_3$ and $x_4$ describe the equatorial plane ``as seen from above''.\\
The bound orbits of type B and the escape orbits of type A, which move through the center of the black ring, are lines in every plane since here both angles $\phi$ and $\psi$ are constant. So in that case we only show the $a$-$b$-plot.
\begin{figure}
\centering
\subfigure[$a$-$b$-plot ($x$-$y$-plane)\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=6cm]{sing-psi-bo.eps}
}
\subfigure[Solution $x(\gamma)$\newline
The black horizontal lines are the position of the turning points.]{
\includegraphics[width=6cm]{sing-psi-bo_x.eps}
}
\subfigure[$x_1$-$x_2$-plot ($x$-$\phi$-plane)]{
\includegraphics[width=6cm]{sing-psi-bo-xphi-num.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.4$, $\Phi=1$ and $E=0.9$: Bound orbit on the rotational axis.}
\label{pic:sing-psi-bo}
\end{figure}
\begin{figure}
\centering
\subfigure[$a$-$b$-plot ($x$-$y$-plane)\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=6cm]{sing-psi-bo2.eps}
}
\subfigure[Solution $x(\gamma)$\newline
The lower horizontal black line shows the turning point on each side of the ring. The upper horizontal black line at $x=1$ represents the equatorial plane, if a test particle reaches $x=1$ it continues its orbit on the other side of the ring.]{
\includegraphics[width=6cm]{sing-psi-bo2_x-2.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.4$, $\Phi=0$ and $E=0.8$: Bound orbit passing through the black ring on the rotational axis. The motion above the equatorial plane is shown in blue (solid) and the motion below the equatorial plane is shown in green (dashed).}
\label{pic:sing-psi-bo2}
\end{figure}
\begin{figure}
\centering
\subfigure[$a$-$b$-plot ($x$-$y$-plane)\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=6cm]{sing-psi-eo.eps}
}
\subfigure[Solution $x(\gamma)$\newline
The upper horizontal black line is the position of the turning point. The lower horizontal black line represents infinity ($y=x=-1$ in ring coordinates).]{
\includegraphics[width=6cm]{sing-psi-eo_x.eps}
}
\subfigure[$x_1$-$x_2$-plot ($x$-$\phi$-plane)]{
\includegraphics[width=6cm]{sing-psi-eo-xphi-num.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.4$, $\Phi=1$ and $E=2$: Escape orbit on the rotational axis.}
\label{pic:sing-psi-eo}
\end{figure}
\begin{figure}
\centering
\subfigure[$a$-$b$-plot ($x$-$y$-plane)\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=6cm]{sing-psi-eo2.eps}
}
\subfigure[Solution $x(\gamma)$\newline
The upper horizontal black line at $x=1$ represents the equatorial plane, if a test particle reaches $x=1$ it continues its orbit on the other side of the ring. The lower horizontal black line represents infinity ($y=x=-1$ in ring coordinates).]{
\includegraphics[width=6cm]{sing-psi-eo2_x-2.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.4$, $\Phi=0$ and $E=1.5$: Escape orbit passing through the black ring on the rotational axis. The motion above the equatorial plane is shown in blue (solid) and the motion below the equatorial plane is shown in green (dashed).}
\label{pic:sing-psi-eo2}
\end{figure}
\section{Geodesics on the equatorial plane}
The surface $x=\pm 1$ is the equatorial plane of the black ring, which is divided into two parts. The first part $x=+1$ is the plane enclosed by the ring (or more precisely: enclosed by the singularity), which we will refer to as ``inside'' the ring. The second part $x=-1$ describes the equatorial plane around the black ring (or more precisely: around the singularity), which we will refer to as ``outside'' the ring.
If we set $x=\pm 1$, $\Phi =0$ and $p_x=\frac{\partial S}{\partial x}=0$ in the Hamilton-Jacobi equation (\ref{eqn:hjd-sring}), it depends on the coordinate $y$ only:
\begin{equation}
0 = m^2 - \frac{(1\pm\lambda)^2}{H(y)}E^2 + \frac{(\pm 1-y)^2}{R^2(1\pm\lambda)^2}\left[-G(y)\left( \frac{\partial S}{\partial y}\right) ^2 - \frac{H(y)}{G(y)}(\Psi + \Omega_\psi E)^2 \right] \, .
\end{equation}
This can be rearranged to
\begin{equation}
\left( \frac{\partial S}{\partial y}\right) ^2 = \frac{R^2(1\pm\lambda)^2}{(\pm 1-y)^2G(y)}\left[ m^2 - \frac{(1\pm\lambda)^2}{H(y)}E^2\right] - \frac{H(y)}{G^2(y)}(\Psi + \Omega_\psi E)^2 := Y_S \, .
\end{equation}
Then we have
\begin{equation}
S=\frac{1}{2}m^2\tau -Et+\Psi\psi + \int\! \sqrt{Y_S} \, \mathrm{d}y \, .
\end{equation}
Now we set the derivatives of $S$ with respect to the constants $m^2$, $E$ and $\Psi$ to zero in order to obtain the equations of motion.
With the Mino-time \cite{Mino:2003yg} $\mathrm{d}\gamma=\frac{\pm 1-y}{R^2}\mathrm{d}\tau$ and the relation $\Omega_\psi = -CR\frac{1+y}{H(y)}$ the equations of motion take the form
\begin{eqnarray}
\frac{\mathrm{d}y}{\mathrm{d}\gamma} &=& \left\lbrace R^2\frac{G(y)}{H(y)}\left[ \frac{H(y)}{(1\pm\lambda)^2}m^2-E^2\right] - \frac{(\pm 1-y)^2H(y)}{(1\pm\lambda)^4} [\Psi +\Omega_\psi E]^2 \right\rbrace ^{1/2} \nonumber\\
&:=& \sqrt{Y(y)} \, , \label{eqn:sing-phi-y-gleichung}\\
\frac{\mathrm{d}\psi}{\mathrm{d}\gamma} &=& -\frac{(\Psi + \Omega_\psi E)(\pm 1-y)H(y)}{(1\pm\lambda)^2G(y)} \, ,\label{eqn:sing-phi-psi-gleichung}\\
\frac{\mathrm{d}t}{\mathrm{d}\gamma} &=& \frac{R^2E}{(\pm 1-y)H(y)} + \frac{(\Omega_\psi\Psi+\Omega_\psi^2E)(\pm 1-y)H(y)}{(1\pm\lambda)^2G(y)} \label{eqn:sing-phi-t-gleichung} \, .
\end{eqnarray}
It might not be obvious at first glance, but $Y(y)$ is a polynomial of third order in $y$ and therefore the equations of motion are of elliptic type.
\subsection{Classification of geodesics}
From (\ref{eqn:sing-phi-y-gleichung}) we can read off an effective potential consisting of the two parts $V_+(y)$ and $V_-(y)$:
\begin{equation}
Y=a(y)(E-V_+)(E-V_-) \, .
\end{equation}
Since $Y(y)$ can be written as $Y(y)=a(y)E^2+b(y)E+c(y)$ the effective potential takes the form
\begin{equation}
V_\pm (y) = \frac{-b(y)\pm\sqrt{b(y)^2-4a(y)c(y)}}{2a(y)}, \qquad \mathrm{where}
\end{equation}
\begin{eqnarray}
a(y) &=& -R^2\frac{G(y)}{H(y)}-\frac{(\pm1-y)^2C^2R^2(1+y)^2}{(1\pm\lambda)^4H(y)}\, , \nonumber\\
b(y) &=& \frac{2(\pm1-y)^2\Psi CR(1+y)}{(1\pm\lambda)^4}\, ,\nonumber \\
c(y) &=& \frac{R^2G(y)m^2}{(1\pm\lambda)^2}-\frac{(\pm 1-y)^2H(y)\Psi^2}{(1\pm\lambda)^4} \, .
\end{eqnarray}
The two cases $x=+1$ (geodesics inside the ring) and $x=-1$ (geodesics outside the ring) have to be discussed separately.
\subsubsection{Geodesics outside the ring}
Let us first take a look at the motion on the surface outside the black ring. Here we have $x=-1$. Figure \ref{pic:sing-phi-orbits1} shows the effective potential $V(y)$ for different values of the parameters. $V_+$ and $V_-$ meet at the horizon. The shape of the effective potential is mainly determined by the angular momentum $\Psi$. For $\Psi =0$ the potential is symmetric and $Y(y)$ has at most one zero. If $|\Psi|>0$ the potential is no longer symmetric and, if $|\Psi|$ is large enough, up to two zeros of $Y(y)$ are possible.
Possible orbits are Terminating Orbits (TO) with or without a turning point, where light or test particles cross the horizon and fall into the singularity, and Escape Orbits (EO), where light or test particles approach the black ring, turn around at a certain point and escape the gravitational field. The zero of $Y(y)$ of a TO can lie directly on the event horizon.
There are three different types of orbits (see table \ref{tab:sing-phi-typen-orbits1}).
\begin{itemize}
\item Type A:\\
$Y(y)$ has no zeros and only TOs exist.
\item Type B:\\
$Y(y)$ has one zero and only TOs exist. In a special case the zero of $Y(y)$ lies on the horizon.
\item Type C:\\
$Y(y)$ has two zeros. TOs and EOs exist. In a special case the zero of $Y(y)$ lies on the horizon.
\end{itemize}
\begin{figure}
\centering
\subfigure[$\Psi=0$ \newline Examples of orbits of type A, B and B$_0$. The potential is symmetric and $Y(y)$ has at most one zero.]{
\includegraphics[width=7cm]{sing-phi-1energie1.eps}
}
\subfigure[$\Psi=5$ \newline Examples of orbits of type C and C$_0$. If $|\Psi|>0$ the potential is no longer symmetric and up to two zeros of $Y(y)$ are possible.]{
\includegraphics[width=7cm]{sing-phi-1energie2.eps}
}
\caption{$R=1$, $m=1$ and $\lambda=0.5$ \newline
Effective potentials $V_+(y)$ (red, solid) and $V_-(y)$ (blue, dotted) on the equatorial plane outside the ring. The grey area is a forbidden zone, where no motion is possible. The horizon is marked by a vertical dashed line. Green dashed lines represent energies and green points mark the turning points.}
\label{pic:sing-phi-orbits1}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|lcll|}\hline
type & zeros & range of $y$ & orbit \\
\hline\hline
A & 0 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{|->}(-2.5,0)(3,0)
\psline[linewidth=0.5pt,doubleline=true](0.5,-0.2)(0.5,0.2)
\psline[linewidth=1.2pt]{-}(-2.5,0)(3,0)
\end{pspicture}
& TO
\\ \hline
B & 1 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{|->}(-2.5,0)(3,0)
\psline[linewidth=0.5pt,doubleline=true](0.5,-0.2)(0.5,0.2)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(1,0)
\end{pspicture}
& TO
\\ \hline
B$_0$ & 1 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{|->}(-2.5,0)(3,0)
\psline[linewidth=0.5pt,doubleline=true](0.5,-0.2)(0.5,0.2)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(0.5,0)
\end{pspicture}
& TO
\\ \hline
C & 2 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{|->}(-2.5,0)(3,0)
\psline[linewidth=0.5pt,doubleline=true](0.5,-0.2)(0.5,0.2)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(1,0)
\psline[linewidth=1.2pt]{*-}(2.0,0)(3,0)
\end{pspicture}
& TO, EO
\\ \hline
C$_0$ & 2 &
\begin{pspicture}(-2.5,-0.2)(3,0.2)
\psline[linewidth=0.5pt]{|->}(-2.5,0)(3,0)
\psline[linewidth=0.5pt,doubleline=true](0.5,-0.2)(0.5,0.2)
\psline[linewidth=1.2pt]{-*}(-2.5,0)(0.5,0)
\psline[linewidth=1.2pt]{*-}(2.0,0)(3,0)
\end{pspicture}
& TO, EO
\\ \hline\hline
\end{tabular}
\caption{Types of orbits of light and particles in the singly spinning black ring spacetime for $x=-1$, $\Phi =0$. The thick lines represent the range of the orbits. The turning points are shown by thick dots. The horizon is indicated by a vertical double line. In a special case the zero of $Y(y)$ lies on the horizon.}
\label{tab:sing-phi-typen-orbits1}
\end{center}
\end{table}
\subsubsection{Geodesics inside the ring}
The effective potential for geodesics on the surface enclosed by the black ring ($x=+1$) is shown in figure \ref{pic:sing-phi-orbits2}. Again, if $\Psi =0$ the potential is symmetric and $Y(y)$ has at most one zero. In the case $|\Psi|>0$ a potential barrier appears which prevents test particles and light from reaching $y=-1$. Then $Y(y)$ always has a single zero in the allowed range of $y$. The larger $|\Psi|$, the higher the energy at which $V_+$ and $V_-$ meet. Note that $x=+1$, $y=-1$ is the location of the center of the black ring.
Possible orbits are terminating orbits, for which $Y(y)$ has either one zero (type B) or no zero (type A); these types of orbits were described in the previous section. In a special case the zero of $Y(y)$ lies on the horizon (type B$_0$).
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{sing-phi1energie.eps}
\caption{$R=1$, $m=1$, $\lambda=0.5$ and $\Phi=0$ \newline
Effective potentials $V_+(y)$ (red, solid) and $V_-(y)$ (blue, dotted) on the equatorial plane inside the ring. The grey area is a forbidden zone, where no motion is possible. The horizon is marked by a vertical dashed line. Green dashed lines represent energies and green points mark the turning points. Possible orbits are of type A, B and B$_0$.}
\label{pic:sing-phi-orbits2}
\end{figure}
\subsection{Solution of the $y$-equation}
\label{sec:sing-phi-y-solution}
Equation (\ref{eqn:sing-phi-y-gleichung}) can be solved analogously to (\ref{eqn:sing-x-gleichung}); it can be written as
\begin{equation}
\left( \frac{\mathrm{d}y}{\mathrm{d}\gamma}\right)^2 = Y(y) = b_3 y^3 +b_2y^2+b_1y+b_0 \, ,
\end{equation}
where
\begin{equation}
\begin{split}
b_3&=\frac{-C^2R^2E^2}{2\lambda(\lambda\pm1)^4} + \frac{CRE\Psi-\lambda\Psi^2}{(\lambda\pm1)^4} - \frac{R^2m^2\lambda}{(\lambda\pm1)^2}\\
b_2&=\frac{C^2R^2E^2}{2\lambda^2(\lambda\pm1)^4} + \frac{4(1\pm1)\lambda(1+\lambda+2\lambda^2)R^2E^2}{(\lambda-1)(\lambda\pm1)^4} + \frac{(-\lambda^2\pm4\lambda-1)\Psi^2}{(\lambda\pm1)^4}\mp\frac{2(2\mp1)CRE\Psi}{(\lambda\pm1)^4}-\frac{R^2m^2}{(\lambda\pm1)^2}\\
b_1&=\frac{12\lambda^2R^2E^2}{(\lambda-1)(\lambda\pm1)^4}-\frac{\lambda(\lambda+1)R^2E^2}{(\lambda-1)(\lambda\pm1)^2}\mp\frac{2(2\mp1)CRE\Psi}{(\lambda\pm1)^4} \pm \frac{2(\lambda^2\mp\lambda+1)\Psi^2}{(\lambda\pm1)^4} + \frac{R^2m^2\lambda}{(\lambda\pm1)^2}\\
b_0&= \frac{-C^2R^2E^2}{2\lambda^2(\lambda\pm1)^4} + \frac{2CRE\Psi-(\lambda^2+1)\Psi^2}{(\lambda\pm1)^4} + \frac{4(1\pm1)\lambda R^2E^2}{(\lambda\pm1)^4} + \frac{R^2m^2}{(\lambda\pm1)^2} \, .
\end{split}
\end{equation}
The solution is (see section \ref{sec:ergo-xsol})
\begin{equation}
y (\gamma)=\frac{1}{b_{3}}\left[ 4\wp (\gamma - \gamma '_{\rm in},g_{2},g_{3}) -\frac{b_{2}}{3} \right] ,
\end{equation}
where $\gamma'_{\rm in} = \gamma _{\rm in} + \int _{v_{\rm in}}^\infty \! \frac{\mathrm{d}v'}{\sqrt{4v'^3 - g_{2} v' -g_{3}}} $ and $v_{\rm in}=\frac{1}{4} \left( b_{3}y_{\rm in}+\frac{b_{2}}{3}\right) $. The coefficients $g_2$ and $g_3$ of the polynomial in the Weierstra{\ss} form are
\begin{equation}
g_{2}=\frac{b_{2}^2}{12} - \frac{b_{1} b_{3}}{4} \qquad \mathrm{and} \qquad
g_{3}=\frac{b_{1} b_{2} b_{3}}{48}-\frac{b_{0} b_{3}^2}{16}-\frac{b_{2}^3}{216} \ .
\end{equation}
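As a sanity check (our addition, not part of the derivation), the substitution $y=\frac{1}{b_3}\left(4v-\frac{b_2}{3}\right)$, together with $\frac{\mathrm{d}y}{\mathrm{d}\gamma}=\frac{4}{b_3}\frac{\mathrm{d}v}{\mathrm{d}\gamma}$, can be verified symbolically to bring the cubic equation of motion into the Weierstra{\ss} normal form with exactly the invariants $g_2$, $g_3$ stated above:

```python
import sympy as sp

# Substitute y = (4v - b2/3)/b3 into Y(y) = b3*y^3 + b2*y^2 + b1*y + b0 and
# check that the equation of motion takes the Weierstrass normal form
# (dv/dgamma)^2 = 4*v^3 - g2*v - g3.  Since dy/dgamma = (4/b3) dv/dgamma,
# we have (dv/dgamma)^2 = (b3/4)^2 * Y(y(v)).
v, b0, b1, b2, b3 = sp.symbols('v b0 b1 b2 b3')

y = (4*v - b2/sp.Integer(3)) / b3
Y = b3*y**3 + b2*y**2 + b1*y + b0

g2 = b2**2/12 - b1*b3/4
g3 = b1*b2*b3/48 - b0*b3**2/16 - b2**3/216

residual = sp.simplify(sp.expand((b3/4)**2 * Y) - (4*v**3 - g2*v - g3))
print(residual)  # 0
```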
\subsection{Solution of the $\psi$-equation}
\label{sec:sing-phi-psi-solution}
Using (\ref{eqn:sing-phi-y-gleichung}), equation (\ref{eqn:sing-phi-psi-gleichung}) yields
\begin{equation}
\mathrm{d}\psi = -\frac{(\Psi+\Omega_\psi E)(\pm1-y)H(y)}{(1\pm\lambda)^2G(y)} \frac{\mathrm{d}y}{\sqrt{Y(y)}}
\end{equation}
or
\begin{equation}
\psi - \psi_{\rm in} = \int _{y_{\rm in}}^y \! -\frac{(\Psi+\Omega_\psi E)(\pm1-y')H(y')}{(1\pm\lambda)^2G(y')} \, \frac{\mathrm{d}y'}{\sqrt{Y(y')}} \, .
\end{equation}
This can be rewritten as
\begin{equation}
\psi - \psi_{\rm in} = \frac{\pm1}{(1\pm\lambda)^2}\int _{y_{\rm in}}^y \! \frac{CRE(1+y')-H(y')\Psi}{(1\pm y')(1+\lambda y')} \, \frac{\mathrm{d}y'}{\sqrt{Y(y')}} \, .
\end{equation}
This equation can be solved analogously to the $\phi$- and $\psi$-equations for null geodesics in the ergosphere (see sections \ref{sec:ergo-phisol} and \ref{sec:ergo-psisol}). With $v=v(\gamma)=\gamma-\gamma'_{\rm in}$, $v_{\rm in}=v(\gamma_{\rm in})$ and $p_j=\wp(v_j)$ the solution is
\begin{equation}
\begin{split}
\psi (\gamma) &= \sum^2_{j=1} \frac{K_j}{\wp^\prime_y(v_{j})}\Biggl( 2\zeta_y(v_{j})(v-v_{\rm in}) + \log\frac{\sigma_y(v-v_{j})}{\sigma_y(v_{\rm in}-v_{j})} - \log\frac{\sigma_y(v+v_{j})}{\sigma_y(v_{\rm in}+v_{j})} \Biggr) \\
& + \psi _{\rm in} \, .
\end{split}
\end{equation}
$K_j$ are constants which arise from the partial fraction decomposition and depend on the parameters of the metric and the test particle.
\subsection{Solution of the $t$-equation}
Using (\ref{eqn:sing-phi-y-gleichung}), equation (\ref{eqn:sing-phi-t-gleichung}) yields
\begin{equation}
\mathrm{d}t = \left( \frac{R^2E}{H(y)(\pm1-y)} + \frac{(\pm1-y)CR[\Psi H(y)+ CRE(1+y)]}{(1\pm\lambda)^2(1-y)(1+\lambda y)H(y)} \right) \frac{\mathrm{d}y}{\sqrt{Y(y)}}
\end{equation}
or
\begin{equation}
t - t_{\rm in} = \int _{y_{\rm in}}^y \! \left( \frac{R^2E}{H(y')(\pm1-y')} + \frac{(\pm1-y')CR[\Psi H(y')+ CRE(1+y')]}{(1\pm\lambda)^2(1-y')(1+\lambda y')H(y')} \right)\, \frac{\mathrm{d}y'}{\sqrt{Y(y')}} \, .
\label{eqn:phi-tint}
\end{equation}
This equation can be solved analogously to the $\phi$- and $\psi$-equations for null geodesics in the ergosphere (see sections \ref{sec:ergo-phisol} and \ref{sec:ergo-psisol}). With $v=v(\gamma)=\gamma-\gamma'_{\rm in}$, $v_{\rm in}=v(\gamma_{\rm in})$ and $q_j=\wp(v_j)$ the solution is
\begin{equation}
\begin{split}
t (\gamma) &= \sum^4_{j=1} \frac{M_j}{\wp^\prime_y(v_{j})}\Biggl( 2\zeta_y(v_{j})(v-v_{\rm in}) + \log\frac{\sigma_y(v-v_{j})}{\sigma_y(v_{\rm in}-v_{j})} - \log\frac{\sigma_y(v+v_{j})}{\sigma_y(v_{\rm in}+v_{j})} \Biggr)\\
& + M_0(v-v_{\rm in}) + t _{\rm in} \, .
\end{split}
\end{equation}
$M_j$ are constants which arise from the partial fraction decomposition and depend on the parameters of the metric and the test particle.
\subsection{The orbits}
On the equatorial plane around the singly spinning black ring terminating orbits and escape orbits are possible. Figures \ref{pic:sing-phiout-eo} and \ref{pic:sing-phiout-to} show some orbits in $a$-$b$-coordinates (see section \ref{sec:em0-orbit}) and the corresponding solution $y(\gamma)$. The $y$-$\psi$-plane is also shown in the coordinates $x_3$ and $x_4$ (see section \ref{sec:psi-axis-orbits}).
An escape orbit is depicted in figure \ref{pic:sing-phiout-eo}. Figure \ref{pic:sing-phiout-to} shows a terminating orbit which starts at its turning point and then falls into the singularity.
The frame dragging effect can be seen in figure \ref{pic:framedragg}. Once the particle enters the ergosphere it is dragged along by the rotation of the black ring. If the angular momentum of the particle and the black ring have opposite signs, the particle changes its direction when approaching the ergosphere.
\\
\begin{figure}[h]
\centering
\subfigure[$a$-$b$-plot\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=9cm]{sing-phiout-eo.eps}
}
\subfigure[Solution $y(\gamma)$\newline
The lower horizontal black line marks the position of the turning point and the upper horizontal black line shows where infinity is reached ($x=y=-1$ in ring coordinates)]{
\includegraphics[width=6cm]{sing-phiout-eo_y.eps}
}
\subfigure[$x_3$-$x_4$-plot ($y$-$\psi$-plane)\newline
In this plane we are looking at the black ring from above. The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere.]{
\includegraphics[width=6cm]{sing-phiout-eo-ypsi.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.5$, $\Psi=5$ and $E=1.6$: Escape orbit on the equatorial plane outside the ring ($x=-1$).}
\label{pic:sing-phiout-eo}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[$a$-$b$-plot\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere. The orbit is plotted for $\phi=\psi=\frac{\pi}{2}$.]{
\includegraphics[width=9cm]{sing-phiout-to.eps}
}
\subfigure[Solution $y(\gamma)$\newline
The black dashed line shows the position of the event horizon and the black solid line marks the position of the turning point.]{
\includegraphics[width=6cm]{sing-phiout-to_y.eps}
}
\subfigure[ $x_3$-$x_4$-plot ($y$-$\psi$-plane)\newline
In this plane we are looking at the black ring from above. The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere. The green solid circle ($\rho_2=1$) is the singularity of the black ring.]{
\includegraphics[width=6cm]{sing-phiout-to-ypsi.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.5$, $\Psi=5$ and $E=1.6$: Terminating orbit starting at its turning point on the equatorial plane outside the ring ($x=-1$).}
\label{pic:sing-phiout-to}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[$R=1$, $m=1$, $\lambda=0.5$, $\Psi=5$ and $E=2.75$\newline
Here the angular momentum of the particle and the black ring are both positive.]{
\includegraphics[width=7cm]{sing-phiout-to4-ypsi.eps}
}
\subfigure[$R=1$, $m=1$, $\lambda=0.5$, $\Psi=-5$ and $E=1.6$\newline
Here the angular momentum of the particle and the black ring have opposite signs, so the particle changes its direction when approaching the ergosphere.]{
\includegraphics[width=7cm]{sing-phiout-to3-ypsi.eps}
}
\caption{Frame dragging effect: once the particle enters the ergosphere it is dragged along by the rotation of the black ring.\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere. The green solid circle is the singularity of the black ring.}
\label{pic:framedragg}
\end{figure}
On the equatorial plane enclosed by the black ring only terminating orbits are possible. Figure \ref{pic:sing-phiin-to} shows a terminating orbit which starts at the center of the black ring and then falls into the singularity.
\begin{figure}[h]
\centering
\subfigure[$a$-$b$-plot\newline
The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere. The orbit is plotted for $\phi=\psi=\frac{\pi}{2}$.]{
\includegraphics[width=9cm]{sing-phiin-to.eps}
}
\subfigure[Solution $y(\gamma)$\newline
The black dashed line shows the position of the event horizon and the black solid line marks the position of the turning point.]{
\includegraphics[width=6cm]{sing-phiin-to_y.eps}
}
\subfigure[$x_3$-$x_4$-plot ($y$-$\psi$-plane)\newline
In this plane we are looking at the black ring from above. The black dashed circles show the position of the horizon and the red dotted circles mark the ergosphere. The green solid circle ($\rho_2=1$) is the singularity of the black ring.]{
\includegraphics[width=6cm]{sing-phiin-to-ypsi.eps}
}
\caption{$R=1$, $m=1$, $\lambda=0.5$, $\Psi=0$ and $E=0.8$: Terminating orbit on the equatorial plane inside the ring ($x=+1$). The particle starts at the center of the black ring ($y=-1$,$x=+1$).}
\label{pic:sing-phiin-to}
\end{figure}
\section{Conclusion}
In this paper we presented the analytical solutions of the geodesic equations of the singly spinning black ring for special cases. Since the Hamilton-Jacobi equation seems not to be separable in general, we had to concentrate on the null geodesics in the ergosphere ($E=m=0$), geodesics on the rotational axis ($y=-1$) and geodesics on the equatorial plane ($x=\pm1$).\\
We discussed the general structure of the orbits and gave a complete classification of their types.
In the ergosphere there is just one possible orbit, where light crosses the event horizon and falls inevitably into the singularity (terminating orbit). The $x$-motion bounces back and forth between two values or stays constant at $x=0$, while $y$ ranges from a turning point to $-\infty$.
On the rotational axis $y$ is constant, so here the $x$-motion determines the type of orbit. We found escape orbits and bound orbits, the latter were also shown numerically by Igata et al. \cite{Igata:2010ye}.
On the equatorial plane we found terminating orbits and escape orbits.\\
The separability of the Hamilton-Jacobi equation is a coordinate related phenomenon, so one might think of a coordinate system in which it would be possible to separate the Hamilton-Jacobi equation in general. But recently Igata, Ishihara and Takamori found evidence of chaotic motion in the singly spinning black ring spacetime using the Poincar\'e map \cite{Igata:2010cd}. From that one could conclude that it is not possible to separate the Hamilton-Jacobi equation in any coordinate system.\\
Besides the singly spinning black ring, one could consider black rings with two angular momenta (doubly spinning black ring \cite{Durkee:2008an,Pomeransky:2006bd}) or add charge to the black ring \cite{Elvang:2003yy,Hoskisson:2008qq,Gal'tsov:2009da}. Also a supersymmetric black ring solution was found \cite{Elvang:2004rt,Elvang:2004ds}. The methods shown in this paper can be applied to (charged) doubly spinning black rings as well as to supersymmetric black rings. This will be done in future work.
\clearpage
\section{Acknowledgements}
We would like to thank Victor Enolski, Norman G\"urlebeck and Volker Perlick for helpful discussions. We gratefully acknowledge support by the DFG, in particular, also within the DFG Research Training Group 1620 ``Models of Gravity''.
\bibliographystyle{unsrt}
\section{Introduction}
The cardinal sequence of a scattered space is the sequence of the cardinalities of its Cantor-Bendixson levels.
The investigation of the cardinal sequences of different classes of topological spaces is a classical problem of set theoretic topology.
Many important results were proved in connection with the cardinal sequences of
locally compact scattered (LCS, in short) spaces, see e.g.
\cite{ba2007,Ba2002,bs87,ErVe2010,jw78,Juwe2006,Ju85,Lag77,Ma1992,ma01,ma03,R76a,Ro2002,Ro85}.
In \cite{JSSSh2004} a complete characterization of the cardinal sequences of the
0-dimensional, of the regular, and of the Hausdorff scattered spaces was given.
Recall that a topological space $X$ is a {\em P-space}, if the intersection of every countable family of open sets in $X$ is open in $X$.
The aim of this paper is to start the systematic investigation of cardinal sequences
of locally Lindelöf scattered P-spaces. We will see that several methods applied to LCS spaces
can be applied here, but typically we have to face more serious technical problems.
\vspace{2mm} If $X$ is a topological space and $\al$ is an ordinal, we denote by $X^{\al}$ the $\al$-th Cantor-Bendixson derivative of $X$. Then, $X$ is {\em scattered} if $X^{\al} = \emptyset$ for some ordinal $\al$. Assume that $X$ is a scattered space. We define the {\em height } of $X$ by
$$\mbox{ht}(X) = \mbox { the least ordinal } \al \mbox{ such that } X^{\al} = \emptyset.$$
\vspace{1mm}
\noindent For $\al < \mbox{ht}(X)$, we write $I_{\al}(X) = X^{\al}\setminus X^{\al + 1}$. If $x\in I_{\al}(X)$, we say that $\al$ is the {\em level} of $x$ and we write $\rho(x,X) = \al$, or simply $\rho(x) = \al$ if no confusion can occur. Note that $\rho(x) = \al$ means that $x$ is an accumulation point of $I_{\beta}(X)$ for $\beta < \al$ but $x$ is not an accumulation point of $X^{\al} = \bigcup \{I_{\be}(X) : \be \geq \al \}$. We define the {\em width} of $X$ as
$$\mbox{wd}(X) = \mbox{sup}\{ |I_{\al}(X)| : \al < \mbox{ht}(X) \}.$$
If $X$ is a scattered space, $x\in X$ and $U$ is a neighbourhood of $x$, we say that $U$ is a {\em cone on} $x$, if $x$ is the only point in $U$ of level $\geq \rho(x,X)$.
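For orientation, we recall a classical compact example (in particular not a P-space) that illustrates these notions; it is standard and not part of the constructions below. Let $X=\omega^{\alpha}+1$ with the order topology. Then

```latex
% X = \omega^{\alpha} + 1 with the order topology is a compact scattered space;
% for \beta \leq \alpha its Cantor-Bendixson derivatives are the multiples of \omega^{\beta}:
X^{\beta} = \{\omega^{\beta}\cdot\gamma : \gamma \geq 1, \ \omega^{\beta}\cdot\gamma \leq \omega^{\alpha}\},
\qquad \mbox{so} \qquad \mbox{ht}(X) = \alpha + 1 .
```

Here $I_{\beta}(X)$ consists of those multiples of $\omega^{\beta}$ that are not multiples of $\omega^{\beta+1}$.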
\vspace{1mm}
By an {\em LLSP space}, we mean a locally Lindel\"{o}f, scattered, Hausdorff P-space.
\begin{proposition}\label{pr:mi}
An LLSP space is 0-dimensional.
\end{proposition}
\begin{proof}
By \cite[Proposition 4.2(b)]{Mi}, a Lindelöf Hausdorff P-space is normal, so a locally Lindelöf Hausdorff P-space $X$ is regular.
Thus, by \cite[Corollary 3.3]{Mi}, $X$ is 0-dimensional.
\end{proof}
So, by Proposition \ref{pr:mi} above,
if $X$ is an LLSP space, $x\in X$ and ${\mathbb B}_x$ is a neighbourhood basis of $x$, we may assume that every $U\in {\mathbb B}_x$ is a Lindel\"{o}f clopen cone on $x$.
\bigskip
It was proved by Juh\'asz and Weiss in \cite{jw78} that for every ordinal ${\alpha} < {\omega}_2$
there is an LCS space of height ${\alpha}$ and width ${\omega}$. Then, we will transfer this
theorem to the setting of LLSP spaces, showing that for every ordinal ${\alpha} < {\omega}_3$
there is an LLSP space of height ${\alpha}$ and width ${\omega}_1$.
To obtain an LCS space of height ${\omega}_1$ and width ${\omega}$,
in \cite{jw78} Juhász and Weiss, using transfinite recursion, constructed a sequence $\<X_{\alpha}:{\alpha}\le{\omega}_1\>$ of LCS spaces such that
$X_{\alpha}$ had height ${\alpha}$ and width ${\omega}$, and for ${\alpha}<{\beta}$, the space
$X_{\alpha}$ was just the first ${\alpha}$ Cantor-Bendixson levels of $X_{\beta}$.
Since $X_{\alpha}$ is dense in $X_{{\alpha}+1}$, Juhász and Weiss had to guarantee
that $X_{\alpha}$ is not compact. But it was automatic, because if ${\alpha}=\gamma+1$, then
$X_{\alpha}$ had a top infinite Cantor-Bendixson level, so $X_{\alpha}$ was not compact.
If ${\alpha}$ is a limit ordinal, then the open cover $\{X_{\xi}:{\xi}<{\alpha}\}$ witnessed
that $X_{\alpha}$ is not compact.
What happens if we try to adapt that approach to LLSP spaces?
To obtain an LLSP space of height ${\omega}_2$ and width ${\omega}_1$,
we can try, using transfinite recursion, to construct a sequence
$\<X_{\alpha}:{\alpha}\le{\omega}_2\>$ of LLSP spaces such that
$X_{\alpha}$ has height ${\alpha}$ and width ${\omega}_1$, and for ${\alpha}<{\beta}$, the space
$X_{\alpha}$ is just the first ${\alpha}$ levels of $X_{\beta}$.
Since $X_{\alpha}$ is dense in $X_{{\alpha}+1}$, we have to guarantee that
$X_{\alpha}$ is not closed in $X_{{\alpha}+1}$; in particular, $X_{\alpha}$ must not be Lindelöf
(since in a P-space, Lindelöf subspaces are closed).
However, in our case this is not automatic in limit steps, because
an increasing countable union of open non-Lindelöf subspaces can be Lindelöf.
So some extra effort is needed to guarantee non-Lindelöfness in limit steps.
\begin{comment}
\medskip
\hrule
\medskip
\medskip
\hrule
\medskip
{\em
\vspace{1mm} It was proved by Juh\'asz and Weiss in \cite{jw} that for every ordinal $\al < \omega_2$ there is a locally compact, Hausdorff, scattered space of height $\al$ and width $\omega$. Then, we will transfer this theorem to the context of locally Lindel\"{o}f P-spaces, showing that for every ordinal $\al < \om_3$ there is an LLSP space of height $\al$ and width $\omega_1$.
\vspace{1mm} We want to remark that the argument given by Juh\'asz and Weiss in their proof of the above theorem can not be extended to the setting of LLSP spaces. For $\al = \omega_1$, in that argument a locally compact, Hausdorff, scattered space $X$ of underlying set $\om_1$ is constructed in such a way that $X$ is the direct union of a family of spaces $\{X_{\al}: \al < \om_1 \}$ such that for every $\al < \om_1$ the following holds: (1) the underlying set of $X_{\al}$ is $\bigcup \{I_{\be} : \be \leq \al \}$ where $I_{\be} = (\omega \cdot (\beta + 1))\setminus (\omega\cdot \beta)$ for $\beta \leq \al$; (2) $X_{\al}$ is a locally compact, Hausdorff, scattered space such that $I_{\be}(X_{\al}) = I_{\beta}$ for $\beta \leq \alpha$ and $I_{\al + 1}(X_{\al}) = \emptyset$; and (3) if $x\in I_{\beta}$ for $\beta < \al$, then a neighbourhood basis of $x$ in $X_{\beta}$ is also a neighbourhood basis of $x$ in $X_{\al}$. Then, $X_0$ is defined as $I_0 = \om$ with the discrete topology. If $\al = \be + 1$ is a successor ordinal, in order to construct a neighbourhood basis of every point in $I_{\al}$, a discrete family $\{V_n : n < \om \}$ is constructed in $X_{\be}$ such that each $V_n$ is a compact clopen cone on a point in $I_{\be}(X_{\be})$. And if $\al$ is a limit ordinal, in order to construct a neighbourhood basis of every point in $I_{\al}$, a discrete family $\{V_n : n < \om \}$ is constructed in $Z = \bigcup \{X_{\be} : \be < \al \}$ proceeding by induction on $n$ in such a way that for some strictly increasing sequence of ordinals $\langle \al_n : n < \om \rangle$ cofinal in $\al$, each $V_n$ is a compact clopen cone on some point in $Z$ such that $V_n \cap I_{\al_n}(Z) \neq \emptyset$ for each $n$.
\vspace{1mm} However, if we want to construct a locally Lindel\"{o}f, scattered, Hausdorff P-space $X$ of height $\om_2$ and width $\om_1$ as the direct union of a family of P-spaces $\{ X_{\al}: \al < \om_2 \}$ by extending the above argument, note that if $\al$ is a limit ordinal of cofinality $\om$, then when we want to construct a discrete family $\{V_{\xi} : \xi < \om_1 \}$ of clopen sets in $Z = \bigcup \{X_{\beta} : \beta < \al \}$ by induction on $\xi$ in such a way that $\mbox{sup}\{\rho(x) : x\in V_{\xi} \mbox{ for some } \xi < \om_1 \} = \al$, it may happen that for some ordinal $\om\leq \xi < \om_1$ and some ordinal $\be < \al$, $I_{\beta} = \omega_1\cdot (\be + 1)\setminus \omega_1 \cdot \beta \subset \bigcup \{V_{\mu} : \mu < \xi \}$, and so we can not construct the required discrete family of $\omega_1$ many clopen sets in $Z$.
}
\medskip
\hrule
\medskip
\medskip
\hrule
\medskip
\end{comment}
\vspace{1mm} Assume that $\kappa$ is an uncountable cardinal and $\al$ is a non-zero ordinal. If $X$ is an LLSP space such that $\mbox{ht}(X) = \al$ and $\mbox{wd}(X) = \kappa$, we say that $X$ is a $(\kappa,\alpha)$-{\em LLSP space}.
\vspace{1mm} Then, we will also transfer the results proved in
\cite{bs87} and \cite{ma01} on thin-tall spaces to the context of locally Lindel\"{o}f P-spaces, showing that Con(ZFC) implies Con(ZFC + ``there is an $(\omega_1,\alpha)$-LLSP space for every ordinal $\alpha < \omega_4$'').
\section{Construction of an LLSP space of width $\om_1$ and height $\om_2$}
\vspace{2mm} By a {\em decomposition} of a set $A$ of size $\om_1$, we mean a partition of $A$ into subsets of size $\om_1$. In this section we will prove the following result.
\begin{theorem} There is an $(\om_1,\om_2)$-LLSP space. \end{theorem}
\begin{proof}
We construct an $(\om_1,\om_2)$-LLSP space whose underlying set is $\om_2$.
For every $\al < \om_2$, we put $I_{\al} = (\om_1 \cdot (\al + 1))\setminus (\om_1 \cdot \al)$,
and for every ordinal $\xi < \om_1$, we define the ``column'' $N_{\xi} = \{\om_1 \cdot \mu + \xi : \mu < \om_2 \}$. For every $\xi < \om_2$, let $n({\xi}) < \om_1$ denote the unique ordinal with ${\xi}\in N_{n({\xi})}$.
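The rows $I_{\al}$ and columns $N_{\xi}$ form a grid: every level meets every column in exactly one point. A small finite sketch of this indexing arithmetic (our illustration, with finite stand-ins for $\om_1$ and $\om_2$; it mimics only the coding $\om_1\cdot\mu+\xi$, not the ordinal structure):

```python
# Finite stand-ins: W1 plays the role of omega_1, W2 the role of omega_2.
# An "ordinal" below W1*W2 is coded as W1*mu + xi with mu < W2 and xi < W1.
W1, W2 = 5, 7

levels = [set(range(W1 * a, W1 * (a + 1))) for a in range(W2)]   # I_alpha
columns = [set(range(xi, W1 * W2, W1)) for xi in range(W1)]      # N_xi

# Each level meets each column in exactly one point: W1*a + xi.
for I in levels:
    for N in columns:
        assert len(I & N) == 1

print(sum(len(I) for I in levels))  # 35
```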
Our aim is to construct, by transfinite induction on $\al < \om_2$ an LLSP space $X_{\al}$ satisfying the following:
\vspace{1mm} (1) $X_{\al}$ is an $(\om_1,\al + 1)$-LLSP space such that $I_{\be}(X_{\al}) = I_{\be}$ for every $\be\leq \al$.
\vspace{1mm} (2) For every $\xi < \om_1$, $N_{\xi}\cap X_{\al}$ is a closed discrete subset of $X_{\al}$.
\vspace{1mm} (3) If $\be < \al$ and $x\in X_{\be}$, then a neighbourhood basis of $x$ in $X_{\be}$ is also a neighbourhood basis of $x$ in $X_{\al}$.
\vspace{2mm} For every $\al < \om_2$ and $x\in I_{\al}$, in order to define the required neighbourhood basis ${\mathbb B}_x$ of $x$ in $X_{\al}$, we will also fix a Lindel\"{o}f cone $V_x$ of $x$ in $X_{\al}$ such that the following holds:
\vspace{1mm} (4) $V_x \cap I_{\al} = \{x\}$.
\vspace{1mm} (5) $V_x = \bigcup {\mathbb B}_x$.
\vspace{1mm} (6) There is a club subset $C_x$ of $\om_1$ such that $\om_1 \setminus C_x$ is unbounded in $\om_1$ and $V_x \cap \bigcup\{N_{\nu}: \nu \in C_x \} = \emptyset$.
\vspace{2mm} We define $X_0$ as the set $I_0 = \om_1$ with the discrete topology, and for $x\in I_0$ we put $V_x = \{x\}$ and $C_x = \{y \in \om_1 : y \mbox{ is a limit ordinal } > x \}$. So, assume that $\al > 0$. If $\al = \be + 1$ is a successor ordinal, we put $Z = X_{\be}$. And if $\al$ is a limit ordinal, we define $Z$ as the direct union of
$\{X_{\be}: \be < \al\}.$ So, the underlying set of the required space $X_{\al}$ is $Z\cup I_{\al}$. If $x\in Z$, then a basic neighbourhood of $x$ in $X_{\al}$ is a neighbourhood of $x$ in $Z$. Our purpose is to define a neighbourhood basis of each element of $I_{\al}$. Let $\{x_{\nu} : \nu < \om_1 \}$ be an enumeration without repetitions of $Z$. By the induction hypothesis, for every $\xi < \om_1$ there is a club subset $C_{\xi}$ of $\om_1$ such that $\om_1\setminus C_{\xi}$ is unbounded in $\om_1$ and $V_{x_{\xi}} \cap \bigcup \{N_{\nu} : \nu \in C_{\xi} \} = \emptyset$. Let $C = \Delta \{C_{\xi} : \xi < \om_1 \}$, the diagonal intersection of the family $\{C_{\xi} : \xi < \om_1 \}$. As $V_{x_{\xi}} \cap \bigcup \{N_{\nu} : \nu \in C_{\xi} \} = \emptyset$, by the definition of $C$, for every $\xi < \om_1$, $V_{x_{\xi}} \cap \bigcup \{N_{\nu} : \nu \in C \}\subset \bigcup \{N_{\nu} : \nu \leq \xi \}$, and clearly $\om_1\setminus C$ is unbounded in $\om_1$. Then, we will define for every element $y\in I_{\al}$ a neighbourhood basis of $y$ from a set $V_y$ in such a way that for some final segment $C'$ of $C$ we will have that $V_{y} \cap \bigcup \{N_{\nu} : \nu \in C' \} = \emptyset$. We distinguish the following three cases:
\vspace{5mm}\noindent {\bf Case 1}. $\al = \be + 1$ is a successor ordinal.
\vspace{3mm}
For each $\xi < \om_1$ we take a Lindel\"{o}f clopen cone $U_{\xi}$ on some $u_{\xi}$ in $Z$ as follows. We take $U_0\subset V_{x_0}$ as a Lindel\"{o}f clopen cone on $x_0$ such that $(U_0\setminus \{x_0\})\cap N_0 = \emptyset$. Suppose that $\xi > 0$. Let $u_{\xi}$ be the first element $x_{\eta}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that $u_{\xi}\not\in \bigcup \{U_{\mu}: \mu < \xi \}$.
Since $I_{\beta}\cap \bigcup \{U_{\mu}: \mu < \xi \}\subs \{u_{\mu}:{\mu}<{\xi}\}$, the element
$u_{\xi}$ is defined.
Then, we choose $U_{\xi} \subset V_{x_{\eta}}$ as a Lindel\"{o}f clopen cone on $u_{\xi}$ such that $U_{\xi} \cap \bigcup \{U_{\mu}: \mu < \xi \} = \emptyset$ and $(U_{\xi}\setminus \{u_{\xi}\}) \cap \bigcup \{N_{\nu}: \nu \leq \eta \} = \emptyset$. So, as $V_{x_{\eta}} \cap \bigcup \{N_{\nu} : \nu \in C \}\subset \bigcup \{N_{\nu} : \nu \leq \eta \}$, we deduce that $(U_{\xi}\setminus \{u_{\xi}\}) \cap \bigcup \{N_{\nu}: \nu \in C \} = \emptyset$. And clearly, $\{U_{\xi} : \xi < \om_1 \}$ is a partition of $Z$. Let
$$A = \{\xi \in \om_1 : u_{\xi}\in I_{\be}\cap N_{\rho} \mbox{ for some } \rho \in \om_1\setminus C \}.$$
\noindent
Since $I_{\beta}\subs \{u_{\xi}:{\xi}<{\omega}_1\}$, we have $|A|={\omega}_1$.
Let $\{A_{\xi}: \xi < \om_1 \}$ be a decomposition of $A$. Fix $\xi < \om_1$. Let $y_{\xi} = \om_1\cdot \al + \xi$. Then, we define
$$V_{y_{\xi}} = \{y_{\xi}\} \cup \bigcup \{U_{\nu} : \nu\in A_{\xi} \}.$$
\noindent Note that since $\bigcup \{U_{\nu} : \nu \in A_{\xi} \}\cap \bigcup \{N_{\nu} : \nu \in C \} = \emptyset$, we infer that $V_{y_{\xi}}\cap \bigcup \{N_{\nu} : \nu \in C \mbox{ and } \nu > \xi \} = \emptyset$. Now, we define a basic neighbourhood of $y_{\xi}$ in $X_{\al}$ as a set of the form
$$\{y_{\xi}\} \cup \bigcup \{U_{\nu} : \nu\in A_{\xi}, \nu\geq\zeta \}$$
\noindent where $\zeta < \om_1$. Then, it is easy to check that conditions $(1)-(6)$ hold.
\vspace{2mm}\noindent {\bf Case 2}. $\al$ is a limit ordinal of cofinality $\om_1$.
\vspace{2mm} Let $\langle \al_{\nu} : \nu < \om_1 \rangle$ be a strictly increasing sequence of ordinals cofinal in $\al$. For every $\xi < \om_1$, we choose a Lindel\"{o}f clopen cone $U_{\xi}$ on some point $u_{\xi}$ in $Z$ as follows. If $\xi$ is not a limit ordinal, let $u_{\xi}$ be the first element $x_{\eta}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that $u_{\xi}\not\in \bigcup \{U_{\mu} : \mu <\xi \}$ and let $U_{\xi}\subset V_{x_{\eta}}$ be a Lindel\"{o}f clopen cone on $u_{\xi}$ such that $U_{\xi}\cap \bigcup \{U_{\mu} : \mu < \xi \} = \emptyset$. Now, assume that $\xi$ is a limit ordinal. Let $\nu < \om_1$ be such that $\al_{\nu} > \mbox{sup} \{\rho(u_{\mu},Z) : \mu < \xi \}$. Then,
we pick $u_{\xi}$ as the first element $x_{\eta}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that $u_{\xi}\in I_{\al_{\nu}}(Z) \cap N_{\delta}$ for some $\delta\in \om_1\setminus C$ with $\delta > \xi$. Note that by the choice of $\al_{\nu}$, we have that $u_{\xi}\not\in \bigcup\{U_{\mu} : \mu < \xi \}$. Then, we choose $U_{\xi}\subset V_{x_{\eta}}$ as a Lindel\"{o}f clopen cone on $u_{\xi}$ such that
$$U_{\xi}\cap \bigcup \{U_{\mu} : \mu < \xi \} = \emptyset \mbox{ and }$$
$$(U_{\xi}\setminus \{u_{\xi}\}) \cap \bigcup \{N_{\nu} : \nu \leq \eta \} = \emptyset.$$
\noindent Then since $V_{x_{\eta}} \cap \bigcup \{N_{\nu} : \nu \in C \} \subset \bigcup \{N_{\nu} : \nu \leq \eta \}$ and $\delta\not\in C$, we infer that $U_{\xi}\cap \bigcup \{N_{\nu} : \nu \in C \} = \emptyset$.
\vspace{1mm} Now, let $\{A_{\xi} : \xi < \om_1 \}$ be a decomposition of the set of limit ordinals of $\om_1$. Fix $\xi < \om_1$. Let $y_{\xi} = \om_1 \cdot \al + \xi$. Then, we define
$$V_{y_{\xi}} = \{y_{\xi}\} \cup \bigcup \{U_{\mu} : \mu \in A_{\xi} \}.$$
\noindent Clearly,
$V_{y_{\xi}} \cap \bigcup \{N_{\nu} : \nu \in C, \nu > \xi \} = \emptyset.$
Now, we define a basic neighbourhood of $y_{\xi}$ in $X_{\al}$ as a set of the form
$$V_{y_{\xi}} \setminus \bigcup \{U_{\nu} : \nu \in A_{\xi}, \nu < \zeta \}$$
\noindent where $\zeta < \om_1$.
\vspace{2mm} Note that the condition that $\delta > \xi$ in the choice of $u_{\xi}$ for $\xi$ a limit ordinal is needed to ensure that $N_{\xi}\cap X_{\al}$ is a closed discrete subset of $X_{\al}$ for $\xi < \om_1$. So, conditions $(1)-(6)$ hold.
\vspace{3mm}\noindent {\bf Case 3}. $\al$ is a limit ordinal of cofinality $\om$.
\vspace{2mm} Let $\langle \al_n : n < \om \rangle$ be a strictly increasing sequence of ordinals converging to $\al$. Proceeding by transfinite induction on $\xi < \om_1$, we construct a sequence $\langle u^{\xi}_n : n < \om \rangle$ of points in $Z$ and a sequence $\langle U^{\xi}_n : n < \om \rangle$ such that each $U^{\xi}_n\subset V_{u^{\xi}_n}$ is a Lindel\"{o}f clopen cone on $u^{\xi}_n$ as follows. Fix $\xi < \om_1$, and assume that for $\mu < \xi$ the sequences $\langle u^{\mu}_n : n < \om \rangle$ and $\langle U^{\mu}_n : n < \om \rangle$ have been constructed. Let $C^* = \bigcap \{C_{u^{\mu}_n} : \mu < \xi, n <\om \}$. Note that $C^*$ is a club subset of $\om_1$, because it is a countable intersection of club subsets of $\om_1$. Now since for every $\mu < \xi$ and $n < \om$, we have that $V_{u^{\mu}_n}\cap \bigcup \{N_{\nu} : \nu \in C_{u^{\mu}_n} \} =\emptyset$, we infer that
$$\bigcup \{V_{u^{\mu}_n}: \mu < \xi, n < \om \} \cap \bigcup\{N_{\nu} : \nu \in C^* \} = \emptyset.$$
\noindent Hence, for every ordinal $\be < \al$,
$$|I_{\be}\setminus \bigcup \{V_{u^{\mu}_n} : \mu < \xi, n <\om \}| = \om_1.$$
\vspace{1mm} Now, we construct the sequences $\langle u^{\xi}_n : n < \om \rangle$ and $\langle U^{\xi}_n : n < \om \rangle$ by induction on $n$. If $n$ is even, let $u^{\xi}_n$ be the first element $x_{\eta}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that
$u^{\xi}_n\not\in \bigcup \{U^{\mu}_k : \mu < \xi, k < \om \} \cup \bigcup \{U^{\xi}_k: k < n \}$, and let $U^{\xi}_n\subset V_{x_{\eta}}$ be a Lindel\"{o}f clopen cone on $u^{\xi}_n$ such that
$$U^{\xi}_n \cap (\bigcup \{U^{\mu}_k: \mu < \xi, k < \om \} \cup \bigcup \{U^{\xi}_k:
k < n \}) = \emptyset.$$
Now, suppose that $n$ is odd. Let $k\in \omega$ be such that $\al_k > \mbox{sup}\{\rho(u^{\xi}_m,Z) : m < n \}$. First, we pick $\tilde{u}^{\xi}_n$ as the first element $x_{\eta}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that $\tilde{u}^{\xi}_n\in I_{\al_k + 1}(Z)\cap N_{\zeta^*}$ for some $\zeta^*\in C^*$. So, $\tilde{u}^{\xi}_n\not\in \bigcup \{U^{\mu}_m: \mu < \xi, m < \om \} \cup \bigcup \{U^{\xi}_m:
m < n \}$. Now, we choose $\tilde{U}^{\xi}_n\subset V_{x_{\eta}}$ as a Lindel\"{o}f clopen cone on $\tilde{u}^{\xi}_n$ such that
$$\tilde{U}^{\xi}_n \cap (\bigcup \{U^{\mu}_m: \mu < \xi, m < \om \} \cup \bigcup \{U^{\xi}_m:
m < n \}) = \emptyset$$
\noindent and
$$(\tilde{U}^{\xi}_n \setminus \{\tilde{u}^{\xi}_n\})\cap \bigcup \{N_{\nu} : \nu \leq \eta \} = \emptyset.$$
\noindent Then as $\tilde{u}^{\xi}_n = x_{\eta}$ and $V_{x_{\eta}} \cap \bigcup\{N_{\nu}: \nu \in C \}\subset \bigcup \{N_{\nu}: \nu \leq \eta \}$, we infer that $(\tilde{U}^{\xi}_n\setminus \{ \tilde{u}^{\xi}_n \})
\cap \bigcup\{N_{\nu}: \nu \in C \} = \emptyset$. However, note that if $\zeta$ is the ordinal such that $\tilde{u}^{\xi}_n \in N_{\zeta}$, it may happen that $\zeta\in C$. Then, we pick $u^{\xi}_n$ as the first element $x_{\rho}$ in the enumeration $\{x_{\nu} : \nu < \om_1 \}$ of $Z$ such that $u^{\xi}_n\in \tilde{U}^{\xi}_n \cap I_{\al_k}(Z)\cap N_{\delta}$ for some $\delta > \xi$. Note that $\delta\not\in C$, because $(\tilde{U}^{\xi}_n\setminus \{ \tilde{u}^{\xi}_n \})\cap \bigcup\{N_{\nu}: \nu \in C \} = \emptyset$. Now, we choose $U^{\xi}_n \subset \tilde{U}^{\xi}_n \cap V_{x_{\rho}}$ as a Lindel\"{o}f clopen cone on $u^{\xi}_n$ such that
$$(U^{\xi}_n \setminus \{u^{\xi}_n\})\cap \bigcup \{N_{\nu} : \nu \leq \rho \} = \emptyset.$$
\noindent Hence as $V_{x_{\rho}} \cap \bigcup \{N_{\nu} : \nu \in C \} \subset \bigcup \{N_{\nu} : \nu \leq \rho \}$ and $\delta\not\in C$, we infer that $U^{\xi}_n \cap \bigcup\{N_{\nu}: \nu \in C \} = \emptyset$.
\vspace{1mm} Now, let $\{A_{\xi} : \xi < \om_1 \}$ be a decomposition of $\om_1$. Fix $\xi < \om_1$. Let $y_{\xi} = \om_1 \cdot \al + \xi$. Then, we define
$$V_{y_{\xi}} = \{y_{\xi}\} \cup \bigcup \{U^{\mu}_n : \mu \in A_{\xi}, n \mbox{ odd} \}.$$
\noindent As $\bigcup \{U^{\mu}_n : \mu \in A_{\xi}, n \mbox{ odd} \} \cap \, \bigcup \{N_{\nu} : \nu \in C \} = \emptyset$, we deduce that $V_{y_{\xi}} \cap \, \bigcup \{N_{\nu} : \nu \in C \mbox{ and } \nu > \xi \} = \emptyset$. Then, we define a basic neighbourhood of $y_{\xi}$ in $X_{\al}$ as a set of the form
$$\{y_{\xi}\} \cup \bigcup \{U^{\mu}_n : \mu\in A_{\xi}, \mu \geq \zeta, n \mbox{ odd} \}$$
\noindent where $\zeta < \om_1$. Now, it is easy to see that conditions $(1)-(6)$ hold.
\vspace{2mm} Then, we define the desired space $X$ as the direct union of the spaces $X_{\al}$ for $\al < \om_2$.
\end{proof}
\vspace{1mm}\noindent {\bf Remark 2.2.} Note that by the construction carried out in the proof of Theorem 2.1, we have that
\begin{equation}\notag
\text{if $U\subs X$ is Lindel\"{o}f, then $\{{\xi}:N_{\xi}\cap U\ne \empt\}\in NS({\omega}_1)$,
}
\end{equation}
\noindent where $NS(\om_1)$ denotes the nonstationary ideal on $\om_1$.
\section{A stepping up theorem}
\vspace{2mm} In this section, for every cardinal $\la\geq \om_2$ we will construct from an $(\om_1,\la)$-LLSP space satisfying certain additional properties an $(\om_1,\al)$-LLSP space for every ordinal $\al < \la^{+}$. As a consequence of this construction, we will be able to extend Theorem 2.1 from $\om_2$ to any ordinal $\al < \om_3$. We need some preparation.
\begin{definitions}
{\em (a) Assume that $X$ is an LLSP space, $\be + 1 < \mbox{ht}(X)$, $x\in I_{\be +1}(X)$ and ${\mathbb B}_x$ is a neighbourhood basis for $x$. We say that ${\mathbb B}_x$ is {\em admissible}, if there is a pairwise disjoint family $\{U_{\nu} : \nu < \om_1 \}$ such that for every $\nu < \om_1$, $U_{\nu}$ is a Lindel\"{o}f clopen cone on some point $x_{\nu}\in I_{\be}(X)$ in such a way that ${\mathbb B}_x$ is the collection of sets of the form
$$\{x\}\cup \bigcup\{U_{\nu} : \nu \geq \xi \},$$
\noindent where $\xi < \om_1$.
Then, we will say that ${\mathbb B}_x$ is the {\em admissible basis for} $x$ {\em given by} $\{U_{\nu} : \nu < \om_1 \}$.
\vspace{1mm}
(b) Now, we say that $X$ is an {\em admissible space} if for every $x\in X$ there is a neighbourhood basis ${\mathbb B}_x$ such that for every successor ordinal $\be + 1 < \mbox{ht}(X)$ the following holds:
\begin{enumerate}[(1)]
\item ${\mathbb B}_x$ is an admissible basis for every point $x\in I_{\be + 1}(X)$,
\item if $x,y\in I_{\be + 1}(X) $ with $x\neq y$ and $\rho(x) = \rho(y)$, ${\mathbb B}_x$ is given by $\{U_{\nu} : \nu < \om_1 \}$ and ${\mathbb B}_y$ is given by $\{U'_{\nu} : \nu < \om_1 \}$, then for every $\nu,\mu < \om_1$ we have $U_{\nu}\cap U'_{\mu} = \emptyset$.
\end{enumerate}}
\end{definitions}
\vspace{2mm} Note that the space $X$ constructed in the proof of Theorem 2.1 is admissible.
\begin{definition} {\em We say that an LLSP space $X$ is {\em good}, if for every ordinal $\al < \mbox{ht}(X)$ and every set $\{U_n : n\in \om \}$ of Lindel\"{o}f clopen cones on points of $X$, the set $I_{\al}(X)\setminus \bigcup\{U_n : n\in \omega \}$ is uncountable.}
\end{definition}
\vspace{2mm} Note that the space $X$ constructed in the proof of Theorem 2.1 is good.
\vspace{1mm} Assume that $X$ is a good LLSP space. Then, we define the space $X^*$ as follows. Its underlying set is $X\cup \{z\}$ where $z\not\in X$. If $x\in X$, a basic neighbourhood of $x$ in $X^*$ is a neighbourhood of $x$ in $X$. And a basic neighbourhood of $z$ in $X^*$ is a set of the form
$$X^*\setminus \bigcup\{U_n : n\in \om \}$$
\noindent where each $U_n$ is a Lindel\"{o}f clopen cone on some point of $X$. Clearly, $X^*$ is a Lindel\"{o}f scattered Hausdorff P-space with $\mbox{ht}(X^*) = \mbox{ht}(X) + 1$.
\begin{theorem} Let $\la \geq \om_2$ be a cardinal. Assume that there is a good $(\om_1,\la)$-LLSP space that is admissible. Then, for every ordinal $\al < \la^+$ there is a good $(\om_1,\al)$-LLSP space.
\end{theorem}
So, we obtain the following consequence of Theorems 2.1 and 3.3.
\begin{corollary} For every ordinal $\al < \om_3$ there is a good $(\om_1,\al)$-LLSP space.
\end{corollary}
\begin{proof}[Proof of Theorem 3.3]
We may assume that $\la \leq \al < \la^+$. We
proceed by transfinite induction on $\al$. If $\al = \la$, the case is obvious. Assume that $\al =
\be + 1$ is a successor ordinal. Let $Y$ be a good $(\om_1,\be)$-LLSP space. For every $\nu <
\om_1$ let $Y_{\nu}$ be a P-space homeomorphic to $Y^*$ in such a way that $Y_{\nu}\cap Y_{\mu} =
\emptyset$ for $\nu < \mu < \om_1$. Clearly, the topological sum of the spaces $Y_{\nu}$ ($\nu <
\om_1$) is a good $(\om_1,\al)$-LLSP space.
\vspace{2mm} Now, assume that $\al > \la$ is a limit ordinal. Let $\theta = \mbox{cf}(\al)$. Note
that since there is a good admissible $(\om_1,\la)$-LLSP space and $\theta \leq \la$, there is a
good admissible LLSP space $T$ of width $\om_1$ and height $\theta$.
Let $\{\alpha_{\xi} : \xi < \theta \}$ be a closed strictly increasing sequence of ordinals
cofinal in $\al$ with $\al_0 = 0$. For every ordinal $\xi < \theta$, we put $J_{\xi} = \{\al_{\xi}
\} \times \om_1$. We may assume that the underlying set of $T$ is $\bigcup \{J_{\xi} : \xi <
\theta \}$, $I_{\xi}(T) = J_{\xi}$ for every $\xi < \theta$ and $I_{\theta}(T) = \emptyset$.
Fix a system of neighbourhood bases, $\{\mathbb B_x:x\in T\}$, which
witnesses that $T$ is admissible.
Write $V_s = \bigcup {\mathbb B}_s$ for $s\in T$.
So, writing $$T' = \{ s\in T : \rho(s,T) \mbox{ is a successor ordinal} \},$$
for each $s\in T$ with ${\rho}(s,T)={\xi}+1$, there is
$D_s=\{d^s_{\zeta}:{\zeta}<{\omega}_1\}\in [I_{\xi}(T)]^{{\omega}_1}$
and for each $d\in D_s$ there is a Lindelöf cone $U_d$ on $d$
such that
\begin{displaymath}
\mathbb B_s=\big\{\{s\}\cup \bigcup_{{\eta}\le {\zeta}}U_{d^s_{\zeta}}:{\eta}<{\omega}_1\big\}.
\end{displaymath}
\vspace{2mm} In order to carry out the desired construction, we will insert an adequate LLSP space
between $I_{\xi}(T)$ and $I_{\xi + 1}(T)$ for every $\xi < \theta$. If $\xi < \theta$, we define
$\delta_{\xi} = \mbox{o.t.}(\al_{\xi + 1}\setminus \al_{\xi})$.
We put $y^{\xi + 1}_{\nu} = \langle \al_{\xi + 1},\nu \rangle$ for $\xi < \theta$ and
$\nu < \om_1$, and we put
$D^{\xi}_{\nu} = \{x\in T : \rho(x,T) = \xi \mbox{ and } x\in V_{y^{\xi
+ 1}_{\nu}}\}=D_{y^{{\xi}+1}_{\nu}}$. Since $T$ is admissible,
$D^{\xi}_{\nu} \cap D^{\xi}_{\mu} = \emptyset$ for $\nu\neq \mu$.
Now, by the induction hypothesis, for every point $y = y^{\xi + 1}_{\nu}$ where $\xi < \theta$ and
$\nu < \om_1$ there is a Lindel\"{o}f scattered Hausdorff P-space $Z_y$ of height $\de_{\xi} + 1$
such that $I_0(Z_y) = D^{\xi}_{\nu}$,
$|I_{\nu}(Z_y)|={\omega}_1$ for ${\nu}<{\delta}_{\xi},$
$I_{\de_{\xi}}(Z_y) = \{y\}$ and $Z_y \cap T = D^{\xi}_{\nu}
\cup \{y\}$.
Also, we assume that $Z_{y^{\xi + 1}_{\nu}} \cap Z_{y^{\xi + 1}_{\mu}} = \emptyset$ for
$\nu\neq\mu$ and
$(Z_{y^{\xi + 1}_{\nu}}\setminus \{y^{\xi + 1}_{\nu}\}) \cap (Z_{y^{\eta + 1}_{\mu}}\setminus \{y^
{\eta + 1}_{\mu}\}) = \emptyset$
for $\xi\neq \eta$ and $\nu,\mu < \om_1$.
\vspace{2mm} Now, our aim is to define the desired $(\om_1,\al)$-LLSP space $Z$. Its underlying set is
$$Z=T \cup \bigcup\{Z_y : y\in T'\}.$$
\noindent If $V$ is a Lindel\"{o}f clopen cone on a point $z\in T$, we define
$$V^* = V \cup \bigcup\{ (Z_y\setminus T) : y\in V\cap T' \}.$$
Observe that if $y\in V\cap T'$, then $Z_y\setm V^*= D_y\setm V$
and $D_y\setm V$ is countable because $T$ is admissible. So $Z_y\cap V^*$ is open in $Z_y$
because $Z_y$ is a P-space.
Now, assume that $x\in Z_s$ for some $s\in T'$.
Then, if $U$ is a Lindel\"{o}f
clopen cone on $x$ in $Z_s$, we define
$$U^{\sim} = U \cup \bigcup \{(U_y)^* : y\in D_s\cap U \}.$$
\vspace{2mm} Note that for every $s\in T'$ we have $(V_s)^* = (Z_s)^{\sim}$.
\vspace{2mm}
After that preparation we can define the bases of the points of $Z$.
Suppose that $x \in Z = T \cup \bigcup\{Z_y : y\in T'\}$.
If $x\in T\setm T'$,
then let
\begin{displaymath}
\mathbb B^Z_x=\{V^*:\text{$V$ is a Lindel\"{o}f clopen cone on $x$ in $T$}\}.
\end{displaymath}
If $x\in (Z\setm T)\cup T'$,
then pick first the unique $s\in T'$ such that $x\in Z_s\setm I_0(Z_s)$, and
let
\begin{displaymath}
\mathbb B^Z_x=\{U^\sim :\text{$U$ is a Lindel\"{o}f clopen cone on $x$ in $Z_s$}\}.
\end{displaymath}
\noindent{\bf Claim 1.}
{\em $\{\mathbb B^Z_x:x\in Z\}$ is a system of neighbourhood bases of a topology ${\tau}_Z$.}
\begin{proof}
Assume that $y\in W\in \mathbb B^Z_x$.
We should show that $\mathbb B^Z_y\cap \mathcal P(W)\ne \empt$.
Assume first that $x\in T\setm T'$, so that $W=V^*$ for some Lindelöf clopen cone $V$ on $x$ in $T$.
If $y\in T\setm T'$, then $y\in V$, and so $S\subs V$ for some Lindelöf clopen cone $S$ on $y$ in $T$.
Thus $y\in S^*\subs V^*$ and $S^*\in \mathbb B^Z_y$.
If $y\in (Z\setm T)\cup T'$ then pick first the unique $s\in T'$ such that
$y\in Z_s\setm I_0(Z_s)$.
Then $s\in V$ because otherwise $y\in V^*$ is not possible.
So as we observed, $V^*\cap Z_s$ is open in $Z_s$.
So let $S$ be a Lindel\"{o}f clopen cone on $y$ in $Z_s$ with $S\subs V^*\cap Z_s$.
Then $y\in S^\sim\subs V^*$ and $S^\sim\in \mathbb B^Z_y$.
\smallskip
Assume now that $x\in (Z\setm T)\cup T'$, then pick first the unique $s\in T'$ such that $x\in Z_s\setm I_0(Z_s)$. Then $W=U^\sim $ for some Lindelöf clopen cone $U$ on $x$ in $Z_s$.
If $y\in Z_s\setm I_0(Z_s)$, then $S\subs U$ for some Lindelöf clopen cone $S$ on $y$ in $Z_s$,
and so $S^\sim \in \mathbb B^Z_y$ and $S^\sim\subs U^\sim$.
If $y\notin Z_s \setm I_0(Z_s)$, then $y\in (U_d)^*$ for some $d\in I_0(Z_s)\cap U$, and so
there is $S\in \mathbb B^Z_y$ with $S\subs (U_d)^*$ using what we proved so far.
Thus $S\subs U^\sim$ as well.
\end{proof}
\noindent{\bf Claim 2.}
{\em ${\tau}_Z$ is Hausdorff.}
\begin{proof}
Assume that $\{x,y\}\in {[Z]}^{2}$. Let $s$ and $t$ be elements of $T$ such that
$x\in Z_s\setm I_0(Z_s)$ if $x\notin T\setm T'$ and $s=x$ otherwise, and
$y\in Z_t\setm I_0(Z_t)$ if $y\notin T\setm T'$ and $t=y$ otherwise.
If $s\neq t$, consider disjoint Lindel\"{o}f clopen cones $U$ and $V$ on $s$ and $t$ in $T$ respectively. Note that if $w\in U \cap T'$, then $Z_w\setminus T \subset U^*$ because $w\in U$, but $(Z_w\setminus T)\cap V^* = \emptyset$ because $w\not\in V$, and analogously if $w\in V \cap T'$ then $Z_w\setminus T \subset V^*$ but $(Z_w\setminus T)\cap U^* = \emptyset$. So, $U^*$ and $V^*$ are disjoint open sets containing $x$ and $y$ respectively.
If $s=t$, then there are disjoint cones in $Z_s$ on $x$ and $y$, $U$ and $V$, respectively.
Then $U^*$ and $V^*$ are disjoint open sets containing $x$ and $y$, respectively.
\end{proof}
It is trivial from the definition that $Z$ is a $P$-space because
$T$ is a P-space and the $Z_s$ are P-spaces.
By transfinite induction on ${\delta}<{\alpha}$
it is easy to check that
\begin{displaymath}
I_{\delta}(Z)=\left\{\begin{array}{ll}
J_{\xi}&\text{if ${\delta}={\alpha}_{\xi}$,}\\\\
\bigcup\{I_{\eta}(Z_s):s\in I_{{\xi}+1}(T)\}&\text{if ${\alpha}_{\xi}<{\delta}={\alpha}_{\xi}+{\eta}<{\alpha}_{{\xi}+1}$,}\\
\end{array}\right.
\end{displaymath}
so $Z$ is scattered with height $\al$ and width ${\omega}_1$.
\smallskip
\smallskip
\noindent{\bf Claim 3.} {\em $Z$ is locally Lindelöf.}
\begin{proof}
\vspace{1mm} Note that if $x\in T\setminus T'$ and $U^*\in \mathbb B^Z_x$, then for every $V^*\in \mathbb B^Z_x$ with $V^*\subset U^*$ we have that $U^*\setminus V^* = \bigcup \{W^*_n : n \in \nu \}$ where $\nu \leq \omega$ in such a way that each $W_n$ is a Lindel\"{o}f clopen cone on some point
$v_n\in T \cap U$ in $T$ with $\rho(v_n, T) < \rho(x,T)$.
\vspace{1mm} Also, if $x\in T' \cup (Z\setminus T)$ and $U^{\sim}\in \mathbb B^Z_x$ then for every $V^{\sim}\in \mathbb B^Z_x$ with $V^{\sim}\subset U^{\sim}$, if $s$ is the element of $T'$ with $x\in Z_s\setminus I_0(Z_s)$, we have that $U^{\sim}\setminus V^{\sim} = \bigcup \{U'_n : n\in \nu \}$ where $\nu\leq \omega$ in such a way that for every $n\in \nu$, either $U'_n = U^{\sim}_n$ where $U_n$ is a Lindel\"{o}f clopen cone on some point $u_n \in Z_s\cap U$ in $Z_s$ with $0 < \rho(u_n,Z_s) < \rho(x,Z_s)$ or $U'_n = U^*_n$ where $U_n$ is a Lindel\"{o}f clopen cone on some point $u_n\in D_s\cap U$ in $T$.
\vspace{1mm} Now, proceeding by transfinite induction on $\rho(x,Z)$, we can verify that if $x\in T\setminus T'$ and $U$ is a Lindel\"{o}f clopen cone on $x$ in $T$, then $U^*$ is a Lindel\"{o}f clopen cone on $x$ in $Z$, and that if $x\in Z_s\setminus I_0(Z_s)$ for some $s\in T'$ and $U$ is a Lindel\"{o}f clopen cone on $x$ in $Z_s$, then $U^{\sim}$ is a Lindel\"{o}f clopen cone on $x$ in $Z$. Therefore, $Z$ is locally Lindel\"{o}f.
\end{proof}
\noindent{\bf Claim 4.} {\em $Z$ is good.}
\begin{proof}
Let ${\delta}<{\alpha}=ht(Z)$
and let $\{W_n : n\in \om \}$ be a family of Lindelöf cones in $Z$. Since every $W_n$ is covered by countably many
Lindelöf cones from the basis, we can assume that $W_n\in \mathbb B^Z_{x_n}$ for some $x_n\in Z$
for each $n\in {\omega}$.
For each $n$ pick $y_n\in T$ such that $y_n=x_n$ if $x_n\in T$ and $x_n\in Z_{y_n}$ otherwise.
Then $W_n\subs W'_n$ for some $W'_n\in \mathbb B^Z_{y_n}$, so we can assume that
$\{x_n:n\in {\omega}\}\subs T$.
We can also assume that if $x_n\in T'$, then $W_n$ is as large as possible, i.e.
$W_n=Z_{x_n}^\sim=(V_{x_n})^*$.
If $x_n\in T\setm T'$, then $W_n=S_n^*$ for some Lindelöf cone $S_n$ on $x_n$ in $T.$
If ${\delta}={\alpha}_{\xi}$ for some ${\xi}$, then
$I_{\delta}(Z)\cap W_n=I_{\delta}(Z)\cap V_{x_n}$ if $x_n\in T'$
and $I_{\delta}(Z)\cap W_n=I_{\delta}(Z)\cap S_n$ if $x_n\in T\setm T'$.
So $I_{\delta}(Z)\setm \bigcup_{n\in {\omega}}W_n$ is uncountable because
$T$ is good.
Assume that ${\alpha}_{\xi}< {\delta}<{\alpha}_{{\xi}+1}$
and let ${\delta}={\alpha}_{\xi}+{\eta}$.
Pick $s\in I_{{\alpha}_{{\xi}+1}}(Z)\setm \bigcup_{n\in {\omega}}W_n$. Then $Z_s\setm \bigcup_{n\in {\omega}}W_n\supset I_{\eta}(Z_s)$, and so $I_{\delta}(Z)\setm \bigcup_{n\in {\omega}}W_n\supset I_{\eta}(Z_s)$, and hence $I_{\delta}(Z)\setm \bigcup_{n\in {\omega}}W_n$ is uncountable.
\end{proof}
Thus, the space $Z$ is as required.
\end{proof}
\section{Cardinal sequences of length $<\om_4$}
\vspace{2mm} In this section, we will show the following result.
\begin{theorem}\label{tm:o4} If $V=L$, then there is a cardinal-preserving partial order ${\mathbb P}$ such that in $V^{\mathbb P}$ there is an
$(\om_1,\al)$-LLSP space for every ordinal $\al < \om_4$.
\end{theorem}
\vspace{2mm} If $S = \bigcup \{ \{\al\} \times A_{\al} : \al < \eta \}$ where $\eta$ is a non-zero ordinal and each $A_{\al}$ is a non-empty set of ordinals, then for every $s = \langle \al,\xi \rangle\in S$ we write $\pi(s) = \al$ and $\zeta(s) = \xi$.
\vspace{1mm} The following notion is a refinement of a notion used implicitly in \cite{bs87}.
\begin{definition} {\em We say that ${\mathbb S} = \langle S,\preceq, i \rangle$ is an {\em LLSP poset}, if the following conditions hold:
\begin{enumerate}[(P1)]
\item $\langle S,\preceq \rangle$ is a partial order with $S= \bigcup \{S_{\al} : \al < \eta \}$ for some non-zero ordinal $\eta$ such that each $S_{\al} = \{\al\} \times A_{\al}$ where $A_{\al}$ is a non-empty set of ordinals.
\item If $s \prec t$ then $\pi(s) < \pi(t)$.
\item If $\al < \be < \eta$ and $t\in S_{\be}$, then $\{s\in S_{\al} : s \prec t \}$ is uncountable.
\item
If $\ga < \eta$ with $\mbox{cf}(\ga) = \om$, $t\in S_{\gamma}$ and $\langle t_n : n \in \om \rangle$ is a sequence of elements of $S$ such that $t_n \prec t$ for every $n\in \om$, then for every ordinal $\be < \ga$ the set $\{s\in S_{\be} : s \prec t \mbox{ and } s\not\preceq t_n \mbox{ for } n\in \om \}$ is uncountable.
\item $i : [S]^2 \rightarrow [S]^{\leq {\om}}$ such that for every $\{s,t\}\in [S]^2$ the following holds:
\begin{enumerate}[(a)]
\item If $v\in i\{s,t\}$ then $v\preceq s,t$.
\item If $u\preceq s,t$, then there is $v\in i\{s,t\}$ such that $u\preceq v$.
\end{enumerate}
\end{enumerate}}
\end{definition}
\noindent If there is an uncountable cardinal $\lambda$ such that $|S_{\al}| = \lambda$ for $\al < \eta$, we will say that $\langle S,\preceq, i \rangle$ is a $(\lambda,\eta)$-{\em LLSP poset}.
\vspace{2mm} If ${\mathbb S} = \langle S,\preceq, i \rangle$ is an LLSP poset
with $S= \bigcup \{S_{\al} : \al < \eta \}$,
we define its {\em associated LLSP space} $X = X({\mathbb S})$ as follows.
The underlying set of $X({\mathbb S})$ is $S$. If $x\in S$ we write $U(x) = \{y\in S: y \preceq x \}$. Then, for every $x\in S$ we define a basic neighbourhood of $x$ in $X$ as a set of the form $U(x)\setminus \bigcup\{U(x_n) : n\in \om \}$ where each $x_n \prec x$. It is easy to check that $X$ is a locally Lindel\"{o}f scattered Hausdorff P-space (see \cite{ba2007} for a parallel proof).
And by conditions $(P3)$ and $(P4)$ in Definition 4.2, we infer that $\mbox{ht}(X) = \eta$ and
$I_{\al}(X) = S_{\al}$ for every $\al < \eta$.
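As a sketch of the Hausdorff separation (not spelled out in the original; the cited parallel proof contains the full details), the role of $i$ is the following. If $x,y\in S$ are $\preceq$-incomparable, then by condition $(P5)(a)$ every $v\in i\{x,y\}$ satisfies $v\prec x$ and $v\prec y$, so the sets
$$U(x)\setminus \bigcup\{U(v) : v\in i\{x,y\}\} \;\mbox{ and }\; U(y)\setminus \bigcup\{U(v) : v\in i\{x,y\}\}$$
\noindent are basic open neighbourhoods of $x$ and $y$, since $i\{x,y\}$ is countable; and by condition $(P5)(b)$ they are disjoint, because any common $\preceq$-lower bound of $x$ and $y$ lies in $U(v)$ for some $v\in i\{x,y\}$. (If, say, $x\prec y$, then $U(x)$ and $U(y)\setminus U(x)$ separate the two points.)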
\vspace{2mm} In order to prove Theorem 4.1, first we will
construct an $(\om_1,\om_3)$-LLSP space $X$ in a generic extension by means of
an $\om_1$-closed $\om_2$-c.c. forcing, by using an argument similar to the one
given by Baumgartner and Shelah in \cite{bs87}.
\vspace{2mm} Recall that a function $F:[\om_3]^2 \rightarrow [\om_3]^{\leq\om_1}$ has {\em property $\Delta$}, if $F\{\al,\be\}\subset \mbox{min} \{\al,\be\}$ for every $\{\al,\be\}\in [\om_3]^2$ and for every set $D$ of countable subsets of $\om_3$ with $|D| = \om_2$ there are $a,b\in D$ with $a\neq b$ such that for every $\al \in a\setminus b$, $\be \in b\setminus a$ and $\tau \in a\cap b$ the following holds:
\begin{enumerate}[(a)]
\item if $\tau < \al,\be$ then $\tau\in F\{\al,\be\}$,
\item if $\tau < \beta$ then $F\{\al,\tau\} \subset F\{\al,\be\}$,
\item if $\tau < \al$ then $F\{\tau,\be\} \subset F\{\al,\be\}$.
\end{enumerate}
\vspace{1mm} By a result due to Velickovic, it is known that $\square_{\om_2}$ implies the existence of a function $F:[\om_3]^2 \rightarrow [\om_3]^{\leq\om_1}$ satisfying property $\Delta$ (see \cite[Chapter 7, Lemma 7.4.9]{to} for a proof).
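For reference (this is not needed in the proof below, and is recorded here only in its standard Jensen formulation, which holds in $L$), $\square_{\om_2}$ asserts the existence of a sequence $\langle C_{\al} : \al < \om_3, \, \al \mbox{ limit} \rangle$ such that
$$\begin{array}{l}
\mbox{(i) each } C_{\al} \mbox{ is a club subset of } \al \mbox{ with } \mbox{o.t.}(C_{\al})\leq \om_2, \mbox{ and}\\
\mbox{(ii) } C_{\be} = C_{\al}\cap \be \mbox{ whenever } \be \mbox{ is a limit point of } C_{\al}.
\end{array}$$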
\begin{proof}[Proof of Theorem \ref{tm:o4}]
Let $F:[\om_3]^2 \rightarrow [\om_3]^{\leq\om_1}$ be a function with property $\Delta$. First, we construct by forcing an $(\om_1,\om_3)$-LLSP poset. Let $S= \bigcup \{S_{\al} : \al < \om_3 \}$ where $S_{\al} = \{\al\}\times \om_1$ for each $\al < \om_3$. $S$ will be the underlying set of the required poset. We define $P$ as the set of all $p = \langle x_p,\preceq_p,i_p\rangle$ satisfying the following conditions:
\begin{enumerate}[(1)]
\item $x_p$ is a countable subset of $S$.
\item $\preceq_p$ is a partial order on $x_p$ such that:
\begin{enumerate}[(a)]
\item if $s\prec_p t$ then $\pi(s) < \pi(t)$,
\item if $s\prec_p t$ and $\pi(t)$ is a successor ordinal $\be + 1$, then there is $v\in S_{\be}$ such that $s\preceq_p v \prec_p t$.
\end{enumerate}
\item $i_p : [x_p]^2 \rightarrow [x_p]^{\leq \om }$ satisfying the following conditions:
\begin{enumerate}[(a)]
\item if $s\prec_p t$ then $i_p\{s,t\} = \{s\}$,
\item if $s\not\preceq_p t$ and $\pi(s) < \pi(t)$, then $i_p\{s,t\}\subset \bigcup \{S_{\al} : \al \in F\{\pi(s),\pi(t)\}\}$,
\item if $s,t\in x_p$ with $s\neq t$ and $\pi(s) = \pi(t)$ then $i_p\{s,t\} = \emptyset$,
\item $v\preceq_p s,t$ for all $v\in i_p\{s,t\}$,
\item for every $u\preceq_p s,t$ there is $v\in i_p\{s,t\}$ such that $u\preceq_p v$.
\end{enumerate}
\end{enumerate}
\vspace{1mm} If $p,q\in P$, we write $p\leq q$ iff $x_q \subset x_p$, $\preceq_p \upharpoonright x_q = \preceq_q$ and $i_p\upharpoonright [x_q]^2 = i_q$. We put ${\mathbb P} = \langle P,\leq \rangle$.
\vspace{2mm} Clearly, ${\mathbb P}$ is $\om_1$-closed. And since the function $F$ has property $\Delta$, it is easy to check that ${\mathbb P}$ has the $\om_2$-c.c., and so ${\mathbb P}$ preserves cardinals.
\vspace{2mm} Now, let $G$ be a ${\mathbb P}$-generic filter.
We write $\preceq = \bigcup\{\preceq_p : p\in G \}$ and $i = \bigcup \{i_p : p\in G\}$. It is easy to see that
$S = \bigcup\{x_p : p\in G \}$ and $\preceq$ is a partial order on $S$.
Then, we have that $\langle S,\preceq, i \rangle$ is an $(\om_1,\om_3)$-LLSP poset. For this, note that conditions $(P1),(P2),
(P5)$ in Definition 4.2 are obvious, and condition $(P3)$ follows from a basic density argument. So, we verify condition $(P4)
$. For every $t\in S$ such that $\gamma = \pi(t)$ has cofinality $\om$,
for every sequence $\langle t_n : n\in \om \rangle$ of elements of
$S$,
for every ordinal $\be < \ga$ and for every ordinal $\xi <
\om_1$ let
\vspace{2mm}
$D_{t,\{t_n : n\in \om \},\be,\xi} = \{ q\in P : \{t\}\cup \{t_n:n\in \om \} \subset x_q \mbox{ and either } (t_n\not\prec_q t \mbox{ for some } n\in \om) \mbox{ or } (t_n \prec_q t \mbox{ for every } n\in \om \mbox{ and there is } y\in S_{\be}\,\cap\, x_q \mbox{ with }$ $\zeta(y) > \xi \mbox{ such that } y \prec_q t \mbox{ and } y\not\preceq_q t_n \mbox{ for every } n\in \om ) \}.$
\vspace{2mm} Since ${\mathbb P}$ is $\om_1$-closed, we have that $D_{t,\{t_n : n\in \om \},\be,\xi}\in V$. Then, consider $p = \langle x_p,\preceq_p,i_p\rangle\in P$. We define a $q\in D_{t,\{t_n : n\in \om \},\be,\xi}$ such that $q\leq p$. Without loss of generality, we may assume that $t\in x_p$. We distinguish the following cases.
\vspace{2mm}\noindent {\bf Case 1}. $t_n\not\in x_p$ for some $n\in \om$.
\vspace{1mm} We define $q =\langle x_q,\preceq_q,i_q \rangle$ as follows:
\vspace{1mm} (a) $x_q = x_p \cup \{t_n : n\in \om \}$,
\vspace{1mm} (b) $\prec_q = \prec_p$,
\vspace{1mm} (c) $i_q\{x,y\} = i_p\{x,y\}$ if $\{x,y\}\in [x_p]^2$, $i_q\{x,y\} = \emptyset$ otherwise.
\vspace{2mm}\noindent {\bf Case 2}. $t_n\in x_p$ for every $n\in \om$.
\vspace{1mm} If $t_n \not\prec_p t$ for some $n\in \omega$, we put $q = p$. So, assume that $t_n \prec_p t$ for all $n\in \om$. Let $u\in S_{\be}\setminus x_p$ be such that $\zeta(u) > \xi$. We define $q =\langle x_q,\preceq_q,i_q \rangle$ as follows:
\vspace{1mm} (a) $x_q = x_p \cup \{u\}$,
\vspace{1mm} (b) $\prec_q = \prec_p \cup \{\langle u,v \rangle : t \preceq_p v \}$,
\vspace{1mm} (c) $i_q\{x,y\} = i_p\{x,y\}$ if $\{x,y\}\in [x_p]^2$, $i_q\{x,y\} = \{x\}$ if $x \prec_q y$, $i_q\{x,y\} = \{y\}$ if $y \prec_q x$, $i_q\{x,y\} = \emptyset$ otherwise.
\vspace{2mm} So, $D_{t,\{t_n : n\in \om \},\be,\xi}$ is dense in ${\mathbb P}$, and hence condition $(P4)$ holds. Let $X = X(\langle S,\preceq,i \rangle )$. For every $x\in S$, we write $U(x) = \{y\in S : y\preceq x \}$. By conditions $(2)(b)$ and $(3)(c)$ in the definition of P, we see that if $x\in S_{\be + 1}$ for some $\be < \om_3$, then $x$ has an admissible basis in $X$ given by $\{U(y) : y \prec x, \pi(y) = \be \}$. Thus, $X$ is an admissible space. And clearly, $X$ is good. So, by Theorem 3.3, we can construct from the space $X$ an $(\om_1,\al)$-LLSP space for every ordinal $\om_3\leq \al < \om_4$.
\end{proof}
Now, assume that $\ka$ is an uncountable regular cardinal. Recall that a topological space $X$ is a $P_{\ka}$-{\em space}, if the intersection of any family of less than $\ka$ open subsets of $X$ is open in $X$. And we say that $X$ is $\kappa$-{\em compact}, if every open cover of $X$ has a subcover of size less than $\kappa$.
By an $SP_{\kappa}$ {\em space} we mean a scattered Hausdorff $P_{\ka}$-space. Then, we want to remark that by using arguments that are parallel to the ones given in the proofs of the above theorems, we can show the following more general results:
\vspace{1mm} (1) For every uncountable regular cardinal $\ka$ and every ordinal $\al < \ka^{++}$, there is a locally $\ka$-compact $SP_{\kappa}$ space $X$ such that $\mbox{ht}(X) = \al$ and $\mbox{wd}(X) = \kappa$.
\vspace{1mm} (2) If $V=L$ and $\ka$ is an uncountable regular cardinal, then there is a cardinal-preserving partial order ${\mathbb P}$ such that in $V^{\mathbb P}$ we have that for every ordinal $\al < \ka^{+++}$ there is a locally $\ka$-compact $SP_{\kappa}$ space $X$ such that $\mbox{ht}(X) = \al$ and $\mbox{wd}(X) = \kappa$.
|
1,477,468,750,668 | arxiv | \section{Introduction}
Gauge theories are a cornerstone in the description of various naturally
occurring phenomena in Nature, whether in particle or in condensed matter
physics \cite{Wilczek_2016}. These theories are characterized by the presence
of local conservation laws, which are in general not enough to make the models
integrable. However, such local conservation laws greatly constrain these systems,
leading to exotic phenomena involving quantum entanglement of the fundamental
degrees of freedom over long distances, many of which remain unexplored due to
the computational difficulty of studying them on a classical computer. In addition,
one of the outstanding challenges in fundamental physics is to study real-time
dynamics of the quantum entanglement inherent in gauge theories that leads to
confinement. The rapid experimental development of quantum computers (both
analog and digital) \cite{Preskill2018quantumcomputingin, Noh_2016,superconductingQubits, Lanyon_2011, bloch_quantum_2012} following the
pioneering suggestion of Feynman \cite{feynman_simulating_nodate} provides an
opportunity to overcome these bottlenecks and make new fundamental progress in this
field.
While certain initial exciting developments have been obtained from the studies
of finite, relatively small systems using classical computations such as exact
diagonalizations and variational methods using the MPS ans\"{a}tze, it is pertinent
to understand the corresponding behaviour in large quantum systems. This is an
exponentially difficult problem in the system size for most of the classical
computational methods in use, thus demanding the use of new toolboxes
such as quantum computers. Although theoretically promising, current quantum
computers in use are either of the analog variety, where a certain experimental
set-up can very efficiently emulate only a limited variety of physical systems;
or of the digital kind, which are limited by the moderate number of
available (noisy) qubits. There has however, been some progress towards the
development of hybrid analog-digital approaches with the aim to combine the
desirable features of both \cite{Parra_Rodriguez_2020}. For the case of
digital quantum computation, which will be our main focus in this article, it
becomes important to devise efficient optimizations of the quantum circuitry so that
the studies can be extended to large quantum systems. The results need to be
benchmarked from an independent computational method at small or medium system
sizes.
Moreover, one of the crucial theoretical physics problems where quantum
computers could play a central role is establishing the emergence of
thermalization in isolated many-body quantum systems, necessary to describe
equilibrium properties of the system using quantum statistical mechanics
\cite{PhysRevE.50.888, PhysRevA.43.2046}. This has become well-known in the
literature under the eigenstate thermalization hypothesis (ETH). On the other hand,
in the absence of thermalization, the properties of the initial states are
preserved for a long time, and the growth of quantum entanglement is very slow.
This is known to occur in the many-body localized (MBL) phases \cite{ALET2018498},
and has raised the possibility of using such phases as quantum memories, which
can encode quantum information with high fidelity \cite{2016NatPh12907S}.
Confining phases of gauge theories could potentially offer the possibility of
realizing topologically stable qubits that are unaffected by local decoherent noise
and can act as quantum memories. Another relatively new development is the discovery of
atypical quantum states in (strongly) interacting quantum systems, dubbed as quantum
many body scars \cite{Serbyn_2021}, which do not follow the ETH unlike other quantum
states. Even though such states belong to the highly excited part of the energy spectrum,
they have anomalously low entropy. Studying properties of such quantum states on
large systems would also benefit from a quantum computer given the computational
complexity for classical simulation methods.
In the context of particle physics, especially for non-perturbative ab-initio
computations in lattice quantum chromodynamics (LQCD), a plethora of questions involving
physics at real-time and high baryon density cannot be reliably answered using
classical algorithms running on classical computers. Quantum computers, both
analog and digital have been proposed in order to make progress in this front
\cite{Banuls:2019bmf}. Several pioneering experiments \cite{Martinez:2016yna,
Bernien_2017, Schweizer_2019, Mil:2019pbt, Yang:2020yer, Davoudi:2019bhy} have
already demonstrated the possibility of harnessing the new technology to address
questions posed in the context of high-energy physics (HEP). Further, the
availability of noisy intermediate-scale (universal) quantum computers from the IBM
and the Rigetti corporations have empowered the theorists to perform experiments.
Recently, there have been many such preliminary efforts to address representative
questions in simpler gauge theories using quantum computing techniques. These
include investigation of scattering and real-time dynamics in spin systems
\cite{Lamm_2018, Gustafson:2019mpk, gustafson2021benchmarking} and in gauge theories
\cite{Klco_2018, Klco_2020}, static charges in gauge theories \cite{Zhang_2018}, as
well as mass spectra in Abelian and non-Abelian lattice gauge theories
\cite{Lewis:2019wfx, Atas:2021ext}. Naturally, the efforts to represent only
physical states of the corresponding gauge theory Hamiltonian, which are
invariant under the Gauss Law, in the limited quantum hardware available to us
have spurred a cascade of theoretical developments \cite{Stryker:2018efp,Raychowdhury:2019iki, Raychowdhury:2018osk, Davoudi:2020yln, Klco:2018zqz,Klco:2020aud, Ciavarella_2021, Bender_2020, aidelsburger2021cold,zohar2021quantum, kasper2020universal,Funcke_2021}.
A major obstacle in the design of quantum circuits and quantum algorithms is
the decoherence of the superconducting qubits in contemporary quantum computers,
also called noisy intermediate scale quantum (NISQ) devices, such as the IBM Q
and the Rigetti platforms. The qubits in these devices are only approximately isolated from the
environment, and the fidelity of the gate operations needed to induce interaction terms
among them depends on whether the operation is a single- or a
multi-qubit operation (the latter have smaller fidelities). Moreover, single gate
operations can have different gate times depending on the specific qubit they are
applied to. These factors induce errors in
the measured quantities, and although quantum error correction schemes have been devised
decades ago \cite{PhysRevA.52.R2493, PhysRevLett.77.793}, their
implementation is hindered by the fact that they require
additional qubits to correct the error on a single qubit, making them impractical for NISQ
era devices with a limited number of available qubits (typically of the order of 6-10).
A recent alternative approach uses only the available qubits, repeating the
experiment several times with different sets of quantum gates. The resulting
data can be extrapolated to the case when there is no noise affecting the
experiment, assuming a general noise model. This approach, known as zero
noise extrapolation (ZNE), has been intensively investigated in \cite{PhysRevX.7.021050, Kandala_2019,He_2020, larose2020mitiq, Giurgica_Tiron_2020,
lowe2020unified, sopena2021simulating}. It falls into the category of error mitigation
rather than error correction. Schemes for addressing depolarising errors
have been investigated in \cite{PhysRevLett.122.180501}, and readout errors
in \cite{funcke2020measurement, Nachman2020, Jattana2020}. Proposals of correcting
depolarizing noise in a hierarchical fashion in quantum circuits depending on whether
they contribute to the UV or IR physics have been put forward in
\cite{klco2021hierarchical}, and would allow targeted improvements in scientific
applications in appropriate energy windows.
Our main goal in this article is to implement quantum circuits in noisy
intermediate scale quantum computers (NISQ) for simulating real-time dynamics in
pure gauge theories on single and double plaquettes. The plaquette interaction has been
considered before in \cite{Lewis:2019wfx} following the usual Wilson formulation of
lattice gauge fields, which has an infinite-dimensional Hilbert space for each
link degree of freedom. This necessarily requires a truncation of the allowed set of
states to be represented in an architecture with a finite number of qubits. Instead, we
will consider a different formulation of lattice gauge theories, which are commonly
known as the quantum link models (QLMs)
\cite{Horn:1981kk, Orland:1989st,Chandrasekharan:1996ih}.
This formulation is ideally suited for implementation in quantum computers since
gauge invariance is realized exactly with a finite dimensional Hilbert space for
each link degree of freedom. In fact, the dimensionality of the local Hilbert space
can be tuned in a gauge invariant manner. Initial studies of construction
of quantum circuits for the plaquettes using the QLM approach were reported in
\cite{2011NJPh13h5007M, Mezzacapo:2015bra}. We focus on the theories with
$\mathbb{Z}_2$ and $U(1)$ local symmetries and explore their formulations on triangular and
square lattice geometries. The Hamiltonians with these local symmetries have been
used to describe physical systems in condensed matter and quantum information
\cite{Kitaev_2003,PhysRevB.69.220403, Hermele_2004}. A quantum circuit for a triangular
$U(1)$ quantum link model has been proposed in \cite{brower2020lattice} and tested
with classical hardware. Another recent work dealing with the triangular $U(1)$
quantum link model used dualization to obtain dual quantum height variables, which
allows a denser encoding in terms of qubits \cite{banerjee2021nematic}.
The rest of the paper is organized as follows. In Sec \ref{sec:models} we describe
the Hamiltonians as well as the corresponding local unitary Abelian
transformations which keep the Hamiltonian invariant, showing the constrained
nature of the Hilbert space in these models. In Sec \ref{sec:circuits} we describe the
quantum circuit used to implement the Hamiltonian interactions and perform the
real-time dynamics. We outline the methodology we adopted in mitigating the errors due
to decoherence and readout in Sec \ref{sec:errcorr}; and outline the experimental
results obtained in Sec \ref{sec:results}. Finally, we discuss possibilities of
extending this study to larger lattice dimensions as well as to non-Abelian gauge
theories in Sec \ref{sec:conc}.
\section{Abelian Lattice Gauge Theory Models} \label{sec:models}
In this section we discuss the quantum Hamiltonians, which are invariant under
local $\mathbb{Z}_2$ and the $U(1)$ transformations. The gauge theory Hamiltonians are
characterized by the plaquette term which is the simplest gauge invariant operator
that can be written down.
\subsection{The \texorpdfstring{$\mathbb{Z}_2$}{Z(2)} gauge theory}
Consider a square lattice for which the smallest closed loop would be a plaquette
containing the four links around an elementary square. Through a four spin
interaction involving $\SZ{} = \Sz{}/2$ operators, and single spin $\SX{} = \Sx{}/2$
operator on each of the links, we can realize the $\mathbb{Z}_2$ gauge theory Hamiltonian:
\begin{align}
H & = -g \sum_{\Box} U_{\Box} - \Gamma \sum_i \SX{i} \, , \\
U_{\Box} & = \SZ{r,\mu} \SZ{r+\mu,\nu} \SZ{r+\nu,\mu} \SZ{r,\nu} \, .
\end{align}
The gauge symmetry arises due to the invariance of the Hamiltonian under local
unitary transformations according to the operator:
\begin{equation}
\begin{aligned}
V_r &= \Sx{r,\mu} \Sx{r,\nu} \Sx{r-\mu,\mu} \Sx{r-\nu,\nu}\\
&= \exp \left[i \pi \sum_\mu (\SX{r,\mu} - \SX{r-\mu,\mu}) \right].
\end{aligned}
\end{equation}
This can be directly proven from the fact that the Hamiltonian commutes with the
local operator $V_r$, which is known as the Gauss Law operator. This commutation
relation $[U_\Box, V_r] = 0$ follows from a few lines of algebra.
The eigenstates of the Hamiltonian are classified into two super-selection sectors
according to $V_r \ket{\psi} = \pm 1 \ket{\psi}$ in the computational basis of
$\Sx{}$. For a square lattice, four links touch a single vertex, and $2^4$
spin configurations are possible, but only half of them have $V_r = 1$ and
the other half $V_r = -1$, giving rise to two super-selection sectors.
We are interested in implementing the real-time evolution of simple plaquette
models on superconducting-qubit-based IBM Q quantum computers. For our purposes,
we can work in the $\sigma^x$-basis where the Gauss Law as well as the $\Gamma$
term in the Hamiltonian are diagonal. We aim to start with initial product states
in the $\Sx{}$ basis, which is then evolved by an off-diagonal plaquette
Hamiltonian. We note that the $\Gamma$ term essentially contributes a phase for
the single plaquette (it would be non-trivial for a larger system), and thus
we choose $\Gamma = 0$ for the experiments performed on the quantum computer.
For the single plaquette system shown in Figure \ref{fig:GS1} (top row) with four
links in all and two links touching each vertex (labelled as A,B,C, and D), we start
by explicitly writing the Hamiltonian and the Gauss Law:
\begin{equation}
\begin{split}
H & = -g~\SZ1 \SZ2 \SZ3 \SZ4 \, , \\
V_{A} & = \Sx1 \Sx4;~V_{B} = \Sx1 \Sx2;~
V_{C} = \Sx2 \Sx3;~ V_{D} = \Sx3 \Sx4.
\end{split}
\end{equation}
For a single plaquette, 16 states are possible in total, which comprise the full
Hilbert space. We construct the Hamiltonian in each of the sectors characterized by
particular local values of the Gauss Law. Since this is a $\mathbb{Z}_2$ theory, the
Gauss' Law can only take $\pm 1$ values. The two states illustrated in the top row of
Figure \ref{fig:GS1} have $V_x |\psi\rangle = 1 |\psi\rangle$ at each site.
Similarly, it is possible to obtain two configurations which have
$V_x |\psi\rangle = -1 |\psi\rangle$ at each site. Furthermore, it is possible
to place two positive and two negative $\mathbb{Z}_2$ charges, giving rise to 6 more
sectors. Each sector has two states which are related to each other by charge
conjugation (global $\SX{} \leftrightarrow -\SX{}$ flip).
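This sector bookkeeping can be cross-checked by direct enumeration. The sketch below labels each link by its $\Sx{}$ eigenvalue and groups all $2^4$ configurations by the vertex values $(V_A, V_B, V_C, V_D)$, following the link numbering of Figure \ref{fig:GS1}:

```python
from itertools import product
from collections import defaultdict

# Label each link spin by its sigma^x eigenvalue (+1 or -1).
sectors = defaultdict(list)
for s in product([+1, -1], repeat=4):
    s1, s2, s3, s4 = s
    # Gauss-law operators at the four corners A, B, C, D
    V = (s1 * s4, s1 * s2, s2 * s3, s3 * s4)
    sectors[V].append(s)

print(len(sectors))                              # 8 super-selection sectors
print(sorted(len(v) for v in sectors.values()))  # two states in each sector
print(sectors[(1, 1, 1, 1)])                     # the fully gauge-invariant pair
```

Since each link enters exactly two vertex operators, the product $V_A V_B V_C V_D = 1$ is automatic, so only the 8 sign patterns with an even number of $-1$'s occur, each containing a charge-conjugate pair of states.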
\begin{figure}
\centering
\includegraphics[scale=0.6]{FIGS/Z2plaq1A.pdf}
\caption{Basis states of the $\mathbb{Z}_2$ gauge theory in the $\SX{}$
basis for both the square plaquette (upper row) and the triangular
plaquette (lower row). The configurations (i) and (ii) satisfy the
Gauss Law $V_r = 1$ at all sites for the square and the ones (iii)
and (iv) satisfy $V_r = 1$ at all sites for the triangular plaquette.}
\label{fig:GS1}
\end{figure}
For our purposes, we consider the quench dynamics within the sector
$(V_A, V_B, V_C, V_D) = (+,+,+,+)$. The Hamiltonian is two dimensional in this sector with
the eigenstates
\begin{equation}
\begin{split}
|\Psi_1\rangle & = (|1111\rangle + |0000\rangle)/\sqrt{2}, \\
|\Psi_2\rangle & = (|1111\rangle - |0000\rangle)/\sqrt{2}. \\
\end{split}
\end{equation}
Here the notation $|0000\rangle$ denotes all spins aligned in the $+1$
direction of the $\SX{}$ (computational) basis, and $|1111\rangle$ denoting all spins
aligned in the $-1$ direction. Similarly, for the $(-,-,-,-)$ sector, we get
\begin{equation}
\begin{split}
|\Psi_3\rangle & = (|1010\rangle + |0101\rangle)/\sqrt{2}, \\
|\Psi_4\rangle & = (|1010\rangle - |0101\rangle)/\sqrt{2}. \\
\end{split}
\end{equation}
Again, the $0$'s and $1$'s denote spins aligned in the $+1$ and $-1$
directions of the $\SX{}$ basis, respectively. The real-time evolution starting from
an initial state $|1111\rangle$ is therefore a two-state Rabi oscillation.
A useful quantity to measure is the return or the Loschmidt amplitude,
defined as the projection of the time-evolved initial state on to
the initial state: ${\cal G}(t) = \langle \psi_0 | e^{-i H t} | \psi_0 \rangle$.
In Figure~\ref{fig:oscillations_th}, we show the return or the Loschmidt
probability ${\cal L}(t) = |{\cal G}(t)|^2$, which is an indicator for
the so-called dynamical quantum phase transitions \cite{Heyl_2019}.
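The two-state oscillation can be reproduced exactly with a short numerical sketch. The effective coupling $J = g/16$ below assumes the normalization $\SZ{} = \Sz{}/2$ on each of the four links; a different normalization would simply rescale the period:

```python
import numpy as np

g = 1.0
J = g / 16.0   # effective coupling for SZ = sigma^z/2 on each of the 4 links

# In the (+,+,+,+) sector, H acts on span{|1111>, |0000>} as -J * sigma^x
H = -J * np.array([[0.0, 1.0], [1.0, 0.0]])

def loschmidt(t):
    # Exact propagator via eigendecomposition, projected back on |1111>
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    return abs(U[0, 0]) ** 2

ts = np.linspace(0.0, np.pi / J, 40)
# Exact result: L(t) = cos^2(J t), a two-state Rabi oscillation
assert np.allclose([loschmidt(t) for t in ts], np.cos(J * ts) ** 2)
```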
\begin{figure}
\centering
\includegraphics[scale=0.45]{FIGS/LechoSqrZ2_simu.pdf}
\caption{Oscillations of the Loschmidt probability ${\cal L}(t) = p(1111)$ for
the square $\mathbb{Z}_2$ plaquette on the ibmq\_qasm\_simulator, which is a general
purpose simulator. The system has a two dimensional gauge invariant Hilbert
space, and there is a two-state Rabi oscillation when started from the state
$|1111\rangle$ to the state $|0000\rangle$. An identical behavior is also
observed in the triangular $\mathbb{Z}_2$ plaquette.
}
\label{fig:oscillations_th}
\end{figure}
It is also possible to consider the $\mathbb{Z}_2$ gauge theory on different
lattices,
such as the triangular, hexagonal, or the checkerboard lattice. Here we will also
consider the example of a triangular lattice. Again, considering a single plaquette
as illustrated in Figure \ref{fig:GS1} (below), there are three links in a plaquette,
and each vertex contains two links where the Gauss Law can be imposed. In this case,
labelling the three vertices as $A,B,$ and $C$; and the three links as $1,2,3$, the
Hamiltonian and the Gauss Law are:
\begin{equation}
\begin{split}
H &= -g~ \SZ1 \SZ2 \SZ3, \\
V_A &= \Sx1 \Sx2;~V_B=\Sx2 \Sx3;~V_C=\Sx3 \Sx1 \, .
\end{split}
\end{equation}
The analysis of the triangular plaquette is also similar to the square plaquette,
leading to two quantum states in each Gauss Law sector (and four sectors total),
and thus the real-time evolution also displays a characteristic Rabi
oscillation similar to the one in the square plaquette.
In the following sections, we study both plaquette models on quantum
hardware, where decoherence will cause mixing among the different sectors. The extent
of the mixing can help us to understand the (in-)efficiency of the quantum hardware,
and which optimizations, error corrections or mitigations are likely to help.
\subsection{The U(1) quantum link model}
We next consider the case of the $U(1)$ lattice gauge theory, which has considerably
richer physics; and as a stepping stone to studying QED, has relevance to the fundamental
physics of Nature. We will consider the theory on both the square and the triangular
lattice, as in the case of the $\mathbb{Z}_2$ theory. The phase diagrams of both systems have
been studied in the literature \cite{Banerjee_2013, banerjee2021nematic}, as well as
aspects of dynamics and thermalization of the model on the square lattice
\cite{Banerjee_2021} and its potential realization on analog and digital computers
\cite{Marcos_2014, Glaetzle_2015, Celi_2020}. Since we want to implement the models
using actual quantum hardware, we will consider very small systems involving single
and double plaquettes, as shown in Figure \ref{fig:U1basis}.
\begin{figure}
\centering
\includegraphics[scale=0.48]{FIGS/U1plaq1.pdf}
\caption{Sample basis states for the square (top) and triangular (bottom)
plaquettes of the $U(1)$ QLM, where the spins are quantized in the $\Sz{}$
basis. For the square lattice, the spins pointing up (down) indicated by arrows
on the vertical links correspond to $E=+\frac{1}{2}(-\frac{1}{2})$. For the
links along the x-axis (the horizontal links), the arrows pointing to the right
(left) indicate spins quantized along $E = +\frac{1}{2} (-\frac{1}{2})$.
For the triangular plaquette, the arrows indicate the
direction along which the flux is coming into a site, and the direction
along which it exits.}
\label{fig:U1basis}
\end{figure}
To implement a local $U(1)$ symmetry for the Hamiltonian in a simple way, we
need the spin raising and lowering operators, given by: $U_l = \SP{l} =
\frac{1}{\sqrt{2}}
(\Sx{l} + i \Sy{l})$ and $U^\dagger_l = \SM{l} = \frac{1}{\sqrt{2}}(\Sx{l} - i\Sy{l})$.
The U(1)-invariant plaquette operators are then
\begin{equation}
\begin{split}
U_\Box & = -g( \SP1 \SP2 \SM3 \SM4 + \SM1 \SM2 \SP3 \SP4 ); \\
U_\triangle & = -g ( \SP1 \SP2 \SP3 + \SM1 \SM2 \SM3); \\
\end{split}
\label{eq:splaq}
\end{equation}
The operators $U_l$ (and $U_l^\dagger$) are canonically conjugate to the
electric flux operator living on the same link, $E_l = \SZ{l}$, and obey the
following commutation relations:
\begin{equation}
[E, U] = U;~~ [E, U^\dagger] = - U^\dagger;~~[U , U^\dagger] = 2 E \, .
\end{equation}
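These relations can be checked numerically. Note that they close exactly in the spin normalization $E = \sigma^z/2$, $U = (\sigma^x + i \sigma^y)/2$; with the $1/\sqrt{2}$ normalization introduced above, the last commutator instead evaluates to $4E$. A minimal check in the spin normalization:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

E = sz / 2                       # electric flux on the link
U = (sx + 1j * sy) / 2           # spin raising operator sigma^+
Ud = U.conj().T

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(E, U), U)
assert np.allclose(comm(E, Ud), -Ud)
assert np.allclose(comm(U, Ud), 2 * E)
```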
With these operators, we can now define the lattice $U(1)$ Gauss Law:
\begin{equation}
G_x = \sum_{\mu} \left( E_{x,\mu} - E_{x-\mu,\mu} \right).
\label{eq:gl}
\end{equation}
Note that $\mu$ denotes the lattice unit vectors, and thus for the square lattice
$\mu=1,2$, while for the triangular lattice $\mu=1,2,3$. This operator $G_x$ generates
the gauge transformations, which can be expressed as $V = \prod_x \exp \left(
- i \alpha_x G_x \right)$, where $\alpha_x$ is the (local) parameter associated
with the local unitary transformation. This operator commutes with the plaquette
Hamiltonian defined on the entire lattice. For the square lattice, the local Hamiltonian
involves links around a plaquette, and the model has the form
\begin{equation}
\begin{split}
H_\Box & = -g \sum_{\Box} \left( U_{\Box} + U^\dagger_{\Box} \right), \\
U_{\Box} & = \SP{r,\mu} \SP{r+\mu,\nu} \SM{r+\nu,\mu} \SM{r,\nu},
\end{split}
\end{equation}
where $\mu,\nu$ are the lattice axes and $r$ is the bottom left corner of a square plaquette.
For the triangular lattice, the 3-link plaquette Hamiltonian has the form:
\begin{equation}
\begin{split}
H_\triangle &= -g \sum_{\triangle} \left( U_\triangle + U^\dagger_{\triangle} \right), \\
U_{\triangle} &= \SP{xy} \SP{yz} \SP{zx} ,
\end{split}
\end{equation}
where the points $x,y,z$ are the vertices of a triangle. Mathematically, the commutation
relation $[G_x, H] = 0$ ensures that the Hamiltonian is invariant under local unitary
transformations $H = V H V^{\dagger}$, resulting in a highly constrained system.
From these equations, the single-plaquette case can be obtained by only keeping the
links that exist in the triangle or the square geometry, and gives rise to Equation
(\ref{eq:splaq}). For our purposes, it is useful to further simplify Equation
(\ref{eq:splaq}) and express the Hamiltonian in terms of the Pauli matrices, which
will allow us to construct the quantum circuits using the circuit identities
introduced in the next section.
For the square plaquette we obtain:
\begin{equation}
\begin{split}
H_\Box & = -\frac{g}{2} \left[ \Sx1 \Sx2 \Sx3 \Sx4 + \Sy1 \Sy2 \Sy3 \Sy4 - \Sx1 \Sx2 \Sy3 \Sy4 \right. \\ &\qquad\qquad\left. - \Sy1 \Sy2 \Sx3 \Sx4 + \Sy1 \Sx2 \Sy3 \Sx4 + \Sy1 \Sx2 \Sx3 \Sy4 \right. \\
&\qquad\qquad\left. + \Sx1 \Sy2 \Sy3 \Sx4 + \Sx1 \Sy2 \Sx3 \Sy4 \right].
\end{split}
\end{equation}
Thus there are eight terms for a single plaquette when expressed with the Pauli
matrices. For the triangular plaquette, we have four independent plaquette terms which
have to be implemented in a quantum circuit:
\begin{equation}
\begin{aligned}
H_\triangle &= -g/\sqrt{2} \left[ \Sx1 \Sx2 \Sx3 - \Sy1 \Sy2 \Sx3\right. \\ &\left. \qquad\qquad\quad- \Sy1 \Sx2 \Sy3 - \Sx1 \Sy2 \Sy3 \right].
\end{aligned}
\end{equation}
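This Pauli expansion is easy to verify numerically, using the $1/\sqrt{2}$ normalization of $\SP{}$ introduced above:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

kron = lambda *ops: reduce(np.kron, ops)
Sp = (sx + 1j * sy) / np.sqrt(2)   # raising operator, normalization as above

# H_triangle from the raising/lowering form versus the Pauli-string form
g = 1.0
H_link = -g * (kron(Sp, Sp, Sp) + kron(Sp, Sp, Sp).conj().T)
H_pauli = -(g / np.sqrt(2)) * (kron(sx, sx, sx) - kron(sy, sy, sx)
                               - kron(sy, sx, sy) - kron(sx, sy, sy))
assert np.allclose(H_link, H_pauli)
```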
The solution of the single-plaquette problem is straightforward: we consider the
system quantized in the $\Sz{}$-basis, such that the spin-up and the spin-down can be
denoted by arrows pointing in and pointing out respectively from a given site. This
means that there are only $2^3=8$ basis states for the triangular lattice, and $2^4=16$
basis states for the square lattice. The Gauss law further selects only two basis
states
for each of the two lattices. For the triangular lattice with $G_x=0$ everywhere as an
example, we denote them as $| 000 \rangle$ and $| 111 \rangle$; while for the square
lattice with $G_x=0$ we denote them as $| 0011 \rangle$ and $| 1100 \rangle$. Note
that $0$ denotes a spin-up and $1$ a spin-down in the $\Sz{}$ basis. The states are shown
in Figure \ref{fig:U1basis}. The Hamiltonian for both cases is therefore a
two-dimensional
off-diagonal matrix. The two eigenstates are thus given by a symmetric and
anti-symmetric linear superposition of the two basis states. The real-time evolution
-- with the Loschmidt amplitude oscillating between the two basis states -- is
qualitatively the same as that given in Figure \ref{fig:oscillations_th}, the period
simply differs as a function of $g$.
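The Gauss-law counting for the triangular plaquette can be made explicit by enumeration. The sketch below orients the links cyclically around the triangle, so that each vertex has one incoming and one outgoing link and $G_x$ reduces to the difference of the two fluxes:

```python
from itertools import product

# Electric flux E = +1/2 ("0") or -1/2 ("1") on each oriented link 1, 2, 3.
# With a cyclic orientation, G_x = E_out - E_in at every vertex.
physical = []
for E in product([+0.5, -0.5], repeat=3):
    G = (E[0] - E[2], E[1] - E[0], E[2] - E[1])   # vertices x, y, z
    if all(q == 0 for q in G):
        physical.append(E)

print(physical)   # only the all-up and all-down states |000> and |111> survive
```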
\subsection{Two-plaquette system}
As one more test of the quantum hardware, we consider a two-plaquette system on a
square lattice with periodic boundary conditions for the $\mathbb{Z}_2$ gauge theory. The
geometry of the system is shown in Figure \ref{fig:2plaq}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{FIGS/Z2plaq2.pdf}
\caption{The set-up for two plaquettes which have periodic boundary conditions
in the longer direction. The links are marked with numerals while the sites
are marked with letters. }
\label{fig:2plaq}
\end{figure}
For clarity, let us explicitly write the Hamiltonian and the Gauss law for this case:
\begin{equation}
\begin{split}
H & = -g \SZ1 \SZ2 \SZ3 \SZ4 - g \SZ5 \SZ4 \SZ6 \SZ2, \\
G_{A} &= \Sx1 \Sx4 \Sx5; ~~~~G_{B} = \Sx5 \Sx2 \Sx1; \\
G_{C} &= \Sx6 \Sx2 \Sx3; ~~~~G_{D} = \Sx3 \Sx4 \Sx6,
\end{split}
\label{eq:Z2_2plaquette}
\end{equation}
following the labeling in Figure \ref{fig:2plaq}. Because
the two plaquette terms commute with each other, the time evolution generated by this
Hamiltonian decomposes into the product of the evolutions generated by
each of the two terms of $H$ in Equation \eqref{eq:Z2_2plaquette}.
This
decomposition is exact and not subject to any Trotter errors. For each term we can
use the strategy to be described in the next section: introduce an ancillary qubit
which couples to the rest of qubits in the plaquette, and perform dynamics with
the help of the ancillary qubit. Further, the structure of the Gauss law implies
that we can impose the constraint $G_{x} = 1$ for all the sites. Without the
constraint, there are $2^6 = 64$ states. The Gauss law constraint will then reduce
this number. For example imposing $G_{A} = 1$ affects the spins on the links 1, 4,
and 5. Only those configurations are allowed where either all three have $+1$ in
the $\Sx{}$ basis, or exactly two of the spins 1,2, and 5 have $-1$ in the $\Sx{}$
basis, and the third spin is $+1$.
\begin{figure}
\centering
\includegraphics[scale=0.45]{FIGS/simulatorRuns_2plaqZ2.pdf}
\caption{Quench dynamics of the two-plaquette simulation from state 1 into the states
2,3, and 4, given by the ibmq\_qasm\_simulator. The Loschmidt probability oscillates between 0 and 1 for the states 1 and
3, while it oscillates between 0 and 0.25 for the states 2 and 4. Moreover, the
probability oscillations between state 1 and 3 are exactly out of phase, as in the
two-state systems considered previously, but it has equal projections into states 2
and 4.}
\label{fig:2plaqSimulator}
\end{figure}
While the solution of the two plaquette system is worked out in Appendix B, we
summarize the relevant points for the simulation of quench dynamics of this
system. The two plaquette system in the sector $G_{x} = 1$ for all $x$ has 8 basis
states. These 8 states can be further divided into two sectors using the global
winding number symmetry, which cuts the plaquettes
horizontally and vertically respectively. For our case, the expressions
for the operators are
\begin{align}
W_x & = \sigma^x_4 \sigma^x_2;~~~W_y(13) = \sigma^x_1 \sigma^x_3;~~~W_y(56) = \sigma^x_5 \sigma^x_6.
\label{eq:wind}
\end{align}
The last two expressions for $W_y$ are actually the same as can be seen by using
the Gauss Law for the sites. Thus, in a perfect implementation only 4 basis states
entangle with each other under a unitary evolution. In Figure
\ref{fig:2plaqSimulator} we show the Loschmidt probability for starting in one of
these states, and the oscillations into the other three states. This system thus
provides a good playground
for tuning quantum hardware to reproduce these involved oscillations, as well as
benchmarking to what extent local and global symmetries can be preserved in these
circuits.
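The counting above is easy to verify by brute force. In the sketch below, the six links of Figure \ref{fig:2plaq} are labelled by their $\Sx{}$ eigenvalues; imposing all four Gauss-law constraints leaves 8 of the 64 configurations, $W_x$ is frozen to $+1$ by the Gauss law, and $W_y$ splits the survivors into two sectors of four states:

```python
from itertools import product

# Spins s1..s6 are the sigma^x eigenvalues of the six links.
physical = [s for s in product([+1, -1], repeat=6)
            if s[0] * s[3] * s[4] == 1 and s[4] * s[1] * s[0] == 1
            and s[5] * s[1] * s[2] == 1 and s[2] * s[3] * s[5] == 1]
print(len(physical))                              # 8 states out of 2^6 = 64

wy = [s[0] * s[2] for s in physical]              # W_y(13) = s1 * s3
print(wy.count(+1), wy.count(-1))                 # two winding sectors of 4 states
assert all(s[3] * s[1] == 1 for s in physical)    # W_x is fixed to +1 here
```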
If we were to consider the $U(1)$ theory on two plaquettes, the entire Hamiltonian
would have a total of 16 terms, which, expressed in terms of the Pauli matrices, are:
\begin{equation}
\begin{split}
H &= -\frac{J}{2} \left[ \Sx1 \Sx2 \Sx3 \Sx4 + \Sy1 \Sy2 \Sy3 \Sy4 - \Sx1 \Sx2 \Sy3 \Sy4 \right. \\
&- \Sy1 \Sy2 \Sx3 \Sx4 + \Sy1 \Sx2 \Sy3 \Sx4 + \Sy1 \Sx2 \Sx3 \Sy4 + \Sx1 \Sy2 \Sy3 \Sx4 \\
&+ \Sx1 \Sy2 \Sx3 \Sy4 + \Sx5 \Sx4 \Sx6 \Sx2 + \Sy5 \Sy4 \Sy6 \Sy2 - \Sx5 \Sx4 \Sy6 \Sy2 \\
&- \Sy5 \Sy4 \Sx6 \Sx2 + \Sy5 \Sx4 \Sy6 \Sx2 + \Sy5 \Sx4 \Sx6 \Sy2 \\
&+ \left. \Sx5 \Sy4 \Sy6 \Sx2 + \Sx5 \Sy4 \Sx6 \Sy2 \right] .
\end{split}
\end{equation}
These terms do not all commute with each other, so Trotterization would
be necessary to simulate their real-time evolution. In this paper we only
consider the $\mathbb{Z}_2$ case which involves no Trotter steps.
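The cost of non-commuting terms can be illustrated with a generic toy example (not specific to the Hamiltonian above): for two non-commuting Hermitian terms, the first-order product formula approaches the exact propagator roughly as $1/n$ in the number of Trotter steps:

```python
import numpy as np

def propagator(H, t):
    """Exact exp(-i H t) for Hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A, B, t = sx, sz, 1.0                  # two non-commuting stand-in terms
exact = propagator(A + B, t)

errors = []
for n in (1, 4, 16, 64):
    step = propagator(A, t / n) @ propagator(B, t / n)
    errors.append(np.linalg.norm(np.linalg.matrix_power(step, n) - exact))
print(errors)                          # error shrinks roughly like 1/n
```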
\section{Quantum Hardware and Circuits}
\label{sec:circuits}
In our plaquette model simulations we make use of IBM Q hardware, which is based on
superconducting (transmon) qubits. We discuss below a few details on how we work with
this NISQ hardware, both in terms of selecting the platform for each experiment and
in terms of circuit implementation.
\subsection{Hardware Selection}
Superconducting qubits have the advantage of being relatively fast at running
experiments compared to trapped-ion qubits, but a disadvantage of relatively short
decoherence times \cite{Linke3305}.
Because of this, the topology of the circuits is important, as it will make a
difference for how many gates are necessary to realize a particular simulation.
Figure \ref{fig:topology} shows three real-hardware topologies that are used in this
paper. For each experiment, we may select hardware depending on optimal topology.
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{FIGS/topology.png}
\caption{Three circuit topologies used for the simulations. Images taken from IBM Quantum Experience.}
\label{fig:topology}
\end{figure}
Another important consideration for choosing hardware is the \textit{quantum
volume} of the device, a measure of the most complex circuit
that the device can run while still computing quantities to within a
particular accuracy threshold.
IBM Q measures quantum volume using the following formula,
\begin{equation}
V_Q = 2^{{\rm min}(d,m)},
\end{equation} where $d$ is the depth of the circuit (measured according to two-qubit
gates), and $m$ is the number of qubits, so that ${\rm min}(d,m)$ tells us the
largest square circuit possible that still meets the set accuracy threshold
\cite{PhysRevA.100.032328}. The IBM Q devices each have a $V_Q$ measured and so in
our experiments we favor using those with the highest $V_Q$ available.
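As a concrete illustration of the formula (with made-up numbers):

```python
def quantum_volume(depth, n_qubits):
    # V_Q = 2^min(d, m): the largest square (depth = width) circuit
    # that still meets the accuracy threshold
    return 2 ** min(depth, n_qubits)

print(quantum_volume(5, 5))    # 32
print(quantum_volume(8, 5))    # still 32: limited by the 5 available qubits
```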
\subsection{Circuit Implementation and Scaling}
The real-time simulation of plaquette dynamics involves realizing Hamiltonians
of several spins on a plaquette. A very simple case looks like
\begin{equation}
H_{\rm N} = -g \sigma^3_{xy} \sigma^3_{yz} \sigma^3_{zw} \sigma^3_{wx},
\label{eq:hex}
\end{equation}
with ${\rm N}=4$, where the sites $x, y, z, w$ are the corners of a square
plaquette. To realize the real-time evolution with the above Hamiltonian, we implement the
following gate sequence
\cite{2011NJPh13h5007M,Mezzacapo:2015bra}.
\begin{equation}
\begin{aligned}
U_{\rm S,A} (t) &= \exp \left[ i \frac{\pi}{4} \sigma^3_{\rm A} \sum_{j=1}^N \sigma^3_j \right] \exp \left[ i g t \sigma^1_{\rm A}\right] \\
&\qquad \qquad \times\exp \left[- i \frac{\pi}{4} \sigma^3_{\rm A} \sum_{j=1}^N \sigma^3_j \right]
\label{eq:qgate1}
\end{aligned}
\end{equation}
A proof for Equation~\ref{eq:qgate1} is detailed in the Appendix.
This identity allows for the time-evolution portion to be done entirely on the
ancillary qubit. In theory one would only need one ancillary qubit for the entire
system, but due to topological issues it may be more efficient in terms of circuit
depth to add more ancillary qubits in systems with more plaquettes. Still, with at
most one ancillary qubit per plaquette, the number of qubits needed for simulation
scales linearly with the number of links in the system.
If all terms in the Hamiltonian commute, the number of gates needed is constant as a
function of real time, but in the more generic case where the terms do not commute and
so Trotterization is necessary, the circuit depth scales linearly with time. In
our examples below we focus only on cases where no Trotterization is needed.
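The identity behind Equation~\ref{eq:qgate1} can also be checked numerically. The following sketch (our own dense-matrix verification, not a hardware circuit) confirms for $N=4$ that conjugating $\exp(igt\,\sigma^1_{\rm A})$ by the $\pi/4$ rotations reproduces $\exp(igt\,\sigma^1_{\rm A}\prod_j\sigma^3_j)$:

```python
# Dense-matrix check of the ancilla identity for N = 4 links + 1 ancilla.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def expm_i(H, s):
    """exp(i*s*H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * s * w)) @ V.conj().T

N, g, t = 4, 1.0, 0.6                           # illustrative coupling and time
# Qubit ordering: ancilla first, then the N link qubits.
A = sum(kron_all([Z] + [Z if j == k else I2 for j in range(N)]) for k in range(N))
XA = kron_all([X] + [I2] * N)                   # sigma1 on the ancilla
XA_prodZ = kron_all([X] + [Z] * N)              # sigma1_A * prod_j sigma3_j

C = expm_i(A, np.pi / 4)
U = C @ expm_i(XA, g * t) @ C.conj().T
assert np.allclose(U, expm_i(XA_prodZ, g * t))  # identity holds exactly for N = 4
```

With the ancilla prepared in the $+1$ eigenstate of $\sigma^1_{\rm A}$, $U$ therefore acts on the links exactly as time evolution under Equation (\ref{eq:hex}).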
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{FIGS/meas-mit.png}
\caption{Response mitigation matrix computed by qiskit-ignis for IBM Q Manila
(5 qubits system).}
\label{fig:output}
\end{figure}
\section{Error Mitigation methodologies}
\label{sec:errcorr}
As mentioned earlier, one major practical obstacle to developing physical devices
that perform quantum computations is the inherent noise that affects NISQ quantum
devices. In theory, quantum error correction is possible by encoding the information
of the desired circuit into a highly entangled state composed of a very large number
of physical qubits \cite{PhysRevA.52.R2493, PhysRevLett.77.793}. However, this large
number of qubits makes the hardware requirements too demanding to be implemented in
practice (although promising results point in the right direction \cite{Chen2021}).
An alternative is to take advantage of systematic and reproducible properties of the
hardware. These properties are exploited as part of the so-called error mitigation
schemes, which have proven to be successful in NISQ era devices
\cite{PhysRevX.7.021050, Kandala_2019,He_2020, larose2020mitiq, Giurgica_Tiron_2020, lowe2020unified, sopena2021simulating, funcke2020measurement, Nachman2020, Jattana2020, 2020QS&T5cLT01G}.
Among those, we consider two types, readout error mitigation and zero-noise
extrapolation (ZNE), which aim to reduce noise coming from two different sources:
readout errors and gate-operation decoherence.
\subsection{Readout error mitigation}
One important source of errors is the so-called ``readout'' error, which arises
because measurement times are comparable to decoherence times
\cite{funcke2020measurement, Maciejewski2020, Jattana2020, Nachman2020}. This can
cause undesired state decays, affecting the state captured in the measurement.
Assuming a classical stochastic model for the noise affecting measurements,
the statement of the problem can be
formulated by using the response matrix $P(m | t)$, which connects a
noisy measurement $m$ to the true/ideal measurement $t$ by the relation $m =
P t$. Naively one can use the inverse of the response matrix to obtain $t =
P^{-1} m$ and recover the true value of the measurement. The problem then consists in
performing a series of calibration experiments to measure $P$,
and then use it to recover $t$ given $m$ in subsequent independent experiments.
Packages such as qiskit-ignis \cite{Qiskit} are based on the response matrix formulation
of the readout error mitigation scheme, but (by default) do not try to compute $P^{-1}$
directly by matrix inversion.
Instead, $t$ is recovered by finding the minimum of the least squares expression:
\begin{equation}
f(t) = \sum_{i=1}^{2^n} \left(m_i - \left(P \cdot t\right)_i \right)^2 \, ,
\end{equation}
where $n$ is the total number of qubits in the circuit. This methodology is
more robust than matrix inversion for general NISQ hardware \cite{Qiskit, Nachman2020}.
More involved methods
combine the previous approach with gate inversion to further improve the error
mitigation results \cite{Jattana2020}, while unfolding methods have also been proposed
and tested in the literature \cite{Nachman2020}.
In most cases, the ability to apply readout error mitigation is limited by the
number of qubits ($n$) in the circuit, as the number of calibration experiments required to
evaluate $P$ grows as $2^n$. Moreover, the calibration step of estimating $P$ is
hardware dependent and needs to be performed immediately before running the
experiments, to ensure that temporal drifts of the particular hardware are accounted for.
An example of the response matrix obtained for
a 5-qubit system using qiskit-ignis is shown in Figure \ref{fig:output}.
As expected, the diagonal entries have probability values close to 1, but there
is still significant weight in off-diagonal entries. As presented and discussed
in Section \ref{sec:results}, correcting for these small deviations resulted in
significant improvements in the final mitigated data.
Clearly, going beyond circuits with a small number of qubits would be prohibitively
expensive due to the number of experiments required to evaluate the response
matrix. Some proposals have considered the possibility of assuming close to
uncorrelated readout
errors between the qubits, which would drastically reduce the number of experiments
required \cite{Maciejewski2020}.
Studying these potential improvements goes beyond the scope of this work.
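The least-squares recovery described above can be sketched on a toy two-qubit example (the response matrix below is invented for illustration; qiskit-ignis builds $P$ from calibration circuits and additionally constrains the result to be a probability distribution):

```python
# Toy readout-error mitigation: recover true probabilities t from noisy m = P t.
import numpy as np

eps = 0.05                                   # assumed single-qubit readout flip rate
P1 = np.array([[1 - eps, eps], [eps, 1 - eps]])
P = np.kron(P1, P1)                          # uncorrelated two-qubit response matrix

t_true = np.array([0.7, 0.0, 0.0, 0.3])      # ideal outcome probabilities
m = P @ t_true                               # what the noisy device would report

# Least-squares solution of f(t) = ||m - P t||^2, then project back onto a
# probability distribution (clip negatives, renormalize).
t_rec, *_ = np.linalg.lstsq(P, m, rcond=None)
t_rec = np.clip(t_rec, 0, None)
t_rec /= t_rec.sum()
assert np.allclose(t_rec, t_true)
```

Because $m$ here is built exactly as $P\,t$ with $t \ge 0$, the least-squares minimum recovers $t$ exactly; with sampled counts the clipping and renormalization steps matter.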
\begin{figure*}[!ht]
\centering
\includegraphics[width=12cm]{FIGS/exam4.png}
\caption{An example of using the Mitiq package for folding a circuit that gives the
time evolution of the $\mathbb{Z}_2$ gauge theory on a triangular plaquette. Both
circuits are equivalent, but the second one contains additional identity insertions
of CNOT gates such that, measured by CNOT circuit depth, the second circuit is 1.6
times as deep as the former.}
\label{fig:folding}
\end{figure*}
\subsection{Corrections against decoherence -- Mitiq}
The second source of error comes from the gate portion of the circuit before
measurements occur. Longer circuits consist of more gates, and both the longer
runtimes and the imperfect gate implementation (on transmon qubits in the case of
the IBM Q devices) cause additional errors to accumulate. To mitigate this source
of error we use a method
known as zero-noise extrapolation (ZNE), where we introduce additional noise in a
controlled way in order to empirically develop a noise model that we can extrapolate
to the zero noise case.
Implementations of ZNE include those that involve pulse control and run multiple
experiments with pulses of different durations \cite{Kandala_2019}, and those that
involve \textit{folding}, which consists of inserting additional gate identities
into the circuit; these would not change the results in an ideal simulation, but
make results on real hardware noisier. This information on how the gates affect
the noise level can then be used to develop a noise model and extrapolate back to
an ``ideal'' result.
We used the folding option in this paper and specifically we used the \textit{Mitiq}
package to implement it \cite{larose2020mitiq}. As an example, Figure \ref{fig:folding}
shows two equivalent circuits, but the second circuit has three extra identity
insertions, each consisting of two identical CNOT gates in a row. Because the error
rates of the two-qubit CNOT gates are significantly higher than those for the
single-qubit gates (roughly a factor of ten on IBM Q devices), we will assume perfect
fidelities for the single qubit gates and model all the error coming from the
two-qubit gates (an option within \textit{Mitiq}). With this in mind, because circuit
\textbf{a} in Figure \ref{fig:folding} has ten CNOT gates, and circuit \textbf{b} has
sixteen CNOT gates, the scale factor of the circuit \textbf{b} is 1.6 times that of
circuit \textbf{a}.
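The gate counting behind these scale factors can be sketched with a small helper (our own simplification of partial folding by CNOT count):

```python
# Toy partial folding by CNOT count: reaching scale factor lam means inserting
# round(n*(lam - 1)/2) identity pairs of CNOTs after some of the n originals.

def folded_cnot_count(n_cnot, lam):
    pairs = round(n_cnot * (lam - 1) / 2)    # each pair CNOT*CNOT = identity
    return n_cnot + 2 * pairs

print(folded_cnot_count(10, 1.6))            # 16, as in circuit b of the figure
print(folded_cnot_count(10, 1.6) / 10)       # achieved scale factor 1.6
```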
Figure \ref{fig:zne} shows a comparison of different extrapolations for several
circuits with the ideal result (determined using a simulator) marked at scale factor
``0''. The first row shows example extrapolations for $\mathbb{Z}_2$ model on the
square plaquette, at two different times in the evolution. The bottom left image shows
an extrapolation at $t=0$ for the $\mathbb{Z}_2$ theory on the triangular plaquette,
and the bottom right image shows one at $t=0$ for the U(1) theory on the square
plaquette. The two extrapolations shown are a quadratic fit and a Richardson
extrapolation, explained in Kandala et al.~\cite{Kandala_2019}. From this empirical
data we decided to use the quadratic extrapolation for our data, as it appeared less
susceptible to experimental outliers (such as those in the bottom left of Figure
\ref{fig:zne}).
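Both fits can be sketched on synthetic data; the quadratic noise model below is an assumption for illustration only:

```python
# Sketch of zero-noise extrapolation: fit expectation values measured at
# several scale factors and evaluate the fit at scale 0. The synthetic
# quadratic noise model E(lam) = E0 + a*lam + b*lam^2 is illustrative only.
import numpy as np

def zne_quadratic(scales, values):
    """Quadratic fit; the constant term is the zero-noise estimate."""
    return np.polyfit(scales, values, 2)[-1]

def zne_richardson(scales, values):
    """Richardson extrapolation through all points (Lagrange value at 0)."""
    est = 0.0
    for i, (si, vi) in enumerate(zip(scales, values)):
        w = 1.0
        for j, sj in enumerate(scales):
            if j != i:
                w *= sj / (sj - si)
        est += w * vi
    return est

scales = np.array([1.0, 1.6, 2.2, 2.8])
E0, a, b = 0.9, -0.15, 0.01                  # assumed noise-model parameters
values = E0 + a * scales + b * scales**2

print(round(zne_quadratic(scales, values), 6))   # recovers E0 = 0.9
```

On exact quadratic data both estimators recover the zero-noise value; on real data they differ in how they weight outliers, which is why we preferred the quadratic fit.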
\begin{figure*}[b]
\centering
\includegraphics[width=6cm]{FIGS/zne_extrapolation_t0_z2_sqr.pdf}
\includegraphics[width=6cm]{FIGS/zne_extrapolation_t10_z2_sqr.pdf}
\includegraphics[width=6cm]{FIGS/zne_extrapolation_t0_z2_tri.pdf}
\includegraphics[width=6cm]{FIGS/zne_extrapolation_t0_u1_sqr.pdf}
\caption{The plots in the top row show zero-noise extrapolation for the
$\mathbb{Z}_2$ theory on a square plaquette (IBM Q Valencia hardware) at two
times: $t=0$ (left) and $t=0.6$ (right). The bottom row shows zero-noise
extrapolation for the $\mathbb{Z}_2$ gauge theory on a triangular plaquette
(IBM Q Bogota) at $t=0$ (left) and a U(1) gauge theory on a triangular
plaquette (IBM Q Santiago) at $t=0$ (right).}
\label{fig:zne}
\end{figure*}
It is interesting to note the presence of two regimes of sensitivity to changes
in the circuit depth. For larger scale factors, where the circuit exceeds the
quantum volume of the device, the measurements become insensitive to the scale
factor. At $t=0$, the measurements decay only slowly with increasing circuit
length until the scale factor exceeds about $3$ for the $\mathbb{Z}_2$ model and
about $6$ for the $U(1)$ model. For $t=0.6$ this decay is much faster for the $U(1)$ model than
the $\mathbb{Z}_2$ model. Typically the $U(1)$ circuit is significantly more entangled,
and becomes more so when the extrapolation is attempted at finite $t$.
\section{Results}
\label{sec:results}
This section gives our real-time evolution results for the Loschmidt echo, as well as
observables $G_x$ and $W_y$ for plaquette simulations on NISQ hardware. In each
simulation, we take five measurements (8192 shots per measurement) at every point in
time and at each of the eight different scale factors illustrated by Figure
\ref{fig:zne}. This allows us to get error bars and perform ZNE at every time. Each
simulation consists of 20 points in time total, leading to $5\times8\times20=800$
circuit measurements to produce the error-mitigated plots for a theory on a particular
plaquette.
\subsection{\texorpdfstring{$\mathbb{Z}_2$}{Z(2)} Theory on Single Plaquettes}
\begin{figure*}
\centering
\includegraphics[width=5.75cm]{FIGS/LechoState1_z2_sqr_g1.pdf}
\includegraphics[width=5.75cm]{FIGS/LechoState1_z2_sqr_g2.pdf}
\includegraphics[width=5.75cm]{FIGS/LechoState1_z2_tri.pdf}
\includegraphics[width=5.75cm]{FIGS/LechoState2_z2_tri.pdf}
\includegraphics[width=5.75cm]{FIGS/GLaw_z2_tri.pdf}
\caption{Real-time evolution of the $\mathbb{Z}_2$ theory on a single plaquette.
The two top-left images show the Loschmidt echo data for a square plaquette on
IBM Q Valencia (with two couplings: $g=1.0$, $2.0$), then the top-right and
bottom-left images show the Loschmidt echo data for a triangular plaquette on IBM
Q Bogota. The last image plots the Gauss law observable $V_A$, i.e., the
observable built from links 1 and 2, as shown in Figure \ref{fig:GS1}.}
\label{fig:outputz2}
\end{figure*}
We first discuss the results we get for the $\mathbb{Z}_2$ theory on square and
triangular plaquettes, which were simulated on IBM Q Valencia and IBM Q Bogota,
respectively. The results are plotted in Figure \ref{fig:outputz2}. The first two plots
in the top-left of the figure show a simulation of a single square plaquette system for
two different couplings: $g=1.0$ and $g=2.0$. We chose IBM Q Valencia for this
simulation because of its T-shaped topology, illustrated in \textbf{b} of Figure
\ref{fig:topology}, which reduced the circuit depth necessary since the ancillary qubit
could be placed at a junction directly connected to three other qubits. There was other
hardware available with better $V_Q$ (32 versus 16 for Valencia), but the topological
advantage of the T-shaped hardware made for better results despite the worse $V_Q$. In
these plots we give the ideal simulator measurement of the Loschmidt echo in addition
to the original (raw) data from the circuit, followed by the readout error correction,
followed by the readout and ZNE error corrections in combination. Here we see that with
both these corrections we are able to get to the correct simulator measurements within
errors.
The next two plots, in the top-right and the bottom-left of Figure \ref{fig:outputz2}
give the results for a $\mathbb{Z}_2$ theory on a triangular plaquette instead. Here a
smaller circuit depth is needed compared to the square plaquette, so we use IBM Q
Bogota due to its better quantum volume (it has a linear topology, as seen in
\textbf{a} in Figure \ref{fig:topology}). These plots give the time-evolution for the
two states in the $V_A = V_B=V_C = 1$ sector: $\left|000\right\rangle$ and
$\left|111\right\rangle$, and one can see from the simulator lines that their
probabilities always add up to 1. As in the case for the $\mathbb{Z}_2$ theory on the
square plaquette, the error mitigation methods allow for the fully mitigated data to
track the simulator data within error bars. The last plot in the lower right corner is
a measure of how well the circuits for the system on the triangular plaquette are
producing only states that have $V_A=1$. It shows measurements throughout the time
evolution of $\left\langle V_A\right\rangle$, and as the simulator line shows, ideally
it would remain exactly equal to 1 the whole time. The mitigated measurements show how
for most time measurements we are able to produce $\left\langle V_A\right\rangle =1$
within error bars.
We further note that the circuit depths for the simulations of the $\mathbb{Z}_2$
theory on the square plaquette lead to circuit volumes clearly greater than the
quantum volume $V_Q$ measurements of the quantum hardware ($d=8$, $m=5$ leading to a
circuit volume of $40$ for the square plaquette, whereas $V_Q$ is 16 on IBM Q
Valencia--suggesting a maximum square circuit volume of $16$, with $d=m=4$.) The simple
mitigation techniques employed thus seem to allow us to ``beat'' the quantum volume
limitations for the hardware and get results consistent with the simulator within
errors. For the triangular plaquette on IBM Q Bogota, we have $d=8$, $m=4$, leading to
a circuit volume of $32$, whereas the $V_Q$ of the hardware is $32$, corresponding to a
$d=m=5$ square. It is less clear whether we have exceeded quantum volume limitations for
this simulation, and indeed empirically most Loschmidt echo data seems to meet the IBM
Q threshold of $67\%$ of the ideal amplitude~\cite{PhysRevA.100.032328}, but again we
see that our mitigation efforts are successful at restoring the full measurement
values.
\begin{figure*}
\centering
\includegraphics[width=5cm]{FIGS/LechoState1_u1_sqr.pdf}
\includegraphics[width=5cm]{FIGS/LechoState2_u1_sqr.pdf}
\includegraphics[width=5cm]{FIGS/GLaw_u1_sqr.pdf}
\includegraphics[width=5cm]{FIGS/LechoState1_u1_tri.pdf}
\includegraphics[width=5cm]{FIGS/LechoState2_u1_tri.pdf}
\includegraphics[width=5cm]{FIGS/GLaw_u1_tri.pdf}
\caption{Real-time evolution of the $U(1)$ theory on a single plaquette. The top
row shows results for the square plaquette on IBM Q Quito hardware, with the
first two plots showing Loschmidt echo data, and the third plot showing the $G_A$
Gauss law observable, defined in Equation (\ref{eq:gl}). The bottom row shows
results for the triangular plaquette on IBM Q Santiago hardware, with again the
first two plots showing Loschmidt echo data, and the third showing the $G_A$
Gauss law observable.}
\label{fig:outputu1}
\end{figure*}
\subsection{\texorpdfstring{$U(1)$}{U(1)} Theory on Single Plaquettes}
We next look at the data for the $U(1)$ theory on a single square plaquette and a
single triangular plaquette, which we ran on IBM Q Quito and IBM Q Santiago,
respectively. Similar to the $\mathbb{Z}_2$ case, IBM Q Quito has a T-shaped
architecture (as seen in \textbf{b} of Figure \ref{fig:topology}) with $V_Q=16$, while
IBM Q Santiago has a linear topology (as seen in \textbf{a} of Figure
\ref{fig:topology}) with $V_Q=32$. We ran the square plaquette simulation on the
T-shaped architecture because, despite its lower $V_Q$, its topology requires
fewer two-qubit gates and produced better data. Indeed we could not get any
signal at all for the square plaquette $U(1)$ model on current linear IBM Q devices.
Figure \ref{fig:outputu1} shows the data for the $U(1)$ simulations. The first row
of plots gives the square plaquette simulation data, with the first two
plots showing Loschmidt echo data for the two states
$\left|1100\right\rangle$ and $\left|0011\right\rangle$ in the $G_A=0$ sector. Here we
are running circuits that have much greater volume than the quantum volume
limitations, with $m=5$ and $d=80$, and so we cannot come close to the correct
amplitudes of the oscillations (shown by the dashed simulator lines), but we are able
to make out some oscillations and see some qualitative similarity between the
experimental data and the simulator data. It is clear, however, that the folding
ZNE is unable to help us at this level. The last plot in the top row tests how well
we are staying in the $G_x=0$ sector by measuring $G_A$ in particular--and indeed
the data stays close to zero--but it is clear from the limited amplitudes in the
previous plots that many states outside of this sector are being generated; the
data stays near zero only because we are just as likely to ``leak'' into states
belonging to sectors with $G_A=1$ as into states belonging to sectors with $G_A=-1$.
The second row of Figure \ref{fig:outputu1} shows the data for the $U(1)$ theory on
the triangular plaquette, with the first two plots giving the Loschmidt echo for states
$\left|000\right\rangle$ and $\left|111\right\rangle$, which are the two states in the
$G_x=0$ sector. Again, with $m=4$ and $d=40$ we are likely far past the volume
threshold suggested by $V_Q=32$, and indeed the original data never comes close to the
maximum amplitudes of $1$ in the oscillations. However, again we are able to make out
a qualitative agreement in behavior. We also see a close agreement in the frequency of
the oscillations and that ZNE does still help us a bit, unlike in the square plaquette
case. The last figure in the bottom row again measures $\left\langle G_A\right\rangle$
and again it is mostly close to $0$; but once more, this can be explained by ``leaky''
states in both $G_A=1$ and $G_A=-1$ sectors being sampled with close to equal
likelihoods.
\begin{figure*}
\centering
\includegraphics[width=5cm]{FIGS/LechoState1_2plaqZ2.pdf}
\includegraphics[width=5cm]{FIGS/LechoState3_2plaqZ2.pdf}
\includegraphics[width=5cm]{FIGS/LechoState2_2plaqZ2.pdf}
\includegraphics[width=5cm]{FIGS/LechoState4_2plaqZ2.pdf}
\includegraphics[width=5cm]{FIGS/Wy2_2plaqZ2.pdf}
\includegraphics[width=5cm]{FIGS/Wy1_2plaqZ2.pdf}
\caption{Plots for the two-plaquette $\mathbb{Z}_2$ System, which was run on IBM
Lagos. The first two plots give the Loschmidt echoes for states
$\left|000000\right\rangle$ and $\left|101011\right\rangle$ which oscillate
between $0$ and $1$, and the next two plots (top-right and bottom-left) give
the Loschmidt echoes for states $\left|010111\right\rangle$ and
$\left|111100\right\rangle$, which oscillate between $0$ and $0.25$. The last
two plots are for the winding number observables in the $y$-direction, the first
involving links 5 and 6, and the second involving links 1 and 3, as defined in
Figure \ref{fig:2plaq}.}
\label{fig:output2p}
\end{figure*}
\subsection{\texorpdfstring{$\mathbb{Z}_2$}{Z(2)} Theory: Two-plaquette System}
Finally we turn to the time-evolution of the $\mathbb{Z}_2$ theory on the
two-square-plaquette system, whose ideal behavior was shown in Figure
\ref{fig:2plaqSimulator}, where we see that if we begin in the sector where $G_x=1$,
the system's evolution involves only the four states that fall into that sector. As
illustrated by Figure \ref{fig:2plaq}, we are using periodic boundary conditions, and
so there are six distinct links in the two-square-plaquette system. With the addition
of an ancillary qubit, that brings us to seven qubits minimum for our simulation, and
so we used the seven-qubit IBM Q Lagos device to obtain real-time dynamics data.
Figure \ref{fig:output2p} gives the results for the simulation, with the first four
plots giving Loschmidt echo data for the four states in the $G_x=1$ sector; which we
label $\left|000000\right\rangle$, $\left|101011\right\rangle$,
$\left|010111\right\rangle$, and $\left|111100\right\rangle$ in reference to the
numbered links in Figure \ref{fig:2plaq}. The $V_Q=32$ for IBM Q Lagos tells us that
the maximum square circuit meeting the accuracy threshold is $5\times 5$. Comparing that to
the two-plaquette system circuit requirement of $m=7$, $d=48$ indicates that we are
far beyond the quantum volume limit.
However, especially for the states $\left|000000\right\rangle$ and
$\left|101011\right\rangle$, where the simulator shows the maximum amplitude going
up to $1$, we are able to see qualitative agreement, and the readout error and ZNE
corrections do help somewhat.
The last two plots give data for the winding number observable $W_y$, defined in
Equation \ref{eq:gl}. As noted before, the winding number in the $y$-direction
can be
measured using links 1 and 3 as well as links 5 and 6, and in each case the result
should be the same throughout the time-evolution for the initial conditions that we
chose: $W_y=1$. Indeed, after applying ZNE, the data for both $W_y$ observables is
biased toward $+1$ rather than $-1$.
\section{Conclusions}
\label{sec:conc}
In this paper we have explored the possibilities for real-time simulations of
plaquette theories on current NISQ hardware, including theories with $\mathbb{Z}_2$
symmetries as well as the $U(1)$ symmetry, which is of particular interest from the QED
perspective. We find that for many of our experiments, we can successfully overcome
quantum volume, $V_Q$, limitations with the error mitigation schemes of readout error mitigation as well as
ZNE through circuit folding. Even in cases where we cannot overcome $V_Q$ limitations,
we are still able to see qualitative signals of the real-time dynamics for
circuits that are many times deeper than the $V_Q$ measurements for the hardware.
We have seen that topology is also an important consideration for quantum
simulations with superconducting qubits, and found significant quantitative advantages
in choosing the best topology for each experiment.
Future improvements would involve using pulse control for ZNE rather than folding, as
well as denser data points to capture the time evolution for a plaquette model.
Additionally, future work could involve finding ways to simulate the real-time dynamics of
non-Abelian plaquette models. Another immediate direction would be to use different
encoding strategies for the microscopic model. For example, the $U(1)$ or
the $\mathbb{Z}_2$ models can be represented in terms of dual height variables in two spatial
dimensions, which already removes many of the gauge non-invariant states. Formulating
quantum circuits on the dualized versions of such models would enable bigger
lattices to be realized on quantum circuits \cite{banerjee2021nematic}. Similarly,
the use of rishons allows a gauge invariant formulation of several non-Abelian
gauge theories, which can then be used to construct quantum circuits on
NISQ devices \cite{Brower_1999, Rico_2018}.
\section*{Acknowledgments}
We would like to thank Sebastian Hassinger, IBM, and Roger Melko, for arranging an
Academic Research Program agreement for us with the IBM Q Experience. We would also
like to thank the Unitary Fund for additional research account access. Research of EH
at the Perimeter Institute is supported in part by the Government of Canada through the
Department of Innovation, Science and Economic Development and by the Province of
Ontario through the Ministry of Colleges and Universities.
\section{Introduction}
In 1965, Buchberger developed the method of Gr\"obner bases for solving systems of multivariate non-linear polynomials. Since then, computing power has grown and improved algorithms have been developed, but, even now, the method remains impractical for many problems. The main approach to improvement has been to develop algorithms which avoid unnecessary computation, culminating in Faug\`ere's F5 algorithm. On the other hand, various methods of representing polynomials have been explored and the impact of data structures on algorithm performance evaluated. A central element of the method is that it is based on the ordering of \emph{power products} (known as \emph{terms} in Faug\`ere's papers)\footnote{In our implementations, we represent monomials as coefficient and power product tuples.}. Orderings that have been used include lexicographic, total degree and variations on these. Prime-based ordering does not appear to have been exploited, and it is the purpose of this paper to explore this case.
The plan of this article is as follows. First, we introduce prime-based ordering as the natural ordering of power products imposed by encoding the indeterminates as distinct prime numbers. Then, we show that this ordering is admissible. We report our implementation of Buchberger's improved algorithm using both total degree ordering and prime-based ordering. Experimental measurements show that significant gains are achieved by using prime-based ordering.
\section{Prime-based ordering: an admissible ordering based on prime numbers}
Total orderings used in Gr\"obner Basis Algorithms are required to be \emph{admissible}. That is, they must satisfy the conditions:
\begin{equation} \forall t : t \not= 1 : 1 < t \end{equation}
\begin{equation} s < t \Rightarrow s \cdot u < t \cdot u \end{equation}
Common admissible total orderings include lexicographical, total degree lexicographical and degree reverse lexicographical. Variations of these and other orderings are possible [\cite{Buch-85}].
An ordering that, to the best of our knowledge, has not been used in implementations of Buchberger's algorithm for Gr\"obner Bases is one based on prime numbers. Given a power product, ${t = x_1^{\alpha_1} x_2^{\alpha_2} \ldots x_n^{\alpha_n}}$, each indeterminate $x_i$ is mapped to a unique prime number:
\begin{eqnarray} x_1 &\leftrightarrow& 2 \nonumber \\
x_2 &\leftrightarrow& 3 \\
&\ldots& \nonumber \end{eqnarray}
so that ${t \leftrightarrow n_t \in N}$. For example, ${x^3y^2z \leftrightarrow 2^3 3^2 5 = 360}$.
We define
\begin{equation} s < t \Leftrightarrow n_s < n_t \end{equation}
In other words, the ordering of $s$ and $t$ is determined by the natural ordering of the integers $n_s$ and $n_t$. We call the integer $n_t$ the {\em prime image} of $t$.
Now, if all the exponents of $s$ are zero, ${s = 1}$ and ${n_s = 1}$. Similarly, if ${t \not= 1}$, at least one exponent of $t$ is positive, so that ${n_t > 1}$. This establishes condition (1).
Similarly, from (4), ${s < t \Rightarrow n_s < n_t \Rightarrow n_s n_u < n_t n_u \Rightarrow s \cdot u < t \cdot u}$, establishing condition (2).
Hence, the mapping of indeterminates onto a set of prime numbers is an admissible ordering.
Prime-based order is neither a total degree nor a lexicographical ordering. For example, in total degree, ${x^3 > y^2}$; but, in prime-based ordering, ${x^3 \leftrightarrow 8 < 9 \leftrightarrow y^2}$; so ${x^3 < y^2}$. Similarly, ${x^3 < xy }$ in lexicographical ordering; but, ${x^3 \leftrightarrow 8 > 6 \leftrightarrow xy}$, implying ${x^3 > xy }$ in prime-based ordering.
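These comparisons can be sketched directly (a minimal Python illustration of the encoding; the helper names are ours):

```python
# Minimal sketch of prime-based power-product ordering. Indeterminates are
# mapped to primes (x -> 2, y -> 3, z -> 5); a power product maps to the
# product of prime powers, and ordering is integer comparison of the images.

PRIMES = {'x': 2, 'y': 3, 'z': 5}

def prime_image(exponents):
    """exponents: dict like {'x': 3, 'y': 2, 'z': 1} -> integer image."""
    n = 1
    for var, e in exponents.items():
        n *= PRIMES[var] ** e
    return n

assert prime_image({'x': 3, 'y': 2, 'z': 1}) == 360   # x^3 y^2 z
# x^3 < y^2 in prime-based order (8 < 9), unlike total degree:
assert prime_image({'x': 3}) < prime_image({'y': 2})
# x^3 > xy in prime-based order (8 > 6):
assert prime_image({'x': 3}) > prime_image({'x': 1, 'y': 1})
```

Any injective assignment of distinct primes to indeterminates yields an admissible ordering by the argument above, though different assignments produce different orders.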
\section{Implementations of Buchberger's Improved Algorithm}
We have developed four versions of Buchberger's improved algorithm [\cite{Buch-85}], which generate a reduced Gr\"obner Basis. They are written in the object oriented programming language Eiffel so that the algorithm is the same but the implementations of power products and coefficients differ in each version. Hence, variations can be readily created and verified not only by comparing output but also by using Eiffel's built-in preconditions and postconditions. Eiffel allows these conditions to be discarded during compilation so that the algorithm can run at full speed. Furthermore, mathematical constructs can be represented as structures of interconnected objects.
The versions differ in the power-product representation and the integer precision used for the coefficients. In the first version, monomials are represented as structures consisting of a rational coefficient and a power product, ${t = x_1^{\alpha_1} x_2^{\alpha_2} \ldots x_n^{\alpha_n}}$. The coefficient is represented by a pair of 64 bit integers -- numerator and denominator -- and the power product is represented as a string of characters, in expanded form. For example, ${x^3y^2z}$ is stored as the string ``aaabbc''. Power products are compared using total degree in this case. This representation was chosen for simplicity, and, because object oriented programming methods are used, may be changed easily to any other form of representation, such as a vector of powers, ${[3, 2, 1]}$ or as a list. Operations on power products are implemented as iterations or recursions on the underlying data structure. For example, multiplying ${x^3y^2z}$ by ${xyz}$ involves merging the strings ``aaabbc'' and ``abc'' to yield ``aaaabbbcc'', representing ${x^4y^3z^2}$.
In the second version, primes are used to represent the indeterminates. Power products are ordered by their integer image. This avoids iteration or recursion in the basic operations, which reduce to integer operations. For example, multiplication of power products is integer multiplication, as in ${x^3y^2z \times xyz \leftrightarrow 360 \times 30 = 10800 = 2^4 3^3 5^2 \leftrightarrow x^4y^3z^2}$. Similarly, the functions of division, lowest common multiple and greatest common divisor also reduce to integer operations. The same is true for Boolean operations such as comparison, equality, divisibility and so on. The prime based implementation of power-products is the same as the original total degree version, but for replacing the routine bodies with the prime based equivalent.
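The reduction of power-product operations to integer arithmetic can be sketched as follows (using Python's standard math.gcd; the lcm helper is ours):

```python
# Power-product operations on prime images reduce to integer arithmetic:
# multiply = integer multiply, divisibility = integer divisibility, and
# gcd/lcm of power products = gcd/lcm of their images (x -> 2, y -> 3, z -> 5).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

xyz, x3y2z = 2 * 3 * 5, 2**3 * 3**2 * 5      # images of xyz and x^3 y^2 z

assert x3y2z * xyz == 10800                  # x^4 y^3 z^2 <-> 2^4 3^3 5^2
assert x3y2z % xyz == 0                      # xyz divides x^3 y^2 z
assert gcd(x3y2z, 2**2 * 3) == 2**2 * 3      # gcd(x^3 y^2 z, x^2 y) = x^2 y
assert lcm(xyz, 2**3) == 2**3 * 3 * 5        # lcm(xyz, x^3) = x^3 y z
```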
As very large coefficients are often generated, the Gnu multiple precision library, GMP, is used to create the other implementations of the algorithm. This was done by changing the coefficient implementation to use the multiple-precision integer type in the GMP library.
\section{Validation}
Steps were taken to demonstrate that our version of Buchberger's improved algorithm generates Gr\"obner Bases.
The necessary and sufficient condition to be met by a Gr\"obner Basis, $F$, is :
\begin{equation}
\forall f_1, f_2 \in F : : NormalForm(F, SPolynomial(f_1, f_2)) = 0 \end{equation}
where $NormalForm(., .)$ is the normal form or reduction function, and $SPolynomial(., .)$ is the leading-term cross-reduction function as defined by Definition 6.4 in [\cite{Buch-85}].
In the case of a {\em reduced} Gr\"obner Basis, in addition to the above, it is also necessary that
\begin{equation}
\forall f \in F : : NormalForm(F - \{f\}, f) = f
\end{equation}
All sets of polynomials generated by the algorithm were tested for these conditions. It was also verified that each result set reduced the given polynomial set, confirming that the Ideal was unchanged, and that the basis generated by the Gr\"obner Bases package supplied with Maple reduced the result, hence providing independent confirmation.
We found one problem with the implementation of the algorithm as presented in [\cite{Buch-85}]. This was that the set of pairs of polynomials was not updated after a new polynomial was generated by reduction in the $Subalgorithm$ $NewBasis(., ., .)$. To the best of our knowledge, this has not been previously pointed out. When this deficiency was remedied, results proved to be consistent.
\section{Experimental Results and Discussion}
Our timing tests have been performed using the four implementations as shown in Table 1. Examples 1, 2 and 3 are simple manufactured examples, while most of the others are taken from [\cite{Giovini}]. The examples entitled ``Arnold" are taken from [\cite{Arnold}].
Care was taken in the ordering of the indeterminates in the case of ``Parametric Curve" as it includes the power product $x^{31}$. This power product can be encoded as $2^{31}$ when using 64-bit integers, but it cannot be encoded as $5^{31}$, which requires 72-bits.
The main result is that representing the indeterminates by prime numbers and the power products by unique integers significantly reduces the computation time in most cases. In eight cases, the computations complete successfully when the coefficients are encoded as 64-bit integers, and the reduction in time is at least 30\%, or a 40\% speedup. The greatest reduction, (``Gerdt 1"), is 96.8\%, or more than a $30\times$ speedup. As there are more efficient coding schemes than the one used for the total degree implementation, these speedups are on the optimistic side.
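The quoted reductions and speedups are related by the identity $\mathrm{speedup} = 1/(1 - \mathrm{reduction}/100)$; a two-line helper (ours) makes the conversion explicit:

```python
def reduction_pct(t_old, t_new):
    """Percentage reduction in execution time."""
    return 100.0 * (1.0 - t_new / t_old)

def speedup(t_old, t_new):
    """Speedup factor; a 30% reduction is roughly a 1.43x speedup."""
    return t_old / t_new
```

For the ``Gerdt 1" row of Table 1, this gives a 96.8\% reduction and a speedup just above $31\times$.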
A second result is that the number of polynomials may be different. For example, ``Gerdt 1" reduces to 56 polynomials when total-degree ordering is used. When prime-based ordering is used, only 36 polynomials are generated\footnote{Plex ordering, however, generates 26 polynomials for this case.}. However, this is not a general rule, as ``Gerdt 3" generates 21 polynomials for total-degree and 23 for prime-based ordering.
For some examples, integer overflow problems arise because the coefficients are encoded as pairs of 64-bit integers. To counter this, the GNU Multiple Precision (GMP) library is used to support coefficients based on very large integers. Using the library allows eleven cases to be compared, because they complete successfully with both prime-based and string-based power products. A twelfth was completed when garbage collection was turned off.
\begin{center}
\begin{tabular}{||l||c|c|c||c|c|c||}
\multicolumn{7}{c}{\textbf{Table 1. Experimental Results, showing the execution time in ms and}}\\
\multicolumn{7}{c}{\textbf{the number of polynomials in the generated basis}}
\\\hhline{=======}
Time (ms)/ & \multicolumn{3}{c||}{64-bit integer coefficients} & \multicolumn{3}{c||}{Multiple-precision integer coefficients} \\ \cline{2-7}
\multicolumn{1}{||c||}{Number of } & \multicolumn{2}{c|}{Power Products} & Reduction & \multicolumn{2}{c|}{Power Products} & Reduction\\ \cline{2-3} \cline{5-6}
\multicolumn{1}{||r||}{polynomials} & total degree & prime based & { (\%)} & total degree & prime based & (\%) \\\hhline{=======}
Example 1 & 13.22/1 & 9.19/1 & 30.5 & 29.92/1 & 34.09/1 & 12.2 \\\hline
Example 2 & 2.93/6 & 1.98/6 & 32.4 & 7.32/6 & 6.4/6 & 12.6 \\\hline
Example 3 & 9.44/6 & 6.35/6 & 32.7 & 24.82/6 & 20.26/6 & 18.4 \\\hline
Cyclic 4 & 10.73/7 & 5.43/7 & 49.4 & 21.4/7 & 15.54/7 & 27.4 \\\hline
Cyclic 5 & 12855/20 & \footnotesize{a} & & \footnotesize{b} & 14289/24 & \\\hline
Gerdt 1 & 345790/56 & 11059/36 & 96.8 & 387004/56 & 14975/36 & 96.1 \\\hline
Gerdt 2 & 56.8/8 & 4.91/5 & 91.4 & 146.55/8 & 17.74/5 & 87.9 \\\hline
Gerdt 3 & 2693/21 & 1886/23 & 30.0 & \footnotesize {b} & 3217/23 & \\\hline
Gerdt $3^c$ & 4596/21 & 3051/23 & 33.6 & 5584/21 & 4564/23 & 18.3 \\\hline
Arnborg-Lazard & \footnotesize{a} & \footnotesize{a} & & 3042/15 & 2476/11 & 18.6 \\\hline
Parametric Curve & 420.7/16 & 17.72/10 & 95.8 & 522.5/16 & 35.15/10 & 93.3 \\\hline
Katsura 4 & \footnotesize{a} & \footnotesize{a} & & 1059/13 & 873/13 & 21.1 \\\hline
Arnold 1 & 153.78/3 & \footnotesize{a} & & 2276563/3 & 264810/3 & 88.4 \\\hline
Arnold 2 & \footnotesize{a} & \footnotesize{a} & & 2919337/2 & 1499222/2 & 48.6 \\\hline
\end{tabular}
\end{center}
\footnotesize
a. Integer overflow.
b. GMP problem.
c. Garbage collection turned off.
d. Memory exhausted when garbage collection is turned off.
e. Total degree based ordering faster than prime-based ordering.
\normalsize
\hspace{.5in}
With the ``Katsura 4" example, we chanced upon a case in which the algorithm using total-degree ordering was faster than that with prime-based ordering. We therefore carried out a limited investigation of the effect of permuting the relative ordering of the indeterminates for two polynomial sets: ``Example 2" with three indeterminates and ``Katsura 4" with five. In both cases, we found permutations in which the prime-based ordering was faster than the fastest total-degree ordering.
The effects of permuting the relative order of the indeterminates for ``Example 2" are presented in Table 2. In all cases using 64-bit coefficients, the prime-based ordering version is faster, by as much as 46\%. When using GMP, the prime-based ordering is faster, by as much as 30\%, in all but one case, in which it is 16.7\% slower. The fastest computation was the prime-based 64-bit case of 1.98 ms, obtained when the indeterminate order was $acb$. The ratio of maximum to minimum times is given for each implementation; for example, the ratio of the slowest prime-based 64-bit case to the fastest is 11.
\begin{center}
\begin{tabular}{||c||c|c|c||c|c|c||}
\multicolumn{7}{c}{\textbf{Table 2. Effect of the relative ordering of the indeterminates in ``Example 2" }} \\ \hhline{=======}
& \multicolumn{3}{c||}{64-bit integer coefficients} & \multicolumn{3}{c||}{Multiple-precision integer coefficients} \\ \cline{2-7}
\multicolumn{1}{||r||} {} & \multicolumn{2}{c|}{Power Products} & Reduction & \multicolumn{2}{c|}{Power Products} & Reduction\\ \cline{2-3} \cline{5-6}
\multicolumn{1}{||c||}{Order} & total degree & prime based & { (\%)} & total degree & prime based & (\%) \\\hhline{=======}
$abc$ & 15.85 & 10.56 & 33.6 & 38.22 & 33.57 & 12.2 \\\hline
$acb$ & 2.93 & 1.98 & 32.4 & 7.32 & 6.4 & 12.6 \\\hline
$bac$ & 24.06 & 18.76 & 22.0 & 60.95 & 58.73 & 3.6 \\\hline
$bca$ & 41.48 & 22.29 & 46.3 & 125.42 & 88.22 & 29.7 \\\hline
$cab$ & 3.71 & 2.62 & 29.4 & 9.42 & 8.36 & 11.3 \\\hline
$cba$ & 15.08 & 13.15 & 12.8 & 37.50 & 45.01 & $\textbf{(16.7)}^a$ \\\hhline{=======}
max/min & 14 & 11 && 17 & 14 &\\\hhline{=======}
\end{tabular}
\end{center}
\footnotesize
a. Total degree based ordering faster than prime-based ordering.
\normalsize
\hspace{.5in}
In the case of ``Katsura 4", there are five indeterminates and 120 orders to consider. Only the multiple-precision implementations work, as the 64-bit coefficient implementations overflow. The fastest case is prime based, with a duration 17.6\% less than the fastest total-degree case. The ratio of slowest to fastest is 1.9 for the total-degree implementation and 3 for the prime-based implementation.
In the ``Gerdt 3" example and some other examples, using the GMP library triggered a fault which caused the program to crash. The crash occurs when objects are collected by the garbage collector while the program is running or when it terminates. This is believed to be a problem in the interface between Eiffel and the GMP library rather than in GMP itself. Turning off garbage collection avoided the fault, but proved costly, as it is faster to re-use memory internally than to repeatedly request additional memory from the operating system. Furthermore, with garbage collection off, if memory is exhausted, the computation becomes disk-bound; these cases were abandoned.
\section{Conclusions}
Prime-based ordering, in which the indeterminates are encoded as prime numbers and power products are compared as natural numbers, is an admissible ordering. It is neither a lexicographical nor a total degree ordering. Implementations of this ordering reduce power product operations to integer operations.
Several versions of Buchberger's improved algorithm have been developed and tested. Each result has been verified to satisfy the necessary and sufficient conditions to be a reduced Gr\"obner Basis. Resulting bases also reduce their respective given polynomial sets, confirming that the Ideal is correctly preserved. They were also shown to be reduced by bases generated using the Gr\"obner package in Maple.
Duration reductions measured using the improved Buchberger algorithm range from 30\% (40\% speedup) to 96.8\% ($30\times$ speedup).
The number of polynomials can differ according to the ordering scheme. For example, in the ``Gerdt 1" case, prime-based ordering generates a Gr\"obner basis with 36 polynomials, whereas the total degree result has 56 polynomials.
Finally, we have also explored the effect of permuting the indeterminates in some examples, and have found that the duration of computation varies significantly. In these examples, the fastest case was always using the prime-based ordering. We are continuing to investigate this matter together with issues associated with large coefficient size.
\bibliographystyle{elsart-harv}
\section{Introduction}
Tracing the dense central regions of prestellar cores is essential
for an understanding of their kinematics and density distribution.
Many of the well studied nearby cores are thought to be static with
a ``Bonnor-Ebert'' radial density distribution (see Bergin \&
Tafalla \cite{bt07} and references therein) comprising a roughly $1/r^2$
fall off surrounding a central region of nearly constant
density. However, our understanding of the central regions
with H$_{2}$ number densities of $10^5$ cm$^{-3}$ or more is limited
both by our uncertainties about dust emissivity and molecular
abundances. Dust emission, because it is optically thin, is a
useful tracer of mass distribution, but both temperature gradients
and gradients in the dust characteristics (refractive index and
size distribution) mean that dust emission maps should be interpreted
with caution. Molecular lines offer the advantage that they permit
an understanding of the kinematics, but depletion and other chemical
effects render them often untrustworthy. Depletion is more rapid
at high density and low temperature and these are just the effects
that become important in the central regions of cores.
One interesting result from studies to date (e.g. Tafalla et al. \cite{tm06}) is
that the only molecular species to have a spatial distribution
similar to that of the optically thin dust emission are
N$_{2}$H$^{+}$ and NH$_{3}$, consisting solely of nitrogen and hydrogen. This is
in general attributed to the claim that their abundance is closely
linked to that of molecular nitrogen (one of the main gas phase
repositories of nitrogen) which is highly volatile and hence one
of the last species to condense out (Bisschop et al. \cite{bf06}).
However N$_{2}$ is only marginally more volatile than CO which is
found observationally
to condense out at densities of a few 10$^{4}$ cm$^{-3}$
(Tafalla et al. \cite{tm02}, \cite{tm04}) and it has thus been puzzling to have a
scenario where C-containing species were completely frozen out
but N$_{2}$ not. This has led to a number of models aimed at
retaining gas phase nitrogen for at least a short time
after CO has disappeared, e.g. Flower et al. \cite{fp06},
Akyilmaz et al. \cite{af07}, and Hily-Blant et al. \cite{hw10} (hereafter paper I).
There have also
been attempts to obtain convincing evidence for
the existence in the gas phase of species
containing both C and N at densities above a few 10$^5$ to 10$^6$ cm$^{-3}$
at which CO depletes (e.g. Hily-Blant et al. \cite{hw08} and paper I).
A partial success in this regard was obtained by Hily-Blant et
al. \cite{hw08} who found that the intensity of $^{13}$CN(1$-$0) behaved in
similar fashion to the dust emission in two of the densest cores:
L183 and L1544. The implication of this is that at least
some form of carbon, probably in the form of CO, remains in the
gas phase in these objects at densities above $10^5$ cm$^{-3}$.
Thus, a tentative conclusion is that
the CO abundance at densities above 10$^5$ cm$^{-3}$
is much lower than the canonical value
of $10^{-4}$ relative to H$_2$ (Pontoppidan \cite{p06}),
but nevertheless sufficiently high to
supply carbon for minor species.
Putting this interpretation on a more solid foundation requires
observation of possible tracers of the depleted region in cores
carried out in a manner which will permit distinguishing species
associated with high density gas from those present in the surrounding
lower density envelope. This is complicated, though perhaps also facilitated,
by the gradients in molecular abundance due to depletion. While
molecular abundances in general are expected to drop at high
densities when depletion takes over, this is not necessarily true
for all species at least within a limited density range (see the
results for deuterated species discussed by Flower et al. \cite{fp06}).
However, proving that one is observing emission from high density
gas involves either showing that one can detect transitions which
require high densities to excite or showing that the emission
comes from a compact region coincident with the dust emission
peak. The former is rendered more difficult by the temperature
gradients believed to exist in some cores (see e.g. Crapsi
et al. \cite{cc05}). The latter requires
highly sensitive high angular resolution observations.
A combination of the two is likely to be the best strategy.
One approach to these problems which has not been fully exploited
to date is to make use of the hyperfine splitting present in
essentially all low lying transitions of N-containing species.
It is clear in the first place that the relative populations
of hyperfine split levels of species such as HCN and
CN are out of LTE in many situations
(see Walmsley et al. \cite{wc82}, Monteiro \& Stutzki \cite{ms86}, and paper I).
Interpreting such anomalies requires
accurate collisional rates between individual hyperfine split
levels, but such rates can be determined from the rates between
rotational levels (e.g. Monteiro and Stutzki \cite{ms86}) and clearly
in principle, they allow limits to be placed on the density of
the emitting region. It is also the case that species of
relatively low abundance like H$^{13}$CN are found to have
rather minor deviations from LTE between hyperfine levels of
the same rotational level (see paper I) and in this
case, one can determine the line excitation temperature
and optical depth based on the relative hyperfine satellite
intensities. While this is clearly a doubtful procedure in that
the non-LTE anomalies can cause errors in the inferred optical
depth, it is nevertheless (as we shall discuss) an approach which can
give a zero order estimate of relative rotational level populations
and hence of the local density.
In the case of more abundant species such as the main
isotopologues of HCN or CN, one finds that ``self absorption'' or
absorption in foreground relatively low density material can
obliterate any signal from the high density central regions
of a core. However, from these tracers, one can in principle
glean information on the kinematics of foreground layers of
the core. Thus, one can hope to find evidence for infall.
We indeed find in this study that profiles change in gradual
fashion as a function of transition line strength and this does
indeed yield evidence for either infall or expansion of foreground
layers.
Here we extend the results of paper I to three other
cores which have been chosen on the basis of their CO depletion
properties (see e.g. Bacmann et al. \cite{bl02}, Brady Ford \& Shirley \cite{bs11}).
We in particular chose objects with large CO depletion
holes indicative of a relatively large age. This might occur
for example in sources where magnetic field is capable of slowing
down or preventing collapse. We also used as a guide the angular
size measured in the N$_2$H$^+$(1$-$0) transition by Caselli et al. (\cite{cm02})
which is thought to correspond roughly to the area of CO depletion.
Thus, we chose the cores \object{L1498} and \object{TMC 2} which appear to have large
CO depletion holes (0.08 parsec in the case of \object{TMC 2}). As a comparison source,
we included in our study \object{L1521E}, which shows little or no depletion
in C-bearing species (Tafalla \& Santiago \cite{ts04}).
In this paper then, we present IRAM 30m observations of
the $J=1\rightarrow0$ transition of HCN, H$^{13}$CN and HN$^{13}$C along the major
and the minor axes of the three selected sources, \object{L1498}, \object{L1521E} and \object{TMC 2}, as well as
the $J=1\rightarrow0$ transition of N$_{2}$H$^{+}$ and the
$J=2\rightarrow1$ transition of C$^{18}$O towards \object{TMC 2}.
In Sect. \ref{HCN:observations}, we
discuss the observational and data reduction procedures, summarising the results in
Sect. \ref{HCN:obsresults}.
In Sect.~\ref{HCN:depletion} we compare the distribution of the line emission with
respect to the dust emission, investigating the presence of depletion.
In Sect.~\ref{HCN:lineprofiles} we describe
the evidence of the asymmetry of the HCN(1$-$0)
line profiles. In Sect.~\ref{HCN:Texestimate} we explain the methods used for
determining the excitation temperature, while in Sect.~\ref{HCN:Nx}
we give the
estimated column densities and abundances
for the observed lines.
In Sect.~\ref{HCN:conclusions} we give our conclusions.
Comments on non-LTE hyperfine populations are provided in Appendix~\ref{nonltepop},
while in Appendix~\ref{app2} we summarise
the spectroscopic data and observational parameters
of the observed molecules.
\section{Observations}\label{HCN:observations}
\object{L1498}, \object{L1521E}, and \object{TMC 2} were observed in July 2008, using frequency switching
and raster mode, with a spacing of
20$^{\prime\prime}$,
25$^{\prime\prime}$ and 50$^{\prime\prime}$, respectively, along the major and
the minor axes. The axes were identified from the continuum
emission maps showed in Tafalla et al. (\cite{tm02}) for \object{L1498}, Tafalla et al. (\cite{tm04})
for \object{L1521E}, and Crapsi et al. (\cite{cc05}) for \object{TMC 2}. Observing parameters are
given in Table~\ref{tab:obspam}.
Observations of the 89 GHz HCN(1$-$0) and the 86 GHz H$^{13}$CN(1$-$0) multiplets (see
Table \ref{tab:HCN10}) plus the 87 GHz HN$^{13}$C(1$-$0) multiplet
(see Table \ref{tab:HN13C10}) were carried out using the VESPA autocorrelator
with 20 kHz channel spacing (corresponding to about
0.069 km s$^{-1}$) with 40 MHz bandwidth for HCN(1$-$0) and 20 MHz bandwidth for
H$^{13}$CN(1$-$0) and HN$^{13}$C(1$-$0).
The final rms, in $T_{\rm mb}$ units, is $\sigma_{\rm T}\sim50$ mK, $\sim20$ mK,
and $\sim30$ mK for HCN(1$-$0), H$^{13}$CN(1$-$0), and HN$^{13}$C(1$-$0),
respectively.
Observations of the 93 GHz N$_{2}$H$^{+}$(1$-$0) multiplet and of the 219 GHz
C$^{18}$O(2$-$1) line were carried out in \object{TMC 2} using
10 kHz channel spacing with 40 MHz bandwidth for N$_{2}$H$^{+}$(1$-$0) and 20 kHz
channel spacing with 40 MHz bandwidth for C$^{18}$O(2$-$1).
For N$_{2}$H$^{+}$, the final rms, in $T_{\rm mb}$ units, in channels of width $\delta v = 0.031$
km s$^{-1}$, is $\sigma_{\rm T}\sim100$ mK while for
C$^{18}$O(2$-$1),
the final rms (channels of width $\delta v=0.027$ km s$^{-1}$) is
$\sigma_{\rm T}\sim 250$ mK.
Data reduction and analysis were completed using the CLASS software
of the GILDAS\footnote{\tt http://www.iram.fr/IRAMFR/GILDAS} facility,
developed at the IRAM and the Observatoire de Grenoble.
In what follows, all temperatures are on the main-beam scale,
$T_{\rm mb}=F_{\rm eff}T_{\rm A}^{*}/B_{\rm eff}$, where $T_{\rm A}^{*}$ is the antenna
temperature corrected for atmospheric absorption, while B$_{\rm eff}$ and F$_{\rm eff}$
are the beam and the forward efficiencies, respectively
(see Table \ref{tab:obspam} for the numerical values of efficiencies).
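The main-beam conversion is, of course, a one-liner; in the sketch below (ours), the numerical efficiency values are purely illustrative, the measured ones being those listed in Table \ref{tab:obspam}:

```python
def t_mb(t_a_star, f_eff, b_eff):
    """Main-beam temperature from the atmosphere-corrected antenna temperature."""
    return f_eff * t_a_star / b_eff

# Illustrative values only (not the measured efficiencies of Table 1):
# t_mb(1.0, 0.95, 0.78) scales T_A* up by F_eff / B_eff.
```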
\section{Observational results}\label{HCN:obsresults}
Figure \ref{17134fg1} shows the continuum emission of the
three cores together with the positions mapped in lines described in Sect.
\ref{HCN:observations}.
The selected sources have already been widely observed in the past
as possible tracers of conditions in the high density
core nucleus.
We succeeded in observing HN$^{13}$C(1$-$0) and
the three hyperfine components of HCN(1$-$0) and H$^{13}$CN(1$-$0).
\begin{figure*}[!ht]
\begin{center}
\includegraphics[angle=0,scale=.75]{17134fg1}
\caption{Observed positions in \object{L1498} (left panel), \object{L1521E} (middle panel), and
\object{TMC 2} (right panel), superposed on the dust emission map
smoothed to 28$^{\prime\prime}$, from Tafalla et al. (\cite{tm02}), Tafalla et al. (\cite{tm04}), and
Crapsi et al. (\cite{cc05}), respectively. Mapped positions in
HCN(1$-$0) ({\em white crosses}),
H$^{13}$CN(1$-$0) and HN$^{13}$C(1$-$0) ({\em white squares}), and
N$_{2}$H$^{+}$(1$-$0) ({\em yellow circles}). Contours represent 35, 50, 65, 80 and 95
per cent of the peak value of the dust emission which is 18.0, 21.0, and 45.0
mJy/(11$^{\prime\prime}$ beam) for \object{L1498}, \object{L1521E}, and \object{TMC 2}, respectively.
The (0,0) position corresponds to $\alpha(2000)=04^{\rm h}10^{\rm m}51.5^{\rm s}$,
$\delta(2000)=25^\circ09^{\prime}58^{\prime\prime}$ for \object{L1498}, to
$\alpha(2000)=04^{\rm h}29^{\rm m}15.7^{\rm s}$,
$\delta(2000)=26^\circ14^{\prime}05^{\prime\prime}$ for \object{L1521E}, and to
$\alpha(2000)=04^{\rm h}32^{\rm m}48.7^{\rm s}$,
$\delta(2000)=24^\circ25^{\prime}12^{\prime\prime}$ for \object{TMC 2}.}
\label{17134fg1}
\end{center}
\end{figure*}
Figure \ref{17134fg2} shows a comparison of the line profiles for the different
tracers.
We
plotted the weakest \mbox{($F=0\rightarrow1$)} component of
HCN(1$-$0) at 88633.936 MHz,
the strongest ($F=2\rightarrow1$) component of H$^{13}$CN(1$-$0) at
86340.184 MHz, HN$^{13}$C(1$-$0) at 87090.675 MHz, the isolated component
($F_{1},F=0,1\rightarrow1,2$) of
N$_{2}$H$^{+}$(1$-$0) at 93176.2650 MHz, and the C$^{18}$O(2$-$1) transition at 219560.319 MHz.
We used the components enumerated here also for the comparison between line and
dust emission described below, supposing them to be the most
optically thin.
The offsets for each source are relative to the dust peak emission
(see caption of Fig.~\ref{17134fg1}).
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg2}}
\caption{Emission line for the different tracers of the source sample towards the
dust emission peak. The
88633 MHz component of
HCN(1$-$0) is shown together with the 86340 MHz
component of H$^{13}$CN(1$-$0), HN$^{13}$C(1$-$0), the 93176 MHz
isolated component of
N$_{2}$H$^{+}$(1$-$0), and C$^{18}$O(2$-$1). Spectra have been multiplied by
a scaling factor to allow the simultaneous comparison.
Red dotted lines show the
systemic LSR velocity of the sources evaluated from the well-determined frequencies
of N$_{2}$H$^{+}$(1$-$0), see Pagani et al. (\cite{pd09}).}
\label{17134fg2}
\end{center}
\end{figure}
There are clear trends in the line widths which we derive for our
three sources with values of order 0.2 km s$^{-1}$ for \object{L1498}, 0.3 km s$^{-1}$
for \object{L1521E}, and 0.45 km s$^{-1}$ for \object{TMC 2}.
In addition, there is reasonable agreement between
the systemic velocity, $V_{\rm LSR}$, of N$_{2}$H$^{+}$ and H$^{13}$CN
to within 0.08 km s$^{-1}$, while
line widths of H$^{13}$CN
seem larger by 0.05 km s$^{-1}$ with respect to N$_{2}$H$^{+}$,
but this may be due to $^{13}$C hyperfine splitting
(Schmid-Burgk et al. \cite{sm04}).
There is a clear difference between the HCN and H$^{13}$CN profiles
in the three sources: while in \object{L1521E}, the HCN profile is broader than its
isotopologue, in \object{L1498} and \object{TMC 2}, HCN and H$^{13}$CN have very different profiles,
presumably because of high optical depth in HCN.
All these molecules show hyperfine structure, although in the case of
HN$^{13}$C it is
not possible to avoid the blending of the components,
and the hyperfine fitting has been conducted using the HFS method in CLASS.
To fit this line, we followed van der Tak et al. (\cite{vdtm09}) who found that
it consists of eleven hyperfine components which can however be reduced to four
``effective'' components.
As an example, in the upper panel of Fig.~\ref{17134fg3},
we show the fit of the HN$^{13}$C
line at the offset $(10,20)$ towards \object{L1498} together with the four distinguishable
hyperfine components.
The fit gives a value for the total optical depth equal to
$\tau=4.58\pm0.32$ and all the observed points show similar values,
as shown in Figure~\ref{17134fg4}, with an
average of $\langle\tau\rangle=5.16\pm0.86$.
Also
for H$^{13}$CN we found a good simultaneous fit of the three hyperfine components
(see lower panel of Fig. \ref{17134fg3})
and, like HN$^{13}$C, this line is somewhat optically thick with a total optical depth of
$\tau=3.11\pm0.77$ at the offset $(10,20)$ and a mean optical depth of $\langle\tau\rangle=4.34\pm1.15$.
Hence one can expect the optical depth in
HCN hyperfine components to be of order 30--100 assuming the canonical isotopic
ratio for $[^{12}$C]/$[^{13}$C] of 68 (Milam et al.~\cite{ms05}).
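This order-of-magnitude estimate can be made explicit. In the sketch below (ours), the H$^{13}$CN total depth is scaled by the isotopic ratio and distributed over the hyperfine components according to the standard LTE relative strengths 1:3:5 (an assumption on our part); the weakest component then lands at the low end of the quoted range:

```python
ISOTOPE_RATIO = 68                        # [12C]/[13C], Milam et al. (2005)
REL_STRENGTH = {'F=0-1': 1 / 9,           # assumed LTE relative strengths 1:3:5
                'F=1-1': 3 / 9,
                'F=2-1': 5 / 9}

def hcn_component_tau(tau13_total, component):
    """Expected HCN(1-0) hyperfine optical depth from the H13CN total depth."""
    return tau13_total * ISOTOPE_RATIO * REL_STRENGTH[component]
```

With the mean $\langle\tau\rangle=4.34$ measured for H$^{13}$CN, the weakest $F=0\rightarrow1$ component comes out near $\tau \approx 33$.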
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg3.eps}}
\caption{HN$^{13}$C(1$-$0) (upper panel) and
H$^{13}$CN(1$-$0) (lower panel)
emission towards \object{L1498} at the offset $(10,20)$;
{\em black solid histogram}, hyperfine components used for the fit,
{\em blue dashed lines}, centred at the frequencies listed in Table
\ref{tab:HN13C10}, and line fit, {\em magenta solid
line}.}
\label{17134fg3}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg4.ps}}
\caption{Total optical depth of HN$^{13}$C(1$-$0), {\em solid squares}, and
H$^{13}$CN(1$-$0), {\em empty squares},
for all the observed positions in \object{L1498} as a function of
the molecular hydrogen column density.}
\label{17134fg4}
\end{center}
\end{figure}
\section{Probing the presence of depletion}
\label{HCN:depletion}
In this section, we study the dependence of the integrated
intensity, $W$, of the HCN, H$^{13}$CN and HN$^{13}$C transitions
on offset from the dust emission peaks. In the case of HCN,
we consider the weakest $F=0\rightarrow1$
line and in the case of H$^{13}$CN, the
strongest $F=2\rightarrow1$ satellite.
The observational results are shown in
Figures~\ref{17134fg6},
\ref{17134fg7},
and \ref{17134fg8}
where we compare the observed intensity of these
lines with the dust emission which is taken to be representative of
the H$_{2}$ column density. It is useful first to comment briefly on
what one can expect qualitatively to learn from such studies.
One notes in the first place that optically thick transitions,
due to scattering and line saturation, are naturally likely to
have a broader spatial distribution than thin (or very moderate
optical depth) transitions. Thus, the HCN($F=0\rightarrow1$)
transition
can be expected to be roughly an order of magnitude more optically
thick than H$^{13}$CN($F=2\rightarrow1$)
assuming the local interstellar
[$^{12}$C]/[$^{13}$C] ratio
(Milam et al. \cite{ms05}) and no fractionation.
One
thus expects a broader spatial distribution of HCN than H$^{13}$CN and indeed this
is confirmed by the results shown in all the three sources. For
example, the half power size of HCN in \object{L1498} along the NW-SE cut
is 170$^{\prime\prime}$ as
compared with 120$^{\prime\prime}$ for H$^{13}$CN
and HN$^{13}$C,
and 180$^{\prime\prime}$ for the dust emission. One concludes
that the $^{13}$C substituted isotopologue is a much better tracer of HCN
column density than the more abundant form even when using the weakest
hyperfine component.
However, one also notes from these figures that the
C$^{18}$O distribution is essentially flat as has been seen in a variety
of studies (Tafalla et al. \cite{tm02}, \cite{tm04}).
This has been attributed to depletion of CO at densities
$n$(H$_{2}$) above a critical value of order a few times 10$^{4}$ cm$^{-3}$
and our observations are
entirely consistent with this.
It is also true however (see also Tafalla et al. \cite{tm06} and paper I)
that species like H$^{13}$CN and HN$^{13}$C,
while they clearly have different spatial distributions from the
dust emission, have half power sizes which are roughly similar
(see results for \object{L1498} above).
Such effects clearly do not apply to N$_{2}$H$^{+}$ which does
not contain carbon and which in general
has been found to have a spatial distribution similar to that
of the dust (Tafalla et al. \cite{tm06}).
However, one sees in \mbox{Fig.~\ref{17134fg6} -- \ref{17134fg8}}
that
HN$^{13}$C and H$^{13}$CN have a spatial distribution closer to that
of the dust emission and we
conclude that
these two isotopologues may often be as useful tracers of kinematics as
N$_{2}$H$^{+}$ at
densities above the critical values at which CO depletes. It is possible that
CO, while depleted by roughly an order of magnitude relative to the
canonical [CO]/[H$_{2}]$ abundance ratio of 10$^{-4}$, is nevertheless
sufficiently abundant to account for minor species such as HCN and
HNC.
Finally,
we note from Fig.~\ref{17134fg8}
that towards \object{TMC 2}, where the dust peak
H$_{2}$ column density is larger than in the other two sources, there seems
to be a good general accord between the dust emission and the intensity of
HN$^{13}$C, H$^{13}$CN,
and N$_{2}$H$^{+}$. The CO however has again a flat distribution
suggesting once more that it is depleted in the vicinity of the dust
peak. There is no reason in this source to suppose that N$_{2}$H$^{+}$ traces
the high density gas around the core nucleus better than the $^{13}$C
substituted isotopologues of HCN and HNC. There are however slight
differences close to the dust peak between the different species
which are perhaps attributable to excitation and optical depth effects,
but could also be caused by the depletion which presumably all
molecules undergo if the density is sufficiently high.
We conclude
that cyanides and isocyanides are useful tracers of gas with densities
around $10^5$ cm$^{-3}$.
We note that models of a collapsing prestellar core indeed suggest that HCN and
HNC should remain with appreciable gas-phase abundance at an epoch when the CO
abundance has depleted to a few percent of the canonical value of $10^{-4}$
relative to H$_2$.
To illustrate the differential freeze out of HCN relative to that of CO found in the
chemical models, Fig.~\ref{17134fg5} displays the depletion of HCN as a function of
the depletion of CO computed in the collapsing gas, starting from steady-state
abundances at a density of 10$^4$ cm$^{-3}$. The numerical gravitational collapse
model is described in Sect. 6.2 of Hily-Blant et al. (\cite{hw10}).
Two models are shown for two different collapse time scale: free-fall time
($t_{\rm ff}$)
and $10t_{\rm ff}$. It may be seen from Fig.~\ref{17134fg5} that the depletion of
HCN relative to that of CO strongly depends on the collapse time scale:
the model shows that HCN is depleted on longer timescales than
CO, in particular if the dynamical timescale is several free fall
times.
HNC, not shown in the plot, has the same behaviour as HCN.
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg5.eps}}
\caption{The HCN depletion as a function of the CO depletion during the collapse,
computed by the gravitational collapse model of Hily-Blant et al. (\cite{hw10}), see
their Fig.~8. Values are normalised with respect to the initial abundances.
Each point is labelled with the time of the collapsing gas. Two models
are shown: a free fall model labelled $t_{\rm ff}$ ({\em red dashed curve}) and a
collapsing model with a time scale multiplied by a factor ten, labelled
$10t_{\rm ff}$ ({\em blue solid curve}). The {\em black dotted curve}
shows the positions of equal depletion for CO and HCN.}
\label{17134fg5}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg6.eps}}
\caption{Comparison between the dust emission ({\em gray histograms}) and the integrated
intensity of the observed species along the two cuts in \object{L1498}. Upper panels:
{\em solid squares}, HCN(1$-$0) [$\times 3$];
{\em empty circles}, C$^{18}$O(1$-$0) [$\times0.8$];
{\em solid circles}, C$^{18}$O(2$-$1) [$\times1.4$].
Lower panels: {\em empty triangles}, H$^{13}$CN(1$-$0) [$\times10$];
{\em empty squares}, HN$^{13}$C(1$-$0) [$\times2.5$];
{\em empty pentagons}, N$_{2}$H$^{+}$(1$-$0) [$\times 3$].
The typical error on the integrated intensity is about 20 mK km s$^{-1}$.}
\label{17134fg6}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg7.eps}}
\caption{Comparison between the dust emission ({\em gray histograms}) and the integrated
intensity of the observed species along the two cuts in \object{L1521E}. Upper panels:
{\em solid squares}, HCN(1$-$0) [$\times 2.5$];
{\em solid circles}, C$^{18}$O(2$-$1) [$\times0.4$].
Lower panels: {\em empty triangles}, H$^{13}$CN(1$-$0) [$\times27$];
{\em empty squares}, HN$^{13}$C(1$-$0) [$\times5.5$];
{\em empty pentagons}, N$_{2}$H$^{+}$(1$-$0) [$\times 10$].
The typical error on the integrated intensity is about 20 mK km s$^{-1}$.}
\label{17134fg7}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg8.eps}}
\caption{Comparison between the dust emission ({\em gray histograms}) and the
integrated intensity of the observed species
along the two cuts in \object{TMC 2}. Upper panels:
{\em solid squares}, HCN(1$-$0) [$\times 4$];
{\em solid circles}, C$^{18}$O(2$-$1).
Lower panels: {\em empty triangles}, H$^{13}$CN(1$-$0) [$\times25$];
{\em empty squares}, HN$^{13}$C(1$-$0) [$\times3.5$];
{\em empty pentagons}, N$_{2}$H$^{+}$(1$-$0) [$\times4.5$].
The typical error on the integrated intensity is about 20 mK km s$^{-1}$.}
\label{17134fg8}
\end{center}
\end{figure}
\section{Line profiles}\label{HCN:lineprofiles}
In this Section, we discuss the behaviour of the HCN line profiles
as a function of optical depth. It is clear from Fig.~\ref{17134fg2} that towards
\object{L1498} and \object{TMC 2}, the $F=0\rightarrow1$ component
of HCN has a profile
skewed to the blue relative to $F=2\rightarrow1$
of H$^{13}$CN and it is tempting
to interpret this as being due to absorption in a foreground
infalling layer.
Fig.~\ref{17134fg9} illustrates this well, showing all three
HCN components compared with
the strongest $F=2\rightarrow1$ component of H$^{13}$CN.
One sees that
there is in fact a progression going from the highly skewed optically
thick $F=2\rightarrow1$ component
of HCN to the essentially symmetric profile of H$^{13}$CN.
In the following, we attempt to quantify this trend.
Looking at Fig.~\ref{17134fg9}, which shows the three hyperfine components of
HCN together with the H$^{13}$CN($F=2\rightarrow1$) component, one
can qualitatively argue that
the greater the relative intensity of a line, the higher its degree of
skewness. The H$^{13}$CN($F=2\rightarrow1$) component
(gray
histogram)
is fairly symmetric
and indeed, as seen earlier,
its optical depth is not high,
being close to the optically thin limit,
whereas the HCN components are skewed
towards the blue in \object{L1498} and
\object{TMC 2}, and towards the red in \object{L1521E}.
The superposition of the different hyperfine components of HCN
thus reveals a correlation between the shape of a line profile and
its intensity.
In particular, a red-absorbed line profile (i.e. skewed towards the blue) points to
an outer layer absorbing the emission of the inner
layer while moving away from the observer, suggesting infall motions.
Conversely, a blue-absorbed line profile (i.e. skewed towards the red) indicates
that the outer absorbing layer is moving towards the observer, that is,
outflow motions.
\begin{figure*}[]
\begin{center}
\includegraphics[angle=-90,scale=.6]{17134fg9}
\caption{Superposition of the three hyperfine components of
HCN(1$-$0) ($F=1\rightarrow1$, {\em red line}; $F=2\rightarrow1$, {\em green line};
$F=0\rightarrow1$, {\em blue
line}) with the strongest component ($F=2\rightarrow1$, {\em gray histograms})
of H$^{13}$CN(1$-$0) [$\times$4], see Table \ref{tab:HCN10} for component indices.
In order to compare line shapes and intensities, components have been shifted
in frequency.}
\label{17134fg9}
\end{center}
\end{figure*}
To quantify these deviations from the expected Gaussian shape in the optically thin
limit, we compared the asymmetry degree towards the different positions mapped, using
the definition of skewness, $\delta V$, given by Mardones et al. (\cite{mm97})
\begin{equation}
\delta V=\frac{V_{\rm thick}-V_{\rm thin}}{\Delta V_{\rm thin}}\,,
\end{equation}
where $V_{\rm thick}$ and $V_{\rm thin}$ are the velocities at the peak of the optically
thick and thin components, respectively,
while $\Delta V_{\rm thin}$ is the line width of the thin
component. The normalisation of the velocity difference with $\Delta V_{\rm thin}$
reduces bias arising from lines of different width measured in different sources,
allowing a more realistic comparison of the values of $\delta V$ in our sample.
Velocities and line widths were determined from Gaussian fits; when an
optically thick line profile was double peaked, we fitted the two peaks
with two Gaussians, assigning to $V_{\rm thick}$ the central
velocity of the Gaussian corresponding to the stronger of the two peaks.
We assumed the $F=2\rightarrow1$
component of H$^{13}$CN to be optically thin, so that
$V_{\rm thin}=V_\mathrm{H^{13}CN}$ and $\Delta V_{\rm thin}=\Delta V_{\mathrm{H^{13}CN}}$.
A value of $\delta V$ below zero indicates red absorption of the line, while a
positive value indicates blue absorption.
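As a purely illustrative numerical sketch of this definition (the velocities and line width below are invented values, not measurements from our spectra):

```python
# Skewness parameter of Mardones et al. (1997):
# deltaV = (V_thick - V_thin) / DeltaV_thin.
def skewness(v_thick, v_thin, dv_thin):
    """All arguments in km/s; dv_thin is the width (FWHM) of the thin line."""
    return (v_thick - v_thin) / dv_thin

# Invented values: a thick line peaking 0.05 km/s blueward of the thin one,
# with a thin-line width of 0.2 km/s.
delta_v = skewness(v_thick=7.75, v_thin=7.80, dv_thin=0.20)
print(delta_v)  # -0.25: negative, i.e. red absorption, hinting at infall
```

Normalising by the thin-line width, as in the definition above, is what makes $\delta V$ comparable across sources with different intrinsic line widths.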
Fig. \ref{17134fg10} shows the values of the skewness degree,
$\delta V$, for the three hyperfine components of HCN as a function of the
column density of molecular hydrogen
for our source sample.
Hence we confirm the suggestion made earlier that:
$(i)$ the absolute value of $\delta V$ is greater for
the strongest hyperfine components
(see also Fig.~\ref{17134fg11}); in fact, in all three sources,
$|\delta V(F=2\rightarrow1)|>%
|\delta V(F=1\rightarrow1)|>%
|\delta V(F=0\rightarrow1)|$;
$(ii)$ in \object{L1498} and \object{TMC 2}, the emission lines are red-absorbed, since $\delta V<0$, and
this is a hint for the presence of infall motions;
$(iii)$ in \object{L1521E}, the emission lines are blue-absorbed ($\delta V>0$), suggesting
expansion;
($iv$) as expected, $\delta V$ decreases from the centre to the outer part of the
cores,
that is for decreasing values of $N$(H$_{2}$), dropping together with the line intensities, though sometimes only slightly;
($v$) $\delta V(F=0\rightarrow1)$
for HCN seems to be rather independent of the H$_{2}$ column
density, being almost constant
for \object{L1498} and \object{L1521E}, probably because this hyperfine component is the weakest line and
it is closer to the optically thin limit.
\begin{figure*}[]
\begin{center}
\includegraphics[angle=0,scale=1]{17134fg10}
\caption{Degree of skewness as a function of the column density of molecular hydrogen:
$\delta_{i}=(V_{i}-V_\mathrm{H^{13}CN})/\Delta V_\mathrm{H^{13}CN}$,
where $i=1,2,3$ is the $i$th component of HCN(1$-$0), and $V_\mathrm{H^{13}CN}$ and
$\Delta V_\mathrm{H^{13}CN}$ are the velocity and the line width of component 2
($F=2\rightarrow1$, the
strongest) of the isotopologue H$^{13}$CN(1$-$0). Values of $\delta V$ from observations
of component 1 ($F=1\rightarrow1$), {\em triangles};
component 2 ($F=2\rightarrow1$), {\em squares};
component 3 ($F=0\rightarrow1$), {\em pentagons}. See also Table \ref{tab:HCN10} for component indices.}
\label{17134fg10}
\end{center}
\end{figure*}
\begin{figure*}[]
\begin{center}
\includegraphics[angle=0,scale=1]{17134fg11}
\caption{Degree of skewness as a function of the relative intensities of the hyperfine
components of HCN(1$-$0).
Linear regressions emphasise the relationship between
$\delta V$ and line strength ({\em dashed red lines}).
Values of $\delta V$ from observations of
the $F=1\rightarrow1$ component ({\em triangles}),
the $F=2\rightarrow1$ component ({\em squares}), and
the $F=0\rightarrow1$ component ({\em pentagons}).}
\label{17134fg11}
\end{center}
\end{figure*}
\section{Excitation temperature results}\label{HCN:Texestimate}
Even though real deviations from LTE populations are present
(see Appendix~\ref{nonltepop}),
the LTE assumption is an approximation which is useful for many purposes and we
calculated the excitation
temperature,
$T_{\rm ex}$, from a simultaneous LTE fit of the hyperfine components in the observed
species, using the measured intensity
of optically
thick transitions.
The main-beam
temperature is given by
\begin{equation}\label{TMB}
T_{\rm mb}= f_{\rm B}[J_{\nu}(T_{\rm ex})-J_{\nu}(T_{\rm bb})](1-e^{-\tau})\,,
\end{equation}
where $J_{\nu}(T)=T_{0}/[\exp(T_{0}/T)-1]$ is the Planck-corrected brightness
temperature and $T_{\rm bb}=2.73$ K is the temperature of the cosmic background.
$T_{0}\equiv h\nu/k$, where $\nu$ is the transition frequency, and $h$ and $k$
represent Planck's and Boltzmann's constants, respectively.
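As an illustrative numerical check of this expression (the excitation temperature, frequency, and optical depth below are assumed values, not fit results):

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def j_nu(temp, nu):
    """Planck-corrected brightness temperature J_nu(T) = T0 / (exp(T0/T) - 1)."""
    t0 = H * nu / K
    return t0 / math.expm1(t0 / temp)

def t_mb(t_ex, nu, tau, f_b=1.0, t_bb=2.73):
    """Main-beam temperature: f_B * [J(T_ex) - J(T_bb)] * (1 - exp(-tau))."""
    return f_b * (j_nu(t_ex, nu) - j_nu(t_bb, nu)) * (1.0 - math.exp(-tau))

# Assumed values: T_ex = 4 K at ~86.34 GHz (near H13CN(1-0)) with tau = 1.
print(t_mb(4.0, 86.34e9, 1.0))  # ~0.7 K
```

Note that at millimetre frequencies and temperatures of a few kelvin the Rayleigh-Jeans approximation fails, which is why the Planck-corrected $J_{\nu}(T)$ is used instead of $T$ itself.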
If both the source and the beam are Gaussian shaped, the beam filling factor,
$f_{\rm B}$, is given by
\begin{equation}
f_{\rm B}=\frac{\Omega_{\rm S}}{\Omega_{\rm B}+\Omega_{\rm S}}\,,
\end{equation}
where $\Omega_{\rm B}=1.133\theta_{\rm B}^{2}$ and
$\Omega_{\rm S}=1.133\theta_{\rm S}^{2}$ denote the solid angles covered by the
beam and the source, respectively, while $\theta_{\rm B}$ and $\theta_{\rm S}$ are
the half-power beamwidths
of the beam and the source, respectively, the latter
evaluated from the region where the line intensity exceeds
50\% of the peak value. We found $f_{\rm B}$ to be equal to
unity for
most of the tracers, because $\Omega_{\rm S}$ is at least one order of magnitude
greater than $\Omega_{\rm B}$, except for
H$^{13}$CN and
HN$^{13}$C in \object{L1521E} ($f_{\rm B}=0.88$).
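A quick numerical check of the filling-factor expression (the angular sizes below are invented for illustration; only the value $f_{\rm B}=0.88$ quoted above comes from our data):

```python
def filling_factor(theta_b, theta_s):
    """Beam filling factor for Gaussian beam and source; since
    Omega = 1.133 * theta**2 for both, the 1.133 factor cancels."""
    return theta_s**2 / (theta_b**2 + theta_s**2)

# Invented sizes: a source three times larger than the beam gives f_B = 0.9;
# f_B = 0.88 corresponds to theta_S ~ 2.7 theta_B.
print(filling_factor(1.0, 3.0))            # 0.9
print(round(filling_factor(1.0, 2.7), 2))  # 0.88
```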
An accurate determination of the
excitation temperature of HN$^{13}$C and H$^{13}$CN is
crucial: the gas density of the source sample
($\sim10^5$ cm$^{-3}$,
see Tafalla et al. \cite{tm04}, Tafalla \& Santiago \cite{ts04}, and
Crapsi et al. \cite{cc05} for \object{L1498}, \object{L1521E}, and \object{TMC 2}, respectively)
corresponds to a region where the $T_{\rm ex}$ of the two
isotopologues changes rapidly with density, namely between the radiation- and
collision-dominated density ranges.
We can thus in principle use our $T_{\rm ex}$
determinations to constrain the density and temperature in the
region where the H$^{13}$CN and HN$^{13}$C lines are formed.
Since these transitions are emitted in the region where CO is
depleted (see Sect.~\ref{HCN:depletion}),
we gain information about the high density
``core of
the core''. It is worth noting that our observations are also
a test of the collisional rates for HCN and HNC
(Sarrasin et al. \cite{sa10}, Dumouchel et al. \cite{df10}).
We illustrate this in the next section.
\subsection{Comparison of observed and expected excitation temperature}\label{RADEX}
In Fig.~\ref{17134fg12}, we show the comparison between the
values of the excitation temperatures of H$^{13}$CN and HN$^{13}$C evaluated from the
simultaneous fit of the hyperfine components for the three sources examined, revealing
that $T_{\rm ex}$(HN$^{13}$C) is essentially always greater than $T_{\rm ex}$(H$^{13}$CN)
which we presume to be due to the differing
collisional rates. In the same plot, two curves trace the
values of $T_{\rm ex}$ computed using RADEX (van der Tak et al. \cite{vdtb07}), which
uses
the new collisional rates for HNC (Sarrasin et al. \cite{sa10}, Dumouchel et al. \cite{df10}),
no longer assumed to be equal to those of HCN.
These curves assume
kinetic temperatures, $T_{\rm kin}$, equal to 6 and 10 K, and typical values of
$2\times10^{12}$ cm$^{-2}$ and 0.2 km s$^{-1}$ for column density and line width,
respectively.
We also assumed that the species are cospatial and hence come from regions of the
same kinetic temperature.
Notice that we rely on the fact
that these lines are not too optically thick and thus $T_{\rm ex}$ determinations are
relatively
insensitive to column density and line width.
We note an impressively good agreement between theory and observations, even though \object{L1498}
shows values slightly below the expected theoretical trend.
For \object{L1498}, we conclude from the RADEX results that the observed
$T_{\rm ex}$ are consistent with a density of a few times 10$^4$ cm$^{-3}$ but
this estimate is sensitive to the assumed temperature. If the kinetic
temperature is 10 K, then we can exclude densities as high as
10$^5$ cm$^{-3}$ in the region where H$^{13}$CN and HN$^{13}$C are undepleted.
On the other hand, for a temperature of 6 K, such densities are
possible. Clearly, more sophisticated models are needed to
break this degeneracy. In general, our results are consistent
with the conclusion of Tafalla et al. (\cite{tm06}) that the HCN-HNC ``hole''
is smaller than that seen in CO isotopologues. This suggests that
some CO, but much less than the canonical abundance of about 10$^{-4}$, is present in
the CO depleted region to supply carbon for HCN and HNC.
For \object{L1521E} and \object{TMC 2}, a central kinetic temperature of 6 K seems possible.
From dust emission models, they are thought to reach central densities
of around $3\times10^5$ cm$^{-3}$ (Tafalla \& Santiago~\cite{ts04}, Crapsi et al.~
\cite{cc05})
which would correspond to
excitation temperatures for H$^{13}$CN and HN$^{13}$C higher than
observed if the temperature were as high as 10 K.
\begin{figure}[!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg12}}
\caption{Excitation temperatures of H$^{13}$CN(1$-$0) versus
HN$^{13}$C(1$-$0) evaluated
from observations of \object{L1498} ({\em yellow squares}), \object{L1521E} ({\em green circles}), and
\object{TMC 2} ({\em black triangles}). The {\em solid} and {\em dashed black lines} trace the
values for $T_{\rm ex}$ computed using RADEX for kinetic
temperatures, $T_{\rm kin}$, of 6 and 10 K; {\em red empty circles} and {\em red solid circles} show temperatures
where number density assumes values equal to 10$^4$ and 10$^5$ cm$^{-3}$ for
$T_{\rm kin}=6$ K and 10 K, respectively.
The {\em dotted black line}
depicts the positions where the two excitation temperatures would be equal.}
\label{17134fg12}
\end{center}
\end{figure}
In Fig.~\ref{17134fg13},
we show the observed excitation temperature of
HN$^{13}$C and H$^{13}$CN
against the H$_{2}$ column density inferred from
the dust emission. In all three sources, one notes a clear
correlation between $T_{\rm ex}$ and $N$(H$_{2}$).
This is an important confirmation
of the hypothesis that $T_{\rm ex}$ is a measure of collisional rates and
that the dust emission peak is also a peak in hydrogen number
density. It is also noteworthy that in \object{TMC 2}, which has the largest
central column density of our three sources, the excitation
temperatures continue to increase up to column densities of $8\times10^{22}$ cm$^{-2}$.
This suggests that HN$^{13}$C and H$^{13}$CN are still present in the gas phase
at densities close to the maximum value in \object{TMC 2} ($3\times10^5$ cm$^{-3}$
according to Crapsi et al. \cite{cc05}). On the other hand, the $T_{\rm ex}$ values at
the dust peak of \object{TMC 2} are not greatly different from that measured
at the dust peak of \object{L1498} suggesting perhaps that the central
temperature is lower in \object{TMC 2}. In any case, we conclude that
excitation temperature determinations of H$^{13}$CN and HN$^{13}$C are a
powerful technique for investigating physical parameters towards the
density peaks of cores.
\begin{figure*}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg13}}
\caption{Excitation temperatures of H$^{13}$CN(1$-$0) and
HN$^{13}$C(1$-$0), {\em empty} and {\em solid points}, respectively, as a function of
the molecular hydrogen column density
from observations of \object{L1498} ({\em yellow squares}), \object{L1521E} ({\em green circles}), and
\object{TMC 2} ({\em black triangles}).}
\label{17134fg13}
\end{center}
\end{figure*}
\subsection{Monte Carlo treatment of radiative transfer in \object{L1498}}
\object{L1498} has been already extensively studied in molecular lines
(Tafalla et al. \cite{tm04}, \cite{tm06}, Padovani et al. \cite{pw09}) and we used
the Monte Carlo radiative transfer code in Tafalla et al. (\cite{tm02}) to model
H$^{13}$CN and HN$^{13}$C exploiting the recent collisional rate calculations
(Sarrasin et al. \cite{sa10}, Dumouchel et al. \cite{df10}).
The core model is the same as the one used in the analysis of the molecular survey
of \object{L1498} published in Tafalla et al. (\cite{tm06}).
The core has a density distribution
derived from the dust continuum map and an assumed constant gas temperature of 10 K,
as suggested by the ammonia analysis. The radiative transfer is solved with a slightly
modified version of the Monte Carlo model from Bernes (\cite{b79}).
The molecular parameters were taken from the
LAMDA\footnote{\tt http://www.strw.leidenuniv.nl/$\thicksim$moldata/.} database, where
the rates are computed for
collisions with He; we assumed that the H$_{2}$ rates are larger than the He rates
by a factor of 2, the same criterion used for HCN in Tafalla et
al.~(\cite{tm06}).
With regard to H$^{13}$CN,
Tafalla et al. (\cite{tm06}) made use
of the collisional rates of Monteiro \& Stutzki (\cite{ms86}), which include
the hyperfine structure, complemented with those of Green \&
Thaddeus (\cite{gt74}) for higher $J$. In this model, the abundance law is a step
function with an outer value of $1.7\times10^{-10}$ and a central hole of
$8\times10^{16}$ cm.
We ran a model with the same abundance law, but this time using the
new LAMDA
file for HCN, which does not account for
hyperfine structure.
To compensate for the lack of hyperfine structure, which spreads the photons in
velocity, we broadened
the line by increasing the amount of turbulence from 0.075 km s$^{-1}$ to 0.2
km s$^{-1}$.
The upper panel of Fig.~\ref{17134fg14} shows this alternative model, which nicely fits
the radial profile of the observed H$^{13}$CN intensities. This
allows us to conclude that the use of the LAMDA file for HCN without
hyperfine structure has little effect on the abundance determination, the result
being as good as that of Tafalla et al. (\cite{tm06}).
For HN$^{13}$C, we used the collisional rates for the main
isotopologue, HNC.
As for H$^{13}$CN, the HNC hyperfine structure is not taken into account,
so we again set the turbulent velocity parameter to 0.2 km s$^{-1}$.
The lower panel of Fig.~\ref{17134fg14} shows the observed HN$^{13}$C
integrated intensities
together with two fits: one assumes equal abundances for H$^{13}$CN and HN$^{13}$C, the other
assumes that the HN$^{13}$C abundance is 1.2 times that of H$^{13}$CN. These
radial profiles suggest an abundance ratio between the two molecules
close to 1, likely closer to 1.2 towards the centre.
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg14.ps}}
\caption{Radial profile of observed H$^{13}$CN(1$-$0) and HN$^{13}$C(1$-$0)
integrated intensities
(upper and lower panel, respectively) and model prediction for
a core with a central hole of $8\times10^{16}$ cm and
an outer abundance
value of $1.7\times10^{-10}$ ({\em red solid lines}) and $2.04\times10^{-10}$
({\em blue dashed line}). HN$^{13}$C(1$-$0)
data come from this study and H$^{13}$CN(1$-$0) data
come from this study and Tafalla et
al. (\cite{tm06}).}
\label{17134fg14}
\end{center}
\end{figure}
The Monte Carlo code also computes the radial profile of excitation temperature, which
is presented in the upper panel of
Fig.~\ref{17134fg15}. We notice that $T_{\rm ex}$ for both species
decreases with the radius from about 4.8 K and 3.8 K for HN$^{13}$C
and H$^{13}$CN, respectively,
in the core interior, towards the cosmic background temperature near the outer edge of
the core. Even more interestingly, we found $T_{\rm ex}$(HN$^{13}$C)
to be systematically higher than
$T_{\rm ex}$(H$^{13}$CN), in good agreement with the values computed from observations
(see lower panel of Fig.~\ref{17134fg15} compared with
Fig.~\ref{17134fg12}). It is important to stress that
$T_{\rm ex}$ values from
observations refer to some kind of weighted mean along the line of sight
for different positions, while $T_{\rm ex}$ values from Monte Carlo analysis are
related to each shell in the core. This means that the two numbers are
closely connected, but do not exactly have the same meaning and a
proper comparison would require simulating the hyperfine
analysis in the Monte Carlo spectra.
However, the effect is global over the cloud and affects every layer,
so it cannot be ignored in any analysis.
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg15.ps}}
\caption{Upper panel: radial profile of excitation temperature for
H$^{13}$CN(1$-$0), {\em red solid line}, and HN$^{13}$C(1$-$0), {\em blue
dashed line}, in \object{L1498}
as predicted by our best
fit Monte Carlo model; the {\em vertical dashed line} represents the central
hole radius and the {\em horizontal dashed line} shows the cosmic background
temperature limit. Lower panel: comparison between HN$^{13}$C(1$-$0) and
H$^{13}$CN(1$-$0) local values of the excitation temperature in each core shell;
the {\em dashed line} depicts the positions where the two excitation temperatures
would be equal.}
\label{17134fg15}
\end{center}
\end{figure}
\section{Column density and abundance estimates}\label{HCN:Nx}
Based on our
earlier discussion, HCN hyperfine components are too optically thick
to be used for column density determinations.
This means that, while H$^{13}$CN, as well as HN$^{13}$C, originates
in the central part
of the core, the HCN emission is dominated by the foreground layer and hence
does not probe the centre of the core.
Moreover, there is an unequivocal difference between the H$^{13}$CN($F=2\rightarrow1$)
and the HCN($F=0\rightarrow1$) profiles (see Fig.~\ref{17134fg9}):
H$^{13}$CN appears much more
Gaussian than the weakest HCN line, and using it is therefore likely to
underestimate the optical depth of HCN.
However, we found that even HN$^{13}$C and H$^{13}$CN are moderately optically thick
(see Sect.~\ref{HCN:obsresults}) and in this case one can derive the column
density of the $j$th level, $N_{j}$, by integrating Eq.~(\ref{TMB}) over frequency
to obtain the integrated intensity, $W_{j}$,
with the optical depth given by
\begin{equation}\label{tau}
\tau=\frac{c^{3}}{8\pi\nu_{ji}^{3}}A_{ji}N_{j}(e^{h\nu_{ji}/kT_{\rm ex}}-1)\ \phi(\nu)\,,
\end{equation}
where $A_{ji}$ is the Einstein coefficient, and $\phi(\nu)$ the profile function,
which is a sum of Gaussians (assuming a Maxwellian distribution of the particle
velocities) with the appropriate weights and shifts with respect to the
central frequency, properly accounting for the hyperfine structure.
As shown in Fig.~\ref{17134fg16}, for small optical depths ($\tau<1$)
$N_{j}$ is directly proportional to $W_{j}$ while,
as $\tau$ increases, the curve flattens, although the flattening is not very sharp
because when
the main component is thick, the satellites are still thin.
Allowing for the increasing optical depth is important in order not to
underestimate the column density:
for instance, at the emission peak of H$^{13}$CN in \object{L1498}, where
$\tau=4.58$ (see Sect.~\ref{HCN:obsresults}), the linear approximation causes an
inaccuracy of about 45\%.
This percentage depends mainly on the optical depth, not on $T_{\rm ex}$: for
decreasing excitation temperatures, the slope of the linear approximation
decreases, but the flattening due to opacity is also accentuated.
HN$^{13}$C shows a similar deviation.
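The flattening can be sketched numerically by integrating $1-e^{-\tau(v)}$ over a profile built from three Gaussian hyperfine components. The relative strengths 1/9, 3/9, and 5/9 below are those of HCN(1$-$0); the velocity offsets, line width, and normalisation are arbitrary, so the deviations printed are qualitative only:

```python
import math

F = (1/9, 3/9, 5/9)           # HCN(1-0) relative hyperfine strengths
OFFSETS = (-20.0, 0.0, 25.0)  # arbitrary, non-overlapping velocity offsets

def integrated_intensity(tau0, n=4000, vmax=40.0):
    """Integral of (1 - exp(-tau(v))) over velocity (arbitrary units),
    with the total optical depth tau0 shared among components via F."""
    dv = 2.0 * vmax / n
    total = 0.0
    for i in range(n):
        v = -vmax + (i + 0.5) * dv
        tau = sum(f * tau0 * math.exp(-0.5 * (v - v0) ** 2)
                  for f, v0 in zip(F, OFFSETS))
        total += (1.0 - math.exp(-tau)) * dv
    return total

# Deviation from the optically thin (linear) extrapolation:
thin_slope = integrated_intensity(1e-3) / 1e-3
for tau0 in (0.5, 2.0, 4.58):
    w = integrated_intensity(tau0)
    print(tau0, 1.0 - w / (thin_slope * tau0))
```

With these (arbitrary) parameters the deviation at $\tau=4.58$ comes out of order 40\%, qualitatively consistent with the inaccuracy quoted above; the exact number depends on the adopted offsets and widths.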
\begin{figure}[!htbp]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg16}}
\caption{H$^{13}$CN(1$-$0) integrated intensity as a function of the
column density of the $J=1$ level for
$T_{\rm ex}=3.56$ K (the highest excitation temperature observed in \object{L1498}).
The percentage values show the
deviation of the linear approximation with respect to the correct value taking
account for optical depth.}
\label{17134fg16}
\end{center}
\end{figure}
Hence,
we determine the column density $N_{j}$ corresponding to the observed $W_{j}$
by carrying out this procedure with the maximum and the minimum values of the
excitation temperature, in order to estimate the errors in $N_{j}$.
It is important to remark that in this way column densities are estimated
directly from the integrated intensity of the spectra, avoiding the use of
optical depth values evaluated from the fit of the hyperfine components, which
often have large uncertainties (see Fig.~\ref{17134fg4}).
Finally, for the total column density, $N$, it holds
\begin{equation}\label{NjN}
\frac{N_{j}}{N}=\frac{g_{j}}{Q}e^{-E_{j}/kT_{\rm ex}}\,,
\end{equation}
where
$Q=\Sigma_{j=0}^{\infty}g_{j}\ e^{-E_{j}/kT_{\rm ex}}$ is the partition function,
$E_{j}$ and $g_{j}$ being the energy and the statistical weight of the upper
$j$th level, respectively.
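For a linear rotor such as HCN, this relation is easy to invert numerically. In the sketch below the rotational constant ($\simeq$43.17 GHz, roughly appropriate for H$^{13}$CN) and the input level column density are assumptions for illustration, and hyperfine structure is neglected:

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def total_column_density(n_j, j, t_ex, b_rot, jmax=50):
    """Total column density of a linear rotor from the column density n_j of
    level j, assuming a single excitation temperature:
    N = n_j * Q / (g_j * exp(-E_j / k T_ex)),
    with E_j = h * b_rot * j * (j + 1) and g_j = 2j + 1 (b_rot in Hz)."""
    def energy(l):
        return H * b_rot * l * (l + 1)
    q = sum((2 * l + 1) * math.exp(-energy(l) / (K * t_ex))
            for l in range(jmax))
    return n_j * q / ((2 * j + 1) * math.exp(-energy(j) / (K * t_ex)))

# Assumed inputs: N(J=1) = 1e12 cm^-2 at T_ex = 4 K.
print(total_column_density(1e12, 1, 4.0, 43.17e9))  # ~2.2e12 cm^-2
```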
Hence, we derive the column density and the abundance of these species with
respect to molecular hydrogen for the main isotopologues HNC and HCN,
assuming the canonical isotopic [$^{12}$C]/[$^{13}$C] ratio
(Milam et al. \cite{ms05}).
Figure~\ref{17134fg17} shows HNC and HCN column densities as a function of H$_{2}$
column density which has been calculated
from the millimetre dust emission, assuming a dust
opacity per unit mass of 0.005 cm$^{2}$ g$^{-1}$ and a dust temperature of 10 K.
Notice the correlation between the HNC and HCN column densities and their
optical depths in Fig.~\ref{17134fg4} for \object{L1498}: for
$N$(H$_{2})\gtrsim2.5\times10^{22}$ cm$^{-2}$, $N$(HNC) becomes higher than
$N$(HCN), just as $\tau(\mathrm{HNC})>\tau(\mathrm{HCN})$.
Moreover, the HNC and HCN abundances increase towards the peak
in \object{TMC 2} but not in the other two cores.
As an example, Table~\ref{coldensabund} lists the values of column density
and abundance with respect to H$_{2}$ for HNC and HCN at the dust emission peak.
\begin{figure*}[!htbp]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg17}}
\caption{Column density of HNC(1$-$0), {\em solid squares}, and HCN(1$-$0), {\em empty squares} for the source sample as a function of
the H$_{2}$ column density.}
\label{17134fg17}
\end{center}
\end{figure*}
\begin{table}[!h]
\caption{Column densities and abundances for HNC and HCN
at the dust emission peak in our source sample.}
\begin{center}
\begin{tabular}{lcc}
\hline\hline
source & $N$(HNC) & $N$(HCN)\\
& [$10^{14}$ cm$^{-2}$] & [$10^{14}$ cm$^{-2}$]\\
\hline
\object{L1498} & 1.58(0.56) & 1.24(0.18)\\
\object{L1521E} & 0.51(0.14) & 0.47(0.12)\\
\object{TMC 2} & 2.41(0.42) & 1.86(0.26)\\
\hline
& [HNC]/[H$_2$] & [HCN]/[H$_2$]\\
& [10$^{-9}$] & [10$^{-9}$]\\
\hline
\object{L1498} & 5.00(1.95) & 3.92(0.96)\\
\object{L1521E} & 1.36(0.42) & 1.27(0.37)\\
\object{TMC 2} & 3.05(0.57) & 2.35(0.36)\\
\hline
\end{tabular}\\[2pt]
\end{center}
\label{coldensabund}
\end{table}%
From the point of view of the chemistry, it is even more interesting to check the
ratio of abundances
[HNC]/[HCN] as
a function of the column density of molecular hydrogen.
Results
for the three sources of our sample are shown in Fig.~\ref{17134fg18}.
\begin{figure*}[!htbp]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg18}}
\caption{Abundance ratio [HNC]/[HCN] for the source sample as a function of
the H$_{2}$ column density.}
\label{17134fg18}
\end{center}
\end{figure*}
We found a rather constant value for this
ratio, with weighted mean values of
$0.94\pm0.11$ for \object{L1498}, $1.14\pm0.11$ for \object{L1521E}, and $1.05\pm0.10$ for \object{TMC 2},
confirming the result achieved
by Sarrasin et al. (\cite{sa10}) and Dumouchel et al. (\cite{df10}), and
consistent
with detailed chemical models available in the literature
(e.g. Herbst et al. \cite{ht00}).
This result resolves an important discrepancy
between theory and observations which has lasted almost twenty years. There
are rather few clear predictions of chemical theory, but we can confirm that HNC and
HCN seem to have similar abundances.
However, given that what we actually measure is the column
density of the isotopologues,
this conclusion implies that any fractionation should be the same for the two of
them.
Observations of higher-level transitions of HN$^{13}$C and H$^{13}$CN, as well
as of H$^{15}$NC and HC$^{15}$N, would help to refine the determination of column
densities and would also allow us to estimate the isotopic [$^{14}$N]/[$^{15}$N] ratio.
\section{Conclusions}\label{HCN:conclusions}
We have studied in this article the behaviour of the $J=1\rightarrow0$
transitions of HCN, H$^{13}$CN, and HN$^{13}$C as a function
of position in the three starless cores \object{L1498}, \object{L1521E}, and \object{TMC 2}.
We also observed N$_{2}$H$^{+}$(1$-$0) and C$^{18}$O(2$-$1) in \object{TMC 2}. Our main
conclusions are as follows.
\begin{itemize}
\item[1.] H$^{13}$CN(1$-$0) and HN$^{13}$C(1$-$0) are often assumed
to be optically thin when computing
column densities. Our results show that, in the sources studied here,
this assumption is inaccurate; indeed, the optical depths are sufficiently
high to allow a reasonable estimate of the excitation temperatures of
these transitions, which can be compared with model predictions.
\item[2.] The plot of H$^{13}$CN(1$-$0) excitation temperature
against HN$^{13}$C(1$-$0)
excitation temperature follows the curve expected based on
the collision rates recently computed by Sarrasin et al. (\cite{sa10}) and
Dumouchel et al. (\cite{df10}), thus confirming these results.
This plot also stresses the importance of
computing potential surfaces and
collisional coefficients separately for each isotopologue.
Moreover these
excitation temperatures correlate well with H$_{2}$ column density
estimated on the basis of dust emission showing that dust
emission peaks really do trace peaks in the gas density.
\item[3.] These latter results combined with our intensity-offset
plots demonstrate convincingly that, at least in \object{TMC 2}, HCN and HNC survive in the
gas phase at densities above 10$^5$ cm$^{-3}$ where CO has depleted out.
The implication of this is likely that CO survives at abundances
of a few percent of its canonical value of around $10^{-4}$ and
supplies the carbon required in lower abundance species.
\item[4.] The profiles of the three satellites of HCN(1$-$0) become increasingly
skewed with increasing optical depth, while the corresponding
H$^{13}$CN(1$-$0) profiles remain reasonably symmetric. This behaviour suggests the
possibility of modelling the velocity field and abundance along the line of sight.
\item[5.] We have used a model of the density distribution in \object{L1498} to
describe the HN$^{13}$C(1$-$0) and
H$^{13}$CN(1$-$0) results and find reasonable agreement with
a model based on the previous observations of Tafalla et al. (\cite{tm06})
containing a central ``depletion hole'' of radius $8\times10^{16}$ cm.
This does not exclude models without a depletion hole, but does
confirm our conclusions on the excitation discussed above.
Indeed, rather surprisingly, our results suggest that HN$^{13}$C(1$-$0) and
H$^{13}$CN(1$-$0) trace the high density nuclei of these cores when compared
with other carbon-bearing species.
\item [6.] Our results are consistent with the models of Herbst et
al.~(\cite{ht00})
who found that HNC and HCN should have similar abundances in prestellar
cores.
\end{itemize}
\begin{acknowledgements}
This work has benefited from research funding from the European Community's Seventh Framework Programme. We thank the anonymous referee for her/his very interesting comments that helped to improve the paper.
\end{acknowledgements}
\begin{appendix}
\section{Non-LTE hyperfine populations}\label{nonltepop}
For a homogeneous slab with LTE between different
hyperfine levels, one has
\begin{equation}
R_{ij}=\frac{1-\exp(-f_{i}\tau)}{1-\exp(-f_{j}\tau)},
\end{equation}
where $\tau$ is the total transition optical depth
and $f_{i}$ is the relative line strength of the $i$th component as
in Table~\ref{tab:HCN10}.
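As a numerical aside (our illustration, not part of the original analysis), the following Python snippet evaluates this LTE ratio for the HCN(1$-$0) relative strengths listed in Table~\ref{tab:HCN10}: in the optically thin limit $R_{32}$ tends to $f_{3}/f_{2}\approx0.2$, while for large $\tau$ all components saturate and $R_{32}\rightarrow1$.

```python
import math

def lte_ratio(f_i, f_j, tau):
    """LTE hyperfine intensity ratio R_ij for a homogeneous slab with
    total optical depth tau and relative line strengths f_i, f_j."""
    return (1.0 - math.exp(-f_i * tau)) / (1.0 - math.exp(-f_j * tau))

f = {1: 0.333, 2: 0.556, 3: 0.111}      # HCN(1-0) relative strengths

print(lte_ratio(f[3], f[2], 1e-4))      # optically thin: ~f_3/f_2 ~ 0.2
print(lte_ratio(f[3], f[2], 100.0))     # optically thick: saturates to ~1
```

These two limits are exactly the benchmarks used in the discussion of the observed ratios below.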
We applied this procedure to all the transitions,
except for HN$^{13}$C(1$-$0), because of the blending of the hyperfine components.
Using the same method adopted in Padovani et al. (\cite{pw09}) for C$_{2}$H, we consider a
homogeneous slab, then a two-layer
model, and we compare different pairs of ratios against each other to quantify
the possible departure from LTE. As for C$_{2}$H, we conclude that a two-layer
model also cannot explain the observed intensities, and that a proper non-LTE treatment is required.
Comparing the upper and the lower panels of Fig. \ref{17134fg19} related to HCN(1$-$0)
and
H$^{13}$CN(1$-$0), respectively, it is clear that real
departures from LTE are present and are stronger in \object{L1498} than in \object{L1521E} and
\object{TMC 2}. In \object{L1498}, $R_{32}$, the ratio between the weakest (88633 MHz) and the strongest
(88631 MHz) component, is
larger than unity and $R_{31}>1$ in all the positions.
This suggests a very high self (or foreground) absorption of the strongest component,
as found for the C$_{2}$H(1$-$0) emission in the same core
(Padovani et al. \cite{pw09}).
The other two cores show
ratios which deviate from the LTE curve, revealing again the presence of optical depth
effects. For instance, in the optically thin limit, $R_{32}$ should be equal to about
0.2, but the arithmetic mean of $R_{32}$ is $\sim0.60$
for \object{L1521E} and $\sim0.47$ for \object{TMC 2}.
Finally, an important remark follows from the H$^{13}$CN(1$-$0) ratios
(lower panels): minor
deviations from LTE are present even in this less abundant isotopologue,
whose lines should be optically thinner. Although optically thick lines
are chiefly responsible for the appearance of non-LTE effects, the density can be
high enough to allow transitions within the same rotational level
(e.g. the transition $J,F = 1,2\rightarrow1,1$
has rates of the same order of magnitude as transitions between different
rotational levels; see Monteiro \& Stutzki \cite{ms86}).
\begin{figure}[!h]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[angle=0]{17134fg19}}
\caption{Ratio of the integrated intensities of couples of components of
HCN(1$-$0) (upper panels),
and H$^{13}$CN(1$-$0) (lower panels), where $R_{ij}$ represents the ratio between
the integrated intensities $W_{i}$ and $W_{j}$
(see Table \ref{tab:HCN10} for component indices).
Observational data: \object{L1498} ({\em yellow squares}),
\object{L1521E} ({\em green circles}),
and \object{TMC 2} ({\em black triangles}).
One-layer model ({\em black solid curve}),
two-layer model ({\em magenta dashed curves}).}
\label{17134fg19}
\end{center}
\end{figure}
\section{Spectroscopic data and observational parameters}\label{app2}
\begin{table}[!h]
\caption{HCN(1$-$0) and H$^{13}$CN(1$-$0) frequencies of the hyperfine components
(From the JPL Molecular Spectroscopy Database: {\tt http://spec.jpl.nasa.gov}).}
\begin{center}
\begin{tabular}{cccc}
\hline\hline
Comp no. & $F^{\prime}-F$ & Frequency [MHz] & $f$\\
\hline
\multicolumn{4}{c}{HCN(1$-$0)}\\
\hline
1 & 1--1 & 88630.4160 & 0.333\\
2 & 2--1 & 88631.8470 & 0.556\\
3 & 0--1 & 88633.9360 & 0.111\\
\hline
\multicolumn{4}{c}{H$^{13}$CN(1$-$0)}\\
\hline
1 & 1--1 & 86338.7670 & 0.333\\
2 & 2--1 & 86340.1840 & 0.556\\
3 & 0--1 & 86342.2740 & 0.111\\
\hline
\end{tabular}
\end{center}
\label{tab:HCN10}
\normalsize
\end{table}%
\begin{table}[!h]
\caption{HN$^{13}$C(1$-$0) frequencies of the hyperfine components.
For a complete list of
the hyperfine components, see van der Tak et al. (\cite{vdtm09}). In fact, there are four
overlapping hyperfine components with $F_{2}^{\prime}=2$, three components with $F_{2}^{\prime}=3$, and
three components with $F=1\rightarrow1$.}
\begin{center}
\begin{tabular}{cccc}
\hline\hline
Comp no. & transition & Frequency [MHz] & $f$\\
\hline
\multicolumn{4}{c}{HN$^{13}$C(1$-$0)}\\
\hline
1 & $F_{1}'-F_{1}=0-1$ & 87090.675 & 0.065\\
2 & $\;\;F_{2}'=2$ & 87090.791 & 0.264\\
3 & $\;\;F_{2}'=3$ & 87090.834 & 0.432\\
4 & $F_{1}'-F_{1}=1-1$ & 87090.886 & 0.239\\
\hline
\end{tabular}
\end{center}
\label{tab:HN13C10}
\end{table}%
\begin{table}[!h]
\caption{Summary of observed molecules together with the observing parameters:
half power beamwidth, beam and forward efficiencies, system temperature, and
precipitable water vapor.}
\begin{center}
\begin{tabular}{cccccc}
\hline\hline
transition & HPBW & B$_{\rm eff}$ & F$_{\rm eff}$ & T$_{\rm sys}$ & pwv\\
& [$^{\prime\prime}$] & & & [K] & [mm]\\
\hline
HCN(1$-$0) & 28 & 0.77 & 0.95 & $\sim$130 & 1$-$2\\
H$^{13}$CN(1$-$0) & 28 & 0.77 & 0.95 & $\sim$120 & 1$-$2\\
HN$^{13}$C(1$-$0) & 28 & 0.77 & 0.95 & $\sim$140 & 1$-$2\\
N$_{2}$H$^{+}$(1$-$0) & 26 & 0.77 & 0.95 & $\sim$160 & 1$-$2\\
C$^{18}$O(2$-$1) & 11 & 0.55 & 0.91 & $\sim$320 & 1$-$2\\
\hline
\end{tabular}
\end{center}
\label{tab:obspam}
\end{table}%
\end{appendix}
\newpage
\section*{Introduction}
The awesome statistics of the four LEP collaborations have pinned down
with great precision a host of measurable parameters in the standard
model\cite{LEP94}. The only quantity that shows a possible significant
discrepancy with the theoretical prediction of the standard model is $R_b$
which is defined as
\begin{equation}
R_b \equiv {{\Gamma (Z \rightarrow b \overline b)} \over {\Gamma (Z
\rightarrow {\rm hadrons})}}.
\end{equation}
Using $m_t = 175$ GeV and $m_H = 300$ GeV, the standard model predicts
that $R_b = 0.2158$, whereas LEP obtained $R_b = 0.2202 \pm 0.0020$ if
the similarly defined $R_c$ is assumed to be independent. If the latter
is fixed at its standard-model value, then $R_b = 0.2192 \pm 0.0018$.
In either case, the excess is about $2\% \pm 1\%$. If this is taken
seriously, physics beyond the standard model is indicated.
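The size of the quoted excess follows directly from these numbers; the short Python check below (our aside, not part of the original text) reproduces the ``$2\% \pm 1\%$'' estimate for both treatments of $R_c$.

```python
RB_SM = 0.2158                          # standard-model prediction
measurements = {
    "R_c independent": (0.2202, 0.0020),
    "R_c fixed":       (0.2192, 0.0018),
}
for label, (rb, err) in measurements.items():
    excess = (rb - RB_SM) / RB_SM       # fractional excess over the SM
    sigma = err / RB_SM                 # fractional 1-sigma uncertainty
    print(f"{label}: {100 * excess:.1f}% +- {100 * sigma:.1f}%")
```

Both cases give an excess of roughly $2\%$ with an uncertainty close to $1\%$.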
\section*{Two Higgs Doublets}
The simplest extension of the standard model is to have two Higgs doublets
instead of just one. The relevance of this model to $R_b$ was studied
in detail already a few years ago\cite{hollik91}. To establish notation,
let the two Higgs doublets be given by
\begin{equation}
\Phi_i = \left( \begin{array} {c} \phi_i^+ \\ \phi_i^0 \end{array} \right)
= \left[ \begin{array} {c} \phi_i^+ \\ 2^{-1/2} (v_i + \eta_i + i\chi_i)
\end{array} \right].
\end{equation}
Let $\tan \beta \equiv v_2/v_1$, then
\begin{eqnarray}
h^+ &=& \phi_1^+ \cos \beta - \phi_2^+ \sin \beta, \\
h_1 &=& \eta_1 \sin \alpha + \eta_2 \cos \alpha, \\
h_2 &=& \eta_1 \cos \alpha - \eta_2 \sin \alpha, \\
A &=& \chi_1 \cos \beta - \chi_2 \sin \beta.
\end{eqnarray}
Note that the $\overline b b A$ and $\overline t b h^+$ couplings involve
the ratio $m_b \tan \beta / M_W$, hence they could be important for large
values of $\tan \beta$. It was shown\cite{hollik91} that for $\tan \beta
= 70 \simeq 2m_t/m_b$, the $R_b$ excess peaks at about 4\% near $m_A =
m_{h_1} \simeq 40$ GeV for $\alpha = 0$. However, since $Z \rightarrow
Ah_1$ is not observed, $m_A + m_{h_1} > M_Z$ is a necessary constraint.
We show in Fig.~1 the contours in the $m_{h_1} - m_A$ plane for 3 values
of $R_b$. It is clear that relatively light scalar bosons are required
if the $R_b$ excess is to be explained.
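As a quick numerical check of the enhancement discussed above (our aside; the fiducial values $m_b \approx 5$ GeV and $M_W \approx 80.4$ GeV are our assumptions rather than inputs quoted in the text), the benchmark $\tan\beta = 70 \simeq 2m_t/m_b$ indeed makes the coupling factor $m_b \tan\beta / M_W$ of order a few:

```python
M_W = 80.4        # W boson mass [GeV] (assumed fiducial value)
m_t = 175.0       # top quark mass [GeV], as used in the text
m_b = 5.0         # b quark mass [GeV] (assumed fiducial value)

tan_beta = 2.0 * m_t / m_b              # = 70, the benchmark of the text
coupling = m_b * tan_beta / M_W         # enhancement factor in the couplings
print(tan_beta, round(coupling, 2))     # 70.0 and ~4.4
```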
\begin{flushright}
UCRHEP-T144\\
TRI-PP-95-16\\
April 1995
\end{flushright}
\input psfig
\begin{figure}
\centerline{\psfig{figure=fig1.eps,height=3.0in,width=3.0in}}
\vspace {0.2in}
\caption{$R_b=0.2192$ (solid), $0.2174$ (dashed) and $0.2164$ (dotted)
contours in the $m_{h_1} - m_A$ plane for $\alpha=0$ and $\tan \beta = 70$.
The straight line corresponds to $m_A + m_{h_1} = M_Z$. We have also assumed
$m_{h^\pm} = m_{h_2} = 175$ GeV.}
\end{figure}
For $A(h_1)$ lighter than $M_Z$ and having an enhanced coupling to
$\overline b b$, the decay $Z \rightarrow b \overline b + A(h_1)$
becomes nonnegligible\cite{dzz91}. As an illustration, we show in
Fig.~2 the branching ratios of these two decays as functions of $m_A$
with the constraint $m_A + m_{h_1} = M_Z + 10$ GeV so that a reasonable
fit to the $R_b$ excess is obtained. It is seen that the sum of these
two branching ratios is at least of order $10^{-4}$. Once produced,
$A$ or $h_1$ decays predominantly into $b \overline b$ as well. Hence
this scenario for explaining $R_b$ can be tested at LEP if the
sensitivity for identifying a $b \overline b$ pair as coming from $A$
or $h_1$ in $b \bar{b} b \bar{b}$ final states can be pushed down
below $10^{-4}$.
\begin{figure}
\centerline{\psfig{figure=fig2.eps,height=3.0in,width=3.0in}}
\vspace{0.2in}
\caption{The branching ratios, $Br(Z \rightarrow b \bar{b} A)$ (dashed)
and $Br(Z \rightarrow b \bar{b} h_1)$ (dotted) and their sum (solid), as
functions of $m_A$ with the constraint $m_A + m_{h_1} = M_Z + 10$ GeV,
$\tan \beta = 70$, $\alpha = 0$, and $m_{h^\pm} = m_{h_2} = 175$ GeV.}
\end{figure}
\section*{Minimal Supersymmetric Standard Model}
In the Minimal Supersymmetric Standard Model (MSSM), there are two Higgs
doublets, but their parameters are further constrained, hence the
allowed region in the $m_{h_1} - m_A$ plane which gives a large enough
$R_b$ is further reduced by the experimental nonobservation of MSSM
signals at LEP\cite{sopczak94}.
There is of course another possible contribution to $R_b$ in the MSSM:
the $Z \rightarrow b_L b_L$ vertex may be enhanced by the supersymmetric
coupling of $b_L$ to a scalar top quark and a chargino\cite{misc}.
In this case, both of the new particles must again be light, but now $Z$
cannot decay into just one of these particles because of the assumed
conservation of $R$ parity, hence no further constraint is obtainable at LEP.
\section*{Necessary Top Decays}
Since $b_L$ is involved in any enhanced coupling to light
particles in explaining the $R_b$ excess, its doublet partner $t_L$
must necessarily have the same enhanced coupling to related particles.
In the two-Higgs-doublet case, we must have an enhanced $\bar{t} b h^+$
coupling. Therefore, unless $m_{h^+} > m_t - m_b$, the branching ratio of
$t \rightarrow b h^+$ will dominate over all others. In particular, the
standard $t \rightarrow b W$ branching ratio will be seriously degraded.
We show in Fig.~3 the branching ratio $Br(t \rightarrow bW)$ as a function
of $m_{h^+}$. Large values of $m_{h^+}$ are disfavored in this scenario
because the splitting with $A$ and $h_1$ would result in a large
contribution to the $\rho$ parameter\cite{grant95}.
This poses a problem for top production at the TEVATRON because the
number of observed top events is consistent with the assumption that
top decays into $b W$ 100\% of the time. If that is not so, then top
production must be enhanced by a large factor beyond that of the standard
model. The two-Higgs-doublet model itself certainly does not provide for
any such mechanism.
\begin{figure}
\centerline{\psfig{figure=fig3.eps,height=3.0in,width=3.0in}}
\vspace{0.2in}
\caption{The branching ratio $Br(t \rightarrow b W)$ as a function of
$m_{h^+}$ for $\tan \beta = 70$ (solid), 50 (dashed), and 20 (dotted).}
\end{figure}
In the MSSM, if the $R_b$ excess is attributed to a light scalar top quark
and a light chargino, then we should look at the latter's doublet partner
which is in general a linear combination of neutralino mass eigenstates.
At least one of these, {\it i.e.} the Lightest Supersymmetric Particle (LSP),
will be light enough to allow the top quark to decay into it and the
scalar top. The $\rho$ parameter also serves to disfavor large neutralino
masses in this scenario. Hence the $t \rightarrow b W$ branching ratio is
again seriously degraded. Turning the argument around, this means that for
every observed top event, there must be several others which correspond to
the production of supersymmetric particles. If the $R_b$ excess is really
due to supersymmetry, top decay is the place to discover it!
\section*{Conclusion}
If the $R_b$ excess at LEP is real and we want to explain it in terms of
renormalizable loop corrections, then light particles are unavoidable.
However, these light particles may be produced also in $Z$ decay such as
in the two-Higgs-doublet case, where $Z \rightarrow b \bar{b} +
A~{\rm or}~h_1$ is at least of order $10^{-4}$ in branching ratio.
More importantly, there is necessarily a corresponding top decay into
one of these light particles (such as the scalar top in the MSSM) and
the other particle's doublet partner (the neutralino), which seriously
degrades the $t \rightarrow b W$ branching ratio. Unless there is
accompanying new physics which enhances the top production by a large
factor at the TEVATRON, this generic explanation of the $R_b$ excess
in terms of light particles does not appear to be viable.
\section*{Acknowledgement}
The work of E.M. was supported in part by the U.S. Department of Energy
under Contract No. DE-AT03-87ER40327. The work of D.N. was supported by the
Natural Sciences and Engineering Research Council of Canada.
\section{Introduction}
Recall that a permutation group $G$ acting on a set $X$ is sharply $2$-transitive if for any two pairs $(x,y)$ and $(x',y')$ of distinct elements of $X$ there is a unique $g\in G$ with $gx=x'$ and $gy=y'$. Then $G$ has involutions, and either involutions have fixed points and $G$ is of permutation characteristic $2$, or the action on $X$ is equivalent to the conjugation action on the set $I$ of involutions. In that case, all translations, i.e.\ products of two distinct involutions, are also conjugate and have the same order $p$, which is either an odd prime or $\infty$; the number $p$ (or $0$ if $p=\infty$) is the permutation characteristic of $G$. We say that $G$ splits if it has a regular normal subgroup $N$; in that case $G=N\rtimes C_G(i)$ for any involution $i\in I$. Note that Tent, Rips and Segev have constructed non-split sharply $2$-transitive permutation groups of characteristic $0$ and $2$.
V. D. Mazurov asked in the Kourovka Notebook (question 12.48):\\
Let G be a sharply $2$-transitive permutation group.\begin{enumerate}
\item Does G possess a regular normal subgroup if a point stabilizer is locally finite?
\item Does G possess a regular normal subgroup if a point stabilizer has an abelian
subgroup of finite index?\end{enumerate}
We shall answer question (b) affirmatively in permutation characteristic $0$. In fact, we shall show a more general result for near-domains.
\section{Near-domains and near-fields.}
Instead of working with sharply $2$-transitive groups, we shall work in the equivalent setting of near-domains.
\begin{definition}
$(K,0,1,+,\cdot)$ is a {\em near-domain} if for all $a,b,c\in K$
\begin{enumerate}
\item $(K,0,+)$ is a {\em loop}, i.e.\ $a+x=b$ and $y+a=b$ have unique solutions, with $a+0=0+a=a$;
\item $(K\setminus\{0\},1,\cdot)$ is a group, and $0\cdot a=a\cdot 0=0$;
\item left distributivity holds: $a\cdot(b+c)=a\cdot b+a\cdot c$;
\item for all $a,b\in K$ there is $d_{a,b}\in K$ such that $a+(b+x)=(a+b)+d_{a,b}\cdot x$ for all $x$.\end{enumerate}
A near-domain is a {\em near-field} if addition is associative.\end{definition}
Hence a near-field is a skew field iff right distributivity holds.
\begin{fact}[Tits, Karzel]
A sharply $2$-transitive permutation group $G$ is isomorphic to the group of affine transformations of some near-domain $K$, i.e.\ of the set of permutations $\{x\mapsto a+bx:a,b\in K,\,b\not=0\}$; the centraliser of any involution is isomorphic to the multiplicative group $K^\times$. It is split iff $K$ is a near-field.\end{fact}
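As an elementary illustration of this fact (our aside, not part of the argument), one can verify by brute force that the affine maps $x\mapsto a+bx$ over the finite field $\mathbb F_5$ act sharply $2$-transitively:

```python
from itertools import product

p = 5                                        # the field F_5
X = range(p)
# the affine group: maps x -> a + b*x with b != 0
affine = [(a, b) for a, b in product(X, X) if b != 0]

def apply(g, x):
    a, b = g
    return (a + b * x) % p

# sharp 2-transitivity: every distinct pair is carried to every
# distinct pair by exactly one group element
for x, y, xp, yp in product(X, repeat=4):
    if x != y and xp != yp:
        n = sum(1 for g in affine if apply(g, x) == xp and apply(g, y) == yp)
        assert n == 1
print("AGL(1,5) acts sharply 2-transitively on F_5")
```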
Let $E$ be the set $\{d\in K:1+d=d+1\}$. Since the additive loop of $K$ is power-associative, it is easy to see that $1$ generates a subfield of $K$ contained in $E$, which is either $\mathbb Q$ or $\mathbb F_p$. Thus $K$ has a characteristic, which is easily seen to be equal to the permutation characteristic of $G$. Note that in characteristic $>2$ there is a unique maximal sub-near-field, which is equal to $E$.
\begin{fact}[\cite{Ker74}] For all $a,b,c\in K$ we have:\begin{enumerate}
\item $d_{a,a}=1$.
\item $d_{a,b}(b+a)=a+b$.
\item $cd_{a,b}c^{-1}=d_{ca,cb}$.
\item $d_{a,b}=d_{a,c}d_{c+a,-c+b}d_{-c,b}$.
\item If $a,b\in E$ then $(a+b)\,2\in E$.
\item $|K^\times:C_{K^\times}(d_{a,b})|=\infty$ if $d_{a,b}\not=1$.
\end{enumerate}
\end{fact}
Let now $A$ be any subgroup of finite index in $K^\times$ which avoids all non-trivial coefficients $d_{a,b}$ for $a,b\in K$.
Kerby \cite[Theorem 8.26]{Ker74} has shown that $K$ must be a near-field in the following cases:\begin{enumerate}
\item $\mbox{char} K=0$ and $|K^\times:A|=2$,
\item $\mbox{char} K=2$, $|K^\times:A|=2$ and $|E|>2$,
\item $\mbox{char} K=p>2$ and $|K^\times:A|<|E|$.
\end{enumerate}
We shall adapt the proof of (3) to characteristic $0$.
\begin{lmm}\label{lemma} Suppose $d_{a,1/k}=1$. Then $d_{a,n/k}=1$ for all $n\in\mathbb N$.\end{lmm}
\begin{proof} By induction on $n$. This is clear for $n=0$ and $n=1$. So suppose it holds for $n$, and consider
$$\begin{aligned}\frac{n+1}k +a&=\Big(\frac nk+\frac1k\Big)+a=\frac nk+\Big(\frac1k+a\Big)=\frac nk+\Big(a+\frac 1k\Big)\\
&=\Big(\frac nk+a\Big)+\frac 1k=\Big(a+\frac nk\Big)+\frac1k=a+\Big(\frac nk+\frac 1k\Big)=a+\frac{n+1}k.\qedhere\end{aligned}$$
\end{proof}
\begin{proposition}\label{propn} If $A\le K^\times$ is a subgroup of finite index avoiding all nontrivial $d_{a,b}$ and $\mbox{char}(K)=0$, then $K$ is a near-field.\end{proposition}
\begin{proof} Recall that $\mathbb Q\subseteq E$. If $K=E$, then $d_{a,b}=1$ for all $a,b\in K$ and $K$ is a near-field. So assume $E\subsetneq K^\times$, and take $a\in K\setminus E\,2^{-1}$. Let $n=|K^\times:A|$. Then there are distinct $i>j$ in $\{0,1,2,\ldots,n\}/n!$ with $d_{a,i}A=d_{a,j}A$; since $d_{-j,i}=1$ we obtain
$$d_{a,i}=d_{a,j}d_{j+a,-j+i}d_{-j,i}=d_{a,j}d_{j+a,-j+i}.$$
Hence $d_{j+a,-j+i}\in A$, and $d_{j+a,-j+i}=1$ by assumption.
Now $d_{(i-j)^{-1}(j+a),1}=d_{j+a,-j+i}=1$, so $(i-j)^{-1}(j+a)\in E$. Since $-(i-j)^{-1}j\in\mathbb Q\subseteq E$, we have
$$[-(i-j)^{-1}j+(i-j)^{-1}(j+a)]\,2=(i-j)^{-1}a\,2\in E,$$
and $d_{a2,i-j}=1$. But $0<(i-j)\,n!\le n$ is integer, and there is an integer $k>0$ with $i-j=\frac1k$. By Lemma \ref{lemma} we obtain $d_{a2,1}=1$ and $a\,2\in E$, a contradiction.
\end{proof}
\begin{cor} Let G be a sharply doubly transitive permutation group of characteristic $0$ whose point stabilizer is virtually abelian. Then $G$ is split.\end{cor}
\begin{proof} If $K$ is the associated near-domain, $K^\times$ has an abelian subgroup $A$ of finite index. Now any non-trivial $d_{a,b}$ has a centralizer of infinite index in $K^\times$, so $d_{a,b}\notin A$. We finish by Proposition \ref{propn}.\end{proof}
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\@ifundefined{Column}{\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}}{}%
\@ifundefined{qed}{\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}}{}%
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}%
\@ifundefined{tciLaplace}{\def\tciLaplace{L}}{}%
\@ifundefined{tciFourier}{\def\tciFourier{F}}{}%
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}%
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}%
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}%
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}%
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}%
\@ifundefined{vvert}{\def\vvert{\Vert}}{}%
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}%
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{}%
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{}%
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{}%
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\defLaTeX2e{LaTeX2e}
\ifx\fmtnameLaTeX2e
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def{\Greekmath 0272}{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\mathop{\displaystyle \int}}%
\def\diint{\mathop{\displaystyle \iint}}%
\def\diiint{\mathop{\displaystyle \iiint}}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\RequirePackage{amsmath}
\makeatother
\endinput
\section{Introduction}
\label{sec:Intro}
Superluminous supernovae \citep[SLSNe;][]{Qui2011,Gal2012}, whose peak
luminosities are $\gtrsim 10^{44}$ erg s$^{-1}$, have been discovered and
studied in the last decade. According to their optical spectra near maximum
light, SLSNe can be divided into types I (hydrogen-deficient) and II
(hydrogen-rich; the detailed classification schemes of SNe based on their
spectra were summarized by \citealt{Fil1997} and \citealt{Gal2016}). To
date, all SLSNe-I are type Ic SNe whose spectral features resemble those of
normal-luminosity SNe Ic \citep{Pas2010,Gal2012,Inse2013,Nich2016b}.
While most SLSNe-II have narrow H$\alpha$ emission lines in their optical
spectra and can be regarded as the high-luminosity versions of normal SNe
IIn, a few SLSNe-II do not have H$\alpha$ emission lines and their spectra
resemble those of SNe IIL \citep{Gez2009,Mil2009,Ins2016}.
The problems of the origin of the energy sources and explosion mechanisms of
SLSNe have not yet been completely solved. Currently, the most prevailing
energy-source models explaining SLSNe are the magnetar-powered model
\citep[e.g.,][]{Kas2010,Woos2010,Des2012,Inse2013,Chen2015,Wang2015a,Wang2016a,Dai2016,Wang2016c}
\footnote{It has long been recognized that the dipole radiation from the nascent
neutron stars can enhance the luminosities of the normal SNe
\citep{Ost1971,Mae2007}.}, the ejecta-circumstellar medium (CSM) interaction
model \citep{Che1982,Che1994,Chu1994,Chu2009,Che2011,Cha2012,Gin2012}, and
the pair instability supernova (PISN) model \citep{Bar1967,Rak1967} which is
essentially $^{56}$Ni-powered \citep{Col1969,Col1980,Arn1982} but requires a
huge amount ($\gtrsim 5M_{\odot }$) of $^{56}$Ni. Some SLSNe show
double-peaked light curves \citep{Nich2015a,NS2015,Smit2016,Vre2017} and
their early-time excess emission might be due to the cooling emission from
shock-heated envelopes of the progenitors while the main peaks can be
explained by the magnetar-powered model or interaction model.
Determining the energy-source models for SLSNe is rather tricky. For
example, \citet{Gal2009} proposed that SN 2007bi, whose light-curve decline
rate is approximately equal to the decay rate of $^{56}$Co, is a PISN, but
\citet{Des2012} argued that it might not be a PISN since its spectrum is
inconsistent with the spectrum produced by the PISN model; \citet{Nich2013}
demonstrated that another slowly declining SLSN, PTF12dam, whose post-peak
light curve mimics that reproduced by the PISN model, is nevertheless not a PISN,
since the rising part of its light curve cannot be explained by the PISN
model.
Except for the two high-redshift SLSNe (SN 2213-1745 and SN 1000+0216) that
are believed to be PISNe \citep{Cooke2012} and SN 2007bi whose explosion
mechanism is still in debate,\footnote{\citet{Inse2017} assembled four
low-redshift slow-evolving SLSNe I (SN 2007bi, PTF12dam, SN 2015bn, and
LSQ14an) and found that the declines of their light curves at $t-t_{\text{peak}}
\gtrsim 150$ days are faster than the $^{56}$Co decay rate and steepen further after 300
days, which indicates that these four SLSNe I cannot be explained by the
PISN model since the required ejecta are so massive that the decline rates
of the light curves must be consistent with that of $^{56}$Co decay at
$t-t_{\text{peak}}\gtrsim 500$ days. A similar analysis cannot be performed for
high-redshift analogues since the lower quality late-time data prevent
further investigation, and therefore the possibility that high-redshift
slow-evolving SLSNe are PISNe cannot be excluded.} all SLSNe discovered
cannot be explained by the PISN model since the ratios of the required masses of
$^{56}$Ni to the inferred ejecta masses are too large and/or the theoretical
light curves do not fit the observational data
\citep{Qui2011,Inse2013,Nich2013,Nich2014}. This fact in turn indicates that
most SLSNe observed might be core-collapse SNe (CCSNe) since the peak
luminosities of type Ia SNe cannot reach $10^{44}$ erg s$^{-1}$.\footnote{
Type Ia SNe magnified by the gravitational lensing effect, e.g., PS1-10afx
\citep{Qui2013,Qui2014}, are not genuine SLSNe.}
SLSNe that cannot be explained by the PISN model can be explained by the
magnetar-powered model \citep{Inse2013,Nich2013,Nich2014} or the ejecta-CSM
interaction model \citep[e.g.][]{Smi2007,Mor2013a,Nich2014}. The
magnetar-powered model proposes that the nascent neutron stars with initial
rotational periods of a few milliseconds and magnetic field strengths of
$10^{13}-10^{15}$ G inject their rotational energy into the ejecta of SNe,
while the ejecta-CSM interaction model supposes that the interaction between
the SN ejecta and the CSM surrounding the progenitors of the SNe can release
a huge amount of energy and heat the SN ejecta. These two processes can
significantly enhance the luminosities of the SNe and lead them to be SLSNe.
In this paper we focus on the SLSNe that were demonstrated not to be PISNe.
While the ejecta-CSM model cannot be excluded in explaining these SLSNe Ic,
we do not adopt it due to the absence of the interaction signatures (e.g.,
narrow H$_{\alpha }$ emission lines) that are indicative of strong
interaction between the ejecta and CSM. Studies have shown that the
magnetar-powered model can well reproduce the light curves of these SLSNe,
and indicated that the magnetar-powered model has special advantages since
it does not need ad hoc assumptions about the pre-supernova mass-loss
history. Therefore, the magnetar model is preferred in modeling SLSNe Ic.
Most previous studies using the magnetar-powered model focused on the
light-curve fitting. Some groups fitted the light curves, temperature
evolution and the evolution of photospheric radii of SLSNe. However, the
photospheric radius of an SLSN cannot be measured directly, and it is
derived from the luminosity and temperature ($L=4\pi \sigma T^4R^2$, where
$L$ is the luminosity, $\sigma$ is the Stefan-Boltzmann constant, $T$ is the
temperature, and $R$ is the photospheric radius).
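As an illustration, the inversion $R=\sqrt{L/(4\pi \sigma T^{4})}$ used to derive the photospheric radius can be sketched numerically; the luminosity and temperature below are illustrative round values, not entries from our sample.

```python
import math

SIGMA_SB = 5.670374419e-5  # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def photospheric_radius(L, T):
    """Invert L = 4*pi*sigma*T^4*R^2 for the photospheric radius R in cm."""
    return math.sqrt(L / (4.0 * math.pi * SIGMA_SB * T**4))

# A representative SLSN-like epoch: L ~ 1e44 erg/s at T ~ 1e4 K
R_ph = photospheric_radius(1e44, 1e4)
print(f"R_ph ~ {R_ph:.2e} cm")  # a few times 10^15 cm
```

Since $R\propto T^{-2}$ at fixed $L$, small temperature errors propagate strongly into the derived radius, which is why we fit $L$ and $T$ rather than $R$ itself.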
In this paper, we collect a sample of 19 type I SLSNe and use the
magnetar-powered model proposed by \citet{Wang2016a} to fit their light
curves, temperature evolution, and velocity evolution. To obtain the
best-fitting parameters, we use the Markov Chain Monte Carlo (MCMC) code
developed by \citet{Wang2016b}, who employed this approach (magnetar-powered
model + MCMC) to fit the light curves of SNe 1998bw and 2002ap.
This paper is organized as follows. In Section \ref{sec:mod}, we present our
sample and describe the magnetar-powered model adopted. In Section
\ref{sec:res}, we use the magnetar-powered model and the MCMC code to get the
best-fitting parameters. We discuss implications of our results in Section
\ref{sec:dis} and give some conclusions in Section \ref{sec:con}. In a
companion paper \citep{Wang2017}, we systematically study the whole sample of
broad-lined type Ic supernovae not associated with gamma-ray bursts. We find
that the magnetar-powered model can also well account for both the light
curves and velocity evolution of these SNe.
\section{Sampling and Modeling}
\label{sec:mod}
We collect 19 type I (Ic) SLSNe from the literature; see Table \ref{tbl:sample}.
All of these SLSNe have light curves and temperature data, and most of
them have velocity data. These SLSNe are selected because the observational
errors were provided by the original papers so that we can run the MCMC code
against them.
The most prevalent magnetar models are the model proposed by \citet{Cha2012}
and \citet{Inse2013} and the model revised by \citet{Wang2015a} and
\citet{Chen2015}, who incorporated the hard-emission leakage effect. These
semi-analytical magnetar-powered models used to fit most type I SLSNe
neglect the photospheric recession effect and acceleration of the SN ejecta
caused by the magnetar wind. \citet{Kas2010} and \citet{Woos2010} proposed
the magnetar-powered model for SLSNe and demonstrated that the acceleration
effect is rather notable, but they did not take into account the leakage
effect. \citet{Wang2016a} proposed a revised magnetar model that takes into
account all these effects (leakage effect, photospheric recession effect and
acceleration effect). We therefore use this revised magnetar-powered model to fit
the observed data.
In this revised magnetar-powered model, the photospheric velocities of SLSNe
cannot be fixed to be their scale velocities $v_{\text{sc}}$ and their
evolution must be fitted. Moreover, the scale velocity itself is a varying
quantity and not directly measurable. Instead, the initial scale velocity
$v_{\text{sc0}}$ is a free parameter. If the neutrinos emitted from
proto-neutron stars provide the initial kinetic energy (KE) and initial
velocity, then the values of $v_{\text{sc0}}$ must have a lower limit to
ensure that the initial KE of SNe can be larger than the lower limit of the
initial KE provided by neutrinos
\citep[$\sim10^{50}$ erg,][and references therein]{Jan2016}.
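This lower limit can be made explicit with a short numerical sketch combining the $\sim 10^{50}$ erg neutrino floor with the homologous-ejecta relation $E_{\text{K0}}=0.3M_{\text{ej}}v_{\text{sc0}}^{2}$ adopted later in this paper; the ejecta masses below are illustrative values spanning the fitted range.

```python
import math

M_SUN = 1.989e33   # solar mass in g
E_NU_MIN = 1e50    # lower limit of neutrino-deposited kinetic energy, erg

def v_sc0_min(m_ej_msun):
    """Minimum initial scale velocity (km/s) so that 0.3*M_ej*v^2 >= E_NU_MIN."""
    v_cm_s = math.sqrt(E_NU_MIN / (0.3 * m_ej_msun * M_SUN))
    return v_cm_s / 1e5

for m_ej in (1.0, 5.0, 27.6):  # illustrative masses, not fitted values
    print(f"M_ej = {m_ej:5.1f} Msun -> v_sc0,min ~ {v_sc0_min(m_ej):.0f} km/s")
```

The floor scales as $M_{\text{ej}}^{-1/2}$, so the constraint on $v_{\text{sc0}}$ must be checked object by object once the ejecta mass is fitted.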
Other parameters in this model include the mass of ejecta $M_{\text{ej}}$,
grey optical opacity $\kappa $, magnetic strength of magnetar $B_{p}$,
initial rotational period of the magnetar $P_{0}$, and the gamma-ray opacity
$\kappa _{\gamma }$ to magnetar photons.\footnote{
In order to quantitatively describe the leakage effect associated with gamma
and X-ray photons emitted from the magnetar, \citet{Wang2015a} incorporated
the trapping factor ($1-e^{-At^{-2}}$) into the original magnetar-powered
model, where $A=3\kappa _{\gamma }M_{\text{ej}}/(4\pi v^{2})$ and $\kappa _{\gamma }$
is the effective gamma-ray opacity.} Here the subscript \textquotedblleft $p$\textquotedblright\
in $B_{p}$ denotes the surface dipole magnetic field of the magnetar
\citep{Shapiro83}. The optical opacity $\kappa $ is somewhat uncertain and
we adopt the value 0.1 cm$^{2}$ g$^{-1}$ throughout this paper.
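The leakage prescription in the footnote above is easy to evaluate; the sketch below assumes, for illustration only, a constant ejecta velocity and round parameter values.

```python
import math

M_SUN = 1.989e33  # g
DAY = 86400.0     # s

def trapping_factor(t_days, kappa_gamma, m_ej_msun, v_cm_s):
    """Trapped fraction 1 - exp(-A/t^2), with A = 3*kappa_gamma*M_ej/(4*pi*v^2)."""
    A = 3.0 * kappa_gamma * m_ej_msun * M_SUN / (4.0 * math.pi * v_cm_s**2)
    t = t_days * DAY
    return 1.0 - math.exp(-A / t**2)

# Illustrative values: kappa_gamma = 0.1 cm^2/g, M_ej = 5 Msun, v = 10^9 cm/s
for t in (100, 300):
    frac = trapping_factor(t, 0.1, 5.0, 1e9)
    print(f"t = {t:3d} d: trapped fraction ~ {frac:.2f}")
```

Trapping is nearly complete around peak and drops off at late times, which is why $\kappa_{\gamma}$ is constrained mainly by the light-curve tail.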
Because the explosion time of an SLSN is not an observable quantity, a free
parameter $T_{\text{start}}$ is included to refer to the
theoretical explosion time relative to the zero epoch given in the paper
from which the observed data have been taken. In the papers providing the data,
sometimes the zero epochs refer to the peak times, and otherwise the zero epochs refer to
the inferred explosion times.
While the PISN model (pure $^{56}$Ni-powered model) has been excluded by
previous studies for most type I SLSNe, a moderate amount of $^{56}$Ni
synthesized by these SLSNe cannot be completely neglected. However, the
masses of $^{56}$Ni synthesized by CCSNe are usually only $0.04-0.2M_{\odot}$ for normal
events and $0.2-0.5M_{\odot }$ \citep[see e.g., Figure 8 of][]{Mazzali13} for
some hypernovae whose kinetic energies were inferred to be $\gtrsim 10^{52}$
erg.\footnote{
There is evidence that at least some hypernovae were powered by magnetars
\citep{Wang2016b} and the required $^{56}$Ni mass and initial explosion
energy are significantly smaller than previously believed
\citep{WangHan16,Wang2017}.} When we study SLSNe whose peak luminosities are
$\gtrsim 10^{44}$ erg s$^{-1}$, the contribution of $0.1-0.5$ $M_{\odot}$ of
$^{56}$Ni is significantly lower than the contribution from magnetars or the
ejecta-CSM interaction and can therefore be neglected
\citep[e.g.,][]{Inse2013,Chen2015}, since $0.1-0.5$ $M_{\odot}$ of $^{56}$Ni
will power a peak luminosity of $\sim 2\times 10^{42}-1\times 10^{43}$ erg
s$^{-1}$ if the rise time is $\sim 18$ days, and a lower peak luminosity for
longer rise times.\footnote{
For luminous SNe whose peak luminosities are $\sim 5\times 10^{43}$ erg
s$^{-1}$, the contribution from $^{56}$Ni cannot be neglected
\citep{Wang2015b}.}
Hence, we neglect the contribution from $^{56}$Ni in our modeling, i.e., the
mass of $^{56}$Ni is set to be zero. In summary, the free parameters in this
model are $M_{\text{ej}}$, $B_{p}$, $P_{0}$, $v_{\text{sc0}}$,
$\kappa_{\gamma}$, and $T_{\text{start}}$. To get the best-fitting parameters
and estimate the uncertainties of the parameters, we use the code developed
by \citet{Wang2016b}, who incorporated the MCMC approach into our revised
magnetar-powered model.
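As a cross-check of the $^{56}$Ni estimate quoted above, Arnett's rule (peak luminosity equals the instantaneous radioactive power at peak) can be evaluated directly; the decay times and per-gram powers below are standard literature values, not quantities derived in this paper.

```python
import math

M_SUN = 1.989e33                  # g
TAU_NI, TAU_CO = 8.8, 111.3       # 56Ni and 56Co e-folding times, days
EPS_NI, EPS_CO = 3.9e10, 6.78e9   # decay power per gram, erg s^-1 g^-1

def ni_peak_luminosity(m_ni_msun, t_rise_days):
    """Arnett-rule estimate: L_peak ~ M_Ni times the decay power at t = t_rise."""
    t = t_rise_days
    eps = (EPS_NI * math.exp(-t / TAU_NI)
           + EPS_CO * (math.exp(-t / TAU_CO) - math.exp(-t / TAU_NI)))
    return m_ni_msun * M_SUN * eps

for m_ni in (0.1, 0.5):
    L = ni_peak_luminosity(m_ni, 18.0)
    print(f"M_Ni = {m_ni} Msun, t_rise = 18 d: L_peak ~ {L:.1e} erg/s")
```

This recovers the $\sim 2\times 10^{42}-1\times 10^{43}$ erg s$^{-1}$ range quoted in the text, one to two orders of magnitude below SLSN peak luminosities, which justifies setting the $^{56}$Ni mass to zero in the fits.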
\section{Results}
\label{sec:res}
Using the magnetar-powered model and the MCMC approach, we find that the
light curves, temperature evolution, and velocity evolution reproduced by
the model are in excellent agreement with the observational data of LSQ14mo,
PS1-10awh, PS1-10bzj, SN 2010gx, SN 2011kf, and PS1-11ap; see Figure
\ref{fig:fit1}. The light curves and temperature data of SN 2012il, PTF12dam,
PTF11rks, SN 2013dg, SSS120810, PTF10hgi, Gaia16apd, and DES13S2cmm can also
be explained by the model, but the velocities reproduced by the model are
larger or smaller than the data of these SLSNe, see Figure \ref{fig:fit2}.
DES13S2cmm is also placed in this group since it does not have velocity data. The
light curves of SN 2011ke, SN 2015bn, LSQ12dlf, DES14X3taz, and PS1-14bj can
also be well reproduced by the model, but both the temperature evolution and
velocity evolution of these SLSNe do not fit the theoretical curves well,
see Figure \ref{fig:fit3}. In these figures, the zero epochs all
refer to the peak times of the SLSN light curves.
The best-fitting parameters as well as the values of $\chi^2$/d.o.f are
listed in Table \ref{tbl:para} and their error bars can be appreciated by
inspecting Figure \ref{fig:corner}. Some parameters, e.g., $\kappa_{\gamma}$
and $v_{\text{sc0}}$\ of some SNe (see Figure \ref{fig:corner}), cannot be
tightly constrained and therefore only their $1\sigma $\ upper limits or
lower limits are presented in this table. From Table \ref{tbl:para}, we find
that the magnetars' initial periods and the magnetic field strengths are
$1.2-8.3$ ms and $0.2-8.8\times 10^{14}$ G, respectively, consistent with
theoretical expectation and the results of previous modelings. The gamma-ray
opacity $\kappa_{\gamma }$ directly determining the magnitude of late-time
leakage effect is between 0.01 and 0.8 cm$^2$ g$^{-1}$. Assuming that
$\kappa = 0.1$ cm$^2$ g$^{-1}$, the ejecta masses of these SLSNe are between
$1-27.6M_{\odot }$. \citet{Nich2015b} have fitted the light curves of 24
hydrogen-deficient SLSNe and found that the range of their ejecta masses is
$3-30M_{\odot}$ if $\kappa =0.1$\thinspace cm$^2$ g$^{-1}$. Both the lower
limit and the upper limit of the mass range in our sample are smaller than
the values inferred by \citet{Nich2015b}. If we adopt a larger $\kappa$,
e.g., 0.2 cm$^2$ g$^{-1}$, the ejecta masses must be halved. Although the
acceleration effect becomes more obvious for less massive ejecta, the light
curves change only slightly, i.e., the degeneracy between $\kappa$
and $M_{\text{ej}}$ would not be broken.
The initial scale velocity $v_{\text{sc0}}$ of the ejecta of these SLSNe is
between $\sim $1,100 km s$^{-1}$ and $1.7\times 10^{4}$ km s$^{-1}$. Based
on the values of $M_{\text{ej}}$ and $v_{\text{sc0}}$, the initial KE
$E_{\text{K0}}$ of these SLSNe can be calculated. At the beginning of the
expansion, before the acceleration takes place, the initial KE can be
calculated according to $E_{\text{K0}}=0.3M_{\text{ej}}v_{\text{sc0}}^{2}$
(see \citealt{Arn1982} and footnote 1 of \citealt{Wheeler15}) and listed in
Table \ref{tbl:derived parameters}. The initial rotational energy of the
corresponding magnetars and the accumulative fraction ($\eta $) of the
magnetars' rotational energy converted to the ejecta KE are also listed in
Table \ref{tbl:derived parameters}.
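The bookkeeping behind these derived quantities reduces to the relation $E_{\text{K0}}=0.3M_{\text{ej}}v_{\text{sc0}}^{2}$; a minimal sketch with an illustrative parameter pair (not values from the table):

```python
M_SUN = 1.989e33  # solar mass in g

def initial_ke(m_ej_msun, v_sc0_km_s):
    """Initial kinetic energy E_K0 = 0.3 * M_ej * v_sc0^2 in erg (homologous ejecta)."""
    return 0.3 * m_ej_msun * M_SUN * (v_sc0_km_s * 1e5)**2

# Illustrative combination: 5 Msun of ejecta at v_sc0 = 5000 km/s
print(f"E_K0 ~ {initial_ke(5.0, 5000.0):.1e} erg")
```

This evaluates to $\sim 7\times 10^{50}$ erg, below the $\sim 2.5\times 10^{51}$ erg neutrino-driven upper limit discussed below.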
Figure \ref{fig:dis} shows the distributions of these best-fitting
parameters (initial periods $P_{0}$, magnetic strength $B_{p}$, ejecta
masses $M_{\text{ej}}$, initial scale velocities $v_{\text{sc0}}$, and the
gamma-ray opacity $\kappa _{\gamma }$), the derived parameter (initial KE
$E_{\text{K0}}$), and the conversion fraction ($\eta$).
\section{Discussion}
\label{sec:dis}
In our adopted model, the initial velocity of the SN ejecta is also a free
parameter, rather than a measurable quantity. The magnetar wind would
accelerate the ejecta, sweep up the material into a (thin) shell and produce
a bubble between the center and the shell. We fit the observed photospheric
velocities $v_{\text{ph}}$ (even though they are always smaller than the
scale velocities $v_{\text{sc}}$) and find the theoretical velocities of the
ejecta of 6 SLSNe are in excellent agreement with the observations. However,
we would point out that the velocity data of the remaining SLSNe in our
sample cannot be well reproduced by the model.
In most previous studies, the KE $E_{\text{K}}$ was a constant since
the scale velocity $v_{\text{sc}}$ was fixed to be a constant
($v_{\text{sc}}=v_{\text{sc0}}$), so the inferred KE is equivalent to the initial KE
($E_{\text{K}}=E_{\text{K0}}$). This assumption would cause two problems.
First, if we neglect the acceleration effect from the magnetar wind, the
ejecta expansion is homologous ($v_{\text{sc}}(x)\propto x$, where $x$ is
the distance from an arbitrary point in the ejecta to the center), and
the KE of the ejecta is $E_{\text{K}}=0.3M_{\text{ej}}v_{\text{sc}}^2$.
According to this formula, we find that the initial KEs of almost all SLSNe
in the literature are $\gtrsim 5\times 10^{51}$ erg if the acceleration
effect is neglected. Most SLSNe might be CCSNe and their initial KE should
be given by neutrinos coming from proto-neutron stars. The multi-dimensional
simulations of neutrino-driven SNe find that the upper limit of the KE
provided by neutrinos is $\sim 2.0\times 10^{51}$ erg or $\sim 2.5\times
10^{51}$ erg \citep{Ugl2012,Ertl2016,Mul2016,Suk2016}. Even if we adopt the
looser upper limit ($\sim 2.5\times 10^{51}$ erg), it is still significantly
smaller than the inferred KE ($\gtrsim 5\times 10^{51}$ erg) of the SLSNe.
It seems evident that some other mechanisms are required to provide the
additional initial KE.
To solve this problem, one might assume that the energy injection process
must be divided into two steps: (1) the magnetar releases $\gtrsim 3\times
10^{51}$ erg of its rotational energy in a very short interval (``explosive
injection"), and (2) the magnetar continuously injects its rotational energy
to SN ejecta (``continuous injection"). As pointed out by \citet{Ioka2016},
this two-step injection scenario needs an exotic behavior: the magnetic
field should be initially large, resulting in a fast energy injection for a
short duration, and then it may decay rapidly between the explosive
injection and continuous injection.
Second, the initial rotational energy of a magnetar with an initial period
$P_{0}$ is $\frac{1}{2}I\Omega_{0}^2\simeq 2\times 10^{52}\left(I/10^{45}~\text{g}~\text{cm}^2\right) \left({P_{0}}/{1~\text{ms}}\right)^{-2}$ erg.
Here $I$\ is the moment of inertia of the magnetar and $\Omega_{0}$\ is its
initial angular velocity. If $P_{0}=1-5$ ms, the initial rotational energy
is $\simeq 1-20\times 10^{51}$ erg. Calculations
\citep{Kas2010,Woos2010,Wang2016b} have indicated that a fraction of the
initial rotational energy of a magnetar would be converted to the KE of SNe.
\footnote{
For example, \citet{Woos2010} demonstrated that a magnetar with $P_{0}$ =
4.5 ms and $B_{p}=1\times 10^{14}$ G would convert 40\% of its initial
rotational energy to the KE of the SN ejecta.} This huge amount of energy
and its acceleration effect cannot be neglected.
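The numerical prefactor in the rotational-energy formula above can be verified directly ($I=10^{45}$ g cm$^{2}$ assumed, as in the text):

```python
import math

def magnetar_rotational_energy(p0_ms, moment_of_inertia=1e45):
    """E_rot = (1/2) I Omega_0^2 with Omega_0 = 2*pi/P_0, returned in erg."""
    omega0 = 2.0 * math.pi / (p0_ms * 1e-3)  # angular velocity in rad/s
    return 0.5 * moment_of_inertia * omega0**2

for p0 in (1.0, 5.0):
    print(f"P_0 = {p0} ms: E_rot ~ {magnetar_rotational_energy(p0):.1e} erg")
```

This recovers $\simeq 2\times 10^{52}$ erg at $P_{0}=1$ ms and $\simeq 8\times 10^{50}$ erg at $P_{0}=5$ ms, matching the $\simeq 1-20\times 10^{51}$ erg range quoted above.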
Our modelings based on the magnetar-powered model \citep{Wang2016b} can
simultaneously solve these two problems by taking into account the
acceleration effect of the magnetar wind. We find that $\sim 19-97$\% of the
initial rotational energy of the magnetars has been converted to the KE of
the ejecta (see Table \ref{tbl:derived parameters} and Figure \ref{fig:fra}
for some of these SLSNe) and the initial KE of 15 SLSNe in our sample are
smaller than the $2.5\times 10^{51}$ erg (see Table \ref{tbl:derived parameters})
provided by the neutrino-driven mechanism for CCSNe. The additional KE is
provided by the magnetar wind which accelerates the ejecta.
Besides, LSQ14mo, SN 2015bn and DES13S2cmm have initial KE (slightly) larger
than $\sim 2.5\times 10^{51}$ erg but the initial KE of these SLSNe can be
halved to be $\sim 1.37\times 10^{51}$, $\sim 1.41\times 10^{51}$, and $\sim
2.28\times 10^{51}$ erg if the value of $\kappa$ is doubled ($\kappa =0.2$
cm$^2$ g$^{-1}$) so that their ejecta masses are halved. Therefore, these
three SLSNe can also be explained by the magnetar model. The only SLSN whose
KE cannot be explained by the magnetar-powered model is DES14X3taz since its
initial KE is $4.51\times 10^{52}$ erg (if $\kappa =0.1$ cm$^2$ g$^{-1}$)
or $2.25\times 10^{52}$ erg (if $\kappa =0.2$ cm$^2$ g$^{-1}$),
significantly larger than what can be provided by the neutrinos.
All in all, in our model, the neutrinos provide the initial KE of the
ejecta, and the magnetar accelerates the ejecta so that they have large
final KE ($\gtrsim 5\times 10^{51}$ erg). Thus the difficulty of explaining
the origin of the KE of the SLSNe can be solved by taking into account the
acceleration effect.\footnote{
The models proposed by \citet{Kas2010} and \citet{Woos2010} also took into
account the acceleration effect and can also solve the problem associated
with the initial KE. However, these models neglected the leakage effect.}
Figure \ref{fig:etavsb} shows the conversion fraction $\eta$ versus magnetic
strength $B_{p}$. The correlation coefficient between these two quantities
is $R=0.715$, which means a larger $B_{p}$ results in a larger $\eta$.
\citet{Wang2016a} demonstrated that the conversion fraction $\eta$ is high if the
spin-down timescale of the magnetar is short compared to the diffusion
timescale. It is therefore expected that the stronger $B_{p}$, the higher
$\eta$ tends to be, although other factors, e.g., $M_{\text{ej}}$ and $P_{0}$,
will cause some scatter in this relation. If $B_{p}$ is increased further
to $\sim 10^{16}$ G, we can expect that most of the magnetar's rotational
energy will be converted to the ejecta's KE and the SNe will become dimmer. This
is precisely what is seen for broad-lined type Ic SNe (SNe Ic-BL) by
\citet{WangHan16}, who showed evidence that SNe Ic-BL 1998bw and 2002ap were
powered by magnetars \citep{Wang2016b}. This implies a continuous spectrum
of magnetar-powered SNe.
The inferred gamma-ray opacity $\kappa_{\gamma}$ is $\sim 0.01-0.82$
cm$^2$ g$^{-1}$. \citet{Kot2013} has demonstrated that the values of
$\kappa_{\gamma}$ must be between approximately 0.01 and 0.2 cm$^2$ g$^{-1}$
if the emission is dominated by gamma photons, and between approximately 0.2
and $10^{4}$ cm$^2$ g$^{-1}$ if the emission is dominated by X-ray photons.
In our sample, the values of $\kappa_{\gamma}$ of PTF12dam and SSS 120810
reach the theoretically predicted lower limit, $\sim $ 0.01 cm$^2$ g$^{-1}$
(see also \citealt{Chen2015} for PTF12dam),
suggesting that the energy of the gamma-ray radiation from the magnetars
associated with these SLSNe must be higher than $3\times 10^{7}$ eV = 30
MeV. Such high-energy photons from magnetars have already been observed
\citep{Hester08, Buhler14} and can be explained by theoretical modelings
\citep{Metzger14,Murase15,WangDai16}.
For SLSNe in Figure \ref{fig:fit2}, bolometric luminosity ($L$) and
temperature ($T$) evolution can be well fitted but the evolution of
photospheric velocity ($v_\text{ph}$) cannot be fitted. This is strange
since $L=4\pi \sigma T^4R_\text{ph}^2 =4\pi \sigma T^4\left(\int v_\text{ph}\,dt\right)^2$
($R_\text{ph}=\int v_\text{ph}\,dt$ is the photospheric radius). The reason for this might be
that the error bars of the observational data are too small.
Observationally, the velocities inferred from different elements are rather
different (e.g., see Figure 7 of \citealt{Tad2016}), so the error bars of
the velocity data might be larger than those presented in Figure
\ref{fig:fit2}. To clarify this issue, more dedicated studies are required.
Some of the SLSNe in Figure \ref{fig:fit3} show flattening (SN 2011ke, SN
2015bn, PS1-11ap, PS1-14bj) in the late-time temperature evolution. This
flattening may be caused by recombination, which is not considered in our
adopted model. The most extreme case is PS1-14bj, whose temperature
evolution is rather flat at the very beginning and even slightly increases
with time. Another peculiar case is DES14X3taz, whose temperature evolution
is atypical in the SN sample. We note that the SLSNe mentioned above
(PS1-14bj, SN 2015bn and DES14X3taz) are among the four SLSNe that are not
well fitted by our model. These peculiar SLSNe deserve further investigation.
The large reduced $\chi^2$ of SN 2013dg (among the four largest reduced
$\chi^2$), on the other hand, is caused by the small errors in the temperature
measurements, because its light-curve and velocity fitting quality is similar
to that of PTF11rks.
\section{Conclusions}
\label{sec:con}
Detailed studies of SLSNe in the last decade have revealed many important
observational properties and given some crucial clues to understanding the
energy-source models and the explosion mechanisms of SLSNe. In the last
several years, most SLSNe discovered have been type I, whose light curves and
spectra are diverse and complex \citep[e.g.,][]{Inse2017}.
Using the magnetar-powered model and the MCMC approach, we fit the light
curves, temperature evolution, and photospheric velocity evolution of 19
SLSNe I. We get rather good fits for 7 events ($\chi^2$/d.o.f = $0.24-0.96$)
and good fits for another 7 events ($\chi^2$/d.o.f = $1.37-3.13$), suggesting
that these SLSNe can be explained by the magnetar model. Four events cannot
be well fitted by this model ($\chi^2$/d.o.f = $4.32-6.83$), suggesting that
these four events must be further studied.
The parameters determined by the MCMC code are as follows. The values of the
initial periods of the magnetars supposed to power these SLSNe are $1.2-8.3$
ms; the values of the magnetic strength of the magnetars are $0.2-8.8\times
10^{14}$ G; the masses of the ejecta of these SLSNe are $1-27.6 M_{\odot}$;
and the gamma-ray opacities are $0.01-0.82$ cm$^2$ g$^{-1}$.
More importantly, we take into account the acceleration effect, let the
initial velocity of the ejecta be a free parameter, and find that the initial
KEs of most SLSNe in our sample are (significantly) smaller than the upper
limit ($\sim 2.5\times 10^{51}$ erg) of the KE provided by the
neutrino-driven mechanism for CCSNe, indicating that our modelings are
self-consistent and do not need any exotic assumption (e.g., two-step
injections from the magnetars) to explain the origin of the ejecta kinetic
energy of these SLSNe.
Our modeling shows that $\sim $ 19$-$97\% of the initial rotational energy
of the magnetars is converted to the KE of the SNe ejecta. This acceleration
effect is especially important in the SLSNe that require magnetars with
initial periods $P_{0}\sim 1-5$ ms, since the initial rotational energy of
these magnetars is $\sim 1-20\times 10^{51}$~erg and would convert $\sim
0.2-20\times 10^{51}$~erg to the KE of the SLSN ejecta.
By combining these two results, we demonstrate that the KE acquired from the
rotational energy dissipated via magnetic dipole radiation can naturally
provide a considerable amount of the KE for the SN ejecta, and the
difficulty of explaining the KE of the SLSNe, which is usually
(significantly) larger than $2\times 10^{51}$ erg, can be solved.
Understanding the nature of SLSNe is one of the most challenging questions
in astrophysics. Their explosion mechanisms and energy sources are still
ambiguous. Our results provide some new and important clues related to these
problems. To clarify these important issues, more observations and
theoretical work are needed.
\acknowledgments We thank the referee for very constructive suggestions
which have allowed us to improve this manuscript significantly. This work
was supported by the National Basic Research Program (\textquotedblleft
973\textquotedblright\ Program) of China (grant no. 2014CB845800) and the
National Natural Science Foundation of China (grants no. 11573014, U1331202,
11533033, 11422325, 11373022, and 11673006).
\section{Introduction}
\vspace{-0.3cm}
Binary black hole (BH) systems have attracted our attention since the early days of general relativity. The recent detection of gravitational waves \cite{LIGO} produced by binary BH mergers permits us to reconsider exact binary models to complement the vast amount of numerical results in the literature. However, from a technical point of view, it is quite complicated to take into account all the dynamical interactions in a binary setup, and for that reason the stationary scenario seems to be a good candidate for developing analytical results. In static charged systems, the Majumdar-Papapetrou metric \cite{Majumdar,Papapetrou,HH} describes the simplest model of two extreme BHs, which remain in neutral equilibrium due to the balance of their electric charges and masses according to the relation $Q_{i}=\pm M_{i}$, regardless of the separation distance between the sources. Moreover, in vacuum systems, the Kinnersley-Chitre (KCH) exact solution \cite{KCH} allows the description of rotating binary BHs, after appropriately solving the axis conditions \cite{MR,ICM2018}. In this type of binary vacuum system, the Kerr BHs are kept apart by a conical singularity \cite{BachW,Israel}, which can give us information on their gravitational attraction and spin-spin interaction.
In contrast, the treatment of unequal binary configurations of extreme Kerr-Newman (KN) BHs \cite{ENewman} has remained a rather involved problem, due mainly to the fact that the axis conditions are not enough to define KN BHs properly; therefore, it is necessary to impose an extra condition in order to eliminate both magnetic charges, since otherwise Dirac strings attached to the KN BHs will appear \cite{Tomi, Galtsov, ICM2020}. The main purpose of this paper is to derive a five-parametric exact solution that completely describes binary co- and counter-rotating extreme KN BHs separated by a massless strut in a unified manner. To accomplish this goal, we take into account the recent results of \cite{ICM2021}, where a complete derivation of the metric and thermodynamical properties for non-extreme KN BHs was achieved. Hence, the Ernst potentials and metric functions will be depicted in terms of physical Komar parameters \cite{Komar}: the masses $M_{i}$, the electric charges $Q_{i}$, and a coordinate distance $R$ as well. In this scheme, the five arbitrary parameters satisfy an algebraic equation that defines a dynamical law for interacting BHs with struts, which reduces to some previously studied cases \cite{ICM2018, ICM2015}. At the same time, the metric is concisely given in terms of Perjes' factor structure \cite{Perjes}. Since the physical limits in both rotating charged models are well identified, after turning our attention exclusively to the corotating binary BH setup, we derive quite simple formulas for the area of the horizon and the interaction force during the merger limit. In addition, a deformed metric for a near-horizon extreme binary KN BH is also given.
\vspace{-0.4cm}
\section{The charged Kinnersley-Chitre exact solution}
\vspace{-0.3cm}
Ernst's formalism \cite{Ernst} allows the description of Einstein-Maxwell equations in stationary axisymmetric spacetimes, in terms of a pair of complex functions $({\cal{E}}, \Phi)$ satisfying
\vspace{-0.1cm}
\bea \begin{split} \left({\rm{Re}} {\cal{E}}+|\Phi|^{2}\right)\Delta{\cal{E}}&=(\bnabla{\cal{E}}+
2\bar{\Phi}\bnabla \Phi)\cdot\bnabla {\cal{E}}, \\
\left({\rm{Re}}{\cal{E}}+|\Phi|^{2}\right)\Delta \Phi&=(\bnabla{\cal{E}}+
2\bar{\Phi}\bnabla\Phi)\cdot \bnabla\Phi. \label{ERNST} \end{split} \eea
\vspace{-0.1cm}
\noi where any exact solution of Eq.\ (\ref{ERNST}) can be derived via Sibgatullin's method (SM) \cite{Sibgatullin,MSO}, which is also useful to obtain the metric functions $f(\rho,z)$, $\omega(\rho,z)$ and $\gamma(\rho,z)$ of the line element \cite{Papapetrou}
\vspace{-0.1cm}
\be ds^{2}=f^{-1}\left[e^{2\gamma}(d\rho^{2}+dz^{2})+\rho^{2}d\varphi^{2}\right]- f(dt-\omega d\varphi)^{2}.
\label{Papapetrou}\ee
\vspace{-0.1cm}
Due to the fact that SM needs a particular form of the Ernst potentials on the upper part of the symmetry axis, let us begin with a more suitable physical representation, namely
\vspace{-0.1cm}
\begin{align}
{\cal E}(0,z)&=\frac{\mathfrak{e}_{1}}{\mathfrak{e}_{2}}, \qquad \Phi(0,z)=\frac{\mathcal{Q}z+\mathfrak{q}_{o}}{\mathfrak{e}_{2}}, \nonu \\
\mathfrak{e}_{1}&=z^{2}-[M + i(\mathfrak{q}+2J_{0})]z +P_{+}+i P_{1} -2iJ_{0}(d-i\mathfrak{q}), \qquad
\mathfrak{e}_{2}=z^{2} + (M -i\mathfrak{q})z + P_{-} + i P_{2}, \nonu\\
P_{\pm}&= \frac{M(2\Delta_{o}-R^{2})\pm 2\left[\mathfrak{q}{\rm s}_{1}-2(q_{o}Q+b_{o} B)\right]}{4M},\qquad
{\rm s}_{1}=P_{1}+P_{2}, \qquad d=M+\frac{P_{2}}{\mathfrak{q}},\qquad \mathcal{Q}=Q+iB, \nonu\\
\mathfrak{q}_{o}&=q_{o}+ib_{o}, \qquad \Delta_{o}= M^{2}-|\mathcal{Q}|^{2}-\mathfrak{q}^{2},
\label{ernstaxiselectro}\end{align}
\vspace{-0.1cm}
\noi where the aforementioned Ernst potentials Eq.\ (\ref{ernstaxiselectro}) are the extreme case of that one considered in Ref.\ \cite{ICM2021}, from which the first Simon's multipole moments \cite{Simon} can be explicitly calculated by means of the Hoenselaers-Perj\'es procedure \cite{HP,Sotiriou}. In this sense, $M$ plays the role of the total mass of the system and $Q+iB$ defines the total electromagnetic charge, while the total electric and magnetic dipole moment are $q_{o}-B(\mathfrak{q}+J_{0})$ and $b_{o}+Q(\mathfrak{q}+J_{0})$, respectively. Besides, $R$ represents a separation distance between both sources. At the same time, the NUT charge $J_{0}$ \cite{NUT} and total angular momentum of the system $J$ are given by
\vspace{-0.1cm}
\begin{align}
J_{0}&=\frac{N}{8M^{2}(\mathfrak{q}P_{-}+P_{2}d)} , \qquad J=M\mathfrak{q}-\frac{{\rm s}_{2}}{2}+(M+d)J_{0}, \nonu\\
N&=M^{2}\left\{4(P_{1}P_{2}+|\mathfrak{q}_{o}|^{2})-\Delta_{o}(R^{2}-\Delta_{o})\right\}
-\left[\mathfrak{q}{\rm s}_{1}-2(Q q_{o}+ B b_{o})\right]^{2}, \qquad
{\rm s}_{2}=P_{1}-P_{2},\label{Multipolarterms}\end{align}
\vspace{-0.1cm}
It is not difficult to show that once the SM \cite{Sibgatullin,MSO} has been applied to the axis data Eq.\ (\ref{ernstaxiselectro}), the Ernst potentials satisfying Eq.\ (\ref{ERNST}) take the final form
\vspace{-0.1cm}
\begin{align} {\cal{E}}&=\frac{\Lambda-2\Gamma}{\Lambda+2\Gamma},\qquad \Phi=\frac{2 \chi}{\Lambda+2\Gamma}, \nonu\\
\Lambda&=R^{2}\left[(R^{2}-\delta)(x^{2}-y^{2})^{2}+\delta(x^{4}-1)\right] +\Big\{|\mathfrak{p}|^{2}+(\mathfrak{q}+J_{0}) \mathfrak{r} -R^{2}(R^{2}-\delta)\Big\}(y^{4}-1)\nonu\\
&+2iR \Big\{xy\Big[\big[\mathfrak{r}+(\mathfrak{q}+J_{0})R^{2}\big](y^{2}-1)-(\mathfrak{q}+J_{0})R^{2}(x^{2}+y^{2}-2)\Big] -R{\rm S}_{1}\big(x^{2}+y^{2}-2x^{2}y^{2}\big) \Big\} \nonu\\
\Gamma&=(M+iJ_{0})\mathbb{P}_{1}-(\mathfrak{b}+i{\rm S}_{2})\mathbb{P}_{2}, \qquad
\chi= \mathcal{Q} \mathbb{P}_{1}+2\mathfrak{q}_{o} \mathbb{P}_{2},\qquad
\mathbb{P}_{1}=R^{3} x(x^{2}-1)-(R \bar{\mathfrak{p}} x-i \mathfrak{r} y)(y^{2}-1),\nonu\\
\mathbb{P}_{2}&= R^{2}y (x^{2}-1)-\big[\mathfrak{p}y-i(\mathfrak{q}+J_{0})Rx\big](y^{2}-1),\qquad
\mathfrak{p}=R^{2}-\delta+i{\rm S}_{1},\qquad \mathfrak{r}=2\mathfrak{a}-(\mathfrak{q}+J_{0})(R^{2}-2\delta)-2\mathfrak{b}J_{0}, \nonu\\
\mathfrak{a}&=M{\rm S}_{2}+2(b_{o}Q-q_{o}B), \qquad \delta=\Delta_{o}-2\mathfrak{q}J_{0}, \qquad
{\rm S}_{1}={\rm s}_{1}-2d J_{0}, \qquad
{\rm S}_{2}={\rm s}_{2}-2d J_{0}, \nonu\\
\mathfrak{b}&=\big[\mathfrak{q}{\rm s}_{1}-2(q_{o}Q+b_{o}B)\big]/M-2\mathfrak{q}J_{0},
\label{Ernstextreme}\end{align}
\vspace{-0.1cm}
\noi where $(x,y)$ are prolate spheroidal coordinates related to cylindrical coordinates $(\rho,z)$ by means of \vspace{-0.1cm}
\be x=\frac{r_{+}+r_{-}}{R}, \quad y=\frac{r_{+}-r_{-}}{R}, \quad r_{\pm}=\sqrt{\rho^{2} + (z \pm R/2)^{2}}. \label{prolates}\ee
\vspace{-0.1cm}
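The map Eq.\ (\ref{prolates}) can be inverted in the standard way for prolate spheroidal coordinates, $\rho=(R/2)\sqrt{(x^{2}-1)(1-y^{2})}$, $z=(R/2)xy$; since this inverse is not written out above, the following round-trip check (with arbitrary sample values of our own choosing) is only an illustrative sanity test of Eq.\ (\ref{prolates}):

```python
import math

# Round-trip check of the prolate spheroidal map of Eq. (prolates).
# The inverse relations rho = (R/2)*sqrt((x^2-1)(1-y^2)), z = (R/2)*x*y
# are the standard ones for prolate coordinates (assumed here, not taken
# from the text); feeding them back through r_pm must recover (x, y).
R = 2.0
x, y = 1.7, 0.3                                   # arbitrary sample point (x >= 1, |y| <= 1)
rho = (R / 2.0) * math.sqrt((x**2 - 1.0) * (1.0 - y**2))
z = (R / 2.0) * x * y
r_plus = math.sqrt(rho**2 + (z + R / 2.0) ** 2)   # r_+
r_minus = math.sqrt(rho**2 + (z - R / 2.0) ** 2)  # r_-
x_back = (r_plus + r_minus) / R
y_back = (r_plus - r_minus) / R
print(abs(x_back - x) < 1e-12, abs(y_back - y) < 1e-12)
```

The exactness of the round trip follows from $r_{\pm}=(R/2)(x\pm y)$, which holds for $x\geq 1$ and $|y|\leq 1$.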
Furthermore, the metric functions contained within the line element can be written down in a closed analytical form by using Perj\'es's factor structure \cite{Perjes}, thus getting\footnote{It is possible to locate the semi-infinite singularity along the upper or lower part of the axis depending on the values $C=0, \pm 1$ (see \cite{MR} and references therein).}
\vspace{-0.1cm}
\begin{align} f&=\frac{\mathcal{D}}{\mathcal{N}},\qquad \omega=2J_{0}(y+C)+\frac{R(y^{2}-1) \big[(x^{2}-1)\Sigma\Pi-\Theta {\rm T}\big]}{2\mathcal{D}}, \qquad
e^{2\gamma}=\frac{\mathcal{D}}{R^{8}(x^{2}-y^{2})^{4}}, \nonu\\
\mathcal{N}&= \mathcal{D}+ \Theta \Pi-(1-y^{2})\Sigma {\rm T}, \quad
\mathcal{D}= \Theta^{2}+(x^{2}-1)(y^{2}-1)\Sigma^{2}, \nonu\\
\Theta&=R^{2}\Big[(R^{2}-\delta)(x^{2}-y^{2})^{2}+\delta(x^{2}-1)^{2}\Big]
+\Big[|\mathfrak{p}|^{2}+(\mathfrak{q}+J_{0}) \mathfrak{r}
-R^{2}(R^{2}-\delta) \Big](y^{2}-1)^{2}, \nonu\\
\Sigma&=2R\Big((\mathfrak{q}+J_{0})R^{2}x^{2}-\mathfrak{r}y^{2} -2R{\rm S}_{1}xy\Big), \nonu\\
\Pi&= 4Rx\bigg\{MR^{2}(x^{2}-y^{2})+\big[M\delta+(\mathfrak{q}+2J_{0}){\rm S}_{2}+2J_{0}{\rm S}_{1}\big]
(1+y^{2})+(2M^{2}+2J_{0}^{2}-|\mathcal{Q}|^{2})Rx -2{\rm S}_{1}\big[\mathfrak{q}y+2J_{0}(1+y)\big]\bigg\}\nonu\\
&\hspace{-0.5cm} -4y\bigg\{ \mathfrak{b}\Big(R^{2}(x^{2}-y^{2})+\delta(1+y^{2})+2MRx-2\mathfrak{b}y\Big) +{\rm S}_{1}{\rm S}_{2}
(1+y^{2}) -2\big({\rm S}_{2}^{2}-2|\mathfrak{q}_{o}|^{2}\big)y +J_{0}
\big[\mathfrak{r}(1-y^{2})+2(\mathfrak{q}+J_{0})R^{2}x^{2}\big] \bigg\}, \nonu
\end{align}
\vspace{-0.1cm}
\begin{align}
{\rm T}&=\frac{2}{R} \bigg\{2R^{2}\Big(\big[M{\rm S}_{1}-(\mathfrak{q}+J_{0})\mathfrak{b}+J_{0}(R^{2}-\delta) \big]y
-{\rm S}_{2}(Rx+M) -\mathfrak{a}+2\mathfrak{b}J_{0}\Big)(1-x^{2})\nonu\\
&+\bigg(2\big[\mathfrak{b}{\rm S}_{1}+(R^{2}-\delta){\rm S}_{2}+M \mathfrak{r}\big](Rx+M) + 2J_{0}\Big[{\rm S}_{1}{\rm S}_{2}-\mathfrak{b}(2R^{2}-\delta)- \big[|\mathfrak{p}|^{2}+(\mathfrak{q}+J_{0})\mathfrak{r}\big]y\Big]+2\mathfrak{a} R^{2} -(M^{2}-\mathfrak{q}^{2})\mathfrak{r} \nonu\\
&+(\mathfrak{q}+J_{0})\Big[\delta R^{2}+2J_{0}\mathfrak{r}-4|\mathfrak{q}_{o}|^{2}\Big]\bigg)
(1-y^{2})\bigg\}.\label{extreme}\end{align}
\vspace{-0.1cm}
The above metric is the electromagnetically charged version of the KCH exact solution \cite{KCH,MR}. It contains nine parameters defined by the set $\{M,R,\mathfrak{q},P_{1},P_{2},Q,B,q_{o},b_{o}\}$. In the absence of the electromagnetic field ($\mathcal{Q}$ and $\mathfrak{q}_{o}$ set to zero), the KCH exact solution \cite{KCH, MR} emerges straightforwardly after the simple redefinitions \cite{ICM2018}
\vspace{-0.1cm}
\begin{align}
M&=\frac{2\kappa(p\mathrm{P}-p\mathrm{Q}\alpha+q\mathrm{P}\beta)}{p^{2}+\alpha^{2}-\beta^{2}},\qquad
J_{0}=-\frac{2\kappa(p\mathrm{Q}+p\mathrm{P}\alpha+q\mathrm{Q}\beta)}{p^{2}+\alpha^{2}-\beta^{2}},\nonu\\
P_{1}&=\frac{2\kappa^{2}[(1-q\mathrm{Q})\alpha-p\mathrm{P}\beta]}{p^{2}+\alpha^{2}-\beta^{2}}+
2dJ_{0}, \qquad
P_{2}= \frac{2\kappa^{2}[(1+q\mathrm{Q})\alpha+p\mathrm{P}\beta]}{p^{2}+\alpha^{2}-\beta^{2}}, \qquad e^{-i\gamma_{o}}=\mathrm{P}-i \mathrm{Q}, \nonu \\
\mathfrak{q}&=\frac{2\kappa[p(q+\mathrm{Q})+ p \mathrm{P} \alpha +(1+q\mathrm{Q})\beta]}{p^{2}+\alpha^{2}-\beta^{2}}, \qquad R=2\kappa, \label{relations1}\end{align}
\vspace{-0.1cm}
\noi where $p^{2}+q^{2}=1$ and $|e^{-i\gamma_{o}}|=1$. Taking into account Bonnor's description \cite{Bonnor}, the above metric is not asymptotically flat due to the presence of the NUT charge, which, for $C=-1$, represents a semi-infinite singular source located along the lower part of the symmetry axis at $y=-1$, thus providing additional rotation to the binary system. This last point can be better understood by analyzing the asymptotic behavior of the metric functions; i.e., $f \rightarrow 1$, $\gamma \rightarrow 0$, and $\omega \rightarrow 2J_{0}(y+C)$ as $x \rightarrow \infty$. In this regard, the condition $J_{0}=0$ is enough to ensure an asymptotically flat spacetime in Eq.\ (\ref{extreme}). Such a task can be accomplished by means of
\vspace{-0.1cm}
\begin{align}
&\hspace{-0.3cm} M^{2}\left[4(P_{1}P_{2}+|\mathfrak{q}_{o}|^{2})-\Delta(R^{2}-\Delta)\right]
-\left(\mathfrak{q}{\rm s}_{1}-2Q q_{o}\right)^{2}=0, \nonu\\
\Delta&=M^{2}-Q^{2}-\mathfrak{q}^{2}, \label{noNUT}\end{align}
\vspace{-0.1cm}
\noi where we have also imposed the requirement $B=0$ in order to describe extreme KN binary BHs afterward. In addition, the condition $\omega(x=1,y=2z/R)=0$, which disconnects the region between the sources, reduces to
\vspace{-0.2cm}
\begin{align}
&2(R+M)\bigg\{\Big[\mathfrak{q}{\rm s}_{1}-2q_{o}Q\Big]{\rm s}_{1}+M(R^{2}-\Delta){\rm s}_{2} -\mathfrak{q}M^{2}R^{2}\bigg\}+2MP_{0}\Big[M{\rm s}_{2}+2b_{o}Q+\mathfrak{q} \Delta \Big] +M\mathfrak{q} (Q^{2}R^{2}-4|\mathfrak{q}_{o}|^{2})=0, \nonu\\
P_{0}&=(R+M)^{2}+\mathfrak{q}^{2}.\label{disconnect}\end{align}
\vspace{-0.1cm}
If one is able to obtain an analytical solution of Eqs.\ (\ref{noNUT}) and (\ref{disconnect}), it will be possible to derive a binary model of rotating dyonic extreme BHs kept apart by a massless strut, where the sources carry magnetic charges of equal magnitude but opposite sign. Consequently, there exists a Dirac string attached to the BHs unless the magnetic charges are removed from the solution \cite{Tomi, Galtsov, ICM2020}. In this case, the absence of individual magnetic charges is accomplished by imposing a condition on the real part of the potential $\Phi$ \cite{Tomi}, that is
\vspace{-0.1cm}
\be \lim_{\lambda \rightarrow 0} \Big[{\rm Re}\Phi\big(x=1+\lambda,y=1\big)-{\rm Re}\Phi\big(x=1,y=1-\lambda\big)\Big]=0, \label{nomagneticcharge}\ee
\vspace{-0.1cm}
\noi and this condition need only be imposed on the upper BH, since the lower one contains the same magnetic charge with opposite sign, as mentioned before. A straightforward calculation yields the following expression
\vspace{-0.1cm}
\begin{align}
&\mathfrak{q}Q\bigg\{ \Big[ MP_{0}-(R+M)(2M(R+M)-\mathfrak{q}^{2})\Big]{\rm s}_{1}^{2} -4M^{2}
(R+M)P_{1}P_{2}\bigg\} +M^{2}\Big[2\mathfrak{q}P_{0}b_{o}-Q\big(P_{0}-2\mathfrak{q}^{2}\big)
(R^{2}-\Delta) \Big]{\rm s}_{2} \nonu\\
&-2q_{o}\Big[ M(\Delta+M R)P_{0}+2\mathfrak{q}^{2}(R+2M)Q^{2}\Big]{\rm s}_{1} -4\mathfrak{q}q_{o}^{2}Q\left[ MP_{0}-(R+M)Q^{2}\right] \nonu\\
&-M^{2}(R+M)(R^{2}-\Delta) \left[ 2P_{0}b_{o}+\mathfrak{q} Q(R^{2}-\Delta) \right]=0, \label{noBcharge}
\end{align}
\vspace{-0.1cm}
\noi where this algebraic equation, which eliminates both magnetic charges, was recently obtained in Ref.\ \cite{ICM2021} for the case of non-extreme sources. The most general exact solution cannot be derived directly from these entwined equations without first solving a highly complicated fourth-degree algebraic equation; consequently, one must circumvent this technical issue by adopting a different point of view. As we shall see next, we derive the algebraic values of the set $\{q_{o},b_{o},P_{1},P_{2}\}$ that define extreme KN binary BHs held apart by a massless strut in a physical representation.
\vspace{-0.4cm}
\section{Extreme KN binary BHs}
\vspace{-0.3cm}
Let us begin this section by recalling the variables $\{q_{o},b_{o},P_{1},P_{2}\}$, earlier derived in Ref.\ \cite{ICM2021} in a physical representation; they are given by
\vspace{-0.1cm}
\begin{align}
q_{o}&=\frac{2\mathfrak{q} (Q_{1}J_{2}-Q_{2}J_{1})}{P_{0}}+\frac{Q_{1}}{2} (R-2M_{2})-\frac{Q_{2}}{2} (R-2M_{1}),\nonu\\
b_{o}&=\Bigg[(Q_{1}C_{2}-Q_{2}C_{1})\bigg(\frac{J_{1}}{\mathcal{P}_{1}}-\frac{J_{2}}{\mathcal{P}_{2}}\bigg)
-\frac{\mathfrak{q}}{P_{0}}\Bigg(Q - \frac{Q_{1}H_{1+}\big[Q_{1}(Q_{1}-Q_{2})P_{0}-2(R+M)H_{1-}\big]}{\mathcal{P}_{1}} \nonu\\
&-\frac{Q_{2}H_{2+}\big[Q_{2}(Q_{2}-Q_{1})P_{0}-2(R+M)H_{2-}\big]}{\mathcal{P}_{2}} \Bigg) \Bigg] \frac{(R^{2}-\Delta)}{2}, \nonu\\
P_{1,2}&=\frac{(2H_{2}A_{2}-RP_{0}\mathcal{P}_{2})J_{1}-(2H_{1}A_{1}-RP_{0}\mathcal{P}_{1})J_{2}}
{2P_{0}\mathcal{P}_{0}}\pm (M\mathfrak{q}-J),
\label{simplePxandQx}\end{align}
\vspace{-0.1cm}
\noi with the following elements:
\vspace{-0.1cm}
\begin{align}
A_{i}&=\mathcal{P}_{i}-(R^{2}-\Delta)H_{i+}P_{0}, \qquad H_{i}=M_{i}P_{0}-Q_{i}Q(R+M),\qquad
\mathcal{P}_{i}=H_{i-}C_{i}-(-1)^{i}(M_{1}-M_{2})Q^{2}_{i}P_{0}^{2},\nonu\\
\mathcal{P}_{0}&=M\mathcal{P}_{1}-(R^{2}-\Delta)H_{1+}H_{1-}\equiv M\mathcal{P}_{2}-(R^{2}-\Delta)H_{2+}H_{2-}, \qquad
C_{i}=P_{0}^{2}-2M_{i}(R+M)P_{0}+2\mathfrak{q}^{2} Q_{1}Q_{2}, \nonu\\
H_{i\pm}&=M_{i}P_{0}\pm Q_{1}Q_{2}(R+M), \qquad i=1,2.
\label{elements}\end{align}
\vspace{-0.1cm}
Also, in the binary setup, the BH horizons $\sigma_{i}$ are expressed as \cite{ICM2021}
\vspace{-0.1cm}
\begin{align}
\sigma_{i}&=\sqrt{D_{i}-J_{i}\left( \frac{J_{i}G_{i}-2\mathfrak{q}A_{i}B_{i}}{P_{0}^{2}\mathcal{P}_{i}^{2}}\right)},\nonu\\
D_{i}&=M_{i}^{2}- Q_{i}^{2}F_{i}-2(-1)^{i}Q_{i}F_{0},\qquad
G_{i}=\left[2(R+M)\mathcal{P}_{i}+P_{0}(R^{2}-\Delta)C_{i}\right]^{2}-4P_{0}\mathcal{P}_{1}\mathcal{P}_{2},\nonu\\
F_{0}&=\frac{M_{2}Q_{1}-M_{1}Q_{2}}{R+M}\left( 1-\frac{\mathfrak{q}^{2}}{P_{0}^{2}}\right),\qquad F_{i}=1-\frac{Q_{i}^{2}\mathfrak{q}^{2}}{P_{0}^{2}}
\left(1-\frac{A_{i}^{2}}{\mathcal{P}_{i}^{2}}\right)+\frac{Q^{2}\mathfrak{q}^{2}}{P_{0}^{2}},\qquad
B_{i}=Q_{i}^{2}P_{0}(R^{2}-\Delta)C_{i}-2H_{i}\mathcal{P}_{i}, \nonu\\
i&=1,2,\label{sigmas}\end{align}
\vspace{-0.1cm}
\noi where the set of parameters $\{M_{1},M_{2},Q_{1},Q_{2},J_{1},J_{2}\}$ comprises the physical Komar parameters \cite{Komar} of each source. It should be pointed out that the total mass, total electric charge, and total angular momentum are $M=M_{1}+M_{2}$, $Q=Q_{1}+Q_{2}$, and $J=J_{1}+J_{2}$, respectively. A peculiar characteristic of the binary system is that the seven physical parameters satisfy a dynamical law for interacting KN sources (BHs $\sigma_{i}^{2}\geq 0$ or naked singularities $\sigma_{i}^{2}< 0$) with struts, defined by
\vspace{-0.1cm}
\be \mathfrak{q}\mathcal{P}_{0}-J_{1}\mathcal{P}_{2}-J_{2}\mathcal{P}_{1}=0. \label{condition1}\ee
\vspace{-0.1cm}
In this regard, the extreme limit solution is achieved by setting $\sigma_{1}=\sigma_{2}=0$ in Eq.\ (\ref{sigmas}), a condition that enables one to express the angular momentum of each source in terms of the remaining parameters, thus getting
\vspace{-0.1cm}
\begin{align}
J_{1}&= \frac{\mathfrak{q} A_{0}B_{1}+ \varepsilon_{1}P_{0}\mathcal{P}_{1}\sqrt{P_{0}(R^{2}-\Delta)E_{0}d_{1}}}{G_{0}}, \quad \varepsilon_{1}=\pm 1, \nonu\\
J_{2}&= \frac{\mathfrak{q} A_{0}B_{2}+ \varepsilon_{2}P_{0}\mathcal{P}_{2}\sqrt{P_{0}(R^{2}-\Delta)E_{0}d_{2}}}{G_{0}}, \quad \varepsilon_{2}=\pm 1, \nonu\\
E_{0}&=4R\mathcal{P}_{1}+(R^{2}-\Delta)\Big[(P_{0}+2Q_{1}Q_{2})C_{1}
-2(R-\delta_{2})P_{0}H_{1+}\Big]\nonu\\
&\equiv 4R\mathcal{P}_{2}
+(R^{2}-\Delta)\Big[(P_{0}+2Q_{1}Q_{2})C_{2}-2(R+\delta_{2})P_{0}H_{2+}\Big],\nonu\\
d_{i}&=P_{0}^{2}D_{i}+\Bigg(\frac{\mathfrak{q} Q_{i}^{2}A_{0}}{\mathcal{P}_{i}}\Bigg)^{2}, \qquad \delta_{2}=M_{1}-M_{2},
\label{angularmomentum}\end{align}
\vspace{-0.1cm}
\noi where we have used the symmetric character of $G_{1}\equiv G_{2}=G_{0}$ and $A_{1}\equiv A_{2}=A_{0}$. On the one hand, the case $\varepsilon_{1}=\varepsilon_{2}=\pm 1$ makes it possible to study co-rotating KN BHs, while on the other hand, the choice $\varepsilon_{1}=-\varepsilon_{2}=\pm 1$ permits the description of the corresponding counter-rotating scenario. The substitution of Eq.\ (\ref{angularmomentum}) into Eq.\ (\ref{condition1}) leads us to the simple formula
\vspace{-0.1cm}
\begin{align} &P_{0}(R^{2}-\Delta)E_{0} \Big(\sqrt{d_{2}}+ \epsilon \sqrt{d_{1}}\Big)^{2}
=\mathfrak{q}^{2}(E_{0}-2RA_{0})^{2},\quad
\epsilon=\pm 1, \label{cocounterrotating}\end{align}
\vspace{-0.1cm}
\noi where the sign $+/-$ defines co-/counter-rotating KN binary BHs. In order to illustrate how this dynamical law might be used to describe various scenarios between two interacting BHs, let us first explore the case of a binary system of unequal counter-rotating KN BHs \cite{ICM2015}, which arises immediately when $\epsilon=-1$ and $\mathfrak{q}=0$; in this case, Eq.\ (\ref{cocounterrotating}) is evidently satisfied under the condition
\vspace{-0.1cm}
\begin{align}
&\sigma_{1E}^{2}=\sigma_{2E}^{2}, \qquad
\sigma_{iE}=\sqrt{M_{i}^{2}- Q_{i}^{2}-2(-1)^{i}Q_{i} \frac{M_{2}Q_{1}-M_{1}Q_{2}}{R+M}},\quad i=1,2
\label{conditioncounter}\end{align}
\vspace{-0.1cm}
\noi where $\sigma_{iE}$ represents the BH horizons in electrostatic spacetimes \cite{VCH}. In such a case, both angular momenta displayed in Eq.\ (\ref{angularmomentum}) reduce to
\vspace{-0.1cm}
\begin{align}
&J_{i}=\varepsilon_{i}\frac{\sigma_{iE}P_{1i}(R+M)}{\sqrt{P_{00}(R^{2}-\delta_{0})}}, \quad i=1,2, \quad \varepsilon_{1}=-\varepsilon_{2}=\pm 1,\nonu\\
P_{11}&=M_{1}\big[(R+M_{2})^{2}-M_{1}^{2}\big]-Q_{1}\big[Q_{2}R-\delta_{2}Q\big],\nonu\\
P_{00}&=\big[(R+M_{1})^{2}-M_{2}^{2}\big]\big[(R+M_{2})^{2}-M_{1}^{2}\big]
+(\delta_{1}R+\delta_{2}Q)^{2}, \qquad
P_{12}=P_{11(1\leftrightarrow2)}, \nonu\\
\delta_{0}&=M^{2}-Q^{2}, \quad \delta_{1}=Q_{1}-Q_{2},
\label{momentacounter}\end{align}
\vspace{-0.1cm}
\noi and substituting these formulas into Eq.\ (\ref{simplePxandQx}) directly yields the simple results
\vspace{-0.1cm}
\begin{align}
q_{o}&=\frac{Q_{1}}{2} (R-2M_{2})-\frac{Q_{2}}{2}(R-2M_{1}),\qquad
b_{o}=\frac{Q_{2}P_{2}-Q_{1}P_{1}}{R+M},\nonu\\
P_{1}&=-\frac{J_{1}(R+\delta_{2})(R^{2}-\delta_{0})}{P_{11}}, \qquad P_{2}=\frac{J_{2}(R-\delta_{2})(R^{2}-\delta_{0})}{P_{12}}. \label{simpleqoboPi}\end{align}
\vspace{-0.1cm}
It is then not difficult to show that Eqs.\ (\ref{noNUT}), (\ref{disconnect}), and (\ref{noBcharge}) are identically satisfied by the results given above in Eqs.\ (\ref{conditioncounter})-(\ref{simpleqoboPi}) when $\mathfrak{q}=0$ is considered.\footnote{Ref.\ \cite{ICM2015} contemplates a relation between the seven physical parameters that can be obtained from Eq.\ (\ref{condition1}) when $\mathfrak{q}=0$. It can be expressed in the very simple form $J_{1}P_{12}+J_{2}P_{11}=0$. Notice that this condition is satisfied once Eqs.\ (\ref{conditioncounter}) and (\ref{momentacounter}) are substituted into it.} On the other hand, the vacuum solution studied earlier in Ref.\ \cite{ICM2018} is the second trivial scenario; it appears in the absence of electric charges ($Q_{1}=Q_{2}=0$), where after choosing $\epsilon=+1$, Eq.\ (\ref{cocounterrotating}) provides us with a bicubic equation for co-rotating Kerr BHs, given by
\vspace{-0.1cm}
\begin{align}
&\Delta_{1} (4M_{1}M_{2}\mathfrak{q}^{2}-p_{1}p_{2})-4M_{1}M_{2}\mathfrak{q}^{2}R^{2}=0, \nonu\\
p_{1}&=(R+M_{1})^{2}-M_{2}^{2}+\mathfrak{q}^{2}, \qquad p_{2}=(R+M_{2})^{2}-M_{1}^{2}+\mathfrak{q}^{2}, \quad
\Delta_{1}=M^{2}-\mathfrak{q}^{2}.
\label{corotatingvacuum}\end{align}
\vspace{-0.1cm}
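Since Eq.\ (\ref{corotatingvacuum}) is cubic in $\mathfrak{q}^{2}$, for given masses and separation it can be solved numerically. As an illustrative sketch (the sample values $M_{1}=M_{2}=0.5$, $R=1$ and the bracket $[1,2]$ are choices of ours, verified by inspection of the sign change, and are not taken from the text), a plain bisection in $u=\mathfrak{q}^{2}$ reads:

```python
# Numerical root of the co-rotating bicubic condition Eq. (corotatingvacuum),
# solved by bisection in u = q^2, where q denotes the parameter "frak q".
# Sample values M1 = M2 = 0.5, R = 1 and the bracket are assumed for illustration.
def f(u, M1, M2, R):
    M = M1 + M2
    p1 = (R + M1) ** 2 - M2 ** 2 + u
    p2 = (R + M2) ** 2 - M1 ** 2 + u
    Delta1 = M ** 2 - u
    return Delta1 * (4 * M1 * M2 * u - p1 * p2) - 4 * M1 * M2 * u * R ** 2

M1 = M2 = 0.5
R = 1.0
lo, hi = 1.0, 2.0          # bracket found by inspection: f(lo) < 0 < f(hi)
for _ in range(80):        # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo, M1, M2, R) * f(mid, M1, M2, R) <= 0.0:
        hi = mid
    else:
        lo = mid
q = (0.5 * (lo + hi)) ** 0.5
print(q > (M1 + M2), abs(f(q * q, M1, M2, R)) < 1e-10)
```

For these sample values the root lies at $u>M^{2}$, i.e.\ $\Delta_{1}=M^{2}-\mathfrak{q}^{2}<0$; this is only an observation about the chosen numbers, not a general claim.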
A trivial combination of Eqs.\ (\ref{simplePxandQx}), (\ref{angularmomentum}), and (\ref{corotatingvacuum}) leads us to the following expressions in the co-rotating case
\vspace{-0.1cm}
\begin{align}
P_{1,2}&= \frac{(\Delta_{1}+MR)(M_{2}-M_{1})}{2\mathfrak{q}}\pm (M\mathfrak{q}-J), \nonu\\
J_{1}&=\frac{M_{1}^{2}\mathfrak{q}P_{0}}{M p_{1}}, \qquad J_{2}=\frac{M_{2}^{2}\mathfrak{q}P_{0}}{M p_{2}}.
\label{angularmomentumvacuumfinal}\end{align}
\vspace{-0.1cm}
Meanwhile, if $\epsilon=-1$, it is possible to derive another bicubic algebraic equation concerning counter-rotating Kerr BHs, which now takes the form
\vspace{-0.1cm}
\be \Delta_{1} (4M_{1}M_{2}\mathfrak{q}^{2}-p_{1}p_{2})+4M_{1}M_{2}R^{2}(R+M)^{2}=0, \label{counterrotatingvacuum}\ee
\vspace{-0.1cm}
\noi where this binary configuration is entirely described by
\vspace{-0.1cm}
\begin{align}
P_{1,2}&= \frac{M\mathfrak{q}(R^{2}+MR-\Delta_{1})}{2(M_{2}-M_{1})(R+M)} \pm (M\mathfrak{q}-J), \nonu\\
J_{1}&=\frac{M_{1}^{2}\mathfrak{q}}{M_{2}-M_{1}}\Bigg( 1-\frac{2M_{2}(R+M)p_{2}}{4M_{1}M_{2}P_{0}-p_{1}p_{2}}\Bigg), \qquad
J_{2}=-\frac{M_{2}^{2}\mathfrak{q}}{M_{2}-M_{1}}\Bigg( 1-\frac{2M_{1}(R+M)p_{1}}{4M_{1}M_{2}P_{0}-p_{1}p_{2}}\Bigg).
\label{angularmomentumvacuumfinal2}\end{align}
\vspace{-0.1cm}
Turning now to the most general case, after non-trivial algebraic manipulations of Eqs.\ (\ref{simplePxandQx}), (\ref{angularmomentum}), and (\ref{cocounterrotating}), we eventually obtain the algebraic set $\{q_{o},b_{o},P_{1},P_{2}\}$ that gives us a complete description of extreme KN binary BHs in a physical representation; its elements read
\vspace{-0.1cm}
\begin{widetext}
\begin{align}
q_{o}&=\Bigg( \frac{Q_{1}C_{2}-Q_{2}C_{1}}{2}+M\Big[(M_{1}Q_{2}-M_{2}Q_{1})P_{0}
-Q_{1}Q_{2}(Q_{1}-Q_{2})(R+M)\Big]\Bigg)\frac{RP_{0}(R^{2}-\Delta)}{E_{0}-2RA_{0}},\nonu\\
b_{o}&=\Bigg[ \frac{Q_{1}}{\mathfrak{q}} \Bigg( RP_{0}\bigg( \frac{2H_{2+}(R^{2}-\Delta)+R(C_{2}+Q_{2}QP_{0})}{E_{0}-2RA_{0}}\bigg)-1\Bigg)
+\frac{Q_{2}}{\mathfrak{q}} \Bigg( RP_{0}\bigg( \frac{2H_{1+}(R^{2}-\Delta)+R(C_{1}+Q_{1}QP_{0})}{E_{0}-2RA_{0}}\bigg)-1\Bigg)\Bigg]\nonu\\
&\times \frac{(R^{2}-\Delta)}{2}, \nonu\\
P_{1,2}&=\frac{RP_{0}(d_{2}-d_{1})(R^{2}-\Delta)}{2\mathfrak{q}(E_{0}-2RA_{0})}\pm (M\mathfrak{q}-J),
\label{simplePxandQx2}\end{align}
\vspace{-0.1cm}
\noi while the total angular momentum can be expressed as follows
\vspace{-0.1cm}
\begin{align}
J&=M\mathfrak{q}-\Bigg(\frac{R+M}{\mathfrak{q}}-RP_{0}^{2}\frac{P_{0}(R^{2}-\Delta+MR)-2Q_{1}Q_{2}(R+M)}
{\mathfrak{q}(E_{0}-2RA_{0})}\Bigg)
\frac{(R^{2}-\Delta)}{2}.
\label{totalmomenta}\end{align}
\end{widetext}
\vspace{-0.1cm}
Then a whole description of co- and counter-rotating electrically charged BHs is achieved once the parameters contained in Eqs.\ (\ref{simplePxandQx2}) and (\ref{totalmomenta}) are inserted into the asymptotically flat exact solution, which is obtainable from Eqs.\ (\ref{Ernstextreme}) and (\ref{extreme}) by simply setting $J_{0}=0$ and $B=0$, namely
\vspace{-0.1cm}
\begin{align} {\cal{E}}&=\frac{\Lambda-2\Gamma}{\Lambda+2\Gamma},\qquad \Phi=\frac{2 \chi}{\Lambda+2\Gamma}, \qquad f=\frac{\mathcal{D}}{\mathcal{N}},\qquad
\omega=\frac{R(y^{2}-1)(x-1)\big[(x+1)\Sigma\Pi-\Theta {\rm T}\big]}{2\mathcal{D}},\qquad e^{2\gamma}=\frac{\mathcal{D}}{R^{8}(x^{2}-y^{2})^{4}}, \nonu\\
\Lambda&=R^{2}\left[(R^{2}-\Delta)(x^{2}-y^{2})^{2}+\Delta(x^{4}-1)\right] +\Big\{|\mathfrak{p}|^{2}+\mathfrak{q} \mathfrak{r}-R^{2}(R^{2}-\Delta)\Big\}(y^{4}-1) \nonu\\
&+2iR \Big\{xy\Big[\big(\mathfrak{r}+\mathfrak{q}R^{2}\big)(y^{2}-1)-\mathfrak{q}R^{2}(x^{2}+y^{2}-2)\Big] -R{\rm s}_{1}\big(x^{2}+y^{2}-2x^{2}y^{2}\big) \Big\}, \nonu\\
\Gamma&=M\mathbb{P}_{1}-\varepsilon\mathbb{P}_{2}, \qquad
\chi= Q \mathbb{P}_{1}+2\mathfrak{q}_{o} \mathbb{P}_{2}, \qquad
\mathbb{P}_{1}=R^{3} x(x^{2}-1)-(R \bar{\mathfrak{p}} x-i \mathfrak{r} y)(y^{2}-1), \nonu\\
\mathbb{P}_{2}&= R^{2}y (x^{2}-1)-\big[\mathfrak{p}y-i\mathfrak{q}Rx\big](y^{2}-1),\qquad
\mathcal{N}= \mathcal{D}+ \Theta \Pi-(1-y^{2})\Sigma {\rm T}, \qquad \mathcal{D}= \Theta^{2}+(x^{2}-1)(y^{2}-1)\Sigma^{2},\nonu\\
\Theta&=R^{2}\Big[(R^{2}-\Delta)(x^{2}-y^{2})^{2}+\Delta(x^{2}-1)^{2}\Big] + \Big[|\mathfrak{p}|^{2}+\mathfrak{q} \mathfrak{r}
-R^{2}(R^{2}-\Delta) \Big](y^{2}-1)^{2}, \nonu\\
\Sigma&=2R\Big(\mathfrak{q}R^{2}x^{2}-\mathfrak{r}y^{2} -2R{\rm s}_{1}xy\Big),\nonu\\
\Pi&= 4Rx\bigg\{MR^{2}(x^{2}-y^{2})
+(M\Delta+\mathfrak{q}{\rm s}_{2})(1+y^{2})+(2M^{2}-Q^{2})Rx -2\mathfrak{q}{\rm s}_{1}y\bigg\} \nonu\\
&-4y\bigg\{ \mathfrak{b}_{o}\Big(R^{2}(x^{2}-y^{2})+\Delta(1+y^{2})+2MRx-2\mathfrak{b}_{o}y\Big)
+{\rm s}_{1}{\rm s}_{2}(1+y^{2}) -2\big({\rm s}_{2}^{2}-2|\mathfrak{q}_{o}|^{2}\big)y \bigg\}, \nonu\\
{\rm T}&=4\Big\{R\big[\mathfrak{a}_{o}+{\rm s}(Rx+M)+(\mathfrak{q}\mathfrak{b}_{o}
-M{\rm s}_{1})y\big](1+x)
+\big[M\mathfrak{r}+\mathfrak{b}_{o}{\rm{s}}_{1}+(R^{2}-\Delta_{o}){\rm{s}}_{2}\big](1-y^{2})\Big\},\nonu\\
\mathfrak{p}&=R^{2}-\Delta+i{\rm s}_{1},\quad \mathfrak{r}=2\mathfrak{a}_{o}-\mathfrak{q}(R^{2}-2\Delta), \quad \varepsilon=\mathfrak{b}_{o}+i{\rm{s}}_{2}, \quad
\mathfrak{a}_{o}=M{\rm s}_{2}+2b_{o}Q, \quad
\mathfrak{b}_{o}=\big(\mathfrak{q}{\rm s}_{1}-2q_{o}Q\big)/M,
\label{extremeflat}\end{align}
\vspace{-0.1cm}
\noi where a physical representation in terms of the parameters $\{M_{1},M_{2},Q_{1},Q_{2},R\}$ is achieved through the parameter $\mathfrak{q}$ once Eq.\ (\ref{cocounterrotating}) is taken into account. Finally, to complete the full solution, we also display the Kinnersley potential \cite{Kinnersley}
\vspace{-0.1cm}
\begin{align} \Phi_{2}&=\frac{(4\mathfrak{q}+iRxy)\chi-i \mathcal{I}}{\Lambda+2\Gamma}, \nonu\\
\mathcal{I}&=R^{2}\bigg\{ 2\mathfrak{q}_{o}\Big[Rx-4i\mathfrak{q}y-M(1-y^{2})\Big]-
Q\Big[\bar{\varepsilon}(1+y^{2}) +\big(2MRx-4\mathfrak{b}_{o}y-2\bar{p}+R^{2} \big)y +2i\mathfrak{q}Rx\Big]\bigg\}(x^{2}-1)\nonu\\
&+\bigg\{2\mathfrak{q}_{o} \Big[\left(Mp-i\mathfrak{q}\bar{\varepsilon}\right)(1+y^{2})-Rx \big(4MRx-2\bar{\varepsilon}y+\bar{p}+4\Delta\big)
+ i(2\mathfrak{q}R^{2}-r)y\Big]\nonu \\
&-Q\Big[(\bar{\varepsilon}\bar{p}-iMr)(1+y^{2})-Rx\big[2\varepsilon R x+ i(2r-\mathfrak{q}R^{2}) \big]
-\Big(R^{2}(\bar{p}+2R^{2}-2\Delta)-2(|p|^{2}+\mathfrak{q}r)\Big)y\Big] \bigg\}(y^{2}-1),
\label{Kinnersley}\end{align}
\vspace{-0.1cm}
\noi with the aim of deriving the magnetic potential by considering its real part. This potential is useful for taking into account the contribution of the Dirac string to the horizon mass \cite{Tomi,Galtsov,ICM2020}.
\vspace{-0.2cm}
\subsection{Physical characteristics of extreme KN binary BHs}
\vspace{-0.3cm}
The thermodynamical properties of each extreme KN BH satisfy the well-known Smarr formula for the mass \cite{Tomi,Smarr}
\vspace{-0.1cm}
\begin{align}
M_{i}&=2\Omega_{i}J_{i}+ \Phi_{i}^{H}Q_{i}, \qquad i=1,2, \label{Massformula}\end{align}
\vspace{-0.1cm}
\noi where $\Omega_{i}$ and $\Phi_{i}^{H}$ define the angular velocity and the electric potential in the corotating frame of each BH, respectively. Another important thermodynamical aspect to be considered in the binary system is the horizon area $S_{i}$. In the extreme limit case of BHs, the corresponding formulas are obtained by setting $\sigma_{i}=0$ in Eq.\ (33) of Ref.\ \cite{ICM2021}. The result is
\vspace{-0.1cm}
\begin{align}
\Omega_{i}&= \frac{\mathfrak{q} A_{i}}{P_{0}\mathcal{P}_{i}} + \frac{J_{i}P_{0}^{3}\mathcal{P}_{i}R^{2}(R^{2}-\Delta)
}{\mathcal{P}_{i}^{2}\mathcal{N}_{i}^{2}+
P_{0}^{2}(R^{2}-\Delta)^{2}\mathcal{M}_{i}^{2}}, \qquad
\Phi_{i}^{H}=\frac{M_{i}-2\Omega_{i}J_{i}}{Q_{i}},\qquad
S_{i}=4\pi \frac{\mathcal{P}_{i}^{2}\mathcal{N}_{i}^{2}+
P_{0}^{2}(R^{2}-\Delta)^{2}\mathcal{M}_{i}^{2}}{R^{2}P_{0}\mathcal{P}_{i}^{2}},\nonu\\
\mathcal{N}_{i}&=M_{i} P_{0}-2\mathfrak{q}J_{i}-Q_{i}Q(R+M),\qquad
\mathcal{M}_{i}=J_{i}C_{i}+\mathfrak{q}Q_{i}^{2}\left[M_{i}P_{0}+
Q_{1}Q_{2}(R+M)\right], \quad i=1,2. \label{Horizonproperties}\end{align}
\vspace{-0.1cm}
On the other hand, the interaction force related to the strut has the form \cite{ICM2021}
\vspace{-0.1cm}
\begin{align}
\mathcal{F}&=\frac{\mathcal{N}_{0}}{P_{0}^{3}(R^{2}-M^{2}+Q^{2}+\mathfrak{q}^{2})}, \nonu\\
\mathcal{N}_{0}&=(M_{1}M_{2}P_{0}^{2}-\mathfrak{q}^{2}Q_{1}^{2}Q_{2}^{2})\left[(R+M)^{2}
-\mathfrak{q}^{2}\right]-(Q_{1}-F_{0})(Q_{2}+F_{0})P_{0}^{3}
+\mathfrak{q}^{2}\Big\{(M_{1}Q_{2}-M_{2}Q_{1})^{2}P_{0}\nonu\\
+&Q_{1}Q_{2}\left[ 2(R^{2}+MR+\mathfrak{q}^{2})P_{0}+(P_{0}+Q_{1}Q_{2})Q^{2}\right] \Big\},
\label{force} \end{align}
\vspace{-0.1cm}
\noi and it acquires the same form regardless of whether the binary system is co- or counter-rotating. The distinction between the two configurations is dictated by the choice of the sign $+/-$ in the dynamical law shown in Eq.\ (\ref{cocounterrotating}). The conical singularity in the middle region between the BHs can be removed if the force vanishes. However, a nonvanishing force can give us more insight into how the sources attract or repel each other via the gravitational, spin-spin, and electric interactions. Besides, the force provides the limits of the interaction distance; for instance, the merger limit of BHs is obtained by equating the denominator of its formula to zero, thus getting the value $R_{0}=\sqrt{M^{2}-Q^{2}-\mathfrak{q}^{2}}$, where $\mathfrak{q}=J/M$ [see Eq.\ (\ref{totalmomenta})]. Luckily, the merger limit of extreme BHs brings us rather simple expressions for $\Omega_{i}$ and $\Phi_{i}$, which are given by
\vspace{-0.1cm}
\begin{align} \Omega_{1}&=\Omega_{2}=\frac{J/M}{d_{0}}, \qquad
\Phi_{i}=\frac{Q(R_{0}+M)}{d_{0}}+ \frac{R_{0}}{2Q_{i}}, \quad i=1,2, \nonu\\
d_{0}&=(R_{0}+M)^{2}+ (J/M)^{2},\end{align}
\vspace{-0.1cm}
\noi while each angular momentum takes the final form
\vspace{-0.1cm}
\begin{align} J_{1}&=M_{1} \mathfrak{q} +\frac{Q\nu}
{2\mathfrak{q}}+R_{0} \frac{R\big[2\delta_{2}(R_{0}+M)-Q\delta_{1}\big]}{4\mathfrak{q}}, \qquad
J_{2}=M_{2}\mathfrak{q}-\frac{Q\nu}
{2\mathfrak{q}}-R_{0} \frac{R\big[2\delta_{2}(R_{0}+M)-Q\delta_{1}\big]}{4\mathfrak{q}}, \nonu\\
\nu&=M_{1}Q_{2}-M_{2}Q_{1}. \label{extremerelations0}\end{align}
\vspace{-0.1cm}
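As a quick numerical illustration of the merger limit discussed above, the separation $R_{0}=\sqrt{M^{2}-Q^{2}-\mathfrak{q}^{2}}$ with $\mathfrak{q}=J/M$, together with the common angular velocity $\Omega_{1}=\Omega_{2}=(J/M)/d_{0}$, can be evaluated for a set of sample Komar parameters (all numbers below are assumed values of ours, not taken from the text):

```python
import math

# Illustrative evaluation of the merger-limit separation
# R0 = sqrt(M^2 - Q^2 - q^2) with q = J/M; the Komar parameters below
# are assumed sample values chosen so that R0 is real.
M1, M2 = 1.5, 0.5
Q1, Q2 = 0.6, 0.4
J = 1.2
M, Q = M1 + M2, Q1 + Q2             # total mass and total electric charge
q = J / M                           # parameter frak-q at the merger limit
R0 = math.sqrt(M**2 - Q**2 - q**2)
d0 = (R0 + M) ** 2 + (J / M) ** 2   # the quantity d_0 entering Omega_i
Omega = (J / M) / d0                # common angular velocity Omega_1 = Omega_2
print(R0 > 0.0, Omega > 0.0)
```

For these numbers the binary still admits a finite merger separation, $R_{0}>0$; choosing $Q$ and $J$ closer to their extremal values would push $R_{0}$ toward zero.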
On the one hand, in the co-rotating case, $\mathfrak{q}=\sqrt{M^{2}-Q^{2}}$ at $R_{0}=0$, where it is possible to recover the well-known expression for extreme KN BHs
\vspace{-0.1cm}
\be J_{1}+J_{2}=(M_{1}+M_{2})\sqrt{(M_{1}+M_{2})^{2}-(Q_{1}+Q_{2})^{2}}. \label{extremerelation} \ee
\vspace{-0.1cm}
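Eq.\ (\ref{extremerelation}) is just the single-object extreme KN saturation $M^{2}=(J/M)^{2}+Q^{2}$ for the merged configuration; a minimal numerical check (the individual masses and charges below are assumed sample values) is:

```python
import math

# Check that J = M*sqrt(M^2 - Q^2) from Eq. (extremerelation) saturates the
# extreme Kerr-Newman bound M^2 = a^2 + Q^2 with a = J/M.  The individual
# masses and charges are assumed sample values.
M1, M2 = 1.25, 0.75
Q1, Q2 = 0.7, 0.3
M, Q = M1 + M2, Q1 + Q2
J = M * math.sqrt(M**2 - Q**2)   # total angular momentum of Eq. (extremerelation)
a = J / M                        # angular momentum per unit mass
print(abs(M**2 - (a**2 + Q**2)) < 1e-12)
```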
On the other hand, the counter-rotating case establishes $\mathfrak{q}=0$ as the lowest value of $\mathfrak{q}$, occurring at the distance $R_{0}=\sqrt{M^{2}-Q^{2}}$. Naturally, in this physical scenario the total angular momentum of the system is $J=0$, with both angular momenta fulfilling the condition $J_{1}=-J_{2}=\infty$. Moreover, when the BHs are far away from each other, in the limit $R \rightarrow \infty$, the parameter $\mathfrak{q} \rightarrow J_{1}/M_{1}+\epsilon J_{2}/M_{2}$, $\epsilon=\pm 1$, where now each angular momentum satisfies the relation $J_{i}=M_{i}\sqrt{M_{i}^{2}-Q_{i}^{2}}$ for extreme KN BHs. Once again, it must be recalled that the sign $+/-$ for $\epsilon$ corresponds to co-/counter-rotating KN BHs, respectively.
Continuing with the analysis, but now centering our attention on the co-rotating scenario, one should be aware that the interaction force, as well as the horizon area, becomes indeterminate at the value $R=0$. To calculate their correct expressions, one must apply a Taylor expansion around $R=0$ after substituting $\mathfrak{q}=\sqrt{M^{2}-Q^{2}}+C_{0}R$ into Eq.\ (\ref{cocounterrotating}), with the objective of obtaining the correct value of $C_{0}$, which is governed by the quadratic equation
\vspace{-0.1cm}
\begin{align} & 4C_{0}^{2}+4 \sqrt{(\alpha_{+}^{2}-1)(1-\alpha_{-}^{2})} C_{0}+\alpha_{-}^{2}-1=0, \nonu \\
\alpha_{\pm}&=2\sqrt{\frac{2M^{2}-Q^{2}}{\alpha_{0}}}\Bigg(\sqrt{4J_{1}^{2}+Q_{1}^{4}} \pm \sqrt{4J_{2}^{2}+Q_{2}^{4}}\Bigg), \nonu\\
\alpha_{0}&= \delta_{0}(4M^{2}-Q^{2}-\delta_{1}^{2})^{2}+4(2M^{2}-Q^{2})^{2}\delta_{2}^{2},
\label{quadratic}\end{align}
\vspace{-0.1cm}
\noi and, therefore, it can be shown that the final expressions for the area of the horizon and the force are given by
\vspace{-0.1cm}
\begin{align}
S_{i}&=\frac{4\pi(2M^{2}-Q^{2})^{3/2} \sqrt{4J_{i}^{2}+Q_{i}^{4}}}{\sqrt{\alpha_{0}} \alpha_{+}}\Bigg[1+ \bigg(\sqrt{\alpha_{+}^{2}-1}-\epsilon_{0} \alpha_{+}\bigg)^{2}\Bigg], \nonu\\
\mathcal{F}&=\frac{\alpha_{0}\Big(\sqrt{\alpha_{+}^{2}-1}+\epsilon_{0} \alpha_{+}\Big)^{2}-4(2M^{2}-Q^{2})^{3}}{16 (2M^{2}-Q^{2})^{3}}, \quad i=1,2,
\label{mergerformulas}
\end{align}
\vspace{-0.1cm}
\noi where $\epsilon_{0}= \pm 1$. On this occasion, the signs $+/-$ define two different scenarios in the touching limit, where the sources attract/repel each other. In order to verify the accuracy of our result, for $Q_{1}=Q_{2}=0$ it is possible to obtain directly from Eq.\ (\ref{mergerformulas}) the result \cite{CCHV},
\vspace{-0.1cm}
\begin{align} S_{i}&=8\pi M_{i}(M_{1}+M_{2})^{2}\bigg(\frac{M_{1}+M_{2}-\epsilon_{0} \sqrt{2 M_{1}M_{2}}}{M_{1}^{2}+M_{2}^{2}}\bigg), \qquad
\mathcal{F}=\frac{M_{1}M_{2}+ \epsilon_{0} \sqrt{ 2M_{1}M_{2}}(M_{1}+M_{2})}{2(M_{1}+M_{2})^{2}}, \quad i=1,2. \end{align}
\vspace{-0.1cm}
\noi where the attractive scenario has been deduced before in Ref.\ \cite{CiafreII}, although expressed there in terms of dimensionless parameters. Finally, after performing a first-order expansion around $R=0$ and the simple coordinate changes $\rho=(r-M)\sin \theta$, $z=(r-M)\cos \theta$ in Eq.\ (\ref{extremeflat}), it is possible to get
\vspace{-0.1cm}
\begin{align}
&\hspace{-0.8cm} {\cal{E}} \simeq 1-\frac{2M}{r-i a\cos \theta}\Bigg(1-\mathcal{K} \frac{(r-M)\cos\theta-i a \sin^{2} \theta}{r-ia \cos \theta} \frac{R}{r-M}\Bigg), \quad
\Phi \simeq \frac{Q}{r-i a\cos \theta}\Bigg(1-\mathcal{K} \frac{(r-M)\cos\theta-i a \sin^{2} \theta}{r-ia \cos \theta} \frac{R}{r-M}\Bigg),\nonu\\
f&\simeq f_{0}\Bigg[1+2\mathcal{K}\frac{(Mr+a^{2})\Xi-(2Mr-Q^{2})(Mr+a^{2}\cos 2\theta)}{\Xi(\Xi-2Mr+Q^{2})}\frac{R\cos\theta}{r-M}\Bigg], \qquad e^{2\gamma}\simeq e^{2\gamma_{0}}+\frac{4a^{2}\mathcal{K}\sin^{2}\theta}{(r-M)^{3}}R\cos \theta,\nonu\\
\omega&\simeq \omega_{0}
\Bigg[1-2\mathcal{K}\Bigg(\frac{2(r-M)}{\Xi-2Mr+Q^{2}}-\frac{M}{2Mr-Q^{2}}\Bigg)R \cos \theta \Bigg], \nonu\\
\mathcal{K}&=\frac{(2M^{2}-Q^{2})\big[Q\delta_{1}(4M^{2}-Q^{2}-\delta^{2}_{1})
-4M(2M^{2}-Q^{2})\delta_{2}\big]}
{\alpha_{0}}\Bigg(1-\frac{\epsilon_{0}\sqrt{\alpha_{+}^{2}-1}}{\alpha_{+}}\Bigg), \nonu\\
f_{0}&=1-\frac{2Mr-Q^{2}}{\Xi}, \quad
e^{2\gamma_{0}}=1-\frac{a^{2}\sin^{2} \theta}{(r-M)^{2}}, \quad \omega_{0}=-\frac{a(2Mr-Q^{2}) \sin^{2} \theta}{\Xi-2Mr+Q^{2}},\qquad
\Xi=r^{2}+a^{2}\cos^{2} \theta, \quad \epsilon_{0}=\pm 1, \label{Ernstnearhorizon}\end{align}
\vspace{-0.1cm}
\noi where $a=J/M$ is the angular momentum per unit mass and $(r,\theta)$ are the Boyer-Lindquist coordinates. Eq.\ (\ref{Ernstnearhorizon}) defines a deformed metric for a near-horizon extreme binary KN BH, from which, in the physical limit $R=0$, it is possible to recover the metric of a single extreme KN BH of mass $M=M_{1}+M_{2}$, electric charge $Q=Q_{1}+Q_{2}$, and angular momentum $J=M \sqrt{M^{2}-Q^{2}}$; in other words
\vspace{-0.1cm}
\begin{align} ds^{2}&=f_{0}^{-1}\Big[e^{2\gamma_{0}}\big[dr^{2}+(r-M)^{2}d\theta^{2}\big]+(r-M)^{2}\sin^{2} \theta d\varphi^{2}\Big]- f_{0}(dt-\omega_{0} d\varphi)^{2}.
\label{KerrNewmanmetric} \end{align}
\vspace{-0.6cm}
\section{Conclusion}
\vspace{-0.3cm}
The derivation of the metric that completely characterizes unequal configurations of extreme KN BHs in a physical representation has finally been accomplished. The task of solving the conditions on the axis, together with the one eliminating the magnetic charges, is achieved by adopting a fitting parametrization introduced earlier in \cite{ICM2021}. It follows that the asymptotically flat metric has been written in a quite simple form by means of
Perj\'es's approach \cite{Perjes}, and it contains a physical representation in terms of the set $\{M_{1},M_{2},Q_{1},Q_{2},R\}$ that is very suitable for concrete applications in rotating charged binary systems. Similarly to the non-extreme case \cite{ICM2021}, the physical parameters are related to each other through an algebraic equation that represents a dynamical law for interacting BHs with struts. Unfortunately, this higher-degree equation cannot be solved exactly except in some special unequal cases \cite{ICM2018,ICM2015}.
Because the solution reported in this work is presented in a more physical form, the physical limits of the interaction of BHs can be readily identified in both co- and counter-rotating cases. Even better, the thermodynamical characteristics of each BH in the merger limit have also been derived and concisely introduced. With regard to the co-rotating case in particular, during the merger limit of binary BHs it is possible to conceive an attractive or a repulsive final state. In this respect, the deformed metric for a near-horizon extreme binary KN BH is also obtained, and we do not exclude that it might be helpful for developing analytical studies related to the collision of two BHs, such as gravitational waves, by assuming a quasi-stationary process, in a similar way to that previously considered in \cite{JSN}.
\vspace{-0.4cm}
\section*{Acknowledgements}
\vspace{-0.3cm}
The author acknowledges the financial support of SNI-CONACyT, M\'exico, grant with CVU No. 173252.
\vspace{-0.4cm}
|
\section{Introduction}\label{sec:intro}
Although the observations of filaments within molecular clouds have
been reported since decades (e.g. Schneider \& Elmegreen 1979), only
recently their presence has been recognized as a unique characteristic
of the star-formation process. The latest Herschel results have
revealed the direct connection between the filaments, dense cores and
stars in all kinds of environments along the Milky Way, from low-mass
and nearby clouds (Andr{\'e} et al. 2010) to most distant and
high-mass star-forming regions (Molinari et al. 2010). As a
consequence, characterizing the physical properties of these filaments
has been revealed as key to our understanding of the origin of the
stars within molecular clouds.
The large majority of observational papers (Arzoumanian et al. 2011;
Palmeirim et al. 2013; Hacar et al. 2013) use the classical
``Ostriker'' profile (Ostriker 1964) as a benchmark to interpret
observations. More specifically, if the estimated linear mass of an
observed filament is larger than the value obtained for the Ostriker
filament ($\simeq$ 16.6 M$_\odot$ pc$^{-1}$ for T=10 K), it is assumed
that the filament is unstable. Analogously, density profiles flatter
than the Ostriker profile are generally interpreted as a a sign of
collapse. However, it is worth recalling the assumptions and
limitations of this model: $(i)$ filaments are assumed to be
isothermal, $(ii)$ they are not rotating, $(iii)$ they are isolated,
$(iv)$ they can be modeled as cylindrical structures with infinite
length, $(v)$ their support against gravity comes solely from thermal
pressure. An increasing number of observational results suggest
however that none of the above assumptions can be considered as
strictly valid. In a first paper (Recchi et al. 2013, hereafter Paper
I) we have relaxed the hypothesis $(i)$ and we have considered
equilibrium structures of non-isothermal filaments. Concerning
hypothesis $(ii)$, and after the pioneering work of Robe (1968), there
has been a number of publications devoted to the study of equilibrium
and stability of rotating filaments (see e.g. Hansen et al. 1976;
Inagaki \& Hachisu 1978; Robe 1979; Simon et al. 1981; Veugelen 1985;
Horedt 2004; Kaur et al. 2006; Oproiu \& Horedt 2008). However, this
body of knowledge has not been recently used to constrain properties
of observed filaments in molecular clouds. In this work we aim to
explore the effects of rotation on the interpretation of the physical
state of filaments during the formation of dense cores and stars.
Moreover, we emphasize the role of envelopes on the determination of
density profiles, an aspect often overlooked in the recent literature.
The paper is organised as follows. In Sect. \ref{sec:obs} we review
the observational evidences suggesting that star-forming filaments are
rotating. In Sect. \ref{sec:rotfil} we study the equilibrium
configuration of rotating filaments and the results of our
calculations are discussed and compared with available observations.
Finally, in Sect. \ref{sec:conc} some conclusions are drawn.
\section{Observational signs of rotation in filaments}
\label{sec:obs}
Since the first millimeter studies in nearby clouds it is well known
that star-forming filaments present complex motions both parallel and
perpendicular to their main axis (e.g. Loren 1989; Uchida et al.
1991). Recently, Hacar \& Tafalla (2011) have shown that the internal
dynamical structure of the so-called velocity coherent filaments is
dominated by the presence of local motions, typically characterized by
velocity gradients of the order of 1.5--2.0 km~s$^{-1}$~pc$^{-1}$,
similar to those found inside dense cores (e.g.
Caselli et al. 2002). Comparing the structure of both density and
velocity perturbations along the main axis of different filaments,
Hacar \& Tafalla (2011) identified the periodicity of different
longitudinal modes as the streaming motions leading to the formation
of dense cores within these objects. These authors also noticed the
presence of distinct and non-parallel components with amplitudes
similar to those of their longitudinal counterparts.  Interpreted as
rotational modes, these perpendicular motions would correspond to a
maximum angular frequency $\omega$ of about 6.5~$\cdot$ 10$^{-14}$
s$^{-1}$.  Assuming these values as characteristic of the
rotational frequency in Galactic filaments, the detection of such
rotation levels raises the question of whether they could
potentially influence the stability of these objects.\footnote{It is
worth stressing that if the filament forms an angle $\beta \neq 0$
with the plane of the sky, an observed radial velocity gradient
$\frac{\Delta V_r}{\Delta r}$ corresponds to a real gradient that is
$\frac{1}{\cos \beta}$ times larger than that.}
To estimate the dynamical relevance of rotation we can take the total
kinetic energy per unit length as equal to $\mathcal{T}=\frac{1}{2}
\omega^2 R_c^2 M_{lin}$, where $R_c$ is the external radius of the
cylinder and $M_{lin}$ its linear mass. The total gravitational
energy per unit length is $W=G{M_{lin}}^2$, hence the ratio
$\mathcal{T}/W$ is
\begin{equation} \frac{\mathcal{T}}{W} \simeq 0.65 \left(
\frac{\omega}{6.5 \cdot 10^{-14}} \right)^2 \left(\frac{R_c}{0.15
\,{\rm pc}} \right)^2 \left(\frac{M_{lin}}{16.6\, {\rm
M}_{\odot} \,{\rm pc}^{-1}} \right)^{-1}.
\end{equation}
\noindent
Clearly, for nominal values of $\omega$, $R_c$ and $M_{lin}$ the total
kinetic energy associated to rotation is significant, thus rotation
is dynamically important.
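As a quick numerical sanity check, the ratio in Eq.~(1) can be evaluated directly from the quoted fiducial values. The sketch below does this in Python; the constants are rounded standard values.

```python
# Order-of-magnitude check of the T/W ratio quoted in Eq. (1),
# using the nominal values omega = 6.5e-14 s^-1, R_c = 0.15 pc and
# M_lin = 16.6 M_sun/pc.

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
PC    = 3.0857e16   # parsec, m
M_SUN = 1.989e30    # solar mass, kg

def rotational_support(omega, R_c_pc, M_lin_msun_pc):
    """T/W = (1/2) omega^2 R_c^2 M_lin / (G M_lin^2), both per unit length."""
    R_c   = R_c_pc * PC                    # m
    M_lin = M_lin_msun_pc * M_SUN / PC     # kg m^-1
    return 0.5 * omega**2 * R_c**2 / (G * M_lin)

ratio = rotational_support(6.5e-14, 0.15, 16.6)
print(f"T/W = {ratio:.2f}")   # ~0.6: rotation is dynamically relevant
```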
\section{The equilibrium configuration of rotating, non-isothermal
filaments} \label{sec:rotfil}
In order to calculate the density distribution of rotating,
non-isothermal filaments, we extend the approach already used in Paper
I, which we briefly summarize here.  The starting equation is the
hydrostatic equilibrium with rotation: $\nabla P = \rho (g + \omega^2
r)$. We introduce the normalization:
\begin{equation}
\label{eq:normalization}
\rho = \theta \rho_0,\;\;T=\tau
T_0,\;\; r=Hx,\;\; \Omega=\sqrt{\frac{2}{\pi G \rho_0}} \omega.
\end{equation}
\noindent
Here, $\rho_0$ and $T_0$ are the central density and temperature,
respectively, $H=\sqrt{\frac{2 k T_0}{\pi G \rho_0 \mu m_H}}$ is a
length scale and $\Omega$ is a normalized frequency. Simple steps
transform the hydrostatic equilibrium equation into:
\begin{equation}
\theta\tau^\prime+\tau\theta^\prime=\theta\left(\Omega^2 x - 8
\frac{\int_0^x {\tilde x} \theta d{\tilde x}}{x}\right).
\label{eq:start}
\end{equation}
Calling now $I=\int_0^x {\tilde x} \theta d{\tilde x}$, then clearly
$I^\prime =\theta x$. Solving the above equation for $I$, we obtain
$8I=\Omega^2
x^2 -\tau^\prime x - \tau x \frac{\theta^\prime}{\theta}$. Upon
differentiating this expression with respect to $x$ and rearranging, we
obtain:
\begin{equation}
\theta^{\prime\prime}=\frac{\left(\theta^\prime\right)^2}{\theta}
-\theta^\prime\left[\frac{\tau^\prime}{\tau}+\frac{1}{x}\right]-
\frac{\theta}{\tau}\left[\tau^{\prime\prime}+\frac{\tau^\prime}{x}+
8 \theta -2 \Omega^2 -2 x \Omega\Omega^\prime\right].
\label{eq:basic}
\end{equation}
\noindent
Correctly, for $\Omega=0$ we recover the equation already used in
Paper I. This second-order differential equation, together with the
boundary conditions $\theta(0)=1$, $\theta^\prime(0)=-\tau^\prime(0)$
(see Paper I) can be integrated numerically to obtain equilibrium
configurations of both rotating and non-isothermal filaments
independently. This expression is more convenient than classical
Lane-Emden type equations (see e.g. Robe 1968; Hansen et al. 1976) for
the problem at hand.  Notice also that the normalization of
$\omega$ differs from the more conventional assumption
$\eta^2=\omega^2/4 \pi G \rho_0$ (Hansen et al. 1976).
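For readers wishing to reproduce the equilibrium profiles, Eq.~\ref{eq:basic} can be integrated with any standard ODE solver. The minimal sketch below treats the isothermal, uniformly rotating case; the series expansion $\theta \simeq 1 + c x^2$ with $c=(\Omega^2-4)/2$, obtained by balancing the leading terms of Eq.~\ref{eq:basic} near the axis, is used to step off the coordinate singularity at $x=0$, and the $\Omega=0$ run is validated against the closed-form Ostriker solution, which in this normalization reads $\theta(x)=(1+x^2)^{-2}$.

```python
# Minimal sketch: integration of Eq. (4) for an isothermal (tau = 1)
# filament in uniform rotation (Omega = const).  The equation is
# singular at x = 0, so integration starts at a small x0 using the
# near-axis series theta ~ 1 + c x^2 with c = (Omega^2 - 4)/2.
from scipy.integrate import solve_ivp

def theta_profile(Omega, x_max=10.0, x0=1e-4):
    c = 0.5 * (Omega**2 - 4.0)

    def rhs(x, y):
        th, dth = y
        # Eq. (4) with tau = 1, tau' = tau'' = 0 and Omega' = 0
        return [dth,
                dth**2 / th - dth / x - th * (8.0 * th - 2.0 * Omega**2)]

    sol = solve_ivp(rhs, (x0, x_max),
                    [1.0 + c * x0**2, 2.0 * c * x0],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    return sol.sol

# Validation: for Omega = 0 the integration must recover the Ostriker
# profile theta(x) = (1 + x^2)^(-2).
theta0 = theta_profile(0.0)
print(theta0(3.0)[0], (1.0 + 3.0**2)**-2)   # both ~0.01
```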
\subsection{Uniformly rotating filaments}\label{subsec:rotfils}
\begin{figure}
\begin{center}
\includegraphics[width=6cm, angle=270]{den_rot.eps}
\caption{Logarithm of the normalized density $\theta$ as a function
of $x$ for various models of isothermal filaments with different
normalized angular frequencies. The model with $\Omega=0$
corresponds to the Ostriker profile with $\rho\propto r^{-4}$ at
large radii.}
\label{fig:denrot} \end{center} \end{figure}
If we set $\tau, \Omega=Const$ in Eq. \ref{eq:basic}, we can obtain
equilibrium solutions for isothermal, uniformly rotating filaments. We
have checked that our numerical results reproduce the main features of
this kind of cylinders, already known in the literature, namely:
\begin{itemize}
\item Density inversions take place for $\Omega^2>0$ as the
centrifugal, gravitational and pressure gradient forces battle to
maintain mechanical equilibrium. Density oscillations occur in
other equilibrium distributions of polytropes (see Horedt 2004 for a
very comprehensive overview). Noticeably, the equilibrium solution
of uniformly rotating cylindrical polytropes with polytropic index
$n=1$ depends on the (oscillating) zeroth-order Bessel function
$J_0$ (Robe 1968; see also Christodoulou \& Kazanas 2007).
Solutions for rotating cylindrical polytropes with $n>1$ maintain
this oscillating character although they can not be expressed
analytically. As evident in Fig. \ref{fig:denrot}, in the case of
isothermal cylinders (corresponding to $n \rightarrow \infty$), the
frequency of oscillations is zero for $\Omega=0$, corresponding to
the Ostriker profile. This frequency increases with the angular
frequency $\Omega$.
\item For $\Omega>2$, $\rho^\prime(0)>0$, due to the fact that, in
this case, the effective gravity $g+\omega^2r$ is directed outwards.
For $\Omega<2$, $\rho^\prime(0)<0$. If $\Omega=2$, there is perfect
equilibrium between centrifugal and gravitational forces (Keplerian
rotation) and the density is constant (see also Inagaki \& Hachisu
1978).
\item The density tends asymptotically to the value $\Omega^2/4$.
This implies also that the integrated mass per unit length $\Pi
=\int_0^\infty 2 \pi x \theta(x) dx$ diverges for $\Omega^2>0$.
Rotating filaments must thus be pressure truncated.  This limit of
$\theta$ for large values of $x$ is essentially the reason why
density oscillations arise for $\Omega \neq 2$. This limit can not
be reached smoothly, i.e. the density gradient can not tend to
zero. If the density gradient tends to zero, so does the pressure
gradient. In this case there must be asymptotically a perfect
equilibrium between gravity and centrifugal force (Keplerian
rotation) but, as we have noticed above, this equilibrium is
possible only if $\Omega=2$. Thanks to the density oscillations,
$\nabla P$ does not tend to zero and perfect Keplerian rotation is
never attained. Notice moreover that the divergence of the linear
mass is a consequence of the fact that the centrifugal force
diverges, too, for $x \rightarrow \infty$.
\end{itemize}
All these features can be recognized in Fig. \ref{fig:denrot}, where
the logarithm of the normalized density $\theta$ is plotted as a
function of the filament radius $x$ for models with various angular
frequencies $\Omega$, ranging from 0 (non-rotating Ostriker filament)
to 1. Hansen et al. (1976) performed a stability analysis of
uniformly rotating isothermal cylinders, based on a standard linear
perturbation of the hydrodynamical equations. They noticed that,
beyond the point where the first density inversion occurs, the system
behaves differently compared to the non-rotating case.  Dynamically
unstable oscillation modes appear and the cylinder tends to form
spiral structures. Notice that a more extended stability analysis,
not limited to isothermal or uniformly rotating cylinders, has been
recently performed by Freundlich et al. (2014; see also Breysse et al.
2014).
Even in its simplest form, the inclusion of rotations has interesting
consequences in the interpretation of the physical state of filaments.
As discussed in Paper~I, the properties of the Ostriker filament
(Stod{\'o}lkiewicz 1963; Ostriker 1964), in particular its radial
profile and linear mass, are classically used to discern the stability
of these structures. According to the Ostriker solution, an infinite
and isothermal filament in hydrostatic equilibrium presents an
internal density distribution that tends to $\rho (r) \propto r^{-4}$
at large radii and a linear mass
M$_{Ost}\simeq16.6$~M$_{\odot}$~pc$^{-1}$ at 10~K. As shown in
Fig.~\ref{fig:denrot}, and owing to the effects of the centrifugal
force, the radial profile of a uniformly rotating filament in
equilibrium ($\Omega>0$) could present much shallower profiles than in
the Ostriker-like configuration (i.e. $\Omega=0$). Such departure
from the Ostriker profile is translated into a variation of the linear
mass that can be supported by these rotating systems. For comparison,
estimations of the linear masses for different rotating filaments in
equilibrium truncated at normalized radii x=3 and x=10 are
presented in Tables \ref{table1} and \ref{table2}, respectively. In
these tables, the temperature profile is the linear function
$\tau(x)=1+Ax$. In particular, the case $A=0$ refers to isothermal
filaments, whereas if $A>0$, the temperature is increasing
outwards.\footnote{In Paper I, we considered two types of temperature
profiles as a function of the filament radius, i.e.
$\tau_1(x)=1+Ax$ and $\tau_2(x)=[1+(1+B)x]/(1+x)$, whose constants
defined their respective temperature gradients as functions of the
normalized radius. Both cases are based on observations. In this
paper we will only consider the linear law $\tau=\tau_1(x)$; results
obtained with the asymptotically constant law are qualitatively the
same.} As can be seen there, the linear mass of a rotating filament
could easily exceed the critical linear mass of its Ostriker-like
counterpart without necessarily being unstable.
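The entries of Tables \ref{table1} and \ref{table2} can be reproduced by integrating the normalized linear mass $\Pi=\int_0^x 2\pi \tilde{x}\,\theta\, d\tilde{x}$ alongside Eq.~\ref{eq:basic}. A sketch for the isothermal ($A=0$) entry at $\Omega=0.5$, $x=3$:

```python
# Sketch reproducing an isothermal (A = 0) entry of Table 1: the
# linear mass integrand x*theta is accumulated together with Eq. (4)
# and normalized to the Ostriker (Omega = 0) value.
from scipy.integrate import solve_ivp

def linear_mass(Omega, x_t=3.0, x0=1e-4):
    c = 0.5 * (Omega**2 - 4.0)

    def rhs(x, y):
        th, dth, m = y      # m accumulates int x' theta dx'
        return [dth,
                dth**2 / th - dth / x - th * (8.0 * th - 2.0 * Omega**2),
                x * th]

    sol = solve_ivp(rhs, (x0, x_t),
                    [1.0 + c * x0**2, 2.0 * c * x0, 0.5 * x0**2],
                    rtol=1e-10, atol=1e-12)
    return sol.y[2, -1]     # the 2*pi factor cancels in the ratio below

ratio = linear_mass(0.5) / linear_mass(0.0)
print(f"Pi(Omega = 0.5) / Pi(0) at x = 3: {ratio:.3f}")  # Table 1: 1.166
```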
It is also instructive to obtain estimations of the above models in
physical units in order to interpret observations in nearby clouds.
For typical filaments similar to those found in Taurus (Hacar \&
Tafalla 2011; Palmeirim et al. 2013; Hacar et al. 2013), with central
densities of $\sim 5\cdot 10^4$ cm$^{-3}$, one obtains $\Omega\simeq
0.5$ according to Eq.~\ref{eq:normalization}. Assuming a temperature
of 10~K, and from Tables \ref{table1} and \ref{table2} (case $A=0$),
this rotation level leads to supported linear masses of
$\sim$~17.4 M$_\odot$~pc$^{-1}$ if the filament is truncated at radius
x=3, and up to $\sim$~112 M$_\odot$~pc$^{-1}$ for a truncation radius of
x=10. Here, it is worth noticing that a normalized frequency of
$\Omega\simeq 0.5$, or $\omega \sim 6.5 \cdot 10 ^{-14}$ s$^{-1}$,
corresponds to a rotation period of $\sim$~3.1~Myr. With probably
less than one revolution in their entire lifetimes
($\tau\sim$~1--2~Myr), the centrifugal forces inside such slow
rotating filaments can then provide a non-negligible support against
their gravitational collapse, being able to sustain larger masses than
in the case of an isothermal and static Ostriker-like filament.
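The conversions between normalized and physical quantities follow Eq.~\ref{eq:normalization}. A short sketch, assuming a mean molecular weight $\mu=2.33$ (a typical value for molecular gas; the text does not quote one explicitly):

```python
# Sketch of the unit conversions behind Eq. (2), with the assumed
# mean molecular weight mu = 2.33.
import numpy as np

G   = 6.674e-11    # m^3 kg^-1 s^-2
M_H = 1.6726e-27   # hydrogen mass, kg
MYR = 3.156e13     # s

n_c   = 5e4 * 1e6            # central number density, m^-3
rho_0 = 2.33 * M_H * n_c     # central mass density, kg m^-3
omega = 6.5e-14              # angular frequency, s^-1

Omega  = np.sqrt(2.0 / (np.pi * G * rho_0)) * omega   # Eq. (2)
period = 2.0 * np.pi / omega / MYR

print(f"Omega  = {Omega:.2f}")         # ~0.5
print(f"period = {period:.1f} Myr")    # ~3.1 Myr
```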
\subsection{Differentially rotating filaments} \label{subsec:diffrotfils}
\begin{figure} \begin{center}
\includegraphics[width=6cm, angle=270]{den_diffrot.eps}
\caption{Logarithm of the normalized density $\theta$ as a
function of $x$ for various models of filaments with different
rotation laws.}
\label{fig:dendiffrot}
\end{center} \end{figure}
As can be noticed in Fig.~\ref{fig:denrot}, a distinct signature of
the centrifugal forces acting within rotating filaments is the
presence of secondary peaks (i.e. density inversions) in their radial
density distribution at large radii. Such density inversions could
dynamically detach the outer layers of the filament from its central
region, eventually leading to the mechanical breaking of these
structures. In Sect.~\ref{subsec:rotfils}, we assumed that the
filaments present a uniform rotation, similar to solid bodies.
However, our limited information concerning the rotation profiles
in real filaments invites us to explore other rotation configurations.
\begin{table}
\caption{Normalized linear masses at $x=3$ compared to the Ostriker
filament with similar truncation radius, with M$_{Ost}(x\le 3)=
14.9$~M$_{\odot}$~pc$^{-1}$, as a function of $\Omega$ and $A$.}
\label{table1}
\centering
\begin{tabular}{c c c c c} \hline\hline
$\Omega$ & $A=0$& $A=0.02$ & $A=0.1$ & $A=0.5$\\ \hline
0.1 & 1.006 & 1.015 & 1.049 & 1.167 \\
0.5 & 1.166 & 1.176 & 1.213 & 1.330 \\
0.8 & 1.553 & 1.561 & 1.593 & 1.676 \\
1.0 & 2.108 & 2.108 & 2.111 & 2.117 \\ \hline
\end{tabular} \end{table}
\begin{table}
\caption{Similar to Table~\ref{table1} but for linear masses at
$x=10$, with M$_{Ost}(x\le 10)= 16.4$~M$_{\odot}$~pc$^{-1}$.}
\label{table2}
\centering
\begin{tabular}{c c c c c} \hline\hline
$\Omega$ & $A=0$& $A=0.02$ & $A=0.1$ & $A=0.5$\\
\hline 0.1 & 1.015 & 1.039 & 1.137 & 1.623 \\
0.2 & 1.075 & 1.102 & 1.212 & 1.730 \\
0.3 & 1.287 & 1.309 & 1.415 & 1.951 \\
0.4 & 2.533 & 2.321 & 2.063 & 2.379 \\
0.5 & 7.019 & 6.347 & 4.377 & 3.234 \\
0.6 & 10.37 & 10.53 & 9.398 & 4.988 \\
0.7 & 12.29 & 12.77 & 13.78 & 8.399 \\
0.8 & 14.96 & 15.14 & 16.59 & 13.84 \\
0.9 & 20.05 & 19.39 & 19.43 & 20.22 \\
1.0 & 26.22 & 25.70 & 23.71 & 25.95 \\ \hline
\end{tabular} \end{table}
For the sake of simplicity, we have investigated the equilibrium
configuration of filaments presenting differential rotation, assuming
that $\Omega$ linearly varies with the filament radius $x$. For
illustrative purposes, we choose two simple laws: $\Omega_1(x)=x/10$
and $\Omega_2(x)=1-x/10$, both attaining the typical frequency
$\Omega=0.5$ at $x=5$. The first of these laws presumes that the
filament rotates faster at larger radii but presents no rotation at
the axis, resembling a shear motion. Opposite to it, the second one
assumes that the filament presents its maximum angular speed at the
axis and that it radially decreases outwards.
The comparison of the resulting density profiles for these two models
presented above is shown in Fig. \ref{fig:dendiffrot} for normalized
radii x$\le$~10. For comparison, there we also overplot the density
profile obtained with a constant frequency $\Omega=0.5$ (see
Sect.~\ref{subsec:rotfils}). For these models, we are assuming A=0,
i.e. isothermal configurations. Clearly, the law $\Omega_1(x)$
displays a radial profile with even stronger oscillations than the
model with uniform rotation. As mentioned above, oscillations are
prone to dynamical instabilities. In this case, instabilities start
occurring at the minimum of the density distribution, here located at
$x \simeq 4.45$. Conversely, these density oscillations are
suppressed in rotating filaments that obey a law like $\Omega_2(x)$.
It is however worth noticing that this last rotational law fails to
satisfy the Solberg-H{\o}iland criterion for stability against
axisymmetric perturbations (Tassoul 1978; Endal \& Sofia 1978; Horedt
2004). Stability can be discussed by evaluating the first order
derivative $\frac{d}{dx}[x^4 \Omega^2_2(x)]$, which is positive for $x
\in (0, 20/3) \cup (10, +\infty)$ and negative for $x \in (20/3, 10)$.
We must therefore either consider that this filament is unstable at
large radii, or we must assume it to be pressure-truncated at radii
smaller than x=20/3 $\simeq$~6.7. As we mentioned above, we could not
exclude the hypothesis that rotation indeed induced instability and
fragmentation of the original filament, separating the central part
(at radii x$\lower.5ex\hbox{\ltsima}$ 4.45 for $\Omega=\Omega_1(x)$ and x$\lower.5ex\hbox{\ltsima}$ 6.7
for $\Omega=\Omega_2(x)$) from the outer mantle, which might
subsequently break into smaller units. This (speculative) picture
would be consistent with the bundle of filaments observed in B213
(Hacar et al. 2013). For comparison, the mass per unit length
attained by the model with $\Omega=\Omega_1(x)$ at $x<4.45$ (which
corresponds to $\sim$ 0.2 pc for $T=10$ K and $n_c \sim 5\cdot 10^4$
cm$^{-3}$) is equal to 0.99 M$_{Ost}$ whereas the mass outside this
minimum is equal to 22.7 M$_{Ost}$, i.e. there is enough mass to form
many other filaments.
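The sign structure of the Solberg–H{\o}iland derivative quoted above is easy to verify symbolically; a short sketch with sympy:

```python
# Symbolic check of the Solberg-Hoiland derivative d/dx [x^4 Omega_2^2]
# for the law Omega_2(x) = 1 - x/10.
import sympy as sp

x   = sp.symbols('x', positive=True)
j2  = x**4 * (1 - x/10)**2     # square of the specific angular momentum
dj2 = sp.diff(j2, x)

roots = sp.solve(sp.Eq(dj2, 0), x)
print(roots)                   # x = 20/3 and x = 10

# sign of the derivative in each interval
print(dj2.subs(x, 5) > 0,      # True : stable
      dj2.subs(x, 8) > 0,      # False: unstable for 20/3 < x < 10
      dj2.subs(x, 12) > 0)     # True : stable again
```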
\subsection{Non-isothermal and rotating filaments}\label{subsec:nonisofils}
\begin{figure} \begin{center}
\includegraphics[width=6cm, angle=270]{den_lin_rot.eps}
\caption{Logarithm of the normalized density $\theta$ as a
function of $x$ for various models of uniformly rotating
filaments with $\Omega=0.5$ and different temperature slopes
$A$.}
\label{fig:denlinrot} \end{center} \end{figure}
As demonstrated in Paper I, the presence of internal temperature
gradients within filaments could offer an additional support against
gravity. Under realistic conditions, these thermal effects should be
then considered in combination to different rotational modes in the
study of the stability of these objects.
The numerical solutions obtained for the equilibrium configuration of
filaments with $\Omega=0.5$ and various values of $A$ are plotted in
Fig. \ref{fig:denlinrot}. Notice that Fig. 5 of Palmeirim et al.
(2013) suggests a rather shallower dust temperature gradient with a
value of A of the order of 0.02 (green curve in Fig.
\ref{fig:denlinrot}). However, as discussed in Paper I, the gas
temperature profile could be steeper than the dust one, so it is
useful to consider also larger values of $A$. Fig.
\ref{fig:denlinrot} shows that the asymptotic behaviour of the
solution does not depend on $A$: $\theta(x)$ always tends to
$\Omega^2/4$ for $x\rightarrow \infty$. By looking at Eq.
\ref{eq:basic}, it is clear that the same asymptotic behaviour holds
for a wide range of reasonable temperature and frequency profiles.
Whenever $\tau^{\prime\prime}$, $\tau^\prime/x$ and
$\Omega\Omega^\prime x$ tend to zero for $x\rightarrow \infty$, and
this condition holds for a linearly increasing $\tau(x)$ and for
$\Omega=$constant, the asymptotic value of $\theta(x)$ is
$\Omega^2/4$.  It is easy to see that the asymptotically constant
law also fulfils this condition if the angular frequency is constant.
Figure~\ref{fig:denlinrot} also shows that density oscillations are
damped in the presence of positive temperature gradients. This was
expected as more pressure is provided to the external layers to
contrast the effect of the centrifugal force. Since density
inversions are dynamically unstable, positive temperature gradients
must thus be seen as a stabilizing mechanism in filaments.  Our
numerical calculations indicate in addition that the inclusion of
temperature variations also increases the amount of mass that can be
supported in rotating filaments. This effect is again quantified in
Tables \ref{table1} and \ref{table2} for truncation radii of $x=3$ and
$x=10$, respectively, compared to the linear mass obtained for an
Ostriker profile at the same radius. As can be seen there, the
expected linear masses are always larger than in the isothermal and
non-rotating filaments, although the exact value depends on the
combination of $\Omega$ and $A$ due to the variation in the position
of the secondary density peaks compared to the truncation radius.
\subsection{Derived column densities for non-isothermal, rotating
filaments: isolated vs. embedded configurations}
\begin{figure} \begin{center}
\includegraphics[width=6cm, angle=270]{dcol_cyl.eps}
\caption{Column density, as a function of the normalized impact
parameter $\chi$, for filaments characterized by three different
rotation laws: increasing outwards ($\Omega=x/10$), decreasing
outwards ($\Omega=1-x/10$) and constant ($\Omega=0.5$). The
filament is embedded in a cylindrical molecular cloud, with
radius five times the radius of the filament. The column
density of the Ostriker filament (case $p=4$) and the one
obtained for a Plummer-like model with $\rho\sim r^{-2}$ (case
$p=2$) are also shown for comparison.}
\label{fig:dcolcyl} \end{center} \end{figure}
\begin{figure} \begin{center}
\includegraphics[width=6cm, angle=270]{dcol_slab.eps}
\caption{Same as Fig. \ref{fig:dcolcyl} but for a filament
embedded in a slab, with half-thickness five times the radius of
the filament.}
\label{fig:dcolslab} \end{center} \end{figure}
In addition to their radial profiles, we also calculated the column
density profiles produced by these non-isothermal, rotating filaments
in equilibrium presented in previous sections, as a critical parameter
to compare with the observations. For the case of isolated filaments,
the total column density at different impact parameters $\chi$ can be
directly calculated by integrating (either analytically or numerically)
their density profiles along the line of sight. As a general rule, if
the volume density $\rho$ is proportional to $r^{-p}$, then the column
density $\Sigma (\chi)$ is proportional to $\chi^{1-p}$. This result
holds not only for both Ostriker filaments (see also Appendix
\ref{sec:a2}) and more general Plummer-like profiles (e.g. see Eq.~1
in Arzoumanian et al. 2011), but also for the new rotating,
non-isothermal configurations explored in this paper. Recent
observations seem to indicate that those filaments typically found in
molecular clouds present column density profiles with $\Sigma(\chi)
\sim \chi^{-1}$, i.e. $p\simeq 2$ (see Arzoumanian et al. 2011;
Palmeirim et al. 2013), a value that we use for comparison hereafter.
An aspect often underestimated in the literature is the influence of
the filament envelope in the determination of column density
profiles. Particularly if a filament is embedded in (and
pressure-truncated by) a large molecular cloud, the line of sight also
intercepts some cloud material whose contribution to the column
density could be non-negligible (see also Appendix \ref{sec:a2}), as
previously suggested by different observational and theoretical
studies (e.g. Stepnik et al. 2003; Juvela et al. 2012). In order to
quantify the influence of the ambient gas in the determination of the
column densities, here we consider two prototypical cases:
\begin{enumerate}
\item The filament is embedded in a co-axial cylindrical molecular
cloud with radius $R_{m}$.
\item The filament is embedded in a sheet with half-thickness
$R_{m}$.
\end{enumerate} Note that, if the filament is not located in the plane
of the sky, the quantity that enters the calculation of the column
density is not $R_{m}$ itself, but $R_{m}'=R_{m}/\cos\beta$, where
$\beta$ is the angle between the axis of the filament and this plane.
\begin{figure} \begin{center}
\includegraphics[width=6cm, angle=270]{fil_env.eps}
\caption{Fractional contribution of filament and envelope to the
total column density. The model shown here corresponds to the
blue line of Fig. \ref{fig:dcolcyl}: the rotation profile is
$\Omega=1-x/10$ and the filament is surrounded by a cylindrical
envelope with R$_m$/R$_c$=5.}
\label{fig:fil_env} \end{center} \end{figure}
Following the results presented in
Sect.~\ref{subsec:rotfils}-\ref{subsec:nonisofils}, we have
investigated the observational properties of three representative
filaments in equilibrium obeying different rotational laws, namely
$\Omega_1(x)=x/10$, $\Omega_2(x)=1-x/10$ and $\Omega_3(x)=0.5$,
covering both differential and uniform rotational patterns. The
contribution of the envelope to the observed column densities is
obviously determined by its relative depth compared to the truncation
radius of the filament as well as the shape of its envelope. To
illustrate this behaviour, we have first assumed that these filaments
are pressure-truncated at $x=3$ (a conservative estimate). Moreover,
we have considered these filaments to be embedded into the two
different cloud configurations presented before, that is a slab and a
cylinder, both with extensions $R_{m}$ corresponding to five times the
radius of the filament (i.e. R$_{m}$/R$_{c}=5$). In both cases, we
have assumed that the density of the envelope is constant and equal to
the filament density at its truncation radius, i.e. at $x=3$.
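The column density integrals described above can be sketched as follows. For illustration we use the truncated Ostriker profile $\theta=(1+x^2)^{-2}$ rather than one of the rotating solutions (which would require the numerical integration of Eq.~\ref{eq:basic}), since its untruncated column density has the closed form $\Sigma(\chi)\propto(1+\chi^2)^{-3/2}$ against which the quadrature can be checked. Only the cylindrical envelope is shown; the slab case replaces the envelope chord accordingly.

```python
# Sketch: column density of a pressure-truncated filament inside a
# uniform cylindrical envelope.  Filament: Ostriker profile truncated
# at R_c = 3; envelope: radius R_m = 15 (R_m/R_c = 5) at the constant
# density theta(R_c).
import numpy as np
from scipy.integrate import quad

R_c, R_m = 3.0, 15.0
theta = lambda x: (1.0 + x**2)**-2
theta_env = theta(R_c)

def sigma(chi):
    """Normalized column density at impact parameter chi < R_c."""
    z_f = np.sqrt(R_c**2 - chi**2)
    # chord through the truncated filament ...
    fil = 2.0 * quad(lambda z: theta(np.hypot(chi, z)), 0.0, z_f)[0]
    # ... plus the two constant-density envelope segments
    env = 2.0 * theta_env * (np.sqrt(R_m**2 - chi**2) - z_f)
    return fil + env

# Quadrature check: an untruncated Ostriker filament has the analytic
# column density Sigma(chi) = (pi/2) (1 + chi^2)^(-3/2) in these units.
full = 2.0 * quad(lambda z: theta(np.hypot(1.0, z)), 0.0, np.inf)[0]
print(full / (np.pi / 2.0), 2.0**-1.5)   # both ~0.354
```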
The recovered column densities for the models presented above as a
function of the impact parameter $\chi$ in the case of the two
cylindrical and slab geometries are shown in Figs. \ref{fig:dcolcyl}
and \ref{fig:dcolslab}, respectively. In both cases, the impact
parameter $\chi$ is measured in units of $H$. The results obtained
there are compared with the expected column densities in the case of
two infinite filaments described by an Ostriker-like profile (case
$p=4$) and a Plummer-like profile with $\rho \propto r^{-2}$ at large
radii (case $p=2$), as suggested by observations. From these
comparisons it is clear that all the explored configurations present
shallower profiles than the expected column density for their equivalent
Ostriker-like filament. This is due to the constant value of the
density in the envelope, which tends to wash out the density gradient
present in the filament if the envelope radius is large. Moreover,
the column densities expected for embedded filaments described by
rotating laws like $\Omega_1(x)$ and $\Omega_3(x)$ (the latter only
if the filament is embedded in a slab) exhibit a radial dependency even
shallower than these p=2 models at large impact parameters. The
relative contribution of filament and envelope is outlined in Fig.
\ref{fig:fil_env}. The model shown here corresponds to the blue line
of Fig. \ref{fig:dcolcyl}: the rotation profile is $\Omega=1-x/10$
and the filament is surrounded by a cylindrical envelope with
R$_m$/R$_c$=5. As expected, at larger projected radii the observed
radial profiles are entirely determined by the total column density of
the cloud.
\begin{figure*} \begin{center} \vspace{-1cm} \includegraphics[width=11cm,
angle=270]{p_cyl_decr.eps} \vspace{-1cm}
\caption{Expected radial dependence for the observed column density
profiles (colour coded) of rotating filaments in equilibrium obeying
a rotation law like $\Omega_2(x)=1-x/10$, truncated at a radius
R$_{c}$, and embedded into a cylindrical cloud extending up to a
distance R$_{m}$. R$_{c}$ and R$_{m}$ are displayed in units of the
normalized (i.e. x) and truncation (i.e. R$_{m}$/R$_{c}$) radii,
respectively. The black solid line highlights those models with a
power-law dependence with p=2, similar to the observations. Notice
also that the color palette has been chosen in order to emphasize
the transition from $p<2$ configurations to $p>2$ configurations.}
\label{fig:cyl_models} \end{center} \end{figure*}
Finally, it is important to remark that the expected column density
profiles for the models presented above and, particularly, their
agreement with these shallow Plummer-like profiles with p=2,
significantly depend on the selection of the truncation radius R$_{c}$
and the extent of the filament envelopes R$_{m}$. This fact is
illustrated in Fig.~\ref{fig:cyl_models} exploring the expected slope
of the observed column density profiles for pressure truncated and
isothermal filaments following a rotational law like
$\Omega_2(x)=1-x/10$ under different configurations for both their
truncation and cloud radii. These results were calculated as the
averaged value of the local slope of the column density at impact
parameters $\chi\le R_{c}$, that is, where our models are sensitive to
the distinct contributions of both filaments and envelopes. As
expected, the larger the cloud depth is compared to the filament, the
flatter the expected profile.  Within the range of values explored in
the figure, multiple combinations for both R$_{c}$ and R$_{m}$
parameters present slopes consistent to a power-law like dependency
with p=2.  Although less prominently, a few additional combinations can
also be obtained in the case of filaments with rotational laws like
$\Omega_1(x)=x/10$ or $\Omega_3(x)=0.5$ (not shown here). Unless the
rotational state of a filament is known and the contribution of the
cloud background is properly evaluated, such degeneration between the
parameters defining the cloud geometry and the relative weights of
both the filament and its envelope makes inconclusive any stability
analysis solely based on its mass radial distribution.
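As an illustrative aside, the averaged local slope described above can be sketched numerically. The Python snippet below assumes a generic Plummer-like column density $\Sigma(\chi)\propto(1+(\chi/R_{\rm flat})^2)^{-(p-1)/2}$ rather than the full filament-plus-envelope models of this work, and averages the local logarithmic slope over a logarithmic grid of impact parameters $\chi \le R_{c}$; the profile form and parameter values are our own assumptions.

```python
import math

def column_density(chi, r_flat=1.0, p=2.0):
    # Illustrative Plummer-like column density profile:
    # Sigma(chi) ∝ (1 + (chi/r_flat)^2)^(-(p-1)/2)
    return (1.0 + (chi / r_flat) ** 2) ** (-(p - 1.0) / 2.0)

def mean_local_slope(r_c=10.0, n=200, **kw):
    # Average of the local logarithmic slope d(ln Sigma)/d(ln chi)
    # over a log-uniform grid of impact parameters 0 < chi <= r_c.
    chis = [r_c * 10 ** (-2.0 + 2.0 * i / (n - 1)) for i in range(n)]
    slopes = []
    for chi in chis:
        h = 1e-4 * chi  # small relative step for the finite difference
        num = math.log(column_density(chi + h, **kw)) - math.log(column_density(chi - h, **kw))
        den = math.log(chi + h) - math.log(chi - h)
        slopes.append(num / den)
    return sum(slopes) / len(slopes)
```

For p=2 the local slope of this profile tends to $-(p-1)=-1$ at large $\chi$, so the average steepens towards $-1$ as $R_{c}/R_{\rm flat}$ grows, while configurations probing only small $\chi$ look nearly flat, mimicking the flattening effect of a deep envelope.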
\section{Conclusions}
\label{sec:conc}
The results presented in this paper have explored whether the inclusion
of different rotational patterns affects the stability of gaseous
filaments similar to those observed in nearby clouds. Our numerical
results show that, even in configurations involving slow rotation,
the presence of centrifugal forces has a stabilizing effect,
effectively sustaining large amounts of gas against the gravitational
collapse of these objects. These centrifugal forces, however, promote
the formation of density inversions that are dynamically unstable at
large radii, causing the inner parts of these rotating filaments to
detach from their outermost layers. To prevent the formation of these
instabilities, as well as the asymptotic increase of their linear
masses at large radii, any equilibrium configuration for these
rotating filaments would require them to be pressure truncated at
relatively low radii.
In order to allow a proper comparison with observations, we have also
computed the expected column density profiles for different pressure-truncated,
rotating filaments in equilibrium. To reproduce their
profiles under realistic conditions, we have also considered these
filaments to be embedded in a homogeneous cloud with different
geometries. According to our calculations, the predicted column
density profiles for such rotating filaments and their envelopes tend
to be much shallower than those expected for
Ostriker-like filaments, resembling the results found in observations
of nearby clouds. Unfortunately, we found that different combinations
of rotating configurations and envelopes could reproduce these
observed profiles, complicating this comparison.
To conclude, the stability of an observed filament cannot be judged
by a simple comparison between observations and the predictions of the
Ostriker profile. We have shown in this paper that density profiles
much flatter than the Ostriker profile, and linear masses significantly
larger than the canonical value of $\simeq$ 16.6 M$_\odot$ pc$^{-1}$,
can be obtained for rotating filaments in equilibrium surrounded by
an envelope. Detailed descriptions of the filament kinematics and
their rotational state, in addition to the analysis of their projected
column density distributions, are therefore needed to evaluate the
stability and physical state of these objects.
\section*{Acknowledgements}
This publication is supported by the Austrian Science Fund (FWF). We
wish to thank the referee, Dr Chanda J. Jog, for the careful reading
of the paper and for the very useful report.
\section{Introduction}
Important brain development occurs in the last trimester of pregnancy, including brain growth, myelination, and cortical gyrification \cite{kostovic2006development}. Magnetic resonance imaging (MRI) is widely used to non-invasively assess and monitor brain development in preterm infants. In spite of the ability of MRI to visualize the neonatal brain, motion artifacts caused by head movement lead to blurry image slices or slices with stripes (see Figure \ref{fig:example}). These artifacts hamper image interpretation as well as brain tissue segmentation.
To enable the analysis of images affected by motion artifacts, most studies perform the correction in the frequency domain (k-space) prior to analysis \cite{atkinson1997automatic,godenschweger2016motion}. However, frequency domain data is typically not stored and hence not available after image reconstruction. Recently, Duffy et al. \cite{duffy2018retrospective} and Pawar et al. \cite{pawar2018moconet} proposed to use convolutional neural networks (CNNs) to correct motion-corrupted MRI from already reconstructed scans. CNNs were trained to reconstruct simulated motion artifacts that were modelled with a predefined formula. This biases the network towards an assumed distribution of artifacts. However, in practice, it is difficult to estimate the real distribution of motion. Alternatively, a CNN could be trained to generate images without motion artifacts from images with such artifacts. However, this would require training with paired scans, which are rarely available. To solve this, the cycleGAN has recently been proposed to train CNNs for image-to-image transformation with unpaired images \cite{zhu2017unpaired}.
In this study, we propose to employ a cycleGAN to generate MR slices without motion artifacts from slices affected by motion artifacts in a set of neonatal brain MR scans. The cycleGAN is trained to transform slices affected by motion artifacts into slices without artifacts, and vice versa. To generate slices corrected for motion artifacts, we applied the trained cycleGAN to motion affected slices, and we hypothesize that images corrected for motion artifacts allow more accurate (automatic) segmentation. To evaluate this, we use a method exploiting a convolutional neural network to segment scans into eight tissue classes. Moreover, we propose to augment the segmentation training data using the cycleGAN, which can also synthesize slices with artifacts from slices without artifacts. We demonstrate that the proposed correction for motion artifacts improves image quality and allows accurate automatic segmentation of brain tissue classes in brain MRI of infants. We also show that the proposed data augmentation further improves segmentation results.
\begin{figure}[t]
\includegraphics[width=\textwidth]{example_motion_30.png}
\caption{Examples of coronal slices from T2-weighted MRI acquired in preterm born infants at 30 weeks postmenstrual age affected by motion artifacts. Structures outside the neonatal cranium have been masked out.}
\label{fig:example}
\end{figure}
\section{Data}
This study includes 80 T2-weighted MRI scans of preterm born infants scanned at an average of $30.7\pm1.0$ weeks postmenstrual age (PMA). Images were acquired on a Philips Achieva 3T scanner at the University Medical Center Utrecht, the Netherlands. The acquired voxel size was $0.34\times0.34$ mm$^2$ and the reconstruction matrix was $384\times384\times50$. The scans were acquired in the coronal plane. In this data set, 60 scans had visible motion artifacts in most of the slices and 20 scans had no visible motion in any slice. Reference segmentations were available for 10 of the 20 scans without motion artifacts. These scans were manually segmented into 8 tissue classes: cerebellum (CB), myelinated white matter (mWM), basal ganglia and thalami (BGT), ventricular cerebrospinal fluid (vCSF), unmyelinated white matter (uWM), brain stem (BS), cortical gray matter (cGM), and extracerebral cerebrospinal fluid (eCSF).
\section{Method}
Motion artifacts in neonatal brain MR scans hamper diagnostic interpretation and accurate automatic segmentation of brain tissue classes. To address this, we propose to correct motion artifacts in the reconstructed MR scans using a cycleGAN. Thereafter, to evaluate whether the corrected images are suitable for segmentation of brain tissues, a CNN architecture was trained to segment the brain into eight tissue classes. Furthermore, to improve segmentation performance, we propose to augment the training data by synthesizing images with motion artifacts from images without artifacts using the cycleGAN.
\subsection{Artifact correction network}
CycleGAN has been proposed to train image-to-image translation CNNs with unpaired images. Given that obtaining paired scans with and without motion artifacts is difficult, the cycleGAN was trained to transform slices affected by motion into slices without motion artifacts, and vice versa (Figure \ref{fig:cycleGAN}). The architecture consists of two cycles: a motion correction cycle and a motion generation cycle. The motion correction cycle consists of three networks. The motion correction network ($MC$) transforms slices affected by motion into slices without motion artifacts. The motion generation network ($MG$) reconstructs the generated slices without motion artifacts back to the original image slices. A discriminator CNN ($Dis_{MC}$) discriminates between generated and real slices without motion artifacts. While the discriminator distinguishes between generated and real slices without motion artifacts, the generator tries to prevent this by generating images that are indistinguishable to the discriminator. Similarly, the motion generation cycle transforms slices without motion artifacts into slices affected by motion. The network architecture in both cycles is identical. The generator contains 2 convolution layers with a stride of 2, 9 residual blocks \cite{he2016deep}, and 2 fractionally strided convolutions \cite{johnson2016perceptual}. The discriminator networks use a PatchGAN \cite{isola2017image}, which classifies $70 \times 70$ overlapping image patches as fake or real. Two adversarial losses \cite{goodfellow2014generative} were used, one for the motion correction network and one for the motion generation network. Furthermore, the cycle consistency losses of the motion correction cycle ($MC_{cl}$) and motion generation cycle ($MG_{cl}$) were weighted by $\lambda$ and added to the adversarial losses.
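To fix ideas, the combined objective can be summarized in a small sketch. The NumPy snippet below implements only the loss terms, a least-squares adversarial loss and an L1 cycle-consistency loss weighted by $\lambda$; the function names and the least-squares form of the adversarial loss are our assumptions for illustration, not necessarily the exact implementation used here.

```python
import numpy as np

def lsgan_loss(disc_out, is_real):
    # Least-squares adversarial loss: pushes discriminator outputs
    # towards 1 for real inputs and 0 for fake inputs.
    target = 1.0 if is_real else 0.0
    return float(np.mean((disc_out - target) ** 2))

def cycle_loss(original, reconstructed):
    # L1 cycle-consistency loss between a slice and its reconstruction
    # after passing through both generators.
    return float(np.mean(np.abs(original - reconstructed)))

def generator_objective(d_mc_on_fake, d_mg_on_fake, x_motion, x_clean,
                        mc_cycle_rec, mg_cycle_rec, lam=10.0):
    # Total generator objective: both adversarial terms (the generators try
    # to make the discriminators output "real") plus weighted cycle terms.
    adv = lsgan_loss(d_mc_on_fake, True) + lsgan_loss(d_mg_on_fake, True)
    cyc = cycle_loss(x_motion, mc_cycle_rec) + cycle_loss(x_clean, mg_cycle_rec)
    return adv + lam * cyc
```

In training, the discriminators would minimize `lsgan_loss` on batches of real and generated slices, while the generators minimize `generator_objective`.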
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{method.png}
\caption{The cycleGAN consists of two cycles: motion correction and motion generation. In the motion correction cycle, the first network is trained to transform slices affected by motion into slices without motion artifacts ($MC$), the second network is trained to transform the generated slices without motion artifacts back to the original slices ($MG$), and the third network discriminates between real and synthesized slices without motion artifacts ($Dis_{MC}$). In the motion generation cycle, motion is added to the slices without motion artifacts ($MG$), the motion correction network transforms the generated slices back to the original slices ($MC$), and the discriminator network discriminates between real and fake slices affected by motion artifacts ($Dis_{MG}$).}
\label{fig:cycleGAN}
\end{figure}
\subsection{Segmentation Network}
To assess segmentation performance in images affected by motion artifacts, a CNN with a U-net-like architecture was trained to segment images into eight tissue classes. The segmentation network consists of a contracting path and an expanding path. The contracting path consists of 10 $3\times3$ convolution layers, each followed by a rectified linear unit (ReLU). After every two convolution layers, the feature maps were downsampled by $2\times2$ max pooling and the number of feature channels was doubled following the scheme 32, 64, 128, 256, 512. In the expanding path, an up-sampling step is followed by a $2\times2$ convolution that halves the number of feature channels. The result is concatenated with the corresponding feature maps from the contracting path and convolved by two $3\times3$ convolutional layers, each followed by a ReLU. In the final layer, one $1\times1$ convolutional layer maps each component of the feature vector to the desired number of classes. Batch normalization is applied after all convolutional layers to allow for faster convergence. The network was trained with 3D patches of $256 \times 256 \times 3$ voxels by maximizing the average Dice coefficient over all classes between the network output and the manual segmentation.
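A Dice-based training objective of this kind is commonly implemented as a soft (differentiable) Dice loss averaged over the classes; the NumPy sketch below illustrates such an objective and is not necessarily the exact implementation used here.

```python
import numpy as np

def soft_dice_loss(probs, onehot, eps=1e-6):
    # probs, onehot: arrays of shape (n_classes, n_voxels), where probs holds
    # the predicted class probabilities and onehot the reference labels.
    # Per-class soft Dice = 2|P∩G| / (|P|+|G|); the loss is one minus the
    # average Dice over all classes, so minimizing it maximizes mean Dice.
    inter = (probs * onehot).sum(axis=1)
    denom = probs.sum(axis=1) + onehot.sum(axis=1)
    dice = (2.0 * inter + eps) / (denom + eps)
    return float(1.0 - dice.mean())
```

A perfect prediction gives a loss near 0, while a completely wrong prediction gives a loss near 1.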
\section{Evaluation}
Given that slices affected by motion do not allow accurate manual annotation, motion was synthesized in images using the motion generation network to quantitatively evaluate the proposed method. This allows evaluation with the manual annotations performed in images without artifacts. Thereafter, the performance of the segmentation network was evaluated using the Dice coefficient (DC), Hausdorff distance (HD), and mean surface distance (MSD) between manual reference and automatically obtained segmentations. The evaluation was performed in 3D.
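For reference, the two surface distance metrics can be sketched as brute-force computations over boundary point sets; this naive NumPy version is for illustration only, as practical implementations would use distance transforms or spatial indexing.

```python
import numpy as np

def _directed_dists(a, b):
    # For each point in a, the Euclidean distance to its nearest point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff_distance(a, b):
    # Symmetric Hausdorff distance: the larger of the two maximum
    # directed nearest-neighbour distances.
    return float(max(_directed_dists(a, b).max(), _directed_dists(b, a).max()))

def mean_surface_distance(a, b):
    # Average of the two mean directed surface distances.
    return float(0.5 * (_directed_dists(a, b).mean() + _directed_dists(b, a).mean()))
```

Here `a` and `b` would hold the 3D coordinates of the boundary voxels of the manual and automatic segmentations of one tissue class.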
To evaluate the proposed method on images with real motion artifacts, the images and the corresponding automatic segmentations before and after motion correction were qualitatively evaluated using a 5-point Likert scale. The image quality was scored on a scale from 1 to 5, where 1 indicates uninterpretable images with severe motion artifacts and 5 indicates excellent image quality. Similarly, automatic segmentations were scored 1 when the segmentation failed and 5 when the segmentation was very accurate.
\section{Experiments and Results}
Prior to analysis, the intracranial brain volume was extracted from all scans using Brain Extraction Tool \cite{smith2002fast}. To train the artifact correction network, 15 scans without motion artifacts and 20 scans with motion artifacts were selected for training. The remaining 5 scans without motion artifacts and 40 scans with motion artifacts were used for testing. From scans without motion artifacts, 700 slices without visible artifacts were selected. Similarly, from the scans with motion artifacts, 714 slices with visible artifacts were selected. The network was trained with a batch size of 4. Adam \cite{kingma2014adam} was used to minimize the loss function for 100 epochs with a fixed learning rate of 0.00005. $\lambda$ was set to 10.
To segment the brain into eight tissue classes, the segmentation network was trained with 5 scans without motion artifacts selected from the 15 training scans used to train the motion correction network. The segmentation network was trained with a batch size of 6. Adam was used to minimize the loss function for 200 epoch and the learning rate was set to 0.0001.
In the experiments, we performed quantitative evaluation of the proposed method through the evaluation of the brain tissue segmentation. First, to determine the upper limit of the segmentation performance, images without artifacts were segmented (Table \ref{tab:qantitative}, top row). Second, we aimed to evaluate the segmentation performance in images with artifacts. However, motion artifacts are prohibitive for accurate manual annotation, so manual annotations were not available for such images. Hence, the motion generation network was used to synthesize images with artifacts from the images without artifacts, for which manual segmentations were available. Segmentation was performed in the synthesized images (Table \ref{tab:qantitative}, second row). Third, using the motion correction network, the artifacts were removed from the images with synthesized artifacts, which were subsequently segmented (Table \ref{tab:qantitative}, third row).
In the previous experiments, the segmentation network was trained only with images without motion artifacts, as only those were manually labelled. However, we hypothesized that the performance would improve if the segmentation network were trained with both types of images. Hence, to obtain images affected by motion that can be used for training, similar to the second experiment, we synthesized training images using the motion generation network. In the fourth experiment, we evaluated the segmentation network trained with augmented training data, i.e. images with and without motion artifacts, on images with synthesized motion artifacts (Table \ref{tab:qantitative}, fourth row). Finally, segmentation was performed in images with corrected synthesized artifacts as in the third experiment, and the training data for the segmentation was augmented as in the fourth experiment (Table \ref{tab:qantitative}, bottom row).
The results show that correction of motion artifacts using motion correction network improves the performance (Table \ref{tab:qantitative}, second vs. third row). Moreover, results demonstrate that the performance of the segmentation network improves when the training data is augmented (Table \ref{tab:qantitative}, second row vs fourth row and third vs. bottom row).
\begin{table}[t]
\caption{Performance of brain tissue segmentation into eight tissue classes. The evaluation of the segmentation was performed 1) on scans without motion artifacts (Motion Free), 2) on the same scans with motion synthesized using the motion generation network (Motion Synthesized), and 3) on scans where the synthesized motion was corrected using the motion correction network (Motion Corrected). The segmentation network was then retrained with motion-augmented scans generated by the motion generation network. The evaluation of this segmentation was performed 4) on scans with motion synthesized using the motion generation network (Motion Augmented) and 5) on scans where the synthesized motion was corrected (Motion Corrected and Augmented).}
\resizebox{\textwidth}{!}{
\npdecimalsign{.}
\nprounddigits{2}
\begin{tabular}
{|c|c|n{2}{2}n{2}{2}n{2}{2}n{2}{2}n{2}{2}n{2}{2}n{2}{2}n{2}{2}n{2}{2}|}
\hline
&&{\enspace CB}& {mWM} & {\enspace BGT} & {vCSF} & {\enspace WM} & {\enspace BS} & {cGM} & {eCSF} & {Mean} \\ \hline
\multirow{3}{9em}{Motion Free} & DC & 0.90 & 0.53 & 0.89 & 0.84 & 0.94 & 0.84 & 0.67 & 0.83 & 0.80 \\
& HD & 44.92 & 32.97 & 39.06 & 23.08 & 17.25 & 42.57 & 18.47 & 8.60 & 28.36 \\
& MSD & 0.36 & 1.85 & 0.56 & 0.36 & 0.20 & 0.56 & 0.21 & 0.23 & 0.54 \\
\hline
\multirow{3}{9em}{Motion Synthesized} & DC & 0.87 & 0.38 & 0.87 & 0.77 & 0.90 & 0.81 & 0.62 & 0.75 & 0.75 \\
& HD & 52.27 & 53.80 & 42.93 & 33.70 & 21.33 & 48.18 & 21.53 & 22.43 & 37.02 \\
& MSD & 0.62 & 4.10 & 1.04 & 1.32 & 0.77 & 0.92 & 0.55 & 1.00 & 1.29 \\
\hline
\multirow{3}{9em}{Motion Corrected} & DC & 0.90 & 0.47 & 0.89 & 0.83 & 0.936 & 0.83 & \ubold{\enspace0.68} &\ubold{\enspace0.85} & 0.79 \\
& HD & 45.06 & 41.93 & 33.58 & 22.84 & 18.25 & 39.19 & 18.57 & 8.90 & 28.54 \\
& MSD & 0.46 & 2.07 & 0.55 & 0.35 & 0.20 &\ubold{\enspace0.41} & 0.207 &\ubold{\enspace0.16} & 0.551 \\
\hline
\multirow{3}{9em}{Motion Augmented} &DC & 0.88 &0.45 &0.88 &0.80 & 0.92 & 0.81 & 0.63 & 0.80 & 0.77\\
&HD&\ubold{40.19} &\ubold{27.42} &28.43 &19.27& 14.98 &\ubold{30.85}& \ubold{15.03}& 11.79 & 23.49\\
&MSD&0.46 &1.84 & 0.61 &0.39 &0.27 &0.48 &0.27 &0.24 &0.57\\
\hline
\multirow{3}{15em}{Motion Corrected \& Augmented} & DC &\ubold{\enspace 0.91} &\ubold{\enspace0.48} &\ubold{\enspace0.89}& \ubold{\enspace0.84} &\ubold{\enspace0.94}& \ubold{\enspace0.84} &0.67 &0.84 &\ubold{\enspace0.80}\\
& HD &45.62 &34.52 &\ubold{26.83} &\ubold{17.77} &\ubold{14.40} &35.93 &17.18 &\ubold{\enspace7.63} &\ubold{24.99}\\
&MSD&\ubold{\enspace0.45} &\ubold{\enspace1.89} &\ubold{\enspace0.44} &\ubold{\enspace0.29} &\ubold{\enspace0.19} &0.42 &\ubold{\enspace0.20}& 0.165 &\ubold{\enspace0.51}\\
\hline\end{tabular}
}
\label{tab:qantitative}
\end{table}
To qualitatively evaluate the performance of the motion correction network, 40 scans affected by motion artifacts were corrected using the motion correction network. Subsequently, the segmentation network trained with the proposed data augmentation was used to segment the corrected images. Qualitative scoring of the images and segmentations before and after motion correction was performed. The evaluation results show that the median image quality and the quality of the corresponding automatic segmentations were assigned grades 2 (poor) and 3 (moderate), respectively. After correction of motion artifacts, these improved to grades 3 and 4, respectively. Figure \ref{fig:evaluation} shows examples of images and corresponding segmentations before and after motion correction. This shows that the motion correction network reduces motion artifacts and hence improves the quality of the images and corresponding segmentations. Moreover, the figure shows that our proposed motion augmentation further improves automatic segmentations.
\begin{figure}[t]
\includegraphics[width=\textwidth]{result_quantive.png}
\caption{ Examples of slices affected by motion artifacts and the corresponding tissue segmentation in neonatal MRI. 1st column: A motion affected slice; 2nd column: Automatic segmentation when the network was trained on slices without motion artifacts; 3rd column: Automatic segmentation, network trained on slices with augmented motion; 4th column: A motion corrected slice; 5th column: Automatic segmentation result on the corrected slice; 6th column: Automatic segmentation results on the corrected slice when the network was trained with data augmentation.}
\label{fig:evaluation}
\end{figure}
\begin{comment}
\begin{table}[t]
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{ccccc}
&MA Image& MC Image& MA segmentation&MC Segmentation \\
Median & 2 &3 & 3 & 4\\
\end{tabular}
\caption{The quality of scans and corresponding segmentation were graded using 5-point Likert scale. The median, and interquartile range is listed. Evaluation was performed in slices affected by motion artifacts (MA) and corresponding segmentations as well as in motion corrected (MC) slices with corresponding segmentations.}
\label{tab:score}
\end{table}
\end{comment}
\section{Discussion and conclusion}
We presented a method for correction of motion artifacts in reconstructed brain MR scans of preterm infants using a cycleGAN. We demonstrate that the proposed artifact correction generates images that are more suitable for (automatic) image segmentation. Additionally, we show that training the segmentation network with the proposed data augmentation further improves segmentation performance.
Unlike previous methods that performed motion correction in the frequency domain (k-space), the proposed method corrects motion artifacts in already reconstructed scans. Given that k-space data is typically not available after scans have been reconstructed and stored, the proposed method enables correction where frequency domain approaches cannot be applied.
To conclude, results demonstrate that correction of motion artifacts in reconstructed neonatal brain MR scans is feasible. Moreover, results show that the proposed motion correction allows automatic brain tissue segmentation in scans affected by motion artifacts. This may improve clinical interpretability and extraction of quantitative markers in images with motion artifacts.
\section{Introduction} \label{CHOICE_MODEL_sec_intro}
The field of queueing theory has been dedicated to understanding the dynamics of queueing systems and their impact on society. In an age where information is readily available, it is an important problem to understand how information affects the decision making of customers who want to join queueing systems. In some applications, such as amusement parks, transportation systems, and even telecommunication systems, it has been observed that the information given to the customers can potentially be delayed \citet{doldo2021queues, novitzky2020update, pender2020stochastic, pender2018analysis, nirenberg2018impact}.
This delay can arise, for example, because it takes time to process and transmit queue length information to customers, so the information that customers actually receive describes the queue lengths from some amount of time in the past. Queueing systems where customers are provided with delayed information have been studied extensively, see for example \citet{novitzky2020limiting, doldo2021mean, doldo2020multi, doldo2021breaking, novitzky2020queues, novitzky2019nonlinear, Lipshutz2018, whitt2021many}.
There are many different types of delayed information that customers can be provided with and different choices can lead to different choice models and thus different queueing system dynamics. One of the most common choices used in the literature is where the customer is provided with the lengths of the queues from some amount of time in the past. This delay in information is often modeled with a constant delay, as is the case in \citet{mitzenmacher2000useful, raina2005buffer, kamath2015car, raina2015stability, lipshutz2015existence, lipshutz2017exit, Lipshutz2018, nirenberg2018impact, novitzky2020limiting, doldo2021breaking, novitzky2019nonlinear, pender2017queues, pender2018analysis, whitt2021many}.
Different types of queue length information with a constant delay have been used, see \citet{doldo2021mean} for example, which considers so-called mean-field queue length information with a constant delay. Other types of information that have been considered include delayed velocity information and moving average information, see \citet{dong2018impact, novitzky2020limiting, pender2017queues, novitzky2019nonlinear}. Some of the literature has examined queueing systems where the delay is not constant. For example, a queueing model that uses delayed updating information is examined in \citet{doldo2021queues, novitzky2020update}. Additionally, queueing models with distributed delays have been studied by considering distributed delay equation models in \citet{doldo2020multi, novitzky2020queues}, though it is worth noting that distributed delay models exhibit different dynamics in general than simply replacing the constant delay with a random variable in standard models, as discussed in \citet{doldo2021note}.
In this paper, we will consider a model similar in spirit to that of \citet{pender2017queues}. However, our model uses a generalized version of the multinomial logit choice model, which typically assumes an exponential form. The exponential form corresponds to a linear utility function, whereas our functional form corresponds to a nonlinear utility function. One of the main goals of this work is to understand how this nonlinearity of the utility function affects the critical delay that induces a Hopf bifurcation. As discussed in more detail in Section \ref{CHOICE_MODEL_hopf_section}, we will focus our attention on utilities that incorporate the complementary cumulative distribution function of some probability distribution. We will examine how the choice of probability distribution used to induce the choice model can lead to different system dynamics.
\subsection{Main Contributions of Paper}
The contributions of this work can be summarized as follows:
\begin{itemize}
\item We consider queueing system choice models informed by more general utilities that are built from the complementary cumulative distribution functions of some probability distribution.
\item We solve for an equilibrium solution and compute the critical delay at which the queueing system exhibits a change in stability in terms of the hazard function corresponding to the given probability distribution.
\item We numerically examine how the queueing dynamics change when different probability distributions induce the choice model, including how stability properties differ between various common probability distributions.
\end{itemize}
\subsection{Organization of Paper}
The remainder of this paper is organized as follows. In Section \ref{CHOICE_MODEL_model_introduction_section} we introduce the class of queueing models that we will be analyzing. In Section \ref{CHOICE_MODEL_hopf_section}, we find an equilibrium solution and compute the critical delay of our queueing system. In Section \ref{CHOICE_MODEL_eigenvalue_section} we consider various probability distributions and how their corresponding choice models impact the dynamics of the queueing system. In Section \ref{CHOICE_MODEL_conclusion_section} we give concluding remarks and discuss possible extensions.
\section{Generalized Multinomial Logit Queueing Model}
\label{CHOICE_MODEL_model_introduction_section}
In this section we describe the DDE system that we are interested in analyzing. The previous literature examines delay differential equation (DDE) fluid models that are based on the following model of an $N$-queue system
\begin{eqnarray}
\overset{\bullet}{q}_i(t) &=& \lambda \frac{\exp(- \theta q_i(t - \Delta))}{\sum_{j=1}^{N} \exp( - \theta q_j(t - \Delta))} - \mu q_i(t), \hspace{5mm} i = 1, ..., N
\label{CHOICE_MODEL_system1}
\end{eqnarray} where $q_i(t)$ represents the length of the $i^{\text{th}}$ queue at time $t$ for $i = 1, ..., N$, $\lambda > 0$ is the arrival rate, $\mu > 0$ is the service rate of each infinite-server queue, $\theta > 0$ is a choice model parameter, and $\Delta > 0$ is a time delay. The arrival rate $\lambda$ is multiplied by the probability of joining the $i^{\text{th}}$ queue, which is determined by the choice model, in this case a multinomial logit model. The choice model takes the delayed queue lengths as input in order to model the provision of delayed queue length information to customers. More specifically, the logit choice model used in Equation \ref{CHOICE_MODEL_system1} assumes that the utility of an agent joining the $i^{\text{th}}$ queue is
\begin{eqnarray}
U_i = - \theta q_i(t - \Delta) + \epsilon_i
\end{eqnarray}
where $\epsilon_i$ is distributed according to a standard Gumbel distribution. See \cite{train2009discrete} for the details regarding how to construct such a choice model. We note that the deterministic portion of the utility is negative so that maximizing the utility corresponds to joining a queue with a shorter length (neglecting the random component of the utility as some realizations of the utility could be larger despite having a larger queue length if the realization of the corresponding random component of the utility happens to be small enough).
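The connection between the Gumbel-distributed noise and the logit form can be checked with a quick Monte Carlo experiment: adding i.i.d. standard Gumbel noise to the deterministic utilities and picking the argmax reproduces the softmax probabilities. The queue length values below are arbitrary illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
q_delayed = np.array([0.2, 0.7, 1.5])   # illustrative delayed queue lengths
det_utils = -theta * q_delayed          # deterministic part of the utilities

# Closed-form multinomial logit probabilities.
p_logit = np.exp(det_utils) / np.exp(det_utils).sum()

# Monte Carlo: add i.i.d. standard Gumbel noise and pick the argmax utility.
n = 200_000
noise = rng.gumbel(size=(n, len(det_utils)))
choices = np.argmax(det_utils + noise, axis=1)
p_mc = np.bincount(choices, minlength=len(det_utils)) / n

assert np.allclose(p_mc, p_logit, atol=0.01)
```

As expected, the shortest queue is chosen with the highest probability, and the empirical choice frequencies match the logit formula up to sampling error.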
In this paper, we will focus on analyzing the more general DDE system
\begin{eqnarray}
\overset{\bullet}{q}_i(t) &=& \lambda \frac{ \bar{G}(q_i(t - \Delta))}{\sum_{j=1}^{N} \bar{G} ( q_j(t - \Delta))} - \mu q_i(t), \hspace{5mm} i = 1, ..., N
\label{CHOICE_MODEL_dde_system}
\end{eqnarray}
which uses a more general functional form of the multinomial logit model which is based on a more general utility in the form
\begin{eqnarray}
U_i = \log \left( \bar{G} ( q_i(t - \Delta)) \right) + \epsilon_i
\end{eqnarray}
where $\bar{G} : \mathbb{R} \to [0, \infty)$ is a decreasing function (so that longer queue lengths correspond to smaller utilities, neglecting randomness) and $\epsilon_i$ is again distributed according to a standard Gumbel distribution. This allows the utility to depend nonlinearly on the delayed queue length. We note that the choice model used in Equation \ref{CHOICE_MODEL_system1} corresponds to when \begin{eqnarray}
\bar{G}(x) = \exp(-\theta x)
\end{eqnarray} which happens to be the complementary cumulative distribution function corresponding to an exponential distribution with parameter $\theta$. This motivated us to focus on the case where $\bar{G}$ is a complementary cumulative distribution function corresponding to some probability distribution as this may be a particularly interesting class of decreasing functions to consider. Under such a choice model, the probability of joining the $i^{\text{th}}$ queue is \begin{eqnarray}
\frac {\bar{G} (q_i(t - \Delta))}{\sum_{j=1}^{N} \bar{G} (q_j(t - \Delta))}.
\end{eqnarray}
Additionally, some complementary cumulative distribution functions can be written as the moment-generating function of some random variable $Y < 0$ (negativity of $Y$ is important so that the moment-generating function is a decreasing function) so that
\begin{eqnarray}
\bar{G}(x) = M_Y(x) := \mathbb{E}[e^{ Y x}]
\end{eqnarray}
\noindent where the expectation is taken over the random variable $Y$. An example of such a case is when the complementary cumulative distribution function corresponds to a hyperexponential distribution so that
\begin{eqnarray}
\bar{G}(x) = \sum_{k=1}^{m} p_k e^{- \theta_k x} = \mathbb{E}[e^{Y x}]
\end{eqnarray} where $Y$ is a discrete random variable where $Y = - \theta_k < 0$ with probability $p_k$ for $k=1,...,m$. In such a case, we note that the deterministic part of the utility can be viewed as the cumulant-generating function of the random variable $Y$, that is
\begin{eqnarray}
U_i = \log(\mathbb{E}[e^{Y q_i(t - \Delta)}]), \hspace{5mm} i=1,..., N.
\end{eqnarray}
We can interpret such a utility as one corresponding to an exponential distribution where we have some uncertainty about the parameter (modeled by the random variable $-Y > 0$) of the distribution and account for this uncertainty by averaging over the distribution that we assume the parameter follows. If we view the utility as a nonlinear transformation of a linear quantity (namely, $ - Y q_i(t-\Delta)$), then we can view this as averaging over the possible values for the slope of this linear quantity. Of course, such choices of $\bar{G}$ can be interpreted as Laplace transforms. For ease of notation, let $Z := -Y > 0$ so that
\begin{align}
\bar{G}(x) = M_Y(x) &= \mathbb{E}[e^{Yx}]\\
&= \mathbb{E}[e^{-Zx}]\\
&= \int_{0}^{\infty} e^{-zx} f_Z(z) dz\\
&= \mathcal{L}[f_Z](x)
\end{align}
where $f_Z$ is the probability density function of $Z$ and $\mathcal{L}$ denotes the Laplace transform. It follows that we can directly recover the density $f_Z$ from $\bar{G}(x)$ by applying the inverse Laplace transform so that
\begin{eqnarray}
f_Z = \mathcal{L}^{-1}[\bar{G}].
\label{CHOICE_MODEL_inverse_laplace_eqn}
\end{eqnarray}
This relation gives us a way of interpreting the choice models corresponding to various distributions as being induced by an exponential distribution with uncertainty in its parameter. Ultimately, we will concern ourselves with better understanding the dynamics of queueing systems with choice models induced by different probability distributions.
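As a concrete instance of this relation (a Python sketch, not part of the formal development): if the uncertain exponential rate $Z$ follows a $\text{Gamma}(a, b)$ distribution, its Laplace transform has the closed form $(b/(b+x))^a$, a Lomax-type tail, which we can verify against direct numerical integration of $\mathbb{E}[e^{-Zx}]$:

```python
import math

# Z ~ Gamma(a, rate b): its Laplace transform E[exp(-Z x)] should equal
# the closed form (b / (b + x))**a.  We check this by quadrature.
a, b = 2.0, 3.0

def f_Z(z):
    # Gamma(a, rate b) density
    return b ** a * z ** (a - 1) * math.exp(-b * z) / math.gamma(a)

def G_bar_numeric(x, n=100000, zmax=40.0):
    # trapezoidal approximation of the Laplace transform integral
    h = zmax / n
    total = 0.5 * (f_Z(0.0) + f_Z(zmax) * math.exp(-zmax * x))
    for k in range(1, n):
        z = k * h
        total += f_Z(z) * math.exp(-z * x)
    return total * h

G_bar_closed = lambda x: (b / (b + x)) ** a   # Lomax-type tail
```

The two agree to quadrature accuracy, illustrating that a heavy-tailed $\bar{G}$ can arise from an exponential choice model with a randomized rate.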
\section{Queueing System Stability Analysis}
\label{CHOICE_MODEL_hopf_section}
In this section, we will perform a stability analysis of the queueing system with the generalized functional form of the multinomial logit model. We will find an equilibrium solution of the system \ref{CHOICE_MODEL_dde_system} and use it to linearize the system, which will allow us to compute the critical delay.
It is well-known that when treating the delay $\Delta$ as a bifurcation parameter, the DDE system \ref{CHOICE_MODEL_system1} undergoes a Hopf bifurcation. In particular, there exists a critical delay value $\Delta_{\text{cr}}$ such that if $\Delta > \Delta_{\text{cr}}$, then the system is unstable and oscillates periodically. Computing this critical delay is an important part of understanding the dynamics of the queueing system. In this work, we will focus our attention on the choice model and consider how different choice models can impact the system dynamics. In particular, we will think of a choice model as being induced by a probability distribution in the sense that the probability of joining the $i^{\text{th}}$ queue can be written in the form \begin{eqnarray}
\frac {\bar{G}(q_i(t - \Delta))}{\sum_{j=1}^{N} \bar{G}(q_j(t - \Delta))}
\end{eqnarray}
where $\bar{G}$ is the complementary cumulative distribution function corresponding to the given probability distribution. As we noted above, the choice model in system \ref{CHOICE_MODEL_system1} is induced by an exponential distribution with parameter $\theta$. We will examine how the dynamics of the queueing system change when the choice model is induced by different probability distributions.
Suppose we are given some random variable $X : \Omega \to \mathbb{R}$ on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and we assume that $X$ has a cumulative distribution function $G(x) = \mathbb{P}(X \leq x)$ and a corresponding probability density function $g$ exists. We define the complementary cumulative distribution function to be $\bar{G}(x) = 1 - G(x) = \mathbb{P}(X > x)$. We also define the so-called hazard function corresponding to this distribution to be $h(x) = \frac{g(x)}{\bar{G}(x)}$. We consider the DDE system
\begin{eqnarray}
\overset{\bullet}{q}_i(t) &=& \lambda \frac{\bar{G}(q_i(t - \Delta))}{\sum_{j=1}^{N} \bar{G}(q_j(t - \Delta))} - \mu q_i(t), \hspace{5mm} i = 1, ..., N.
\end{eqnarray}
\noindent We are interested in better understanding how using complementary cumulative distribution functions of different probability distributions to build the choice model can impact the dynamics of the queueing system. We note that the system \ref{CHOICE_MODEL_dde_system} is a generalization of the system \ref{CHOICE_MODEL_system1}, the latter of which had its choice model induced by an exponential distribution with parameter $\theta > 0$. Understanding how the critical delay of the system changes as different choice models are used is very important to understanding how the system dynamics change as the critical delay tells us when the system is stable or unstable. In order to find the critical delay of the system \ref{CHOICE_MODEL_dde_system}, we will perform a linearization analysis for which an equilibrium solution is needed. Thus, we introduce an equilibrium solution with Theorem \ref{CHOICE_MODEL_equilibrium_theorem}.
\begin{theorem}
The system \ref{CHOICE_MODEL_dde_system} has an equilibrium solution at $q_1 = \cdots = q_N = q^* $ where $$q^* = \frac{\lambda}{N \mu}.$$
\label{CHOICE_MODEL_equilibrium_theorem}
\end{theorem}
\begin{proof}
Letting $q_1 = \cdots = q_N = q^*$ for some constant $q^* \in \mathbb{R}$ to be determined, we get the system of $N$ identical equations in the form
\begin{align}
0 &= \lambda \frac{\bar{G}(q^*)}{\sum_{j=1}^{N} \bar{G}(q^*)} - \mu q^*\\
&= \lambda \frac{\bar{G}(q^*)}{N \bar{G}(q^*)} - \mu q^*\\
&= \frac{\lambda}{N} - \mu q^*
\end{align}
and so it follows that $q^* = \frac{\lambda}{N \mu}$ is a solution.
\end{proof}
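To illustrate Theorem \ref{CHOICE_MODEL_equilibrium_theorem}, the following forward-Euler sketch of the DDE (Python, illustrative only; the step size and horizon are arbitrary choices) uses the exponential-induced choice model with a delay below critical and converges to $q^* = \lambda/(N\mu)$:

```python
import math

# Forward-Euler integration of the DDE with a delay buffer.
lam, mu, N, theta, delay = 10.0, 1.0, 2, 1.0, 0.3   # delay below critical
dt = 0.001
lag = int(delay / dt)            # steps spanning the delay interval
steps = int(60.0 / dt)           # time horizon

# constant history on [-delay, 0], as in the figures
hist = [[4.99, 5.01] for _ in range(lag + 1)]
for _ in range(steps):
    q = hist[-1]                 # current state q(t)
    qd = hist[-(lag + 1)]        # delayed state q(t - delay)
    w = [math.exp(-theta * x) for x in qd]
    s = sum(w)
    hist.append([q[i] + dt * (lam * w[i] / s - mu * q[i]) for i in range(N)])

q_star = lam / (N * mu)          # predicted equilibrium, = 5
```

Both queue lengths settle at $q^* = 5$, in agreement with the theorem.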
A nice property of system \ref{CHOICE_MODEL_dde_system} is that the equilibrium solution from Theorem \ref{CHOICE_MODEL_equilibrium_theorem} always exists regardless of the probability distribution that is used to define the choice model. Now that we have an equilibrium solution, we can perform the linearization analysis used to get Theorem \ref{CHOICE_MODEL_delta_cr_theorem}. Before proceeding, we want to note that the hazard function $h(x)$ corresponding to the given probability distribution is an important quantity that will show up in our analysis and thus we make a brief comment on hazard functions. We can rewrite the hazard function corresponding to the distribution of a continuous random variable $X$ as follows
\begin{align}
h(x) = \frac{g(x)}{\bar{G}(x)} = \frac{g(x)}{\mathbb{P}(X > x)} &= \lim_{\delta \to 0} \frac{G(x + \delta) - G(x)}{\delta} \cdot \frac{1}{\mathbb{P}(X > x)}\\
&= \lim_{\delta \to 0} \frac{\mathbb{P}(x < X \leq x + \delta)}{\delta} \cdot \frac{1}{ \mathbb{P}(X > x)}\\
&= \lim_{\delta \to 0} \frac{1}{\delta} \frac{\mathbb{P}( \{ X \leq x + \delta \} \cap \{ X > x \} )}{\mathbb{P}(X > x)}\\
&= \lim_{\delta \to 0} \frac{\mathbb{P}(X \leq x + \delta | X > x)}{\delta}.
\end{align}
\begin{theorem}
The system \ref{CHOICE_MODEL_dde_system} exhibits a change in stability at the critical value $\Delta_{\text{cr}}$ of the delay parameter $\Delta$ where
\begin{eqnarray}
\Delta_{\text{cr}} &=& \frac{\text{arccos} \left( \frac{\mu}{C} \right)}{\sqrt{C^2 - \mu^2}}
\end{eqnarray}
where
\begin{eqnarray}
C = - \frac{\lambda}{N} \frac{g \left( \frac{\lambda}{N \mu} \right)}{\bar{G} \left( \frac{\lambda}{N \mu} \right)} = - \frac{\lambda}{N} h \left( \frac{\lambda}{N \mu} \right)
\end{eqnarray}
and $\Delta_{\text{cr}}$ is valid when $C < - |\mu|$.
\label{CHOICE_MODEL_delta_cr_theorem}
\end{theorem}
\begin{proof}
We begin by linearizing the system \ref{CHOICE_MODEL_dde_system} about the equilibrium point $q_1 = \cdots = q_N = q^*$ where $q^* = \frac{\lambda}{N \mu}$ by introducing the variables $$q_i(t) = q^* + u_i(t), \hspace{5mm} i = 1, ..., N.$$ Letting $u(t) = (u_1(t), ..., u_N(t))^T$, we obtain the linearized system
\begin{eqnarray}
\overset{\bullet}{u}(t) &=& A u(t - \Delta) - \mu u(t)
\end{eqnarray}
where $$A = C \begin{bmatrix}
\frac{N-1}{N} & - \frac{1}{N} & \cdots & - \frac{1}{N}\\
- \frac{1}{N} & \frac{N-1}{N} & \cdots & - \frac{1}{N}\\
\vdots & \vdots & \ddots & \vdots\\
- \frac{1}{N} & - \frac{1}{N} & \cdots & \frac{N-1}{N}
\end{bmatrix}$$
where $$C = - \frac{\lambda}{N} \frac{g \left( \frac{\lambda}{N \mu} \right)}{\bar{G} \left( \frac{\lambda}{N \mu} \right)} = - \frac{\lambda}{N} h \left( \frac{\lambda}{N \mu} \right).$$ The matrix $A$ has eigenvalues $0$ with multiplicity $1$ and $C$ with multiplicity $N - 1$ and we can diagonalize the system according to the change of variables $u(t) = Ev(t)$ where $v(t) = (v_1(t), ..., v_N(t))^T$ and $E$ is a matrix whose columns are eigenvectors of $A$ so that $AE = ED$ where $D = \text{diag}(0, C, ..., C)$ is a diagonal matrix with the eigenvalues of $A$ on its diagonal. Applying this change of variables yields
\begin{align}
\overset{\bullet}{u}(t) &= A u(t - \Delta) - \mu u(t)\\
&\iff \nonumber \\
E \overset{\bullet}{v}(t) &= A E v(t - \Delta) - \mu Ev(t)\\
&\iff \nonumber \\
E\overset{\bullet}{v}(t) &= E D v(t - \Delta) - \mu Ev(t)\\
&\iff \nonumber \\
\overset{\bullet}{v}(t) &= D v(t - \Delta) - \mu v(t)
\end{align}
which can be written as
\begin{align}
\overset{\bullet}{v}_1(t) &= -\mu v_1(t) \label{CHOICE_MODEL_stable_eqn}\\
\overset{\bullet}{v}_2(t) &= C v_2(t - \Delta) - \mu v_2(t)\\
&\vdots\\
\overset{\bullet}{v}_N(t) &= C v_N(t - \Delta) - \mu v_N(t)
\end{align}
The first of these equations, Equation \ref{CHOICE_MODEL_stable_eqn}, is an ordinary differential equation whose solution decays with time since $\mu > 0$. We thus turn our attention to the remaining $N - 1$ equations, which are all DDEs of the same form. By assuming a solution of the form $v_j(t) = e^{rt}$ for $j \in \{2, ..., N\}$ and $r \in \mathbb{C}$, we get the following characteristic equation.
\begin{eqnarray}
r - Ce^{-r \Delta} + \mu &=& 0 \label{CHOICE_MODEL_characteristic_eqn}
\end{eqnarray}
Stability changes when the real part of $r$ changes signs and thus we consider when $r$ is on the imaginary axis so that $r = i \omega$ for some $\omega \in \mathbb{R}$. Separating into real and imaginary parts yields
\begin{align}
\mu &= C \cos(\omega \Delta)\\
\omega &= - C \sin(\omega \Delta)
\end{align}
so that $$\mu^2 + \omega^2 = C^2$$ and we take $$\omega = \sqrt{C^2 - \mu^2}.$$
Looking back at the characteristic equation \ref{CHOICE_MODEL_characteristic_eqn}, we have that
\begin{align}
r - Ce^{-r \Delta} + \mu &= 0\\
&\iff \nonumber \\
\frac{C}{\mu + i \omega} &= e^{i \omega \Delta}\\
&\iff \nonumber\\
\frac{C (\mu - i \omega)}{\mu^2 + \omega^2} &= e^{i \omega \Delta}\\
&\iff \nonumber \\
\frac{\mu}{C} - i \frac{\omega}{C} &= e^{i \omega \Delta}\\
&\iff \nonumber \\
\log \left( \frac{\mu}{C} - i \frac{\omega}{C} \right) &= i \omega \Delta\\
&\iff \nonumber \\
\frac{1}{2} \ln \left(\frac{\mu^2 + \omega^2}{C^2} \right) + i \text{arg} \left( \frac{\mu}{C} - i \frac{\omega}{C} \right) &= i \omega \Delta\\
&\iff \nonumber \\
\Delta &= \frac{1}{\omega} \text{arg} \left( \frac{\mu}{C} - i \frac{\omega}{C} \right)
\end{align}
where we note that $ \frac{\mu}{C} - i \frac{\omega}{C}$ is of unit modulus and thus we can alternatively write
\begin{eqnarray}
\Delta_{\text{cr}} = \frac{1}{\omega} \text{arccos} \left( \frac{\mu}{C} \right)
\end{eqnarray}
provided that $\text{arccos} \left( \frac{\mu}{C} \right) \in \mathbb{R}$, as we require $\text{arg} \left( \frac{\mu}{C} - i \frac{\omega}{C} \right) \in \mathbb{R}$ by definition. This forces $\left| \frac{\mu}{C} \right| \leq 1$, or equivalently $C^2 \geq \mu^2$. Additionally, requiring $\omega \in \mathbb{R}$ with $\omega \neq 0$ makes this inequality strict, so that $C^2 > \mu^2$. One can show that the system is unstable for all delays when $C > |\mu|$, so we require $C < -|\mu|$. These conditions ensure that a real value of $\Delta_{\text{cr}}$ exists. To account for multivaluedness, we take the smallest value of $\Delta_{\text{cr}} > 0$.
For completeness, we can write the critical delay as
\begin{eqnarray}
\Delta_{\text{cr}} = \frac{\text{arccos} \left( \frac{\mu}{C} \right)}{\sqrt{C^2 - \mu^2}} = \frac{ \text{arccos} \left( - \frac{N \mu \bar{G} \left( \frac{\lambda}{N \mu} \right) }{ \lambda g \left( \frac{\lambda}{N \mu} \right) } \right) }{ \sqrt{ \left( \frac{ \lambda g \left( \frac{\lambda}{N \mu} \right) }{N \bar{G} \left( \frac{\lambda}{N \mu} \right) } \right)^2 - \mu^2} }.
\end{eqnarray}
\end{proof}
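The formula in Theorem \ref{CHOICE_MODEL_delta_cr_theorem} is easy to implement directly; the following Python sketch (illustrative only) parameterizes it by the hazard function of the inducing distribution and reproduces the exponential-case value quoted in the figures below:

```python
import math

def critical_delay(lam, mu, N, hazard):
    """Critical delay from the theorem; `hazard` is the hazard function
    of the distribution inducing the choice model."""
    C = -(lam / N) * hazard(lam / (N * mu))
    if C >= -abs(mu):
        return math.inf   # theorem requires C < -|mu|; no crossing otherwise
    return math.acos(mu / C) / math.sqrt(C * C - mu * mu)

# Exponential distribution: constant hazard theta.  With lam = 10, mu = 1,
# N = 2, theta = 1 this gives the value 0.3617 quoted in the figures.
dcr_exp = critical_delay(10.0, 1.0, 2, lambda x: 1.0)
```

Any of the distributions considered in the next section can be plugged in by supplying its hazard function.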
As seen in the proof of Theorem \ref{CHOICE_MODEL_delta_cr_theorem}, we can express the critical delay in terms of the repeated eigenvalue $C$ of the linearized system. In particular, we see that $C$ directly depends on the hazard function of the given probability distribution. Using this information, we can explore how using different probability distributions to create the choice model can give the system different stability properties.
\section{Dynamics for Different Distributions}
\label{CHOICE_MODEL_eigenvalue_section}
In this section, we numerically examine the dynamics of the queueing system \ref{CHOICE_MODEL_dde_system} for complementary cumulative distribution functions $\bar{G}$ corresponding to various different probability distributions. In analyzing the dynamics of the queueing system, it is crucial to understand the critical delay as this provides important information about the qualitative behavior of the system. As seen in Section \ref{CHOICE_MODEL_hopf_section}, the critical delay of the queueing system depends directly on the eigenvalue $C$ of the linearized system according to the relation
\begin{eqnarray}
\Delta_{\text{cr}} = \frac{\text{arccos} \left( \frac{\mu}{C} \right)}{\sqrt{C^2 - \mu^2}}
\end{eqnarray}
where the eigenvalue $C$ is directly related to the hazard rate of the chosen probability distribution by
\begin{eqnarray}
C = - \frac{\lambda}{N} h \left( \frac{\lambda}{N \mu} \right).
\end{eqnarray}
Therefore, this eigenvalue is a useful quantity to examine to help understand the qualitative system dynamics. In the remainder of this section, we consider when the given probability distribution is exponential, normal, log-normal, Weibull, gamma, or of phase-type. For each of these distributions, we will provide useful quantities including the probability density function, denoted $g$, the complementary cumulative distribution function, denoted $\bar{G}$, the mean, the variance, and the eigenvalue $C$. Additionally, we will numerically examine how the critical delay depends on the mean and variance of the given distribution.
\subsection{Exponential Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by an exponential distribution. Below we provide some useful quantities relating to the exponential distribution.
\begin{align}
X &\sim \text{Exp}(\theta), \hspace{5mm} \theta > 0\\
g(x) &= \theta e^{- \theta x}, \hspace{5mm} x \geq 0\\
\bar{G}(x) &= e^{- \theta x}\\
\mathbb{E}[X] &= \frac{1}{\theta}\\
\text{Var}(X) &= \frac{1}{\theta^2}\\
C &= - \frac{\lambda \theta}{N}
\end{align}
The exponential distribution is a well-known nonnegative continuous probability distribution. The DDE system \ref{CHOICE_MODEL_dde_system} corresponding to an exponential distribution is equivalent to a DDE queueing system with a multinomial logit choice model, which has already been extensively studied in the existing literature. Such a choice model uses $-\theta q_i(t-\Delta)$ for the deterministic part of the utility of joining the $i^{\text{th}}$ queue so that the utility depends linearly on the delayed queue length. We note that the exponential distribution is determined by a single parameter $\theta > 0$ as we see that its mean and variance are both fully determined by this parameter. Additionally, the exponential distribution has a constant hazard function given by $h(x) = g(x)/\bar{G}(x) = \theta$.
The exponential complementary cumulative distribution function can be viewed as the moment generating function of a random variable $Y$ that takes on the value $- \theta$ with probability $1$ and thus, in accordance with Equation \ref{CHOICE_MODEL_inverse_laplace_eqn}, we have
\begin{eqnarray}
f_{-Y}(x) = \mathcal{L}^{-1}[e^{-\theta x}] = \delta(x - \theta)
\end{eqnarray}
where $\delta(x)$ denotes a Dirac delta centered at the origin. This is the trivial case in which we have complete certainty about the parameter of the exponential distribution.
In Figure \ref{CHOICE_MODEL_fig_exponential} we show queue length plots and phase diagrams on each side of the critical delay. In particular, we observe that the queue lengths approach an equilibrium when the delay is below the critical delay and the queue lengths oscillate with some limiting amplitude when the delay is larger than the critical delay. It is worth noting that the mean and variance of the exponential distribution are determined by a single parameter and thus the exponential distribution has a fixed variance for a fixed mean. In Figure \ref{CHOICE_MODEL_fig_exponential_mean} we see how the value of the critical delay varies as the mean of the distribution is varied. We see that there is a critical value of the mean that, when approached, can make the critical delay arbitrarily large. This is useful information because instability can lead to inefficiencies in the queueing system as the resulting queue length oscillations can cause some servers to be overworked and others to be underworked. Thus, being able to make the critical delay large would be beneficial for the overall productivity of the queueing system as large delays could still result in a stable system.
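The blow-up of the critical delay at a particular mean can be made explicit in the exponential case: with mean $m$ we have $\theta = 1/m$ and hence $C = -\lambda/(Nm)$, so $\Delta_{\text{cr}} \to \infty$ as $m \uparrow \lambda/(N\mu)$ and no instability occurs beyond that point. A Python sketch (parameters as in the figures):

```python
import math

def dcr_exponential(mean, lam=10.0, mu=1.0, N=2):
    C = -lam / (N * mean)                # C = -(lam/N) * theta with theta = 1/mean
    if C >= -abs(mu):
        return math.inf                  # stable for every delay
    return math.acos(mu / C) / math.sqrt(C * C - mu * mu)

# Critical delay grows without bound as the mean approaches
# lam / (N * mu) = 5 from below, and is infinite beyond it.
vals = [dcr_exponential(m) for m in (1.0, 3.0, 4.9, 5.5)]
```

The computed values increase monotonically toward infinity as the mean approaches $5$, matching the shape of the curve in Figure \ref{CHOICE_MODEL_fig_exponential_mean}.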
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/exponential1.eps} & \includegraphics[scale=.55]{./Figures/exponential2.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/exponential_phase1.eps} & \includegraphics[scale=.55]{./Figures/exponential_phase2.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by an \textbf{exponential distribution} with $\theta = 1$ with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .3$ (a) and $\Delta = .7$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .3$ (c) and $\Delta = .7$ (d). The critical delay is $\Delta_{\text{cr}} = .3617$.}
\label{CHOICE_MODEL_fig_exponential}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Exponential_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Exponential_lam10_mu1_N2_mean_varied.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{exponential distribution} that induces the choice model. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_exponential_mean}
\end{figure}
\subsection{Normal Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by a normal distribution. Below we provide some useful quantities relating to the normal distribution.
\begin{align}
X &\sim \text{Normal}(\alpha, \sigma^2)\\
g(x) &= \frac{1}{\sigma \sqrt{2 \pi}} \exp \left( - \frac{1}{2} \left( \frac{x - \alpha}{\sigma} \right)^2 \right)\\
\bar{G}(x) &= \frac{1}{2} \left[ 1 - \text{erf} \left( \frac{x - \alpha}{\sigma \sqrt{2}} \right) \right]\\
\mathbb{E}[X] &= \alpha\\
\text{Var}(X) &= \sigma^2\\
C &= - \sqrt{\frac{2}{\pi}}\frac{\lambda}{N \sigma} \frac{\exp \left( - \frac{1}{2} \left( \frac{ \frac{\lambda}{N \mu} - \alpha}{\sigma} \right)^2 \right)}{\left[ 1 - \text{erf} \left( \frac{ \frac{\lambda}{N \mu} - \alpha}{\sigma \sqrt{2}} \right) \right] }
\end{align}
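The eigenvalue $C$ and critical delay above are readily evaluated with standard-library special functions; the following Python sketch (illustrative only) uses the parameter values from the figures below, $\alpha = \sigma = 1$, $\lambda = 10$, $\mu = 1$, $N = 2$, and reproduces the quoted critical delay $0.0767$:

```python
import math

# Hazard of Normal(alpha, sigma^2) at the equilibrium queue length,
# using erfc for the complementary CDF, then the critical delay.
alpha, sigma = 1.0, 1.0
lam, mu, N = 10.0, 1.0, 2
q = lam / (N * mu)                                    # equilibrium, = 5
g = math.exp(-0.5 * ((q - alpha) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
G_bar = 0.5 * math.erfc((q - alpha) / (sigma * math.sqrt(2)))
C = -(lam / N) * g / G_bar
dcr = math.acos(mu / C) / math.sqrt(C * C - mu * mu)
```

Note that $\bar{G}$ here is evaluated far in the upper tail of the distribution, which is why the hazard (and hence $|C|$) is large and the critical delay small.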
An important difference between the normal distribution and the exponential distribution is that the normal distribution is characterized by two parameters instead of just one parameter, its mean parameter $\alpha$ and standard deviation parameter $\sigma$. This is interesting to consider because we can vary the variance of the normal distribution for a fixed mean and vice versa and thus we have the freedom to adjust the value of the critical delay even if one of these parameters remains fixed.
The normal distribution has an interesting connection to hazard functions. In particular, considering the mean of a so-called truncated normal distribution, one can show that for $X \sim \text{Normal}(\alpha, \sigma^2)$ and $z = \frac{a - \alpha}{\sigma}$
\begin{align}
\mathbb{E}[X | X < a] &= \alpha - \sigma \frac{\phi(z)}{\Phi(z)}\\
\mathbb{E}[X | X > a] &= \alpha + \sigma \frac{\phi(z)}{1 - \Phi(z)} = \alpha + \sigma h(z)
\end{align}
\noindent where $\phi$ and $\Phi$ are the probability density function and cumulative distribution function of the standard normal distribution and $h(z) = \phi(z)/(1 - \Phi(z))$ is its hazard function, which appears directly in the upper truncation. This is true largely because the probability density function $\phi$ of the standard normal distribution has the nice property that $\frac{d}{dx} \phi(x) = -x \phi(x)$.
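The upper-truncation identity can be verified numerically; the following Python sketch (illustrative parameter values) compares a quadrature estimate of $\mathbb{E}[X \mid X > a]$ against the hazard-function expression:

```python
import math

phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
Phi_bar = lambda z: 0.5 * math.erfc(z / math.sqrt(2))   # P(Z > z), standard normal
hazard = lambda z: phi(z) / Phi_bar(z)                  # standard normal hazard

alpha, sigma, a = 1.0, 2.0, 1.5                         # X ~ Normal(1, 4)
z = (a - alpha) / sigma

# E[X | X > a] by midpoint-rule quadrature over [a, alpha + 12 sigma]
n, upper = 100000, alpha + 12 * sigma
dx = (upper - a) / n
num = 0.0
for k in range(n):
    x = a + (k + 0.5) * dx
    num += x * phi((x - alpha) / sigma) / sigma * dx
cond_mean = num / Phi_bar(z)

identity = alpha + sigma * hazard(z)                    # closed form
```

The two agree to quadrature accuracy.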
In Figure \ref{CHOICE_MODEL_fig_normal} we show queue length plots and phase diagrams on each side of the critical delay. In Figure \ref{CHOICE_MODEL_fig_normal_mean} we see how the value of the critical delay varies as the mean is varied with a fixed variance and in Figure \ref{CHOICE_MODEL_fig_normal_variance} we see how it changes as the variance is varied with a fixed mean. We see that in the cases considered, there is a critical value at which the critical delay can be made arbitrarily large when approached.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/normal1.eps} & \includegraphics[scale=.55]{./Figures/normal2.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/normal_phase1.eps} & \includegraphics[scale=.55]{./Figures/normal_phase2.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by a \textbf{normal distribution} with $\alpha = 1$ and $\sigma = 1$ with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .05$ (a) and $\Delta = .1$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .05$ (c) and $\Delta = .1$ (d). The critical delay is $\Delta_{\text{cr}} = .0767$.}
\label{CHOICE_MODEL_fig_normal}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Normal_Variance1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Normal_Variance1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{normal distribution} that induces the choice model with a fixed variance of 1. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_normal_mean}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Normal_Mean1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Normal_Mean1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{normal distribution} that induces the choice model with a fixed mean of 1. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_normal_variance}
\end{figure}
\subsection{Log-Normal Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by a log-normal distribution. Below we provide some useful quantities relating to the log-normal distribution.
\begin{align}
X &\sim \text{Log-Normal}(\alpha, \sigma^2)\\
g(x) &= \frac{1}{x \sigma \sqrt{2 \pi}} \exp \left( - \frac{1}{2} \left( \frac{ \ln(x) - \alpha}{\sigma} \right)^2 \right)\\
\bar{G}(x) &= \frac{1}{2} \left[ 1 - \text{erf} \left( \frac{\ln(x) - \alpha}{\sigma \sqrt{2}} \right) \right]\\
\mathbb{E}[X] &= \exp \left( \alpha + \frac{\sigma^2}{2} \right)\\
\text{Var}(X) &= (\exp(\sigma^2) - 1) \exp(2\alpha + \sigma^2)\\
C &= - \sqrt{\frac{2}{\pi}}\frac{\mu}{\sigma} \frac{\exp \left( - \frac{1}{2} \left( \frac{ \ln \left(\frac{\lambda}{N \mu} \right) - \alpha}{\sigma} \right)^2 \right)}{\left[ 1 - \text{erf} \left( \frac{ \ln \left( \frac{\lambda}{N \mu} \right) - \alpha}{\sigma \sqrt{2}} \right) \right] }
\end{align}
The log-normal distribution is a continuous probability distribution such that the natural logarithm of a log-normal random variable with parameters $\alpha$ and $\sigma$ has a normal distribution with mean $\alpha$ and standard deviation $\sigma$. In Figure \ref{CHOICE_MODEL_fig_lognormal} we show queue length plots and phase diagrams on each side of the critical delay. In Figure \ref{CHOICE_MODEL_fig_lognormal_mean} we see how the value of the critical delay varies as the mean is varied with a fixed variance and in Figure \ref{CHOICE_MODEL_fig_lognormal_variance} we see how it changes as the variance is varied with a fixed mean. As the mean varies, there is a critical value of the mean at which the critical delay becomes arbitrarily large. This resembles the normal case, except that for the log-normal distribution the critical delay also grows as the mean approaches zero. Additionally, the log-normal distribution appears to have a larger critical variance than the normal distribution does for unit mean, while its critical delay is larger for small variance than in the normal case.
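The formula for $C$ can again be evaluated directly; the following Python sketch (illustrative only) uses the mean-$1$, variance-$1$ parameters from the figures below, $\alpha = -\frac{1}{2}\log 2$ and $\sigma = \sqrt{\log 2}$, and reproduces the quoted critical delay $0.6148$:

```python
import math

# Log-normal hazard at the equilibrium queue length, then the critical delay.
lam, mu, N = 10.0, 1.0, 2
alpha, sigma = -0.5 * math.log(2), math.sqrt(math.log(2))  # mean 1, variance 1
q = lam / (N * mu)                                          # equilibrium, = 5
z = (math.log(q) - alpha) / sigma
g = math.exp(-0.5 * z * z) / (q * sigma * math.sqrt(2 * math.pi))
G_bar = 0.5 * math.erfc(z / math.sqrt(2))
C = -(lam / N) * g / G_bar
dcr = math.acos(mu / C) / math.sqrt(C * C - mu * mu)
```

The slower-decaying log-normal tail yields a much smaller $|C|$ than the normal case with the same mean and variance, and hence a much larger critical delay.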
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/lognormal1.eps} & \includegraphics[scale=.55]{./Figures/lognormal2.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/lognormal_phase1.eps} & \includegraphics[scale=.55]{./Figures/lognormal_phase2.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by a \textbf{log-normal distribution} with $\alpha = - \frac{1}{2} \log(2)$ and $ \sigma = \sqrt{\log(2)}$ (which results in mean 1 and variance 1) with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .5$ (a) and $\Delta = 1$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .5$ (c) and $\Delta = 1$ (d). The critical delay is $\Delta_{\text{cr}} = .6148$.}
\label{CHOICE_MODEL_fig_lognormal}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/LogNormal_Variance1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_LogNormal_Variance1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{log-normal distribution} that induces the choice model with a fixed variance of 1. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_lognormal_mean}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/LogNormal_Mean1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_LogNormal_Mean1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{log-normal distribution} that induces the choice model with a fixed mean of 1. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_lognormal_variance}
\end{figure}
\subsection{Weibull Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by a Weibull distribution. Below we provide some useful quantities relating to the Weibull distribution.
\begin{align}
X &\sim \text{Weibull}(\alpha, \beta)\\
g(x) &= \beta \alpha x^{\alpha-1}e^{-\beta x^\alpha}, \hspace{5mm} x, \alpha, \beta > 0 \\
\bar{G}(x) &= e^{-\beta x^\alpha}\\
\mathbb{E}[X] &= \beta^{-1/\alpha}\Gamma(1+1/\alpha)\\
\text{Var}(X) &= \beta^{-2/\alpha}\left[\Gamma\left(1+\frac{2}{\alpha}\right) - \left(\Gamma\left(1+\frac{1}{\alpha}\right)\right)^2\right]\\
C &= - \frac{\lambda \beta \alpha \left( \frac{\lambda}{\mu N}\right)^{\alpha-1} }{N }
\end{align}
The Weibull distribution is a continuous probability distribution with two positive parameters $\alpha$ and $\beta$. Some of the above quantities are defined in terms of the gamma function \begin{eqnarray}
\Gamma(x) := \int_{0}^{\infty} z^{x-1} e^{-z} dz, \hspace{5mm} \text{Re}(x) > 0
\label{CHOICE_MODEL_gamma_function_def}
\end{eqnarray} which has the property that $\Gamma(n) = (n-1)!$ when $n$ is a positive integer and can be viewed as a smooth interpolation of the factorial function. The Weibull distribution has a polynomial hazard function \begin{eqnarray}
h(x) = \beta \alpha x^{\alpha - 1}
\end{eqnarray} so that it is a decreasing function when $\alpha < 1$ and an increasing function when $\alpha > 1$. When $\alpha = 1$, the hazard rate is the constant $\beta$ and the Weibull distribution reduces to an exponential distribution with parameter $\beta > 0$.
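The polynomial hazard makes the critical delay computation particularly simple; the following Python sketch (illustrative only) uses the parameters of the figures below, $\alpha = 2$ and $\beta = \Gamma(1 + \frac{1}{\alpha})^{\alpha}$ (which gives unit mean), with $\lambda = 10$, $\mu = 1$, $N = 2$, and reproduces the quoted critical delay $0.0407$:

```python
import math

# Weibull hazard h(x) = beta * alpha * x**(alpha - 1) at equilibrium.
lam, mu, N = 10.0, 1.0, 2
a = 2.0
b = math.gamma(1 + 1 / a) ** a          # unit mean: beta**(-1/a) * Gamma(1 + 1/a) = 1
q = lam / (N * mu)                      # equilibrium, = 5
C = -(lam / N) * b * a * q ** (a - 1)
dcr = math.acos(mu / C) / math.sqrt(C * C - mu * mu)
```

With $\alpha = 2$ the hazard grows linearly in the queue length, so $|C|$ is large at equilibrium and the critical delay is correspondingly small.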
In Figure \ref{CHOICE_MODEL_fig_weibull} we show queue length plots and phase diagrams on each side of the critical delay. In Figures \ref{CHOICE_MODEL_fig_weibull_mean} and \ref{CHOICE_MODEL_fig_weibull_mean_a_point5} we see how the value of the critical delay varies as the mean is varied with a fixed variance and in Figures \ref{CHOICE_MODEL_fig_weibull_variance} and \ref{CHOICE_MODEL_fig_weibull_variance_a_point5} we see how it changes as the variance is varied with a fixed mean. We see that in the cases considered, there is a critical mean value and a critical variance value at which the critical delay is unbounded. The shape of the critical delay against variance plot appears similar to the one in the log-normal case, noting the initial concave down section for small variance.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/weibull1.eps} & \includegraphics[scale=.55]{./Figures/weibull2.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/weibull_phase1.eps} & \includegraphics[scale=.55]{./Figures/weibull_phase2.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by a \textbf{Weibull distribution} with $\alpha = 2$ and $\beta = \left( \Gamma(1 + \frac{1}{\alpha}) \right)^{\alpha}$ with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .03$ (a) and $\Delta = .06$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .03$ (c) and $\Delta = .06$ (d). The critical delay is $\Delta_{\text{cr}} = .0407$.}
\label{CHOICE_MODEL_fig_weibull}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Weibull_a2_lam10_mu1_N2_mean_varied.eps} \includegraphics[scale=.6]{./Figures/PDF_Weibull_a2_lam10_mu1_N2_mean_varied.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{Weibull distribution} that induces the choice model with fixed $\alpha = 2$. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_weibull_mean}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Weibull_a2_lam10_mu1_N2_variance_varied.eps} \includegraphics[scale=.6]{./Figures/PDF_Weibull_a2_lam10_mu1_N2_variance_varied.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{Weibull distribution} that induces the choice model with fixed $\alpha = 2$. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_weibull_variance}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Weibull_a0.5_lam10_mu1_N2_mean_varied.eps} \includegraphics[scale=.6]{./Figures/PDF_Weibull_a0.5_lam10_mu1_N2_mean_varied.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{Weibull distribution} that induces the choice model with fixed $\alpha = \frac{1}{2}$. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_weibull_mean_a_point5}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Weibull_a0.5_lam10_mu1_N2_variance_varied.eps} \includegraphics[scale=.6]{./Figures/PDF_Weibull_a0.5_lam10_mu1_N2_variance_varied.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{Weibull distribution} that induces the choice model with fixed $\alpha = \frac{1}{2}$. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_weibull_variance_a_point5}
\end{figure}
\subsection{Gamma Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by a gamma distribution. Below we provide some useful quantities relating to the gamma distribution.
\begin{align}
X &\sim \text{Gamma}(\alpha, \beta), \hspace{5mm} \alpha, \beta > 0\\
g(x) &= \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x }\\
\bar{G}(x) &= 1- \frac{1}{\Gamma(\alpha)} \gamma(\alpha, \beta x)\\
\mathbb{E}[X] &= \frac{\alpha}{\beta}\\
\text{Var}(X) &= \frac{\alpha}{\beta^2}\\
C &= - \frac{\lambda \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{\lambda}{N \mu} \right)^{\alpha - 1} e^{-\beta \frac{\lambda}{N \mu} }}{N \left( 1- \frac{1}{\Gamma(\alpha)} \gamma(\alpha, \beta \frac{\lambda}{N \mu}) \right) }
\end{align}
We consider the gamma distribution, which can be viewed as a generalization of the exponential distribution. In particular, if the parameter $\alpha$ is a positive integer, then the gamma distribution reduces to an Erlang distribution, which in turn reduces to an exponential distribution when $\alpha = 1$. Some of the above quantities are expressed in terms of the gamma function of Equation \ref{CHOICE_MODEL_gamma_function_def}, but we note that some quantities also depend on the lower incomplete gamma function \begin{eqnarray}
\gamma(x, t) := \int_{0}^{t} z^{x - 1} e^{-z} dz, \hspace{5mm} \text{Re}(x) > 0.
\end{eqnarray} Alternatively, one could choose to express quantities involving the lower incomplete gamma function in terms of the upper incomplete gamma function \begin{eqnarray}
\Gamma(x, t) := \int_{t}^{\infty} z^{x - 1} e^{-z} dz, \hspace{5mm} \text{Re}(x) > 0
\end{eqnarray} since we have the relation \begin{eqnarray}
\Gamma(x) = \gamma(x, t) + \Gamma(x, t)
\end{eqnarray} so that one could instead write \begin{eqnarray}
\bar{G}(x) = \frac{1}{\Gamma(\alpha)} \Gamma(\alpha, \beta x) .
\end{eqnarray}
With this in mind, one could alternatively express the eigenvalue as \begin{eqnarray}
C = - \frac{\lambda}{N} \frac{\beta^{\alpha} \left( \frac{\lambda}{N \mu} \right)^{\alpha - 1} e^{ - \beta \frac{\lambda}{N \mu}}}{\Gamma(\alpha, \beta \frac{\lambda}{N \mu})}.
\end{eqnarray}
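For integer $\alpha$, the regularized upper incomplete gamma function has the elementary Erlang form $\Gamma(\alpha, z)/\Gamma(\alpha) = e^{-z}\sum_{k=0}^{\alpha - 1} z^k/k!$, so the eigenvalue above can be evaluated without special-function libraries. A minimal sketch with $\lambda = 10$, $\mu = 1$, $N = 2$, assuming the linearized dynamics $\dot{v} = C v(t-\Delta) - \mu v$ with first Hopf crossing at $\Delta_{\rm cr} = \arccos(\mu/C)/\sqrt{C^2 - \mu^2}$:

```python
import math

def eigenvalue_C(alpha, beta, lam, mu, N):
    """C for a gamma-induced choice model with integer alpha (Erlang case)."""
    x = lam / (N * mu)                              # equilibrium queue length
    z = beta * x
    # regularized upper incomplete gamma: Gamma(alpha, z)/Gamma(alpha), integer alpha
    Gbar = math.exp(-z) * sum(z ** k / math.factorial(k) for k in range(int(alpha)))
    g = beta ** alpha / math.gamma(alpha) * x ** (alpha - 1) * math.exp(-z)
    return -lam * g / (N * Gbar)

lam, mu, N = 10.0, 1.0, 2
C = eigenvalue_C(1, 1.0, lam, mu, N)                # alpha = beta = 1: exponential case
assert abs(C + lam / N) < 1e-12                     # exponential hazard is constant

delta_cr = math.acos(mu / C) / math.sqrt(C ** 2 - mu ** 2)
print(round(delta_cr, 4))                           # 0.3617, matching the figure caption
```

For $\alpha = \beta = 1$ this recovers the critical delay $\Delta_{\text{cr}} = .3617$ reported below for the exponential case.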
In Figure \ref{CHOICE_MODEL_fig_gamma} we show queue length plots and phase diagrams on each side of the critical delay. In Figure \ref{CHOICE_MODEL_fig_gamma_mean} we see how the value of the critical delay varies as the mean is varied with a fixed variance and in Figure \ref{CHOICE_MODEL_fig_gamma_variance} we see how it changes as the variance is varied with a fixed mean. For unit variance, we see that there appear to be two critical mean values at which the critical delay becomes unbounded. For unit mean, there appears to be a single critical value of the variance.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/gamma1.eps} & \includegraphics[scale=.55]{./Figures/gamma2.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/gamma_phase1.eps} & \includegraphics[scale=.55]{./Figures/gamma_phase2.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by a \textbf{Gamma distribution} with $\alpha = \beta = 1$ (which reduces to an exponential distribution) with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .3$ (a) and $\Delta = .7$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .3$ (c) and $\Delta = .7$ (d). The critical delay is $\Delta_{\text{cr}} = .3617$.}
\label{CHOICE_MODEL_fig_gamma}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Gamma_Variance1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Gamma_Variance1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the mean of the \textbf{gamma distribution} that induces the choice model with a fixed variance of 1. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_gamma_mean}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Gamma_Mean1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Gamma_Mean1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{gamma distribution} that induces the choice model with a fixed mean of 1. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_gamma_variance}
\end{figure}
\subsection{Phase Type Distribution}
In this section, we assume that the complementary cumulative distribution function $\bar{G}$ that characterizes the choice model is given by a phase-type distribution. Below we provide some useful quantities relating to phase-type distributions.
\begin{align}
X &\sim \text{Ph}(\boldsymbol{\alpha}, \mathbf{S^{0}}, S)\\
g(x) &= \boldsymbol{\alpha}\exp({S}x)\mathbf{S^{0}} \\
\bar{G}(x) &= \boldsymbol{\alpha}\exp({S}x)\mathbf{1} \\
\mathbb{E}[X^n] &= (-1)^{n}n!\boldsymbol{\alpha}{S}^{-n}\mathbf{1}\\
\text{Var}(X) &= 2\boldsymbol{\alpha}{S}^{-2}\mathbf{1}-(\boldsymbol{\alpha}{S}^{-1}\mathbf{1})^{2}\\
C &= - \frac{ \lambda \boldsymbol{\alpha}\exp \left({S} \frac{\lambda}{N \mu} \right)\mathbf{S^{0}} }{N \boldsymbol{\alpha}\exp \left({S}\frac{\lambda}{N \mu} \right)\mathbf{1}}
\end{align}
Phase-type distributions describe the distribution of the time until absorption in a continuous-time Markov chain with a finite state space in which one state is absorbing and the remaining states are transient. Consider a continuous-time Markov chain on $p + 1$ states, $p \in \mathbb{Z}^+$, with state space $\{0, 1, ..., p \}$, where state $0$ is absorbing and states $1, ..., p$ are transient, and let $\boldsymbol{\alpha} \in \mathbb{R}^{1 \times p}$ be a vector of probabilities whose $i^{\text{th}}$ entry is the probability that the chain starts in the $i^{\text{th}}$ transient state. The continuous-time Markov chain has transition-rate matrix \begin{eqnarray}
Q = \begin{bmatrix}
0 & \boldsymbol{0} \\
\boldsymbol{S}^{0} & S
\end{bmatrix}
\end{eqnarray} where $\boldsymbol{0} \in \mathbb{R}^{1 \times p}$ is a vector with each entry $0$, $S \in \mathbb{R}^{p \times p}$, and $\boldsymbol{S}^{0} = - S \boldsymbol{1}$ where $\boldsymbol{1} \in \mathbb{R}^{p \times 1}$ is a vector with each entry $1$. Special cases of phase-type distributions include the Erlang distribution, which is the distribution of a sum of independent and identically distributed exponential random variables, and the hyperexponential distribution, which is a mixture of exponential distributions.
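Evaluating $\bar{G}(x) = \boldsymbol{\alpha}\exp(Sx)\mathbf{1}$ requires only a matrix exponential, and for the small matrices arising here a truncated Taylor series is adequate. The sketch below (a minimal illustration, not a robust matrix-exponential implementation) checks that a diagonal $S = \text{diag}(-\theta_1, -\theta_2)$ recovers the hyperexponential survival function $\sum_k p_k e^{-\theta_k x}$:

```python
import math

def expm(S, terms=60):
    """Truncated Taylor-series matrix exponential for a small matrix (illustrative only)."""
    n = len(S)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][m] * S[m][j] for m in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def phase_type_survival(alpha_vec, S, x):
    """G-bar(x) = alpha * exp(S x) * 1 for a phase-type distribution."""
    n = len(S)
    Sx = [[S[i][j] * x for j in range(n)] for i in range(n)]
    E = expm(Sx)
    return sum(alpha_vec[i] * sum(E[i][j] for j in range(n)) for i in range(n))

p, th1, th2 = 0.3, 1.8367, 0.8367
alpha_vec = [p, 1.0 - p]
S = [[-th1, 0.0], [0.0, -th2]]
x = 2.5
closed_form = p * math.exp(-th1 * x) + (1 - p) * math.exp(-th2 * x)
assert abs(phase_type_survival(alpha_vec, S, x) - closed_form) < 1e-10
```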
For example, a hyperexponential distribution has probability density function and complementary cumulative distribution function
\begin{align}
g(x) &= \sum_{k=1}^{m} p_k \theta_k e^{- \theta_k x}\\
\bar{G}(x) &= \sum_{k=1}^{m} p_k e^{-\theta_k x}
\end{align}
with $\sum_{k=1}^{m} p_k = 1$ and $p_k \in [0, 1]$ for $k=1,...,m$. It follows that, under the corresponding choice model, the probability of joining the $i^{\text{th}}$ queue is
\begin{eqnarray}
\frac{\bar{G}(q_i(t - \Delta))}{ \sum_{j=1}^{N} \bar{G}(q_j(t - \Delta)) } = \frac{\sum_{k=1}^{m} p_k e^{-\theta_k q_i(t - \Delta)} }{ \sum_{j=1}^{N} \sum_{k=1}^{m} p_k e^{-\theta_k q_j(t - \Delta) }}
\end{eqnarray}
for $i=1, ..., N$. It turns out that we can view this choice of $\bar{G}$ as the moment-generating function of a discrete random variable $Y < 0$ that takes on the value $Y = -\theta_k$ with probability $p_k$ for $k=1, ..., m$.
In Figure \ref{CHOICE_MODEL_fig_hyperexponential} we show queue length plots and phase diagrams on each side of the critical delay when considering the special case of a hyperexponential distribution. In Figure \ref{CHOICE_MODEL_fig_hyperexponential_mean} we see how the value of the critical delay varies as the mean is varied with a fixed variance and in Figure \ref{CHOICE_MODEL_fig_hyperexponential_variance} we see how it changes as the variance is varied with a fixed mean, both for the case of a hyperexponential distribution. For the cases considered, there appears to be a critical value of the variance as well as a few critical values for the mean.
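The critical delay $\Delta_{\text{cr}} = .4443$ quoted in Figure \ref{CHOICE_MODEL_fig_hyperexponential} can be recovered directly from the hyperexponential density and survival function, again assuming linearized dynamics of the form $\dot{v} = C v(t-\Delta) - \mu v$ with first Hopf crossing at $\Delta_{\rm cr} = \arccos(\mu/C)/\sqrt{C^2 - \mu^2}$:

```python
import math

# hyperexponential parameters from the figure caption: mixing weight p, rates theta_k
p, th1, th2 = 0.3, 1.8367, 0.8367
lam, mu, N = 10.0, 1.0, 2

q = lam / (N * mu)                                   # equilibrium queue length
g = p * th1 * math.exp(-th1 * q) + (1 - p) * th2 * math.exp(-th2 * q)   # density at q*
Gbar = p * math.exp(-th1 * q) + (1 - p) * math.exp(-th2 * q)            # survival at q*
C = -lam * g / (N * Gbar)                            # eigenvalue from the hazard at q*

delta_cr = math.acos(mu / C) / math.sqrt(C ** 2 - mu ** 2)
print(round(delta_cr, 4))                            # 0.4443
```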
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=.55]{./Figures/hyperexponential1_p=.3.eps} & \includegraphics[scale=.55]{./Figures/hyperexponential2_p=.3.eps} \\
(a) & (b) \\[6pt]
\includegraphics[scale=.55]{./Figures/hyperexponential_phase1_p=.3.eps} & \includegraphics[scale=.55]{./Figures/hyperexponential_phase2_p=.3.eps} \\
(c) & (d) \\[6pt]
\end{tabular}
\caption{Before and after the change in stability using the choice model induced by a \textbf{phase-type distribution} with $\boldsymbol{\alpha} = (.3, .7)$ and $S = \text{diag}(-1.8367, -.8367)$ (which results in a hyperexponential distribution with mean $1$) with constant history function on $[-\Delta, 0]$ with $q_1 = 4.99$ and $q_2 = 5.01$, $N = 2, \lambda = 10$, $\mu = 1$. The top two plots are queue length versus time with $\Delta = .3$ (a) and $\Delta = .5$ (b). The bottom two plots are phase plots of the queue length derivative with respect to time against queue length for $\Delta = .3$ (c) and $\Delta = .5$ (d). The critical delay is $\Delta_{\text{cr}} = .4443$.}
\label{CHOICE_MODEL_fig_hyperexponential}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Hyperexponential_Mean1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Hyperexponential_Variance1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the mean of a \textbf{hyperexponential distribution} (in this case, composed of two evenly-weighted exponential distributions) that induces the choice model with a fixed variance of 1. Right: A plot of the probability density function used for some selected values of the mean.}
\label{CHOICE_MODEL_fig_hyperexponential_mean}
\end{figure}
\begin{figure}
\hspace{-10mm} \includegraphics[scale=.6]{./Figures/Hyperexponential_Variance1_lam10_mu1_N2.eps} \includegraphics[scale=.6]{./Figures/PDF_Hyperexponential_Mean1_lam10_mu1_N2.eps}
\caption{Left: The critical delay plotted against the variance of the \textbf{hyperexponential distribution} (in this case, composed of two evenly-weighted exponential distributions) that induces the choice model with a fixed mean of 1. Right: A plot of the probability density function used for some selected values of the variance.}
\label{CHOICE_MODEL_fig_hyperexponential_variance}
\end{figure}
\section{Conclusion}
\label{CHOICE_MODEL_conclusion_section}
In this paper, we examine choice models whose utilities are functions of the complementary cumulative distribution function of some probability distribution, and we consider an infinite-server fluid queueing system in which customers are informed by delayed queue length information. We determine the critical delay of this queueing system in terms of the hazard function of the given probability distribution. We consider how choice models with functional forms based on various probability distributions can impact the dynamics of the queueing system. In particular, we see that it is often possible to choose a probability distribution with a specific mean and variance so as to make the critical delay arbitrarily large. This information is useful because such probability distributions can result in a queueing system that is robust to large delays in information. Naturally, there is room for extending this work by considering information other than the standard delayed queue length, such as delayed queue velocity information or updating queue length information. Additionally, other probability distributions could be considered, and over larger ranges of parameters. One could also consider other classes of decreasing functions on which to base the functional form of the choice model. It could be interesting to focus more attention on choice models induced by an exponential distribution in which the distribution parameter is uncertain and could potentially be viewed as a random variable following various distributions.
\section{Acknowledgements}
We would like to thank the Center for Applied Mathematics at Cornell University for sponsoring Philip Doldo’s research. Finally, we acknowledge the gracious support of the National Science Foundation (NSF) for Jamol Pender's Career Award CMMI \# 1751975.
\bibliographystyle{plainnat}
\section{Introduction}
A thin superconductor film in a perpendicular magnetic field is the configuration typical
of experiments with superconducting materials, and is employed in various physical devices
(SQUIDs, magnetic traps for cold atoms, etc.). Macroscopically, the magnetization of
type-II superconductors is well described by eddy current models with critical state
\cite{Bean,Kim} or power law \cite{Rhyner} current-voltage relations.
Solving these highly nonlinear eddy current problems helps to understand the
peculiarities of magnetic flux penetration into thin films,
and is necessary for the design of superconductor-based electronic devices.
Analytically, the sheet current density is known for
the Bean critical state model in both the thin disk \cite{MikhKuz,ClemSanchez} and strip
\cite{BrandtIndenbomForkl} geometries. Numerical methods for modeling magnetization
in flat films of arbitrary shapes were derived, for the power law model,
by Brandt and co-workers in \cite{Brandt95,SchusterB96};
see also \cite{VSGJ07,VSGJ08,VSGJ12} and the references therein.
For the critical state model
a numerical scheme, based on a variational formulation of the thin film magnetization
problems, has been proposed in \cite{P98}; see also \cite{NSDVCh}.
Common to these numerical algorithms is the use
of a scalar magnetization (stream) function as the main variable. The sheet current density
in the film is obtained as the 2D curl of this function;
the magnetic field can then be computed from the current density via the Biot-Savart law
and compared to magneto-optical imaging results.
The electric field in a superconductor is also of much interest: it is needed to find the
distribution of the energy dissipation, which is often very nonuniform and can cause
thermal instabilities. Computing the electric field $\bi{e}$ by means of existing numerical
schemes can, however, be difficult for the power law model,
\beq \bi{e}=e_0(j/j_{\rm c})^{p-1}\bi{j}/j_{\rm c}, \label{power}\eeq
where $\bi{j}$ is the sheet current density, $j=|\bi{j}|$, $e_0$ is a constant, $j_{\rm c}$
is the critical sheet current density, and the power $p$ is, typically, between 10 and 100.
Indeed, even if the magnetization function is found with good accuracy, its numerical
derivatives determining the sheet current density in the film are, inevitably,
much less accurate. Computing the electric field via the constitutive relation (\ref{power})
increases the error further and makes it unacceptably large if the power $p$ is high.
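This amplification is easy to quantify: since $|\bi{e}| \propto j^{p}$, a relative error $\varepsilon$ in the computed current density grows to roughly $(1 + \varepsilon)^p - 1 \approx p\,\varepsilon$ in the electric field. A short illustration (the error levels are hypothetical, chosen only to exhibit the scaling):

```python
# relative error in |e| caused by a relative error eps in |j|, for |e| proportional to j**p
for p in (10, 30, 100):
    for eps in (0.001, 0.01):
        err_e = (1.0 + eps) ** p - 1.0
        print(f"p = {p:3d}   eps_j = {eps:.3f}   eps_e = {err_e:.3f}")
# a 1% error in j becomes roughly a 35% error in e at p = 30, and about 170% at p = 100
```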
As is well-known, the critical state model current-voltage relation can be regarded as
the $p\rightarrow \infty$ limit of the power law (\ref{power}); see \cite{BP00}
for a rigorous proof. The limit can be described as
\beq |\bi{j}|\leq j_{\rm c};\quad \mbox{if }|\bi{j}|< j_{\rm c}\mbox{ then }
\bi{e}=\bm{0};\quad\mbox{if }\bi{e}\neq \bm{0}\mbox{ then } \bi{e}\,\|\,\bi{j}\label{crit}.\eeq
The electric field in this model can be nonzero only in a region where
the current density is critical; there the field is parallel to current density and is
determined by the eddy current problem
with the constitutive relation (\ref{crit}). Note that even if the current density was computed,
e.g., by means of the numerical scheme \cite{P98}, the multi-valued relation (\ref{crit})
alone is not sufficient for the reconstruction of the electric field.
Approximating the electric field in a critical state model is relatively straightforward only
for an infinite strip or a long superconducting cylinder in a perpendicular field \cite{P11}.
For cylinders of an arbitrary cross-section in a parallel field, the magnetic field in
the superconductor can be expressed via the distance to the boundary function
(see \cite{BP10}) or, in more complicated cases, found numerically.
The current density is computed as the 2D curl of this field.
Computing the electric field, however, remains non-trivial. A numerical algorithm for the
electric field reconstruction, requiring integration along the paths of the magnetic flux
penetration, has been proposed in \cite{BL}. A dual/mixed variational formulation of
magnetization problems served as a basis for the efficient computation of the electric field
in \cite{BP06,BP10}.
\begin{comment} the key feature of the numerical algorithms introduced
there was the employment
of the divergence-conforming Raviart-Thomas finite elements \cite{Carst} to approximate the
$90^{\circ}$-rotated electric field in the cylinder cross-section
(so that the approximate electric field itself was curl-conforming,
in accordance with the Faraday equation).
\end{comment}
Determining the electric field in thin film problems is more difficult. Under the simplifying
assumption that the time derivative of the normal to the film magnetic field is,
in the flux penetrated region, close to the ramping rate of the external field,
approximate analytical expressions
for the electric field were found, for the Bean critical state model and
rectangular and related film shapes, in \cite{SchusterB96}.
Here we extend the approach
\cite{BP06,BP10} and derive for thin film magnetization problems a convenient mixed variational
formulation in terms of two variables: the electric field and a scalar auxiliary variable
analogous to the magnetization function.
We use Raviart-Thomas elements of the lowest order \cite{Carst} to approximate
the (rotated) electric field
and a continuous piecewise linear approximation for the auxiliary
variable. Based on this approximation of the variational problem, our iterative numerical
algorithm suffers no accuracy loss of the computed electric field even for very high values of
the power $p$ in (\ref{power}). Hence, the algorithm can be used to find the electric and
magnetic fields, and the current density for both the power and critical state model
problems.
\begin{comment}In this work we assume the isotropic current-voltage relations (\ref{power})
and (\ref{crit});
however, our approach is easily generalized to the anisotropic relations considered in
\cite{Schuster97}. The method can be applied also to problems with field-dependent
critical current density. Finally, we note that thin film transport current problems
are also of much interest,
and we are going to consider these in a separate publication.
\end{comment}
In this work we focus on the derivation of the mixed variational
formulation, describe the numerical algorithm, and present simulation results.
Rigorous mathematical arguments, including the exact function space set up, and a proof of the
algorithm convergence, etc., will be presented elsewhere \cite{BPmath}.
\section{Magnetization model: a mixed formulation}
Let $\Omega\subset \mathbb{R}^2$ be a domain
and, in the infinitely thin approximation, the superconducting film occupies
the set $\{\overline{\Omega}\times 0\}\subset \mathbb{R}^3$. By $\bi{e}_{\rm i}(x_1,x_2,t)$,
where $(x_1,x_2)\in \Omega$, we denote the tangential to the film (and continuous on it)
component of the electric field $\bi{e}(x_1,x_2,x_3,t)$, and assume it is related to
the film sheet current density $\bi{j}(x_1,x_2,t)$
by the power law (\ref{power}). This law can be re-written as
\beq \bi{j}=j_{\rm c}(e_{\rm i}/e_0)^{r-1} \bi{e}_{\rm i}/e_0,\label{power1}\eeq
where $r=1/p$ and $e_{\rm i}=|\bi{e}_{\rm i}|$. The critical current density $j_{\rm c}$ may depend
only on $(x_1,x_2)\in \Omega$ (the Bean model for an inhomogeneous film)
or also on the normal to the film component of the magnetic field (the Kim model).
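In magnitude, (\ref{power}) reads $e = e_0 (j/j_{\rm c})^p$ and (\ref{power1}) reads $j = j_{\rm c}(e_{\rm i}/e_0)^{1/p}$, so the two forms are exact inverses and share the direction of $\bi{j}$. A quick roundtrip check (parameter values are illustrative, not physical):

```python
import math

# roundtrip: j -> e via the power law, then e -> j via the inverted form with r = 1/p
e0, jc, p = 1.0, 2.0, 30
r = 1.0 / p

jx, jy = 0.6, 1.1                                # an illustrative sheet current density
j = math.hypot(jx, jy)

scale_e = e0 * (j / jc) ** (p - 1) / jc          # e = e0 (j/jc)^(p-1) j_vec / jc
ex, ey = scale_e * jx, scale_e * jy
e = math.hypot(ex, ey)

scale_j = jc * (e / e0) ** (r - 1) / e0          # j = jc (e/e0)^(r-1) e_vec / e0
assert abs(scale_j * ex - jx) < 1e-9             # original current density recovered
assert abs(scale_j * ey - jy) < 1e-9
```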
It is convenient to assume that $\Omega$ is simply connected. If it contains holes, these can
simply be filled in, with the sheet critical current density
in the holes set to be zero or very small.
In the outer space $\omega:=\mathbb{R}^3\setminus \{\overline{\Omega}\times 0\}$
we have the Faraday and Ampere laws,
$$\mu_0\,\partial_t\bi{h}+\nabla\times \bi{e}=\bm{0},\qquad \nabla\times\bi{h}=\bm{0}$$
with $\bi{h}|_{t=0}=\bi{h}_0$ and $\bi{h}\rightarrow \bi{h}_{\rm e}(t)$ at infinity.
Here $\mu_0$ is the permeability of
vacuum, $\bi{e}$ and $\bi{h}$ are the electric and magnetic fields, respectively,
the given external magnetic field is uniform and normal to the film,
$\bi{h}_{\rm e}=(0,0,h_{\rm e}(t))$. We assume the initial magnetic field $\bi{h}_0$ has zero divergence,
$\nabla\cdot \bi{h}_0=0$, in $\omega$, its normal to the film component is continuous on $\{\Omega\times0\}$,
and $\bi{h}_0-\bi{h}_{\rm e}(0)=\Or(|x|^{-1})$ at infinity.
We now relate the exterior space and film problems, then use the magnetic scalar potential and
derive a 2D variational formulation,
written for the electric field $\bi{e}_{\rm i}$ and the jump of magnetic potential on the film,
which is convenient for the numerical approximation.
The jump of the tangential component of the magnetic field across
the cut $\{\Omega\times 0\}$
and the film current are related,
\beq \bi{j}=\bi{n}^+\times[\bi{h}],
\label{jnh}
\eeq
where $\bi{n}^+=(0,0,1)$. Here and below, $[\bi{f}]$ means the jump,
$\bi{f}|_{\Omega^+}-\bi{f}|_{\Omega^-}$, where $\Omega^{\pm}=\Omega\times
\{\pm 0\}$ are the two sides of $\{\Omega\times 0\}$.
Although the electric field $\bi{e}$ is not uniquely determined in $\omega$ by this model,
its tangential component on the film, $\bi{e}_{\tau}$, is and has to be continuous:
\beq \bi{e}_{\tau}|_{\Omega^+}=\bi{e}_{\tau}|_{\Omega^-}=\bi{e}_{\rm i}.
\label{econt}
\eeq
Since in the outer space $\nabla\times\bi{h}=\nabla\times (\bi{h}-\bi{h}_{\rm e})=\bm{0}$
and $\Omega$ is
assumed to be simply connected, there exists a magnetic scalar potential $w(x,t)$ such that
$\bi{h}-\bi{h}_{\rm e}=-\nabla w$.
Furthermore, since $\nabla\cdot (\bi{h}-\bi{h}_{\rm e})=\nabla\cdot \bi{h}=0$,
the scalar potential is a harmonic function in $\omega$ for any $t$,
\beq \Delta w=0.
\label{Delw}
\eeq
Integrating the Faraday law in time we obtain
\beq \mu_0\,(\bi{h}-\bi{h}_0)+\nabla\times \bi{U}=\bm{0},
\label{iFLt}
\eeq
where $\bi{U}:=\int_0^t\bi{e}\,dt'$ has the continuous tangential component
$\bi{U}_{\tau}|_{\Omega^+}=\bi{U}_{\tau}|_{\Omega^-}=\bi{U}_{\rm i}:=\int_0^t\bi{e}_{\rm i}\, dt'$.
Noting that the normal component of magnetic field is continuous on the film and also that
$\bi{n}^+\!\cdot\nabla\times \bi{U}=\mbox{Curl}\,\bi{U}_{\rm i}$,
where $\mbox{Curl}\, \bi{f}=\partial_{x_1}f_{2}- \partial_{x_2}f_{1}$,
we obtain that
\begin{comment}Hence, for any differentiable function $v$ in $\omega$
vanishing for large $|x|$,
it follows that
$$0=\int_{\omega}\left\{\mu_0\,(\bi{h}-\bi{h}_0)+\nabla\times \bi{U}\right\}\cdot\nabla v
=-\int_{\Omega}\bi{n}^+\!\cdot
\left[\,v\,(\mu_0\,(\bi{h}-\bi{h}_0)+\nabla\times \bi{U}) \right] \,dx .$$
Noting that $\bi{n}^+\!\cdot[\,v\,\nabla\times \bi{U}]=[v] \,\mbox{Curl}\,\bi{U}_{\rm i}$,
where $\mbox{Curl}\, \bi{f}=\partial_{x_1}f_{2}- \partial_{x_2}f_{1}$,
we obtain that
\beq \int_{\Omega} (\,\bi{n}^+\!\cdot[\,v\,\mu_0\,(\bi{h}-\bi{h}_0)\,]+[v] \,\mbox{Curl}\,
\bi{U}_{\rm i}\,)
\,dx
=0.\label{jump}\eeq
The normal component of $\bi{h}$ on the film should be continuous. Indeed, since we assumed
$\bi{n}^+\cdot[\bi{h}_0]=0$, choosing
test functions $v$
continuous on the film
in (\ref{jump}) yields that $\bi{n}^+\!\cdot[\bi{h}]=0$.
Hence it follows that
\end{comment}
$$\mu_0\,\bi{n}^+\!\cdot(\bi{h}-\bi{h}_0)+\mbox{Curl}\, \bi{U}_{\rm i}=0.$$
Substituting $\bi{h}=\bi{h}_{\rm e}-\nabla w$, we finalize our choice of the scalar potential $w$
as the solution to the following exterior problem:
$$\Delta w=0\qquad \mbox{in}\ \omega,$$
\beq \frac{\partial w}{\partial \bi{n}^+}=\frac{1}{\mu_0}\,\mbox{Curl}\,
\left(\int_0^t\bi{e}_{\rm i}\,dt'\right)+{\cal H}\qquad \mbox{on}\ \Omega^+\ \mbox{and}\
\Omega^-,\label{exter}\eeq
$$w =\Or(|x|^{-1}) \qquad \mbox{as }|x| \rightarrow \infty,$$
with ${\cal H}=\bi{n}^+ \cdot\,(\bi{h}_{\rm e}-\bi{h}_0)$.
We set ${g}=[w]$ and note that if ${g}=0$ on the domain boundary $\partial\Omega$
and is sufficiently regular (belongs to the space $S_0=H^{1/2}_{00}(\Omega)$,
see \cite{BP00,BPmath}),
the unique solution to the following problem,
\begin{eqnarray*}
&\Delta w=0\qquad \mbox{in } \omega,\\
&[w]={g},\qquad \left[\frac{\partial w}{\partial \bi{n}^+}\right]=0,\\
&w =\Or(|x|^{-1}) \qquad \mbox{as }|x| \rightarrow \infty
\end{eqnarray*}
is the double
layer potential (\cite{Nedelec}, Ch. 3, \S3.3 in the case of a closed surface $\Omega$,
and \cite{BPmath} for the present choice of $\Omega$)
$$w(x)=\frac{1}{4\pi}\int_{\Omega}
{g}(y)\,\frac{\partial}{\partial
\bi{n}^+_y}\!\left(\frac{1}{|x-y|}\right)dy \qquad \mbox{for } x \in \omega,$$
where $\partial/\partial \bi{n}^+_y=\bi{n}^+\cdot\nabla_y$.
The normal derivative of
this function, $\partial w/\partial \bi{n}^+$, is continuous across the
cut $\{\Omega\times 0\}$ and
satisfies the variational equation
\beq\int_{\Omega}\frac{\partial w}{\partial
\bi{n}^+}\,\psi \, dx =-a({g},\psi)\label{dwdn}\eeq
for any test function $\psi\in S_0$. Here the bilinear form
\begin{eqnarray}a({g},\psi)&=
\frac{1}{4\pi}\int_{\Omega}\int_{\Omega}
\frac{{\mbox{\bf Curl}}\,{g}(x)\cdot
{\mbox{\bf Curl}}\,\psi(y)}{|x-y|}\,dx\,dy\nonumber \\ &\equiv
\frac{1}{4\pi}\int_{\Omega}\int_{\Omega}
\frac{{\mbox{Grad}}\,{g}(x)\cdot {\mbox{Grad}}\,\psi(y)}{|x-y|}
\,dx\,dy\label{a_form}
\end{eqnarray} is symmetric, and $\mbox{\bf Curl}\,\phi=(\partial_{x_2}\phi,
-\partial_{x_1}\phi)$
and ${\rm Grad}\,\phi=(\partial_{x_1}\phi,\partial_{x_2}\phi)$ are 2D operators.
We note that $\frac{1}{2}a({g},{g})$ is the energy of the magnetic field
induced by the film current
\begin{eqnarray*}\bi{j}&=\bi{n}^+ \times[\bi{h}]=\bi{n}^+\times[\bi{h}_{\rm e}-\nabla w]\\&
=\mbox{\bf Curl}\,[w]=\mbox{\bf Curl}\,{g}.\end{eqnarray*}
Substituting the normal derivative of $w$ from (\ref{exter})
into the variational equation (\ref{dwdn}) we obtain
\beq a({g},\psi) +\frac{1}{\mu_0}
\left( \mbox{Curl} \left(\int_0^t
\bi{e}_{\rm i}\,dt' \right) ,\psi\right)_{\Omega} =
-\left({\cal H},\psi\right)_{\Omega}\label{i_form}\eeq
for any $\psi\in S_0$; here $(u,v)_{\Omega}=\int_{\Omega}u\,v\,dx$
is the inner product (or duality pairing) of two functions on $\Omega$.
Differentiating with respect to time, we arrive at a more convenient form of this equation,
\beq a(\partial_t{g},\psi) +\frac{1}{\mu_0}
\left( \mbox{Curl}\,
\bi{e}_{\rm i} ,\psi\right)_{\Omega} =
-\left(\partial_th_{\rm e},\psi\right)_{\Omega}
\label{one0}\eeq
for any $\psi \in S_0$,
with ${g}|_{t=0}={g}_0$ determined by (\ref{i_form}) as
\beq
a({g}_0,\psi)=-(\,(\bi{h}_{\rm e}(0)-\bi{h}_0)\cdot\bi{n}^+,\psi)_\Omega
\label{inid}
\eeq for any $\psi \in S_0$.
Finally, since $\bi{j}=\mbox{\bf Curl}\,{g}$, we rewrite the current
voltage relation (\ref{power1}) as
\beq \mbox{\bf Curl}\,{g}= j_{\rm c}({e}_{\rm i}/e_0)^{r-1} \bi{e}_{\rm i}/e_0\label{two0}\eeq
and arrive at the mixed variational formulation (\ref{one0})--(\ref{two0})
of the magnetization problem
written for two variables, $\bi{e}_{\rm i}$ and ${g}$, defined on $\Omega$ for any $t>0$.
It is convenient to use dimensionless variables, assuming
\begin{eqnarray*}&x=\frac{x'}{L},\ t= \frac{t'}{t_0},\
\bi{e}_{\rm i}=\frac{\bi{e}_{\rm i}'}{e_0},\\& \bi{j}=\frac{\bi{j}'}{j_{\rm c0}},\
\bi{h}=\frac{\bi{h}'}{j_{\rm c0}},\ {g}=\frac{{g}'}{j_{\rm c0}L},
\end{eqnarray*}
where $'$ denotes dimensional physical quantities, $2L$ is the length of the
projection of $\Omega$ onto the $x_1$-axis, the
time scale $t_0= \mu_0j_{\rm c0}L/e_0$, and $j_{\rm c0}$ is a characteristic value of
the sheet critical current density.
For homogeneous films with the field independent critical density $j_{\rm c}$ we choose
in our simulations $j_{\rm c}/j_{\rm c0}=1$.
If this density depends on the normal to the film component
of the magnetic field,
$j_{\rm c}=j_{\rm c}(h_3)$ on $\{\Omega\times 0\}$, one can take $j_{\rm c0}=j_{\rm c}(0)$.
The dimensionless form of the equations (\ref{one0}), (\ref{two0}) is
\beq a(\partial_t{g},\psi) +
\left( \mbox{Curl}\,
\bi{e}_{\rm i} ,\psi\right)_{\Omega} =
-\left(\partial_th_{\rm e},\psi\right)_{\Omega}
\label{one}\eeq
for any $\psi \in S_0$, and
\beq \mbox{\bf Curl}\,{g}= \frac{j_{\rm c}}{j_{\rm c0}}e_{\rm i}^{r-1} \bi{e}_{\rm i}.\label{two}\eeq
Computing the normal to the film magnetic field component
$h_3$ is needed in problems with field dependent critical sheet current densities,
and also for the comparison of numerical simulation results to magneto-optical imaging.
Noting that $h_3-h_{\rm e}=-\partial w/\partial \bi{n}^+$ on $\{\Omega\times 0\}$,
we can use (\ref{dwdn}) for determining the magnetic field component $h_3$ from the equation
\beq \left(h_3-h_{\rm e},\psi\right)_{\Omega}=a({g},\psi)\label{h3}\eeq
for all $\psi\in S_0$. Alternatively, the explicit expression for $\partial w/\partial
\bi{n}^+$ in (\ref{exter}) yields, in dimensionless variables,
\beq h_3=h_{03}-{\rm Curl}\,\left(\int_0^t\bi{e}_{\rm i}\,dt'\right).
\label{h3div}\eeq
Yet another possibility \cite{P98} is to express the normal magnetic field component
via the potential jump (magnetization function) ${g}$ using the Biot-Savart law,
\begin{eqnarray} h_3(x,t)&=h_{\rm e}(t)+\bi{n}^+\cdot\frac{1}{4\pi}\int_{\Omega}\nabla_y
\left(\frac{1}{|x-y|}\right)\times\bi{j}(y,t)\,dy\nonumber \\
&=h_{\rm e}(t)-\frac{1}{4\pi}\int_{\Omega}{\rm Grad}_{\,y} \left(\frac{1}{|x-y|}\right)
\cdot{\rm Grad}_{\,y}\,{g}(y,t)\,dy.\label{h398}\end{eqnarray}
These three approaches are further discussed in Sec. \ref{NS}.
\section{Numerical scheme}\label{NS}
It is important to approximate the electric field $\bi{e}_{\rm i}$ in problem
(\ref{one})--(\ref{two}) using curl conforming finite elements.
In 2D problems, a simple change of variables leads to a formulation
where curls are replaced by divergences; the divergence conforming
Raviart-Thomas elements (see below) are an appropriate choice for such formulations.
Let us substitute $\bi{e}_{\rm i}=R\,\bi{v}$, where $R$ is the rotation matrix
$$ \left( \begin{array}{rr}0 &\ 1 \\
-1 &\ 0 \end{array} \right).$$
Taking into account that $|\bi{v}|=|\bi{e}_{\rm i}|$, $\mbox{Curl}\, R = -{\rm Div}$
and $R^T\,\mbox{\bf Curl}={\rm Grad}$, we rewrite (\ref{one})--(\ref{two})
as
\begin{eqnarray}
&a(\partial_t{g},\psi)-\left({\rm Div}\,\bi{v},\psi\right)_{\Omega}
=-\left(\partial_th_{\rm e},\psi\right)_{\Omega}
\label{one_v}\\
& {\rm Grad}\,{g}=\frac{j_{\rm c}}{j_{\rm c0}}|\bi{v}|^{r-1} \bi{v}\label{two_v}\end{eqnarray}
for any $\psi \in S_0$.
Here $\mbox{Div}\,\bi{v}=\partial_{x_1}v_1+\partial_{x_2}v_2$ is the 2D divergence.
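The sign conventions in these identities are easy to get wrong, so a finite-difference check on an arbitrary smooth test field is worthwhile (the particular fields below are illustrative only):

```python
import math

h = 1e-5
x1, x2 = 0.4, -0.7

def v(a, b):
    # an arbitrary smooth test field
    return (math.sin(a) * math.cos(b), math.exp(0.3 * a * b))

def Rv(a, b):
    p, q = v(a, b)
    return (q, -p)                      # R = [[0, 1], [-1, 0]] applied to v

def d(f, i, comp):
    # central difference of component comp of f with respect to x_i
    e1, e2 = (h, 0.0) if i == 1 else (0.0, h)
    return (f(x1 + e1, x2 + e2)[comp] - f(x1 - e1, x2 - e2)[comp]) / (2 * h)

curl_Rv = d(Rv, 1, 1) - d(Rv, 2, 0)     # Curl f = d_x1 f_2 - d_x2 f_1
div_v = d(v, 1, 0) + d(v, 2, 1)         # Div v  = d_x1 v_1 + d_x2 v_2
assert abs(curl_Rv + div_v) < 1e-8      # Curl(R v) = -Div v

def phi(a, b):
    return math.sin(a) * math.exp(b)

dphi1 = (phi(x1 + h, x2) - phi(x1 - h, x2)) / (2 * h)
dphi2 = (phi(x1, x2 + h) - phi(x1, x2 - h)) / (2 * h)
bcurl = (dphi2, -dphi1)                 # bold-Curl phi = (d_x2 phi, -d_x1 phi)
RT_bcurl = (-bcurl[1], bcurl[0])        # R^T (a, b) = (-b, a)
assert abs(RT_bcurl[0] - dphi1) < 1e-8 and abs(RT_bcurl[1] - dphi2) < 1e-8  # = Grad phi
```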
Multiplying equation (\ref{two_v}) by a vector test function $\bm{\eta}$ and using Green's formula,
we rewrite this equation as
\beq j_{\rm c0}^{-1}(j_{\rm c}|\bi{v}|^{r-1}\bi{v},\bm{\eta})_{\Omega}+({g},{\rm Div}\,\bm{\eta})_{\Omega}=0.
\label{two_v1}\eeq
Equation (\ref{h3div}) should also be rewritten:
\beq h_3=h_{03}+{\rm Div}\left(\int_0^t\bi{v}\,dt'\right).\label{h3divv}\eeq
We approximate $\Omega$ by a polygonal domain $\Omega^h$. Let ${\cal T}^h$
be a regular partitioning of $\Omega^h$ into triangles
$\kappa$
and $h=\max_{\kappa \in {\cal T}^h}{\rm diam}(\kappa)$ be their maximal size.
Here vertices of ${\cal T}^h$ lying on $\partial \Omega^h$, the boundary of $\Omega^h$,
also lie on $\partial \Omega$. If $\Omega$ contains subdomains with different critical
current density values, the mesh is fitted in a similar way to the subdomain boundaries.
By ${\cal N}^h$ and ${\cal E}^h$ we denote the sets of nodes and edges of this triangulation,
respectively, with ${\cal N}_{\rm i}^h$ being the subset of the internal and ${\cal N}_{\rm b}^h$
of the boundary nodes. Below,
$|{\cal X}|$ will denote the number of elements in the set ${\cal X}$.
Let $S_0^h$ be the space of continuous functions, linear on each triangle, and zero in
the boundary nodes ${\cal N}^h_{\rm b}$. We define also the finite dimensional space of vectorial
functions ${\cal V}^h$, linear on each triangle,
$\bi{v}^h|_{\kappa}=\bi{a}_{\kappa}+b_{\kappa}(x_1,x_2)$, $\bi{a}_{\kappa}\in \mathbb{{R}}^2,$
$b_{\kappa}\in \mathbb{{R}}^1$ and such that the normal component of $\bi{v}^h$
is continuous across any edge separating two adjacent triangles in ${\cal T}^h$.
This is the space of divergence conforming Raviart-Thomas elements of the lowest order;
see \cite{Carst} for a detailed description of the edge related basis for ${\cal V}^h$.
In addition, let
$0= t_0 < t_1 < \ldots < t_{N-1} < t_N = T$ be a
partitioning of $[0,T]$
into possibly variable time steps $\tau_n = t_n -
t_{n-1}$, $n=1\to N$.
\\ Our approximation of the problem (\ref{one_v}), (\ref{two_v1}) is:
Given ${G}^0 \in S^h_0$, for $n = 1 \rightarrow N$,
find ${G}^n \in S^h_0$
and $\bi{V}^n \in
{\cal V}^h$ such that
\begin{eqnarray}&a^h({G}^n,\psi^h) - {\tau_n}
\left({\rm Div}\,\bi{V}^n,
\psi^h\right)_{\Omega^h} =
a^h({G}^{n-1},\psi^h)
-\left(h_{\rm e}^n - h_{\rm e}^{n-1},\psi^h\right)_{\Omega^h},
\label{Qaeh}\\
&j_{\rm c0}^{-1}\left(j_{\rm c}\,|\bi{V}^n|^{r-1}\,\bi{V}^n,
\bm{\eta}^h\right)^h+
\left({G}^n,{\rm Div}\,\bm{\eta}^h\right)_{\Omega^h}= 0 \label{Qbeh}
\end{eqnarray}
for all $\psi^h \in S^h_0$ and $\bm{\eta}^h \in {\cal
V}^h\,$. Here $h_{\rm e}^n$ denotes $h_{\rm e}(t_n)$, $a^h(.,.)$ is defined
by (\ref{a_form}) with $\Omega$ replaced by $\Omega^h$, and $(\bi{f},\bi{u})^h
=\sum_{\kappa\in{\cal T}^h}(\bi{f},\bi{u})^h_{\kappa}$ averages the integrand $\bi{f}\cdot
\bi{u}$ over each triangle $\kappa$
at its vertices:
$$ (\bi{f},\bi{u})^h_{\kappa}=\frac{1}{3}\, |\kappa|\, \sum_{m=1}^3 \,
\bi{f}(P_m^{\kappa})\cdot\bi{u}(P_m^{\kappa}),$$
where $\{P_m^{\kappa}\}_{m=1}^3$ are the vertices of
triangle $\kappa$ and $|\kappa|$ its area.
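For illustration, the vertex-based rule above is a mass-lumping quadrature that is exact whenever the integrand $\bi{f}\cdot\bi{u}$ is linear on the triangle. A minimal sketch (the helper name is illustrative, not the paper's code):

```python
import numpy as np

def lumped_inner(f, u, verts):
    """Vertex quadrature (f, u)^h_kappa = (|kappa|/3) * sum_m f(P_m).u(P_m)."""
    P1, P2, P3 = (np.asarray(v, float) for v in verts)
    area = 0.5 * abs((P2[0] - P1[0]) * (P3[1] - P1[1])
                     - (P2[1] - P1[1]) * (P3[0] - P1[0]))
    return area / 3.0 * sum(np.dot(f(P), u(P)) for P in (P1, P2, P3))

# The rule is exact when f.u is linear, e.g. f constant and u linear:
verts = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]   # area 1, centroid (2/3, 1/3)
f = lambda x: np.array([1.0, 2.0])             # constant vector field
u = lambda x: np.array([x[0], x[1]])           # linear vector field
q = lumped_inner(f, u, verts)
exact = 1.0 * (2.0 / 3.0 + 2.0 / 3.0)          # |kappa| * (f.u)(centroid)
```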
Furthermore, ${G}^0 \in S^h_0$ solves the corresponding approximation
of (\ref{inid}). We note that it is not necessary to solve explicitly for ${G}^0$
as we can just replace the first term on the right-hand side of (\ref{Qaeh}) for $n=1$
by $({h}_{03}-{h}_{\rm e}(0),\psi^h)_{\Omega^h}$.
It is easy to show the existence and uniqueness of a solution to the
nonlinear algebraic system (\ref{Qaeh})--(\ref{Qbeh}),
see \cite{BPmath}. To solve this system at each time level,
we set $\bi{V}^{n,0}=\bi{V}^{n-1}$, denote $|b|_{\epsilon}=\sqrt{|b|^2+\epsilon^2}$
and approximate $|\bi{V}^n|^{r-1}\,\bi{V}^n$ at the $j^{\rm th}$ iteration by
$$|\bi{V}^{n,j-1}|^{r-1}\,\bi{V}^{n,j-1}+(|\bi{V}^{n,j-1}|_{\epsilon})^{r-1}\,(\bi{V}^{n,j}-\bi{V}^{n,j-1});$$
and find ${G}^{n,j} \in S^h_0$
and $\bi{V}^{n,j} \in
{\cal V}^h$ such that
\begin{eqnarray}& a^h({G}^{n,j},\psi^h) -{\tau_n}
\left(\mbox{Div}\,\bi{V}^{n,j},
\,\psi^h\right)_{\Omega^h}\nonumber\\ &
\ \ \ = a^h({G}^{n-1},\psi^h)
-\left(h_{\rm e}^n - h_{\rm e}^{n-1},\psi^h\right)_{\Omega^h}
\label{Qaehj}\end{eqnarray}
\begin{eqnarray}& j_{\rm c0}^{-1}\left(j_{\rm c}\,|\bi{V}^{n,j-1}|^{r-1}_{\epsilon}\,
\bi{V}^{n,j},\bm{\eta}^h \right)^h+\left(
{G}^{n,j}, \mbox{Div}\,\bm{\eta}^h\right)_{\Omega^h}\nonumber\\ &
\ \ \
=j_{\rm c0}^{-1}\left(j_{\rm c}\left( \,|\bi{V}^{n,j-1}|_{\epsilon}^{r-1}-|\bi{V}^{n,j-1}|^{r-1}\,\right)
\bi{V}^{n,j-1}\, ,\bm{\eta}^h \right)^h
\label{Qbehj}
\end{eqnarray}
for all $\psi^h \in S^h_0$ and $\bm{\eta}^h \in {\cal
V}^h\,$.
At each iteration, we need to solve the following linear system
\begin{eqnarray*}&
A \,\underline{{G}}^{j} - \tau_n B \,\underline{V}^{j} = \underline{d}\,,\\
&B^T\,\underline{{G}}^{j}+M^{j-1} \,\underline{V}^{j}
=\underline{f}^{j-1}
\end{eqnarray*}
to determine ${G}^{n,j}=\sum_{k=1\rightarrow |{\cal N}^h_{\rm i}|}\underline{{G}}^j_k\,\psi_k$
and $\bi{V}^{n,j}=\sum_{k=1\rightarrow |{\cal E}^h|}\underline{V}^j_k\,\bm{\eta}_k;$
here $\{\psi_k\}$ and $\{\bm{\eta}_k\}$ are the standard bases for $S^h_0$ and ${\cal V}^h$,
respectively, and the time index $n$ is omitted for simplicity.
Here $A$ is a symmetric positive definite full $|{\cal N}^h_{\rm i}|
\times
|{\cal N}^h_{\rm i}|$ matrix with elements $A_{k,l}=a^h(\psi_k,\psi_l)$;
$M^{j-1}$ is a symmetric positive definite sparse
$|{\cal E}^h| \times |{\cal E}^h|$ matrix with elements
\beq M^{j-1}_{k,l}=j_{\rm c0}^{-1}\,
(j_{\rm c}\,|\bi{V}^{n,j-1}|_{\epsilon}^{r-1}\bm{\eta}_k,\bm{\eta}_l)^h;\label{M_el}\eeq
and $B$ is a sparse $|{\cal N}^h_{\rm i}| \times|{\cal E}^h|$
matrix with elements $B_{k,l}=(\psi_k,{\rm Div}\,\bm{\eta}_l)_{\Omega^h}$.
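The per-iteration linear system has a saddle-point block structure. A toy sketch with random symmetric positive definite stand-ins for $A$ and $M^{j-1}$ (in the actual scheme $A$ is dense and $M^{j-1}$ sparse, so a monolithic dense solve is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
ni, ne, tau = 5, 8, 0.1        # toy sizes for |N_i|, |E|, and the time step

# Stand-ins: A and M symmetric positive definite, B rectangular
A = rng.standard_normal((ni, ni)); A = A @ A.T + ni * np.eye(ni)
M = rng.standard_normal((ne, ne)); M = M @ M.T + ne * np.eye(ne)
B = rng.standard_normal((ni, ne))
d = rng.standard_normal(ni); f = rng.standard_normal(ne)

# Assemble and solve the coupled system
#   A G - tau B V = d,    B^T G + M V = f
K = np.block([[A, -tau * B], [B.T, M]])
sol = np.linalg.solve(K, np.concatenate([d, f]))
G, V = sol[:ni], sol[ni:]
```

Alternatively, one may eliminate $\underline{V}^{j}$ from the second equation and solve the Schur-complement system $\left(A+\tau_n B (M^{j-1})^{-1} B^T\right)\underline{{G}}^{j} = \underline{d}+\tau_n B (M^{j-1})^{-1}\underline{f}^{j-1}$.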
We found that convergence of these iterations can be accelerated by supplementing them
with an over-relaxation, i.e., by recalculating $\bi{V}^{n,j}$
as $\alpha \bi{V}^{n,j}+(1-\alpha)\bi{V}^{n,j-1}$ with $\alpha>1$
after each iteration. In all the examples below we chose
$\epsilon=10^{-6}$ and $\alpha=1.2$.
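The behaviour of the $\epsilon$-regularized linearization and the over-relaxation step can be illustrated on a scalar model problem: solving $|v|^{r-1}v=g$ for $v$ with the same iteration. A sketch (a moderate $r$ is used here because convergence slows down as $r=1/p\to 0$, which is precisely why over-relaxation helps):

```python
def solve_power(g, r, eps=1e-6, alpha=1.2, iters=200):
    """Solve |v|^(r-1) v = g by the regularized linearization: the
    nonlinear term at iteration j is replaced by
      |v_{j-1}|^(r-1) v_{j-1} + (|v_{j-1}|_eps)^(r-1) (v_j - v_{j-1}),
    followed by over-relaxation with factor alpha > 1."""
    v = 1.0                                            # initial guess
    for _ in range(iters):
        slope = (v * v + eps * eps) ** ((r - 1) / 2)   # |v|_eps^(r-1)
        v_new = v + (g - abs(v) ** (r - 1) * v) / slope
        v = alpha * v_new + (1.0 - alpha) * v          # over-relaxation
    return v

v = solve_power(g=0.5, r=0.2)   # exact solution: 0.5**(1/0.2) = 0.03125
```

Note that the $\epsilon$-regularization only modifies the slope of the linearization, not the residual, so fixed points of the iteration satisfy the original nonlinear relation exactly.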
We also note that only the sparse matrix $M^{j-1}$ must be recalculated at each iteration.
The full matrix $A$ is calculated only once for the chosen finite element mesh.
Since gradients of the basis functions $\psi_k$ are constant on each triangle,
to calculate these matrix elements one should find, see (\ref{a_form}),
the double surface integrals
$$\int_{\kappa_l}\int_{\kappa_m}
\frac{dx\,dy}{|x-y|}
$$
for every pair of triangles $\kappa_l,\,\kappa_m\in {\cal T}^h$.
We note that some of these integrals are singular. To accurately approximate this matrix,
we followed the approach in the
appendix of \cite{SP10}; in particular, we used the exact analytical value \cite{Arcioni}
for the most singular cases $l=m$.
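For two well-separated triangles the integrand is smooth and the integral is close to $|\kappa_l|\,|\kappa_m|/D$, where $D$ is the distance between the centroids. A Monte Carlo sketch of this regular case (the nearly singular and singular cases, notably $l=m$, require the analytic treatment cited above):

```python
import numpy as np

rng = np.random.default_rng(1)

def tri_area(verts):
    P1, P2, P3 = (np.asarray(v, float) for v in verts)
    return 0.5 * abs((P2[0] - P1[0]) * (P3[1] - P1[1])
                     - (P2[1] - P1[1]) * (P3[0] - P1[0]))

def sample_triangle(verts, n):
    """Uniform points in a triangle via folded barycentric coordinates."""
    P1, P2, P3 = (np.asarray(v, float) for v in verts)
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    return P1 + np.outer(u, P2 - P1) + np.outer(v, P3 - P1)

# Two unit-size triangles with centroids a distance D = 10 apart
t1 = [(0, 0), (1, 0), (0, 1)]
t2 = [(10, 0), (11, 0), (10, 1)]
n = 50_000
x, y = sample_triangle(t1, n), sample_triangle(t2, n)
mc = tri_area(t1) * tri_area(t2) * np.mean(1.0 / np.linalg.norm(x - y, axis=1))
far_field = tri_area(t1) * tri_area(t2) / 10.0   # ~ |k_l| |k_m| / D
```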
To compare simulation results with the magneto-optical measurements of the
normal to the film component of the magnetic field
$h_3$, an approximation $H_3$ to this component can be computed
at the inner mesh nodes ${\cal N}^h_{\rm i}$ using a
discretized form of the variational equation (\ref{h3}),
\beq \underline{H}_3^n=h_{\rm e}(t_n)\,\underline{\it 1}+\Lambda^{-1}\,A\,\underline{{G}}^n,\label{H3}\eeq
where $\underline{\it 1}$ is the $|{\cal N}^h_{\rm i}|\times 1$ vector $(1,1,...,1)^T$
and $\Lambda$ is the diagonal $|{\cal N}^h_{\rm i}|
\times
|{\cal N}^h_{\rm i}|$ matrix with $\Lambda_{k,k}=\int_{\Omega}\psi_k\, dx$. Note that,
due to the infinitely thin film approximation employed in our model,
this magnetic field component becomes
infinite on the domain boundary; see, e.g., the thin strip solution \cite{BrandtIndenbomForkl}.
If the critical current density depends on the normal to the
film magnetic field component,
$j_{\rm c}=j_{\rm c}(h_3)$, it is necessary to substitute
$j_{\rm c}$ by $j_{\rm c}(H_3^{n,j-1})$ in (\ref{M_el}) and to update
the approximation $H_3$
after each iteration. The inner node $H_3$ values (\ref{H3}) are not convenient
for approximating $j_{\rm c}$ in all triangles, as is needed in (\ref{M_el}).
The piecewise constant approximation $\widetilde{\underline{H}}_3$
resulting from a discretization of (\ref{h3divv}) can be written as
\beq \widetilde{\underline{H}}_3^{\,j}=\widetilde{\underline{H}}_{03}
+C\,(\underline{U}^{n-1}+\tau_n\,\underline{V}^j),\eeq
where $\widetilde{\underline{H}}_{03}$ is a piecewise constant approximation of $h_{03}$, $C$
is the sparse $|{\cal T}^h|\times|{\cal E}^h|$ matrix with elements $C_{m,l}
={\rm Div}(\bm{\eta}_l)|_{\kappa_m}$,
and $\underline{U}^{n-1}$ denotes the coefficients of
$\bi{U}^{n-1}=\sum_{m=1}^{n-1} \tau_m\,\bi{V}^m$
for the standard basis of the Raviart-Thomas space ${\cal V}^h$.
We found, however, that such an approximation, based on the time-integrated electric
field, can be inaccurate, especially in and around the film holes (where the electric field
remains undetermined in our model).
A better piecewise constant approximation was obtained using a discretized form of
(\ref{h398}): we set
$\widetilde{\underline{H}}_3^{\,j}|_{\kappa}:=\widetilde{{H}}_3^{\,j}(p_{\kappa})$, where
$p_{\kappa}$ is the center of triangle ${\kappa}$ and
\beq\widetilde{{H}}_3^{\,j}(p_{\kappa})=h_{\rm e}^n-\frac{1}{4\pi}
\sum_{\kappa'\in{\cal T}^h}\left( {\rm Grad}\,
{G}^{\,j}\right)|_{\kappa'}\cdot\oint_{\partial \kappa'}\frac{\bm{\nu}_{\kappa'}}{|p_{\kappa}-s|}\,ds.
\label{Biot}\eeq
Here $\partial\kappa'$ is the boundary of $\kappa'$, $\bm{\nu}_{\kappa'}$
is the unit outward normal to this boundary,
and the integral over each side of triangle $\kappa'$ is computed
numerically (as in \cite{P98} we simply used Simpson's quadrature rule).
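By the divergence theorem, $\oint_{\partial\kappa'}\bm{\nu}_{\kappa'}/|p-s|\,ds=\int_{\kappa'}{\rm Grad}_{\,y}\left(1/|p-y|\right)dy$, so for an observation point far from the triangle the Simpson-rule contour integral should be close to the triangle area times the gradient at the centroid. A sketch checking this consistency (helper names are illustrative):

```python
import numpy as np

def edge_normal_integral(p, verts):
    """Simpson-rule approximation of the contour integral of
    nu / |p - s| over the boundary of a triangle (ccw vertex order,
    nu the unit outward normal of each edge)."""
    p = np.asarray(p, float)
    V = [np.asarray(v, float) for v in verts]
    total = np.zeros(2)
    for i in range(3):
        a, b = V[i], V[(i + 1) % 3]
        edge = b - a
        length = np.linalg.norm(edge)
        nu = np.array([edge[1], -edge[0]]) / length    # outward for ccw order
        f = lambda s: 1.0 / np.linalg.norm(p - s)
        simpson = length / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
        total += nu * simpson
    return total

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # ccw, area 0.5, centroid (1/3, 1/3)
p = np.array([8.0, 5.0])                       # far observation point
c = np.array([1.0 / 3.0, 1.0 / 3.0])
grad = (p - c) / np.linalg.norm(p - c) ** 3    # Grad_y (1/|p-y|) at the centroid
approx = 0.5 * grad                            # area * gradient
I = edge_normal_integral(p, verts)
```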
\section{Simulation results}
The simulations have been performed in Matlab R2011a (64 bit) on a PC with
an Intel Core i5-2400 3.10 GHz processor and 4 GB RAM. All film magnetization problems
were solved for a growing external field $h_{\rm e}(t)=t$ and a zero field initial state.
First, to test our method, we solved numerically the thin disk problem. Let $\Omega$
be a circle of radius one. For the Bean critical state model the exact distribution of
the sheet current density $\bi{j}=j(\rho,t)\,\widehat{\bm{\phi}}$ is known \cite{MikhKuz,ClemSanchez}.
Here $\widehat{\bm{\phi}}$ is the unit azimuthal vector in polar coordinates $(\rho,\phi)$.
In our dimensionless variables,
$$j(\rho,t)=\left\{\begin{array}{lr}
-1 & a(t)\leq \rho\leq 1,\\
-\frac{2}{\pi}\arctan\left\{\rho\sqrt{\frac{1-a^2(t)}{a^2(t)-\rho^2}}\right\} & 0\leq \rho<a(t),\end{array}\right.
$$
where $a(t)=1/\cosh\left(2h_{\rm e}(t)\right).$ The normal to the film component of the magnetic
field can be found by means of a 1D numerical integration using the equation
$$h_3(\rho,t)=h_{\rm e}(t)+\frac{1}{2\pi}\int_0^1G(\rho,\rho')\,j(\rho',t)\,d\rho',$$
where
$G(\rho,\rho')=K(k)/(\rho+\rho')-E(k)/(\rho-\rho')$,
$k=2\sqrt{\rho\,\rho'}/(\rho+\rho')$ and $K$ and $E$
are complete elliptic integrals of the first and second kind.
Furthermore, the electric field $\bi{e}=e(\rho,t)\,\widehat{\bm{\phi}}$,
where ${\rho}^{-1}\partial_\rho(\rho\,e)=-\partial_t h_3,\quad e|_{\rho=0}=0.$
Approximating $\partial_t h_3$ by $\{h_3(\rho,t_n)-h_3(\rho,t_{n-1})\}/\tau_n$ and
integrating numerically, we calculate an approximation to the
electric field distribution averaged over the time interval
$(t_{n-1},t_n)$.
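For reference, the exact Bean-disk quantities above are straightforward to evaluate. A sketch in the dimensionless variables, with $h_{\rm e}(t)=t$ (the singular kernel $G(\rho,\rho')$ and the elliptic-integral quadrature are omitted here):

```python
import numpy as np

def flux_front(t):
    """Radius a(t) of the subcritical core of the disk, h_e(t) = t."""
    return 1.0 / np.cosh(2.0 * t)

def bean_disk_j(rho, t):
    """Exact azimuthal sheet current density j(rho, t) of the Bean disk,
    dimensionless units, 0 <= rho <= 1."""
    a = flux_front(t)
    if rho >= a:
        return -1.0
    return -(2.0 / np.pi) * np.arctan(
        rho * np.sqrt((1.0 - a * a) / (a * a - rho * rho)))

a_half = flux_front(0.5)   # 1/cosh(1), approximately 0.65
```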
To compare with this semi-analytical solution of the Bean model,
we set $p=1000$ in the power law model, and used our
numerical algorithm (\ref{Qaeh})--(\ref{Qbeh})
with $r=1/p$.
Our numerical experiments confirmed that for a monotonically growing external field
the current density and the magnetic field can be computed in one time step
without any accuracy loss. The electric field $\bi{e}_{\rm i}$ is, however,
determined by time derivatives of the external magnetic field and the magnetization function;
in the discretized formulation (\ref{Qaeh})--(\ref{Qbeh}) the field $\bi{E}^n=R\,\bi{V}^n$
can be considered as an approximation to the field $\bi{e}_{\rm i}$ at some time moment
in the interval $(t_{n-1},t_n)$ or to the average field in this time interval.
Hence, our numerical strategy was to make a large time step $\tau_1$
followed by a much smaller step $\tau_2$ to obtain accurate approximations to all variables,
including the electric field, at time $t=\tau_1+\tau_2$.
The simulation results for $t=0.5$ (Fig. \ref{Fig1}) were obtained with
$\tau_1=0.45,\tau_2=0.05$ and two finite element meshes, with 4200 elements ($h=0.05$)
and 12000 elements ($h=0.03$). Solution for these two meshes took, respectively,
5 and 57 minutes of
CPU time, not including the time for computing the full matrix $A$.
Comparing to the solution of the Bean model described above,
we found that the critical current zone was $0.65\leq \rho\leq 1$,
where $a(0.5)=1/\cosh(1)\approx 0.65$,
and the relative errors for the current density, the electric field,
and the normal magnetic field component were, respectively,
1\%, 4.9\%, and 2.5\% for the crude mesh and
0.6\%, 3.1\%, 1.4\% for the fine mesh. Here the electric field in the
semi-analytical solution for the Bean model was calculated
using the $h_3$ distributions at the same two time moments, $t_1=0.45$, $t_2=0.5$.
At each time level, the approximate current density was computed as ${\bf Curl}\, {G}^n$,
constant in each
triangle. However, for the comparison we used the node values calculated, at each node,
as the weighted-by-areas mean of the values in triangles to which the node belongs;
such averaging increased the accuracy. We note also that the magnetic field $h_3$
was determined using equation (\ref{H3}). Hence, the field was found and compared to the
exact solution at the internal nodes only (at the boundary nodes the exact field is infinite).
In the next two examples we also assumed $p=1000$, so the numerical solutions
obtained should be close to solutions to the Bean model; the magnetic field
was computed using the discretized Biot-Savart formula (\ref{Biot}).
The electric field is known to be strong near the film boundary indentations and, especially,
in the vicinity of concave film corners (see Fig. \ref{Fig_cross_ind}).
Although similar problems
have been solved by other authors before,
this was done for $p=9$ and $p=19$ in \cite{SchusterB96}
and \cite{VSGJ07,VSGJ08}, respectively (and also for the critical state models in \cite{P98},
but there without computing the electric field).
Here we took time steps $0.3+0.08+0.02=0.4$.
In inhomogeneous films the electric field near the boundaries between regions of different
critical current densities can be orders of magnitude higher
than in other parts of the film
(see Fig. \ref{Fig_inh}).
Here $j_{\rm c}/j_{\rm c0}=0.5$ inside the rectangle, and $j_{\rm c}/j_{\rm c0}=1$ outside.
We took time steps $0.2+0.2+0.1=0.5$.
The magnetic flux penetrates deeper into the lower critical current
density area.
To solve a problem with a multiply connected film (Fig. \ref{Fig_3holes}) we filled the holes
in and set $j_{\rm c}/j_{\rm c0}=0.002$ there, while keeping $j_{\rm c}/j_{\rm c0}=1$
in the film itself; we recall the electric field in
the holes is not determined. This example was solved for $p=100$
with time steps $0.47+0.03=0.5$.
A strong electric field is
generated along the paths of flux penetration into the holes. For $p=19$ such problems
were solved by a different method in \cite{VSGJ08}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12cm]{exactJEH.eps}
\end{center}
\caption{Thin disk in the perpendicular field, $h_{\rm e}(t)=t$. The Bean model solution
\cite{MikhKuz,ClemSanchez} (black line) and the numerical solution (red dots) obtained
with $p=1000$, $h=0.03$. Shown for $t=0.5$:
top -- the modulus of the current density $j$; middle -- the modulus of the electric field
$e_{\rm i}$; bottom -- the normal component of the magnetic field, $h_3$.}
\label{Fig1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12cm]{Emod_cross_ind_Arch.eps}\\
\includegraphics[width=6cm]{Jcross_ind.eps}\hspace{.6cm}\includegraphics[width=6cm]{Hz_cross_ind.eps}
\end{center}
\caption{A film with corners and boundary indentation, $h_{\rm e}(t)=t$;
numerical solution for $p=1000$. Shown for $t=0.4$:
top -- the modulus of the electric field $e_{\rm i}$, bottom -- current streamlines (left) and
levels of the normal to the film magnetic field component $h_3$ (right).
The mesh (about 9000
triangles) was refined near the film boundary.}
\label{Fig_cross_ind}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{minipage}[h]{7.6cm}\includegraphics[width=7.6cm,height=12cm]{E_inh_fig_Arch.eps}\end{minipage}
\begin{minipage}[h]{7.5cm} \includegraphics[width=7.5cm,height=6cm]{current_lines_inh.eps}\\
\includegraphics[width=7.5cm,height=6cm]{Hz_levels_inh.eps}
\end{minipage}
\end{center}
\caption{Inhomogeneous film of elliptic shape in a growing external field; $p=1000$.
Sheet critical current density $j_{\rm c}/j_{\rm c0}=0.5$ in the rectangle
and $j_{\rm c}/j_{\rm c0}=1$
outside of it. The finite element mesh contained 10,600
triangles and was refined
near the boundary between the two regions (the blue line). Shown for $t=0.5$:
left -- the modulus of the electric field $e_{\rm i}$; right --
current streamlines (top) and levels of the normal to the film magnetic field
component $h_3$ (bottom).}
\label{Fig_inh}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{minipage}[h]{7.6cm}\includegraphics[width=7.6cm,height=12cm]{3holes_E_Arch.eps}\end{minipage}
\begin{minipage}[h]{7.5cm} \includegraphics[height=6cm]{3holes_J.eps}\\
\includegraphics[height=6cm]{3holes_Hz.eps}
\end{minipage}
\end{center}
\caption{Circular film with three holes in a growing external field; $p=100$. The finite
element mesh contained 10,400
triangles and was refined near the domain and hole
boundaries (blue lines). Shown for $t=0.5$: left -- the modulus of the electric field
$e_{\rm i}$; right -- current streamlines (top) and levels of the normal to the film
magnetic field component $h_3$ (bottom).}
\label{Fig_3holes}
\end{figure}
\section{Conclusion}
Existing numerical methods for thin film magnetization problems in type-II superconductivity
are based on formulations written for one main variable: the magnetization function.
The sheet current density, determined numerically as the curl of this function,
is prone to numerical inaccuracy. The inaccuracy, usually tolerable in the current density
itself, inhibits evaluation of the electric field by substituting this density into
a power current-voltage relation if the power is high.
For critical state models such an approach for computing the electric field
is not applicable.
The new variational formulation of thin film magnetization problems proposed in this work
is written for two variables, the electric field and the magnetization function.
The formulation serves as a basis for the approximation and computation of all variables of
interest: the sheet current density and both the electric and magnetic fields.
Our numerical algorithm remains accurate for any value of the power in the power
law current-voltage relation. For high powers we obtain a good approximation to
the solution of
the Bean model. Evaluation of the local heat dissipation distribution in a film for both
the power law and critical state models becomes straightforward.
In this paper, we presented numerical simulation results for isotropic models
with field independent critical sheet current density.
However, our approach can be generalized to thin film problems with field-dependent \cite{Kim}
and anisotropic \cite{Schuster97} sheet critical current densities.
\section*{\bf Acknowledgement} L.P.\ appreciates helpful discussions with V.\ Sokolovsky.
\section*{References}
\section{Introduction}
In the upcoming \emph{Internet of Things} (IoT) an immense number of devices will be connected to each cellular station--forecasts predict 1 million devices per station \cite{report}. IoT connectivity is primarily aimed at establishing central authentication, security, and management of those devices. However, fine-tuned coordination functionalities (transmit power selection, transmission scheduling, code assignment, etc.) are considered too expensive to handle centrally, since the cellular station would need to collect bulky state information for each device and solve large-scale optimization problems. For these reasons, it is anticipated that IoT communications will rely on \emph{uncoordinated access}, i.e., a channel will be dedicated to IoT access and each IoT transmitter will decide individually which transmission pattern to use. Here, we study the use of Online Learning methods for transmission pattern selection.
We consider $K$ transmitters scattered in a geographical area, all wanting to transmit to the cellular station (e.g. a common sink), as shown in \figurename{ \ref{fig:SystemModel}}. We further assume that a) for reasons of overhead reduction, there is no coordination between a transmitter and the cellular station, and b) for reasons of security there is no coordination among different transmitters. Each transmitter must decide on its own when and how to transmit.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{model}
\caption{IoT transmitters share a common wireless medium in an uncoordinated manner.}
\label{fig:SystemModel}
\end{figure}
\subsection{Random access protocols}
Traditional protocols that can operate in this setting are based on random access. Historically, pure ALOHA was the first such protocol, where a user transmits with a probability $p$ \cite{pure_aloha}. It was later extended to slotted-ALOHA \cite{aloha_slotted}, which uses slot synchronization to double the achievable throughput.
A more mature random access protocol is the \emph{Carrier Sense Multiple Access} (CSMA), where the transmitter checks whether the medium is idle before sending. Also, in the enhanced version with collision avoidance (CSMA/CA) the transmitter ``backs-off'' (selects a smaller probability of access) every time there is a collision, while also uses ready-to-transmit (RTS) and clear-to-transmit (CTS) signals to reduce the impact of a collision on throughput \cite{bianchi00}.
Random access protocols suffer from collisions and idle time, and therefore they achieve lower throughput than the maximum possible.
In an effort to improve the throughput achievable by uncoordinated access, many exciting algorithmic ideas have been proposed. For example, Q-CSMA \cite{NiSrikant} is a protocol where the transmitters avoid collisions by finding efficient schedules in a distributed manner (see also \cite{JiangWalrand}). Although Q-CSMA is shown to asymptotically achieve 100$\%$ throughput (maximum possible), it suffers from large delays.
Another interesting direction is the idea of successive cancellation and replica transmission \cite{SuccesiveCanc}. In this enhanced random access protocol, each transmitter sends multiple replicas of the same packet within a frame. Normally, a large number of collisions occur, but under the assumption that the Signal-to-Interference-plus-Noise Ratio (SINR) levels of the transmitters are sufficiently different, the receiver can decode the strongest signal, subtract it from the next, and so on, eventually decoding all signals correctly. This protocol achieves high throughput, but at the cost of excessive energy usage, which is a concern in IoT applications.
\subsection{Communication requirements for IoT}
We list our requirements for IoT communications.
\subsubsection{URLLC}
The \emph{Ultra Reliable Low Latency Communications} (URLLC) class is a popular 5G definition for communications of high fidelity, seen as an enabler for remote control of vehicles and other demanding applications. In URLLC, a given number of bits must be received before a strict deadline (in periods) with a very high probability (often 0.99999). This reliability guarantee is extremely important in automation and remote control, as well as in applications where freshness of information is essential, and the operation of some IoT applications will rely on such guarantees. For this reason, we depart from pure throughput considerations, and we define below the \emph{latent throughput}, which suffices to meet URLLC requirements.
Time is split in frames and within each frame there are $N$ slots. A frame is then called ``successful for transmitter $k$'' if it contains $L$ or more successful transmissions of transmitter $k$.
Successful transmissions in previous frames do not count towards the success criterion of the current frame. The latent (URLLC) throughput is the empirical frequency of successful frames.
We note that no existing random access protocol provides latent throughput guarantees, as all of them are designed to maximize pure throughput, which is different from latent throughput.
For example, $L-1$ successful transmissions within a frame provide $\frac{L-1}{N}$ pure throughput, but amount to $0$ latent throughput.
More generally, latent throughput optimization is a difficult problem even with centralized coordination \cite{Apostolos2}, and has strong ties to the theory of Markov Decision Processes \cite{Apostolos1}.
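The distinction between the two notions of throughput is easy to compute. A sketch (the success matrix is illustrative):

```python
import numpy as np

def latent_and_pure_throughput(success, L):
    """success: T x N binary array; success[t, n] = 1 iff the transmission
    of the tagged user in slot n of frame t was successful.
    Returns (latent/URLLC throughput, pure throughput)."""
    success = np.asarray(success)
    T, N = success.shape
    per_frame = success.sum(axis=1)
    latent = np.mean(per_frame >= L)      # fraction of successful frames
    pure = per_frame.sum() / (T * N)      # fraction of successful slots
    return latent, pure

# Two frames of N = 5 slots with URLLC requirement L = 3:
# frame 1 has L - 1 = 2 successes (counts for pure, not for latent),
# frame 2 has 4 successes (counts for both).
s = [[1, 1, 0, 0, 0],
     [1, 0, 1, 1, 1]]
latent, pure = latent_and_pure_throughput(s, L=3)   # latent = 0.5, pure = 0.6
```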
\subsubsection{Energy consumption}
Since the majority of IoT devices will work on batteries, energy consumption must be minimized. In this work we assume that energy is proportional to the number of transmissions.
\subsection{Our contribution}
In this paper we propose a protocol for uncoordinated medium access, which is based on the theory of Online Learning \cite{Shalev}. First, we restrict our transmitter to choose transmission patterns at the beginning of the frame, and in particular, we further restrict its options to a randomized dictionary of patterns. During operation, the transmitter first chooses a pattern from the dictionary at random, and then implements the pattern within the frame. The learning operation amounts to progressively adjusting the probability distribution of pattern selection using an online exponentiated gradient descent algorithm. Our simulations show that the resulting Learn2MAC scheme:
\begin{itemize}
\item Achieves high URLLC throughput and low energy consumption when facing either (i) TDMA interference or (ii) random access interference.
\item Multiple Learn2MAC users can outperform ALOHA users in terms of latent throughput by as much as 100\%.
\end{itemize}
\section{Problem formulation}
\subsection{System model and assumptions}
There are $K$ transmitters sharing the uplink of our system.
Time is split in frames of $N$ slots. At the beginning of frame $t$, transmitter $k$ decides a \emph{pattern of transmissions} to be used within the frame; we denote this decision with $x_k(t)\in \{0,1\}^N$, where $x_{k,n}(t)=1$ indicates transmission in slot $n$, and $x_{k,n}(t)=0$ indicates idling.
Therefore, at each frame a transmitter chooses its pattern as a binary vector of length $N$ from the set $\mathcal{X}=\{0,1\}^N.$
Our pattern selection setting is very general, as the next example suggests.
\begin{example}[ALOHA]
Consider $N=2$, where all possible transmission patterns are $\mathcal{X}=\{(0,0),(1,0),(0,1),(1,1)\}$. A simple protocol could be: ``choose one pattern at random with probability 1/4 independently of past events''. Incidentally, this corresponds to a slotted-ALOHA with $p=1/2$.
\end{example}
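The equivalence claimed in the example can be verified by enumeration: under the uniform choice over $\mathcal{X}$, each slot is an independent Bernoulli(1/2) transmission.

```python
from itertools import product

N = 2
X = list(product([0, 1], repeat=N))   # all 2^N = 4 patterns
# Marginal transmit probability of each slot under uniform pattern choice
marginals = [sum(pi[n] for pi in X) / len(X) for n in range(N)]
# The uniform probability of each pattern factorizes into per-slot Bernoulli(1/2)
uniform_prob = 1.0 / len(X)
```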
We make the following assumptions about our system.
\begin{itemize}
\item[\textbf{(A.1)}] If two or more transmitters have selected to transmit at the same slot, we have a collision and all transmitted information in this slot is lost.\footnote{In this paper we study the ``hard interference'' scenario for simplicity. We mention, however, that our work can be extended to other interference models.}
\item[\textbf{(A.2)}] At the end of frame $t$, the cellular station provides feedback information about the occupancy of each slot (idle/success/collision) to all transmitters.
\end{itemize}
We reserve $x_k(t)$ to denote the pattern selected by user $k$ in frame $t$, and $\pi=(\pi_1,\dots,\pi_K)$
to denote a profile of patterns, one pattern $\pi_\ell\in\mathcal{X}$ per transmitter.
Because of \textbf{(A.1)},
a profile $\pi$ produces \emph{a successful transmission for user $k$} in slot $n$ (an event denoted
with $s_{k,n}(\pi)=1$) only if $\pi_{k,n}=1$ and $\pi_{\ell,n}=0,~~\forall \ell\neq k$. Equivalently, we write:
\begin{equation}\label{eq:success}
s_{k,n}(\pi)= \pi_{k,n}\prod_{\ell\neq k}(1-\pi_{\ell,n}), ~~n=1,\dots,N.
\end{equation}
\subsection{Performance metrics}
Our protocol design is driven by certain objectives, which are used to form the utility function of each transmitter.
\textbf{URLLC throughput.}
In frame $t$ a pattern $\pi\in\mathcal{X}$ is called \emph{successful}, denoted with $R_t(\pi)=1$, if it contains at least $L$ successful transmissions.\footnote{$L$ in this case is an application-specific parameter that captures the amount of successful transmissions required within a frame in order for user $k$ to achieve its URLLC requirement. In 5G standardization $L$ takes small values for reasonable signal strengths, i.e., for $\text{SNR}>0\text{dB}$ it is $L = 3$.}
Using \eqref{eq:success}, $R_t(\pi)$ can be computed as follows:
\[
R_t(\pi)=\mathbbm{1}\left\{\sum_{n=1}^N s_{k,n}(\pi)\geq L\right\}, ~~\forall \pi\in\mathcal{X}.
\]
To increase URLLC reliability, transmitter $k$ wants to maximize \emph{URLLC throughput} $\frac{1}T\sum_{t=1}^TR_t(x_k(t))$, where $T$ is some large integer that represents the horizon of interest for the application.
\textbf{Energy.} We assume that the consumed energy is proportional to the rate of transmissions per frame, given by $\frac{1}T\sum_{t=1}^T\sum_{n=1}^N x_{k,n}(t)$.
In summary, the \emph{instantaneous utility} obtained by transmitter $k$ in frame $t$ is given by:
\begin{equation}\label{eq:obj}
U_{k,t}(x_k)=\underbrace{R_t(x_k)}_\text{URLLC thr.}-\eta_k \cdot \underbrace{\sum_{n=1}^N x_{k,n}}_\text{energy cost},
\end{equation}
where scalar $\eta_k>0$ is a transmitter-selected weight that balances the importance of URLLC throughput and energy consumption. We mention that $U_{k,t}$ is unknown to transmitter $k$ since it depends on the patterns of all other users, via $R_t(x_k(t))$.
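Putting the pieces together, the utility of a transmitter in one frame can be computed from the realized profile of patterns. A sketch under the hard-interference assumption \textbf{(A.1)} (the numbers are illustrative):

```python
import numpy as np

def utility(profile, k, L, eta):
    """Instantaneous utility of transmitter k, given a K x N binary array
    `profile` of the patterns chosen by all transmitters in the frame."""
    profile = np.asarray(profile)
    others = np.delete(profile, k, axis=0)
    # Slot n is a success for k iff k transmits and nobody else does (A.1)
    s = profile[k] * np.prod(1 - others, axis=0)
    R = 1.0 if s.sum() >= L else 0.0        # successful-frame indicator
    return R - eta * profile[k].sum()       # URLLC reward minus energy cost

# Two transmitters, N = 4 slots, L = 2: slot 0 collides, slots 1 and 2 succeed for k = 0
profile = [[1, 1, 1, 0],    # transmitter 0
           [1, 0, 0, 1]]    # transmitter 1
u0 = utility(profile, k=0, L=2, eta=0.05)   # 1 - 0.05 * 3 = 0.85
u1 = utility(profile, k=1, L=2, eta=0.05)   # 0 - 0.05 * 2 = -0.10
```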
\subsection{Problem formulation}
We would like to design a distributed protocol where each transmitter decides its pattern based only on the feedback of \textbf{(A.2)} in order to optimize the long-term average utility at some horizon $T$:
\begin{align*}
\displaystyle{\maximize_{x_k(1),\dots,x_k(T)\in \mathcal{X}^T} \frac{1}T\sum_{t=1}^T U_{k, t}(x_k(t))}
\end{align*}
Random access protocols are expected to perform poorly w.r.t. this objective due to the following limitations. By design they do not ensure a high latent throughput $R_t(x_k(t))$: as the number of transmitters increases, the total latent throughput approaches zero. They also suffer from collisions, and thus from a high energy cost per unit of achieved throughput.
Finally, they have limited flexibility and do not adapt to circumstances.
These considerations lead us to design a novel architecture, where each device performs an online learning algorithm in order to determine the most appropriate pattern for maximizing the obtained utility.
\section{Architecture based on online learning}
We take the \emph{individual viewpoint of transmitter $k$} and optimize the utility $U_{k,t}(x_k(t))$ assuming that the rest transmitters are uncooperative, and their transmissions are seen as interference. In particular, to design an adaptive and robust algorithm, we will further assume that the other transmitters are adversaries that are choosing their patterns in order to lower $U_{k,t}(x_k(t))$. This worst-case approach will allow us to design an algorithm that is sensitive to interference and quickly adapts to changes in the environment.
\subsection{Restricting the design space}\label{sec:restr}
As in most learning problems, restricting the dimensions is essential for constructing an efficient solution. In our problem, the number of possible patterns for transmitter $k$ is equal to the number of all possible binary vectors of length $N$, i.e., equal to $2^N$. For values encountered in practice (e.g.~$N=100$) this creates an enormous action space.
We introduce the concept \emph{dictionary of patterns}, i.e., a preselected subset of patterns $\mathcal{D}_k=\{\pi^1,\dots,\pi^d\}\subset \mathcal{X}$ of cardinality $d\ll 2^N$, to which transmitter $k$ will be restricted. The dictionary of patterns mimics the idea of the codebook in communications, where a subset of codes is designed off-line, and at runtime the transmitter selects a code from the codebook.
\subsubsection{Basic rules for creating dictionaries}
We provide some practical directions into creating pattern dictionaries.
\begin{itemize}
\item The zero pattern $(0,0,\dots,0)$ should always be included in the dictionary, since on many occasions a good action for user $k$ will be to remain silent within a frame.
\item Non-zero patterns with $\sum_{n=1}^N \pi_{k,n}<L$ should not be used, since they cannot guarantee a successful frame and they consume more energy than the zero pattern.
\item Patterns with different values $\sum_{n=1}^N \pi_{k,n}\geq L$ should be used to allow exploration of protocols with different levels of energy and redundancy of transmissions.
\item For purposes of learning acceleration, the cardinality of the dictionary $d$ should be kept small, e.g. $d\leq d_{\max}$.
\item To avoid an excessive number of collisions, it is preferable that different transmitters have different dictionaries. This can be achieved by generating the dictionaries in a random manner. However, we mention that having the same dictionary allows transmitters to share learned models, therefore the best approach would be to use groups of pseudo-random transmission patterns.
\end{itemize}
\subsubsection{Pattern dictionary design}
It is interesting to formulate the dictionary design as an optimization problem. However, we mention a few caveats. First, the optimization depends on the protocols of the transmitters other than $k$, so this formulation makes sense mostly when the rest of the transmitters have fixed and known protocols. Second, it is a combinatorial problem with a non-convex objective and large dimensions, making it highly non-trivial to solve.
Instead, we take a very simple approach that appears to work well in practice.
We propose a simple, \emph{randomized}, and \emph{fully distributed} dictionary design algorithm. In particular, transmitter $k$ chooses its dictionary $\mathcal{D}_k$ by (i) including the zero pattern, (ii) excluding every pattern with fewer than $L$ transmissions, and (iii) choosing the remaining $d-1$ patterns at random. Specifically, fix $d$ to a value that is large but does not slow down the algorithmic computations; a typical value lies between $100$ and $1000$. Also, recall that $L$ is determined by the URLLC application. Then execute the following steps:
\noindent \textbf{Randomized Dictionary Algorithm:}
\begin{enumerate}
\item Initialize dictionary with the zero pattern, i.e. $\mathcal{D}_k = \{\mathbf{0}\}$.
\item Choose a number $\ell$ uniformly at random in $\{L, \dots, N\}$ (the number of transmitting slots in a pattern).
\item Choose a random binary vector $\pi$ with $\ell$ ones (i.e. with $\ell$ transmitting slots).
\item If $\pi\notin \mathcal{D}_k$, then add it to the dictionary $\mathcal{D}_k\leftarrow \mathcal{D}_k\cup \{\pi\}$.
\item If $|\mathcal{D}_k|=d$ stop.
\end{enumerate}
In the remainder we will assume that the dictionary of our transmitter is chosen by the above algorithm and remains fixed throughout the operation of our protocol.
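The randomized dictionary algorithm above can be sketched in a few lines of Python. This is an illustrative implementation, not code from the paper; the function name and signature are our own choices.

```python
import random

def random_dictionary(N, L, d, seed=None):
    """Randomized dictionary of transmission patterns (a sketch).

    N: slots per frame, L: minimum number of transmissions required
    by the URLLC application, d: target dictionary size.
    """
    rng = random.Random(seed)
    dictionary = {tuple([0] * N)}          # step 1: always include the zero pattern
    while len(dictionary) < d:             # step 5: stop once |D_k| = d
        ell = rng.randint(L, N)            # step 2: number of transmitting slots
        slots = rng.sample(range(N), ell)  # step 3: which slots carry a transmission
        pattern = tuple(1 if n in slots else 0 for n in range(N))
        dictionary.add(pattern)            # step 4: set membership skips duplicates
    return sorted(dictionary)
```

Using a set makes the duplicate check of step 4 automatic, and since each transmitter seeds its own generator, the resulting dictionaries differ across transmitters with high probability.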
\subsection{Learning the best pattern in the dictionary}
Consider a probability distribution $p=(p^1,\dots,p^d)$, where $p^i$ is a quality metric of pattern $\pi^i\in \mathcal{D}_k$. Learning the quality of the patterns in the dictionary then amounts to estimating a ``good'' probability distribution $p^*$ that maximizes the expected instantaneous utility:
\[
\overline{U}_t(p) = \sum_i p^i U_t(\pi^i).
\]
However, a complication arising in this paper is that the precise form of the utility $U_t(\pi^i)$ depends on the transmissions of all other users, and it is therefore unknown to the decision maker.
We will take the standard approach from the Online Learning literature \cite{Shalev}: allow $p$ to evolve over time and, at each iteration, update it in a direction that improves the utility observed in the previous frame, which serves as a ``prediction'' of what will happen in the next frame.
Here, because the feasible set of $p$ is a simplex (the constraint $\sum_i p^i=1$), it is favorable to use the exponentiated gradient instead of the classical gradient; see \cite{exp_num}.
Therefore, our update mechanism is as follows:
\[
p^i(t)=\frac{p^i(t-1)e^{-\alpha v^i}}{\sum_{j=1}^dp^j(t-1)e^{-\alpha v^j}},
\]
where the vector $v=(v^1,\dots,v^d)$ is a subgradient of $\overline{U}_{t-1}(p)$ at $p(t-1)$, and $\alpha$ is the learning rate.
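A single update step can be sketched as follows. The code implements the exponentiated update exactly as written in the text; the function name is illustrative.

```python
import math

def exp_gradient_step(p, v, alpha):
    """One exponentiated-gradient update on the simplex (a sketch).

    p: current distribution over the d dictionary patterns,
    v: subgradient vector computed from last-frame feedback,
    alpha: learning rate.
    """
    # multiplicative reweighting followed by normalization back to the simplex
    weights = [pi * math.exp(-alpha * vi) for pi, vi in zip(p, v)]
    total = sum(weights)
    return [w / total for w in weights]
```

Note that, unlike a projected-gradient step, no explicit projection is needed: the normalization keeps $p$ on the simplex automatically.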
Notice that the subgradient $v$ at frame $t$ is computed based on feedback obtained from the previous frame $t-1$. Specifically,
the subgradient element $v^i$ has a very intuitive interpretation: it equals the marginal benefit to our expected utility (in the previous frame) of increasing the probability of selecting pattern $\pi^i$. More simply, recalling that $R_{t}(\pi)=1$ means that pattern $\pi$ achieves the URLLC objective in frame $t$, for all $i$ we have:
\begin{equation}\label{eq:subgrad}
v^i = \left\{\begin{array}{rl}
-\eta_k\sum_n \pi_n^i & \text{ if } R_{t-1}(\pi^i)=0,\\
1-\eta_k\sum_n \pi_n^i & \text{ if } R_{t-1}(\pi^i)=1.
\end{array}\right.
\end{equation}
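Equation \eqref{eq:subgrad} translates directly into code: each element is the URLLC reward of the pattern in the previous frame minus its weighted energy cost. The helper below is a sketch with illustrative names.

```python
def subgradient(dictionary, success, eta):
    """Subgradient vector from last-frame feedback, per Eq. (subgrad).

    dictionary: list of binary patterns, success[i]: True iff pattern i
    met the URLLC objective in the previous frame (R_{t-1} = 1),
    eta: weight on energy consumption.
    """
    v = []
    for pattern, ok in zip(dictionary, success):
        energy = eta * sum(pattern)            # eta_k * sum_n pi_n^i
        v.append((1.0 if ok else 0.0) - energy)
    return v
```

Because the feedback reveals the occupancy of every slot, `success[i]` can be evaluated for all patterns, not just the one actually transmitted.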
The learning rate $\alpha$ controls the tradeoff between how quickly and how accurately we learn. A typical choice in Online Learning is to optimize $\alpha$ for the horizon $T$, in which case we should choose:
\[
\alpha = \frac{\sqrt{2}}{G\sqrt{T}},
\]
where $G$ is an upper bound on each subgradient element; hence $G=\max\{1,\eta N\}$. Alternatively, the learning rate can be chosen larger to accelerate convergence (at the cost of accuracy), or smaller to extend convergence beyond the horizon (and make it more accurate).
Some remarks are in order:
\begin{itemize}
\item The above algorithm is a variation of the online gradient algorithm of Zinkevich \cite{zinkevich}. At each iteration, the utility $\overline{U}_t(p(t))$ is considered unknown (due to random or strategic transmissions of the other transmitters), and it is predicted using
\[\overline{U}_{t-1}(p(t-1))+\nabla\overline{U}_{t-1}(p(t-1))^T\left( p(t)-p(t-1)\right)
,\]
which can be computed using the obtained feedback.
\item Specifically, our algorithm belongs to the category of \emph{Online Mirror Descent} algorithms (see \cite{Belmega,Shalev,exp_num}), which use gradient exponentiation. Such algorithms achieve the optimal learning rate in geometries with simplex constraints (such as in our case), while they do not require projection.
\end{itemize}
A common metric used to quantify the quality of a learning algorithm is its \emph{regret}, which is defined as
\[
\text{Regret}(T)=\sum_{t=1}^T \overline{U}_t(p^*)-\sum_{t=1}^T \overline{U}_t(p(t)),
\]
where $p(t)$ is the distribution chosen by a candidate algorithm, and $p^*$ is the best distribution if we would know the entire sequence of transmissions of all other transmitters over the entire horizon $T$. Standard results from the literature of online learning tell us that our algorithm minimizes the worst-case regret and achieves $\text{Regret}(T)=o(T)$, i.e., (1) our algorithm is the best learner in the case that the other transmitters are trying to hurt us, and (2) as frames evolve, we learn the best static distribution $p^*$.
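The regret of a run can be evaluated empirically once the utilities of all patterns in all frames are known in hindsight. Since $\overline{U}_t$ is linear in $p$, the best static distribution $p^*$ concentrates all mass on the best fixed pattern, which the sketch below exploits; all names are illustrative.

```python
def empirical_regret(utility_matrix, played):
    """Empirical regret over a horizon of T frames (a sketch).

    utility_matrix[t][i]: utility of pattern i in frame t,
    played[t]: distribution the algorithm used in frame t.
    """
    T = len(utility_matrix)
    d = len(utility_matrix[0])
    # best fixed pattern in hindsight: a linear objective is maximized at a vertex
    best = max(range(d),
               key=lambda i: sum(utility_matrix[t][i] for t in range(T)))
    best_total = sum(utility_matrix[t][best] for t in range(T))
    achieved = sum(sum(p * u for p, u in zip(played[t], utility_matrix[t]))
                   for t in range(T))
    return best_total - achieved
```

A sublinear regret means the per-frame gap `empirical_regret(...) / T` vanishes as $T$ grows, i.e., the algorithm asymptotically matches the best static distribution.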
At this point, we mention that although the other transmitters \emph{are not really manipulated by an adversary}, our algorithm is sensitive enough to changes in interference that it can optimally adapt to many different scenarios, and in particular to situations where the interference fluctuates in an abrupt and non-stationary way.
\section{The Learn2MAC Access Protocol}
In this section we summarize the design of our online learning-based multiple access protocol. The procedure is shown as Algorithm~\ref{alg:Qlearning}.
\begin{algorithm}[h!]
\caption{Learn2MAC}\label{alg:Qlearning}
\begin{algorithmic}[1]
\State {Choose a $d$ (typically as large as possible while the algorithm runs efficiently).}
\State {Choose the dictionary $\mathcal{D}_k\subseteq \mathcal{X}$ with $|\mathcal{D}_k|=d$ using the ``randomized dictionary algorithm'' above.}
\State {Initialize $\alpha=\sqrt{2/(T\max\{1,\eta^2N^2\})}$, $p_k(0)=(\frac{1}d,\dots,\frac{1}d)$.}
\For {every frame $t=1,\dots,T$}
\State {Update the probability distribution $p_k(t)$ using:
\[
p^i_k(t)=\frac{p^i_k(t-1)e^{-\alpha v^i_k}}{\sum_{j=1}^dp^j_k(t-1)e^{-\alpha v^j_k}},\quad i=1,\dots,d.
\]
}
\State {Choose a pattern from $\mathcal{D}_k$ at random according to the distribution $p_k(t)$.}
\State {Transmit according to the chosen pattern.}
\EndFor
\end{algorithmic}
\end{algorithm}
Above, we use the following notation:
\begin{itemize}
\item $\mathcal{D}_k$ is the dictionary of patterns (see Sec.~\ref{sec:restr}),
\item $d$ is the size of the dictionary,
\item $\alpha$ is the learning rate,
\item $p_k(t)$ is a probability distribution over the patterns of the dictionary, and
\item $v_k$ is the subgradient vector in frame $t$ (see \eqref{eq:subgrad}).
\end{itemize}
As a final remark, note that Learn2MAC exploits the fact that the feedback received is the occupancy of the medium at each slot within the frame, therefore can be used to deduce the performance of \emph{every} transmission pattern (and not the one just used) in the previous frame. This helps significantly speed up the learning process, and therefore the adaptability of the algorithm in changing environments.
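Putting the pieces together, the full Learn2MAC loop of Algorithm 1 can be sketched as a self-contained simulation. The `medium_busy` callback stands in for the per-slot occupancy feedback; it and all other names are our illustrative choices, and the update rule is implemented exactly as stated in the text.

```python
import math
import random

def learn2mac(N, L, d, eta, T, medium_busy, seed=0):
    """Self-contained sketch of the Learn2MAC loop (Algorithm 1).

    medium_busy(t) returns a length-N list: slot n is 1 if other
    transmitters occupied it in frame t (the observed feedback).
    """
    rng = random.Random(seed)
    # randomized dictionary: zero pattern plus random patterns with >= L ones
    patterns = {tuple([0] * N)}
    while len(patterns) < d:
        ell = rng.randint(L, N)
        slots = rng.sample(range(N), ell)
        patterns.add(tuple(1 if n in slots else 0 for n in range(N)))
    patterns = sorted(patterns)
    p = [1.0 / d] * d                                   # uniform initialization
    alpha = math.sqrt(2.0 / (T * max(1.0, (eta * N) ** 2)))
    chosen = []
    for t in range(1, T + 1):
        busy = medium_busy(t - 1)                       # previous-frame feedback
        v = []
        for pat in patterns:
            # a transmission succeeds in a slot left idle by the others
            hits = sum(1 for n in range(N) if pat[n] == 1 and busy[n] == 0)
            reward = 1.0 if hits >= L else 0.0          # R_{t-1}(pattern)
            v.append(reward - eta * sum(pat))           # Eq. (subgrad)
        w = [pi * math.exp(-alpha * vi) for pi, vi in zip(p, v)]
        total = sum(w)
        p = [wi / total for wi in w]                    # exponentiated update
        chosen.append(rng.choices(patterns, weights=p)[0])
    return patterns, p, chosen
```

Note how the loop scores \emph{every} pattern in the dictionary against the observed occupancy, reflecting the remark above that the feedback reveals the performance of all patterns, not just the one transmitted.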
\section{Numerical Analysis}
In this section we illustrate the performance of Learn2MAC and its superiority with respect to baseline random access schemes via simulations. All simulations lasted for $T=30000$ frames. The setting is that each frame has a length of $N = 20$ slots, a URLLC packet of device $k$ is delivered if at least $L=2$ transmissions in the frame were successful, and a device using Learn2MAC has a dictionary of $d=100$ transmission patterns. The weight balancing the importance of latent throughput vs. energy consumption is set to $\eta_k=0.05$ for each device. Finally, the learning rate is set independently of the simulation horizon (which is quite relevant in practice, since it may not be easy or possible to know in advance for how many frames a user will be active) to $\alpha=0.001$. We compare Learn2MAC against a standard random access scheme, where the device transmits at each slot independently at random with probability $\overline{q}$.
We first verify that a single device using Learn2MAC can adapt to an environment with devices using a pre-existing protocol. For this, we examine two cases: (i) ``Static Interference'', where half of the slots of a frame are pre-allocated in a fixed TDMA fashion, and (ii) ``Dynamic Interference'', where pre-existing terminals access each slot of the frame randomly, each with a probability that is periodic in time. For a fair comparison, the access probability $\overline{q}$ of the baseline random access scheme is configured so that the energy expenditure is the same in both cases.
Results on the running average URLLC throughput are shown in Figures 2 and 3, respectively. Regarding the first case, Fig. 2 clearly illustrates that Learn2MAC learns to use the pattern that transmits in the slots left idle by the background TDMA schedule; moreover, we observed that Learn2MAC learns the most efficient such pattern (i.e., the one with $2$ transmissions in idle slots). By contrast, the random access baseline performs very poorly. Regarding the case with dynamic background user activity, Fig. 3 illustrates that Learn2MAC achieves a higher throughput than the random access baseline for the same energy expenditure, thus adapting its transmissions to use energy economically in this case as well.
We then compared the two protocols for the case of uncoordinated medium access; here, we have $K$ devices using Learn2MAC in one case and a random access protocol with transmission probability $\overline{q}=0.2$ at each slot\footnote{This value was chosen because it provides a good balance between not even attempting to transmit at least $L=2$ times (due to the access probability being too low), thus losing latent throughput, and transmitting too aggressively, thus causing many collisions.} in the other. We ran the simulations for $T=30000$ frames as above and measured the total URLLC throughput obtained by the system (by summing up the URLLC throughput obtained by each device) at the end of each run, for different numbers of devices $K$. These results are shown in Fig. 4. Note that, since $L=2$ and there are $N=20$ slots in each frame, the maximum number of devices (scheduled by a centralized controller in non-overlapping slots) successfully transmitting a URLLC packet is $10$ per frame, which is the upper bound on the total latent throughput. From Fig. 4 we observe that, at relatively low and medium load (up to $K=7$ devices), the total latent throughput scales almost \emph{linearly} with $K$: Learn2MAC enables the devices to learn transmission patterns with little or no overlap, so collisions were few and all devices were able to coexist and transmit their URLLC packets in almost every frame. By contrast, when random access is used, the obtained throughput remains low due to collisions. When the number of devices approaches $10$, the maximum that can be supported, Learn2MAC exhibits the classical behaviour of uncoordinated medium access algorithms, namely a rapid decrease in the latent throughput of the system due to collisions (while still outperforming the random access baseline).
This is the regime where admission control is really needed, since the available resources are very close to (or below) the total needed by the devices, and Learn2MAC still leads to many collisions in this case. This result suggests that Learn2MAC should be augmented by a mechanism through which devices learn whether the system is in the high- or low-load regime, with some devices learning to disconnect from the system entirely in the former case. This direction is very interesting from both the algorithmic/theoretical and the practical perspective, and we leave it as future work.
\section{Conclusion}
\begin{figure}
\centering
\includegraphics[scale = 0.25]{thputTDMA_v2.png}
\caption*{\small {\bfseries Fig. 2 (Latent Throughput under TDMA Interference):} Running average of the URLLC throughput obtained by a single device using Learn2MAC and a baseline (ALOHA) random access scheme with a TDMA background schedule. The access probability of the baseline scheme is such that it results in the same energy expenditure as Learn2MAC.} \label{fig:vsTDMA}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.25]{thputDynamic_v2.png}
\caption*{\small {\bfseries Fig. 3 (Latent Throughput under ALOHA Interference):} Running average of the URLLC throughput obtained by a single device using Learn2MAC and a baseline (ALOHA) random access scheme with a background ALOHA scheme with periodic access probabilities. The access probability of the baseline scheme is such that it results in the same energy expenditure as Learn2MAC.} \label{fig:vsDynamic}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.25]{satPlot_v2.png}
\caption*{\small {\bfseries Fig. 4 (Saturation Latent Throughput Analysis):} Comparison of the system's performance between the cases where (i) all devices use Learn2MAC and (ii) all devices use a baseline (ALOHA) random access protocol with transmission probability $\overline{q}=0.2$. The maximum number of users that can be scheduled so as to achieve their URLLC transmission requirement in this simulation setting is $10$.}
\label{fig:satPlot}
\end{figure}
In this paper, we proposed Learn2MAC, an Online Learning-based multiple access scheme that allows users to decide in a distributed manner which transmission pattern to choose. We showed that Learn2MAC can provide URLLC guarantees, addressing an important limitation of other uncoordinated access schemes, and that it outperforms standard random access both when a single device needs to adapt, in an energy-efficient manner, to an environment with users following pre-existing protocols, and when multiple devices need to coordinate using the same protocol. In the latter case, it enables devices to learn to coordinate with almost $100\%$ latent throughput at low and medium load. Therefore, Learn2MAC is a strong candidate for IoT applications that simultaneously require latency guarantees, energy efficiency, and low coordination overhead.